Estimating multivariate location and scatter with both affine equivariance and a positive breakdown point has always been difficult. A well-known estimator that satisfies both properties is the Minimum Volume Ellipsoid estimator (MVE). Computing the exact MVE is often not feasible, so one usually resorts to an approximate algorithm. In the regression setting, algorithms for positive-breakdown estimators such as Least Median of Squares typically recompute the intercept at each step to improve the result; this approach is called intercept adjustment. In this paper we show that a similar technique, called location adjustment, can be applied to the MVE. For this purpose we use the Minimum Volume Ball (MVB) in order to lower the MVE objective function. An exact algorithm for computing the MVB is presented. As an alternative to MVB location adjustment we propose () location adjustment, which does not necessarily lower the MVE objective function but yields more efficient estimates of the location part. Simulations compare the two types of location adjustment.
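Since the exact MVE is infeasible, the standard approximate algorithm resamples small subsets and keeps the smallest inflated ellipsoid. A minimal sketch of that generic resampling scheme (not the authors' specific algorithm; the coverage count `h` and trial budget are conventional choices):

```python
import numpy as np

def approx_mve(X, n_trials=500, seed=0):
    """Approximate Minimum Volume Ellipsoid via (p+1)-subset resampling.

    For each random subset, the sample mean/covariance define an ellipsoid
    shape, which is inflated just enough to cover h points; the trial with
    the smallest resulting volume wins.  A sketch, not the exact MVE.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    h = (n + p + 1) // 2                       # coverage for ~50% breakdown
    best = None
    for _ in range(n_trials):
        idx = rng.choice(n, size=p + 1, replace=False)
        loc = X[idx].mean(axis=0)
        cov = np.cov(X[idx], rowvar=False)
        if np.linalg.matrix_rank(cov) < p:
            continue                           # degenerate subset, skip
        diff = X - loc
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        m2 = np.partition(d2, h - 1)[h - 1]    # squared radius covering h points
        vol = np.sqrt(np.linalg.det(cov)) * m2 ** (p / 2)  # ∝ ellipsoid volume
        if best is None or vol < best[0]:
            best = (vol, loc, cov * m2)
    return best[1], best[2]
```

Because the objective only scores how tightly h points are covered, a tight cluster of outliers cannot drag the estimate away, which is the source of the positive breakdown point.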
In this paper, phase-fitted and amplification-fitted versions of the Runge-Kutta-Fehlberg method are derived from the existing 4(5)-order method to solve ordinary differential equations with oscillatory solutions. The new method has zero phase-lag and zero dissipation. The phase-lag, or dispersion error, is the angle between the true solution and the approximate solution, while the dissipation is the distance of the numerical solution from the basic periodic solution. Several problems are tested over a long integration interval, and the numerical results show that the present method is more accurate than the 4(5) Runge-Kutta-Fehlberg method.
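Phase-lag and dissipation are usually measured on the linear test equation y' = iνy, whose exact solution rotates by νh per step with modulus 1. A sketch of both quantities for the classical fourth-order Runge-Kutta method (used here only as a familiar stand-in; the paper's method is the Runge-Kutta-Fehlberg 4(5) pair):

```python
import cmath

def rk4_stability(z):
    # Stability function of the classical fourth-order Runge-Kutta method.
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def phase_lag(nu_h):
    # Angle between the exact phase nu*h and the numerical phase arg(R(i*nu*h)).
    return nu_h - cmath.phase(rk4_stability(1j * nu_h))

def dissipation(nu_h):
    # Distance of |R(i*nu*h)| from 1; the exact solution stays on the unit circle.
    return 1 - abs(rk4_stability(1j * nu_h))
```

A phase-fitted, amplification-fitted method tunes its coefficients so that both quantities vanish at the problem's dominant frequency, instead of merely being small for small steps.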
In this research, the Iraqi flagpole at Baghdad University, the tallest in Baghdad at a height of 75 m, was monitored. Given the importance of this structure, its displacement (vertical deviation) was monitored using a Total Station instrument, with several observations taken at different times from November 2016 until May 2017, at a rate of four observations per year. The observations were processed using the least-squares method with circle fitting, and the deviation was then calculated using a Matlab program to compute the correction values.
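The least-squares circle-fitting step mentioned above can be done with the algebraic (Kåsa) fit, which reduces the problem to one linear solve. A sketch in Python rather than Matlab, and not necessarily the authors' exact processing chain:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit.

    Solves the linear system for D, E, F in x^2 + y^2 + D*x + E*y + F = 0,
    then recovers the centre (-D/2, -E/2) and radius.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r
```

Fitting a circle to observed points around the pole's cross-section at each epoch lets the drift of the fitted centre between epochs serve as the deviation estimate.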
In the present work, we use the Adomian decomposition method to find approximate solutions for some cases of the Newell-Whitehead-Segel nonlinear differential equation, which was solved previously, with exact solutions, by the homotopy perturbation and iteration methods; we then compare the results.
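The Adomian decomposition method writes the solution as a series u = Σ uₙ and handles the nonlinearity through Adomian polynomials. A self-contained sketch on the simpler model problem u' = -u², u(0) = 1 (exact solution 1/(1+t)), chosen because its Adomian polynomials for N(u) = u² are just the convolution Aₙ = Σₖ uₖ·uₙ₋ₖ; the Newell-Whitehead-Segel cases in the paper follow the same recursion with an added linear term:

```python
from fractions import Fraction

def poly_add(p, q):
    if len(p) < len(q):
        p, q = q, p
    return [a + (q[i] if i < len(q) else 0) for i, a in enumerate(p)]

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_int(p):
    # Integrate from 0 to t: t^k -> t^(k+1)/(k+1).
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(p)]

def adm_terms(n_terms):
    """Adomian decomposition for u' = -u^2, u(0) = 1 (exact: 1/(1+t)).

    Each u_n is a polynomial in t stored as a coefficient list;
    u_{n+1} = -integral of the Adomian polynomial A_n.
    """
    u = [[Fraction(1)]]                        # u_0 = initial condition
    for n in range(n_terms - 1):
        A = [Fraction(0)]
        for k in range(n + 1):                 # A_n for N(u) = u^2
            A = poly_add(A, poly_mul(u[k], u[n - k]))
        u.append([-c for c in poly_int(A)])
    return u
```

For this problem the terms come out as uₙ = (-t)ⁿ, so the partial sums reproduce the geometric series of 1/(1+t), which is how the method's accuracy against a known exact solution is typically checked.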
In this paper, a subspace identification method for bilinear systems is used, in which "three-block" and "four-block" subspace algorithms are applied. In these algorithms the input signal to the system does not have to be white. Simulation of these algorithms shows that the "four-block" algorithm converges faster, and the dimensions of the matrices involved are significantly smaller, so its computational complexity is lower than that of the "three-block" algorithm.
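The core step shared by subspace algorithms is a singular value decomposition of a block-Hankel matrix built from the data, whose numerical rank reveals the system order. A deliberately simplified single-input single-output linear illustration of that step (not the bilinear three-/four-block construction of the paper):

```python
import numpy as np

def hankel_order(markov, rows=4, cols=4, tol=1e-8):
    """Estimate system order from Markov parameters (impulse response)
    via the SVD of a block-Hankel matrix.

    The number of singular values above the tolerance equals the order
    of the minimal realisation generating the sequence.
    """
    H = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```

The "three-block" vs "four-block" distinction concerns how many such data blocks are stacked before this factorisation, which is why the four-block variant works with smaller matrices per block.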
Interval methods for verified integration of initial value problems (IVPs) for ODEs have been used for more than 40 years. For many classes of IVPs, these methods have the ability to compute guaranteed error bounds for the flow of an ODE, where traditional methods provide only approximations to a solution. Overestimation, however, is a potential drawback of verified methods. For some problems, the computed error bounds become overly pessimistic, or integration even breaks down. The dependency problem and the wrapping effect are particular sources of overestimations in interval computations. Berz (see [1]) and his co-workers have developed Taylor model methods, which extend interval arithmetic with symbolic computations. The latter is an ef
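The dependency problem mentioned above is easy to demonstrate: naive interval arithmetic forgets that two occurrences of the same variable are correlated. A minimal sketch:

```python
class Interval:
    """Minimal interval arithmetic, just enough to show the dependency problem."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # Interval subtraction treats the operands as independent, even
        # when both are the same variable -- the dependency problem.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.0, 1.0)
print(x - x)   # [-1.0, 1.0], although x - x is identically 0
```

Taylor model methods attack exactly this: the symbolic polynomial part keeps track of how the result depends on each variable, and only the small remainder is enclosed by an interval.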
A coin has two sides. Steganography conceals the existence of a message but is not completely secure; it is not meant to supersede cryptography but to supplement it. The main goal of this method is to minimize the number of LSBs that are changed when substituting them with the bits of the characters in the secret message. This decreases the distortion (noise) introduced into the pixels of the stego-image and, as a result, increases the immunity of the stego-image against visual attack. The experiment shows that the proposed method gives a good enhancement to the steganography technique, and there is no difference between the cover-image and the stego-image that can be seen by the human vision system (HVS), so this method c
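For reference, the baseline that the proposed method improves on is plain one-bit-per-pixel LSB substitution, sketched below; each changed pixel differs from the cover by at most 1, which is why the change is invisible to the HVS. This is the generic baseline, not the paper's distortion-minimising variant:

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bits of pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit       # overwrite only the LSB
    return out

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes of message from the pixel LSBs."""
    bits = [p & 1 for p in pixels[:8 * n_bytes]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )
```

A distortion-minimising scheme of the kind the abstract describes would additionally choose an encoding of the message bits that matches the existing LSBs as often as possible, so fewer pixels are touched at all.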
Factor analysis is an advanced statistical method used in various fields, including physical education, to analyse the results of tests and measurements and to reduce the correlations among the variables that form a phenomenon to a smaller number of explanatory factors; this relies on achieving the required level of internal consistency. The aim of this study is to use estimated internal-consistency coefficients to choose the most suitable among the different (orthogonal and oblique) rotation methods in factor-analytic studies of physical education. A sample of references was selected from master's and doctoral theses as well as scientific journals and confere
The basic orientation of the research is an attempt to apply activity-based costing in the contract-sector projects of the Iraq General Company, a subject of considerable value given its novelty and its influence on the future and earnings of the company.
The research aims to find out the effect of activity-based costing on determining the cost of construction-sector projects. The research was conducted at the Iraq General Company for the implementation of irrigation projects and is built on three hypotheses, the first being that the application of the method for determining the cost on the basis of
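The mechanics of activity-based costing are simple to illustrate: overhead is traced to a project through per-activity cost-driver rates rather than a single volume base. A worked sketch with hypothetical figures (not data from the company studied):

```python
def abc_cost(activities, project_drivers):
    """Activity-based costing for one project.

    activities: name -> (total overhead pool, total driver volume)
    project_drivers: name -> driver quantity consumed by the project
    """
    # Cost-driver rate = activity overhead pool / total driver volume.
    rates = {name: pool / total_driver
             for name, (pool, total_driver) in activities.items()}
    return sum(rates[name] * qty for name, qty in project_drivers.items())

# Hypothetical activity pools for a construction contractor.
activities = {
    "machine_setup": (40000.0, 200),   # 200.0 per set-up
    "inspection":    (30000.0, 600),   # 50.0 per inspection
}
cost = abc_cost(activities, {"machine_setup": 5, "inspection": 12})
# 200 * 5 + 50 * 12 = 1600.0 of overhead traced to this project
```

The contrast with traditional costing is that a project consuming few set-ups and inspections is no longer burdened with overhead allocated on, say, labour hours alone.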
The estimation of the reliability function depends on the accuracy of the data used to estimate the parameters of the probability distribution. Because some data sets are skewed, estimating the parameters and calculating the reliability function in the presence of such skewness requires a distribution flexible enough to handle the data. This is the case for the data of Diyala Company for Electrical Industries: a positive skew was observed in the data collected from the Power and Machinery Department, which called for a distribution suited to those data and for methods that accommodate this problem and lead to accurate estimates of the reliability function.
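One common choice for positively skewed lifetime data is the lognormal distribution; the abstract does not name the distribution actually used, so the sketch below is illustrative only. Fitting on the log scale gives the maximum-likelihood estimates of μ and σ, and the reliability function is R(t) = P(T > t) = 1 − Φ((ln t − μ)/σ):

```python
import math

def lognormal_reliability(data, t):
    """Reliability R(t) = P(T > t) under a fitted lognormal model.

    MLE for the lognormal is the sample mean and standard deviation
    of the log-transformed data; the survival probability is then
    evaluated through the complementary error function.
    """
    logs = [math.log(x) for x in data]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))
    z = (math.log(t) - mu) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)   # = 1 - Phi((ln t - mu)/sigma)
```

The practical point of the abstract is precisely this fitting step: a symmetric model fitted to positively skewed failure times misstates R(t), while a skew-capable family keeps the estimated reliability close to the empirical survival fractions.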