Excessive skewness in data is an obstacle to assuming a normal distribution. Recent studies have therefore turned to the skew-normal distribution (SND), which accommodates skewed data; it generalizes the normal distribution through an additional skewness parameter (α) that gives it extra flexibility. Estimating the parameters of the SND leads to non-linear likelihood equations, and solving them directly by maximum likelihood (ML) can yield inaccurate and unreliable estimates. To address this problem, two methods based on the ML criterion can be used: the genetic algorithm (GA) and the iterative reweighting (IR) algorithm. A Monte Carlo simulation was run with different skewness levels and sample sizes, and the results were compared. It was concluded that estimating the SND model with the GA is best for small and medium sample sizes, while for large samples the IR algorithm is best. The study was also applied to real data to estimate the parameters and compare the methods based on the AIC, BIC, MSE and Def criteria.
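The abstract does not give implementation details; as a rough illustration of the idea, the ML fit of the SND can be posed as minimizing the negative log-likelihood with an evolutionary optimizer. The sketch below uses SciPy's differential evolution as a stand-in for the paper's GA, with simulated data and made-up bounds:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def snd_neg_loglik(params, x):
    """Negative log-likelihood of the SND with location xi, scale omega, shape alpha:
    f(x) = (2/omega) * phi(z) * Phi(alpha*z), z = (x - xi)/omega."""
    xi, omega, alpha = params
    z = (x - xi) / omega
    return -np.sum(np.log(2.0 / omega) + norm.logpdf(z) + norm.logcdf(alpha * z))

# simulate skew-normal data via the convolution representation:
# X = xi + omega * (delta*|U| + sqrt(1 - delta^2)*V), delta = alpha/sqrt(1 + alpha^2)
alpha_true = 4.0
delta = alpha_true / np.sqrt(1 + alpha_true**2)
u, v = np.abs(rng.standard_normal(500)), rng.standard_normal(500)
x = 1.0 + 2.0 * (delta * u + np.sqrt(1 - delta**2) * v)

# evolutionary search over (xi, omega, alpha); bounds are illustrative only
res = differential_evolution(snd_neg_loglik,
                             bounds=[(-5, 5), (0.1, 10), (-20, 20)],
                             args=(x,), seed=1)
xi_hat, omega_hat, alpha_hat = res.x
```

The attraction of a population-based search here is that it avoids the flat or multimodal likelihood regions (e.g. near α = 0) that can trap Newton-type ML solvers.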
Estimating the unknown parameters of a 2-D sinusoidal signal model is an important and difficult problem. Because it is hard to estimate all the parameters of this type of model simultaneously, we propose a sequential non-linear least squares method and a sequential robust M method, developed from the sequential approach suggested by Prasad et al. for estimating the unknown frequencies and amplitudes of the 2-D sinusoidal components, but relying on the Downhill Simplex algorithm to solve the non-linear equations: the non-linear parameters, which represent the frequencies, are estimated first, and the least squares formula is then used to estimate
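The separation the abstract describes can be sketched as follows: for fixed frequencies the amplitudes enter the model linearly and are recovered by ordinary least squares, so the Downhill Simplex (Nelder-Mead) search only has to handle the two frequencies. A toy single-component illustration with invented values (not the paper's data or exact estimator):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M, N = 30, 30
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")

# y(m, n) = A*cos(lam*m + mu*n) + B*sin(lam*m + mu*n) + noise
lam_true, mu_true = 0.9, 1.4
y = (2.0 * np.cos(lam_true * m + mu_true * n)
     + 1.0 * np.sin(lam_true * m + mu_true * n)
     + 0.1 * rng.standard_normal((M, N)))

def design(freqs):
    """Linear-in-amplitude design matrix for given frequencies."""
    phase = freqs[0] * m + freqs[1] * n
    return np.column_stack([np.cos(phase).ravel(), np.sin(phase).ravel()])

def rss(freqs):
    """Residual sum of squares after projecting out the linear amplitudes."""
    X = design(freqs)
    coef, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
    r = y.ravel() - X @ coef
    return r @ r

# Downhill Simplex over the frequencies only, started near the truth
res = minimize(rss, x0=[0.8, 1.3], method="Nelder-Mead")
lam_hat, mu_hat = res.x
A_hat, B_hat = np.linalg.lstsq(design(res.x), y.ravel(), rcond=None)[0]
```

A derivative-free simplex search is a natural fit here because the residual surface over the frequencies is non-linear and its gradient is awkward to supply.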
In this paper, some estimators for the reliability function R(t) of the Basic Gompertz (BG) distribution have been obtained: the maximum likelihood estimator, and Bayesian estimators under the general entropy loss function, assuming a non-informative prior (Jeffreys prior) and informative priors represented by the gamma and inverted Levy priors. A Monte Carlo simulation is conducted to compare the performance of all estimates of R(t) based on integrated mean squared error.
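As a minimal sketch of the ML estimator (the Bayesian estimators need the loss-function machinery), assume the one-parameter Basic Gompertz form with survival function R(t) = exp(-θ(e^t - 1)); the log-likelihood then gives the closed form θ̂ = n / Σ(e^{x_i} - 1). The parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed Basic Gompertz form: R(t) = exp(-theta*(e^t - 1)), t > 0
theta_true = 1.5
u = rng.uniform(size=1000)
# inverse-CDF sampling: F(x) = 1 - exp(-theta*(e^x - 1))
x = np.log(1.0 - np.log(1.0 - u) / theta_true)

# ML estimator from d/dtheta [n*log(theta) - theta*sum(e^x - 1)] = 0
theta_hat = len(x) / np.sum(np.exp(x) - 1.0)

def R_hat(t):
    """Plug-in ML estimate of the reliability function."""
    return np.exp(-theta_hat * np.expm1(t))
```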
In this paper, a modified derivation has been introduced to analyze the construction of the configuration space (C-space). The benefit of using C-space is that it makes the path-planning process safer and easier. After constructing the C-space map for a two-link planar robot arm, which captures all possible collision situations between the robot's parts and the obstacle(s), the A* algorithm, which is usually used to find a heuristic path in the Cartesian workspace (W-space), has been used to find a heuristic path on the C-space map. Several modifications are needed to apply the methodology to a manipulator with more than two degrees of freedom. The results of the C-space map, derived by the modified analysis, prove the accuracy of the overall C-space mapping and cons
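Once the C-space is discretized into an occupancy grid (each cell a joint-angle pair, marked colliding or free), A* runs on it exactly as it would on a workspace grid. A self-contained sketch with a small hypothetical grid standing in for the arm's C-space map (1 = colliding configuration, 0 = free):

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 2-D configuration-space grid; 4-connected moves, unit cost."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), start)]
    g = {start: 0}            # best known cost-to-come
    came = {start: None}      # parent pointers for path reconstruction
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    came[nb] = cur
                    heapq.heappush(open_heap, (ng + h(nb), nb))
    return None  # goal unreachable in C-space

cspace = [[0, 0, 0, 0],
          [1, 1, 1, 0],
          [0, 0, 0, 0],
          [0, 1, 1, 1],
          [0, 0, 0, 0]]
path = a_star(cspace, (0, 0), (4, 3))
```

The returned cell sequence is then mapped back from joint-angle indices to an arm trajectory.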
In recent years, researchers' attention to semi-parametric regression models has increased, because they integrate the parametric and non-parametric regression models into one, giving a regression model with the potential to deal with the curse of dimensionality that occurs in non-parametric models as the number of explanatory variables involved in the analysis increases, decreasing the accuracy of the estimation. This type of model also offers flexibility in applications compared with parametric models, which must comply with certain conditions such as knowledge of the error distribution, or the parametric models may
This paper is concerned with preliminary-test double-stage shrinkage estimators for the variance (σ²) of a normal distribution when a prior estimate of the actual value of σ² is available and the mean is unknown, using specified shrinkage weight factors ψ(·) in addition to a pre-test region (R).
Expressions for the bias, mean squared error [MSE(·)], relative efficiency [R.Eff(·)], expected sample size [E(n/σ²)] and percentage of overall sample saved of the proposed estimator were derived. Numerical results (using the MathCAD program) and conclusions are drawn about the selection of the different constants included in the me
A genetic algorithm optimization model is used in this study to find the optimum flow values of the Tigris river branches near Ammara city, whose water is to be used for central marshes restoration after mixing in the Maissan River. These tributaries are the Al-Areed, Al-Bittera and Al-Majar Al-Kabeer Rivers. The aim of this model is to enhance the water quality in the Maissan River and hence provide acceptable water quality for marsh restoration. The model is applied for different water-quality change scenarios, i.e., 10% and 20% increases in EC, TDS and BOD. The model outputs are the optimum flow values for the three rivers, while the input data are the monthly flows (1994-2011), monthly water requirements and water quality parameters (EC, TDS, BOD, DO and
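The core of such a model is a flow-weighted mixing balance: the blended concentration downstream is Σ(QᵢCᵢ)/ΣQᵢ, minimized over the tributary releases subject to flow limits. A minimal sketch with entirely hypothetical concentrations, bounds and demand (SciPy's differential evolution stands in for the study's GA):

```python
import numpy as np
from scipy.optimize import differential_evolution

# hypothetical data: TDS (mg/L) carried by each tributary
conc = np.array([900.0, 1200.0, 700.0])   # Al-Areed, Al-Bittera, Al-Majar Al-Kabeer
q_min = np.array([5.0, 5.0, 5.0])         # minimum releases (m^3/s)
q_max = np.array([40.0, 30.0, 50.0])      # available flows (m^3/s)
demand = 60.0                             # flow required downstream (m^3/s)

def mixed_tds(q):
    """Flow-weighted TDS of the blended water in the receiving river."""
    return np.dot(q, conc) / np.sum(q)

def objective(q):
    # quadratic penalty for any shortfall against the downstream demand
    shortfall = max(0.0, demand - np.sum(q))
    return mixed_tds(q) + 1e3 * shortfall**2

res = differential_evolution(objective, bounds=list(zip(q_min, q_max)), seed=1)
q_opt = res.x
```

The optimizer naturally favours the cleanest tributary while still meeting the total-flow requirement.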
Curing of concrete is the maintenance of a satisfactory moisture content and temperature for a period of time immediately after placing, so that the desired properties develop. Accelerated curing is advantageous where early strength gain in concrete is important. Exposing concrete specimens to accelerated curing conditions permits the specimens to develop a significant portion of their ultimate strength within a short period (1-2 days), depending on the method of the curing cycle. Three accelerated curing test methods are adopted in this study: warm water, autogenous and a proposed test method. The results of this study have shown good correlation between the accelerated strength especially for
In this paper, two local search algorithms, the genetic algorithm and particle swarm optimization, are used to schedule a number of products (n jobs) on a single machine to minimize a multi-objective function, defined as the sum of total completion time, total tardiness, total earliness and total late work. A branch and bound (BAB) method is used to compare the results for n jobs ranging from 5 to 18. The results show that the two algorithms found optimal and near-optimal solutions in appropriate times.
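The composite objective can be evaluated for any job permutation; on small instances the optimum can be checked exhaustively, which is the role BAB plays in the comparison above. A sketch with a hypothetical 5-job instance (a GA or PSO would search the same permutation space instead of enumerating it):

```python
import itertools

# hypothetical 5-job instance: processing times and due dates
p = [3, 5, 2, 7, 4]
d = [6, 9, 4, 16, 10]

def cost(seq):
    """Sum over jobs of completion time + tardiness + earliness + late work."""
    t, total = 0, 0
    for j in seq:
        t += p[j]                     # completion time C_j
        tard = max(0, t - d[j])       # tardiness T_j
        early = max(0, d[j] - t)      # earliness E_j
        late_work = min(p[j], tard)   # late work V_j
        total += t + tard + early + late_work
    return total

# exhaustive search stands in for the BAB benchmark on small n
best = min(itertools.permutations(range(5)), key=cost)
```

For n up to 18 enumeration is infeasible (18! sequences), which is why the paper benchmarks the metaheuristics against BAB rather than brute force.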