This research studies panel (longitudinal) data models with mixed random parameters, which contain two types of parameters: one random and the other fixed. The random parameters arise from differences in the slope coefficients across the cross-sections, while the fixed parameters arise from differences in the intercepts (constant terms), together with the random errors of each cross-section. The random errors exhibit heteroscedasticity in addition to first-order serial correlation. The main objective of this research is to use efficient methods suited to panel data with small samples. To achieve this goal, the feasible generalized least squares (FGLS) method and the mean group (MG) method were used, the efficiency of the resulting estimators was compared under mixed random parameters, and the method giving the more efficient estimator was chosen. The methods were applied to real data on per capita consumption of electric energy (Y) for five countries, representing the cross-sections (N = 5), over nine years (T = 9), so the number of observations is n = 45. The explanatory variables are the consumer price index (X1) and per capita GDP (X2). To evaluate the performance of the FGLS and MG estimators of the general model, the mean absolute percentage error (MAPE) was used to compare the efficiency of the estimators. The results showed that the mean group (MG) method is better for parameter estimation than the FGLS method, and that MG is also the best method for estimating the sub-parameters of each cross-section (country).
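As a point of reference, a minimal sketch of the two quantities named above in standard notation (the symbols are generic and not taken from the paper): the MG estimator averages the coefficients of separate regressions fitted to each cross-section, and MAPE is the comparison criterion.

\[
\hat{\beta}_{MG} \;=\; \frac{1}{N}\sum_{i=1}^{N}\hat{\beta}_i ,
\qquad
\mathrm{MAPE} \;=\; \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t-\hat{y}_t}{y_t}\right| .
\]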
The seemingly unrelated regression (SUR) model is a special case of multivariate models in which the error terms of the equations are contemporaneously correlated. The generalized least squares (GLS) estimator is efficient because it takes the covariance structure of the errors into account, but it is also very sensitive to outliers. Robust SUR estimators can handle outliers. We propose two robust methods for computing the estimator, S-estimation and FastSUR, and find that they significantly improve the quality of the SUR model estimates. In addition, the results show that the FastSUR method outperforms the S method in dealing with outliers contained in the data set, as it has lower MSE and RMSE and higher R-squared and adjusted R-squared.
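For context, a sketch of the classical SUR-GLS estimator in its standard stacked form (generic notation, not taken from the paper): with M equations of T observations each and Σ the M×M contemporaneous error covariance matrix,

\[
\hat{\beta}_{GLS} \;=\; \bigl(X^{\top}(\Sigma^{-1}\otimes I_T)\,X\bigr)^{-1} X^{\top}(\Sigma^{-1}\otimes I_T)\,y .
\]

The robust variants replace the classical estimate of Σ and the least-squares criterion with outlier-resistant counterparts.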
The problem of multicollinearity is one of the most common problems, arising largely from strong intercorrelation among the explanatory variables. It appears especially in economic and applied research. Multicollinearity has a negative effect on the regression model, such as inflated variances and unstable parameter estimates when the ordinary least squares (OLS) method is used. Therefore, other methods were used to estimate the parameters of the negative binomial model, including the ridge regression estimator and the Liu-type estimator. The negative binomial regression model is a nonlinear …
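A brief sketch of the two shrinkage estimators mentioned above, in their standard linear-model form (in the negative binomial setting the cross-product matrix is typically replaced by a weighted information matrix from the iterative fit; k and d are the usual biasing parameters, not values taken from the paper):

\[
\hat{\beta}_{\mathrm{Ridge}}(k) = (X^{\top}X + kI)^{-1}X^{\top}y, \quad k \ge 0,
\qquad
\hat{\beta}_{\mathrm{Liu}}(d) = (X^{\top}X + I)^{-1}\bigl(X^{\top}y + d\,\hat{\beta}_{\mathrm{OLS}}\bigr), \quad 0 < d < 1 .
\]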
Poverty is a very substantial topic that determines the future of societies and governments and the way they deal with education, health, and the economy. Poverty sometimes takes multidimensional forms through education and health. This research aims to study multidimensional poverty in Iraq using penalized regression methods to analyze big data sets from demographic surveys collected by the Central Statistical Organization in Iraq. We choose a classical penalized regression method, ridge regression, and another penalized method, the Smooth Integration of Counting and Absolute Deviation (SICA), to analyze big data sets related to the different forms of poverty in Iraq. Euclidean distance …
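As background, both methods mentioned above fit into the generic penalized least squares framework (standard notation, not taken from the paper): ridge uses a squared penalty, while SICA uses the Lv–Fan penalty, which interpolates between the L0 and L1 penalties through a shape parameter a > 0.

\[
\hat{\beta} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda\sum_{j=1}^{p} p(|\beta_j|),
\qquad
p_{\mathrm{Ridge}}(t) = t^{2},
\qquad
p_{\mathrm{SICA}}(t) = \frac{(a+1)\,t}{a+t} .
\]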
This research presents an applied study of principal component regression (PCR), constructed from some of the explanatory variables, to mitigate the multicollinearity problem among these variables and to obtain estimates that are more stable than those produced by ordinary least squares. The cost we pay, on the other hand, is a small loss in the power of the estimated predictive regression function to explain the essential variation. A numerical formula has been proposed and applied by the researchers as an optimal solution, and its efficiency has been verified with a program written by the researchers themselves for this purpose, using several criteria: cumulative percentage of variance, coefficient of determination, variance …
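For reference, a minimal sketch of principal component regression in its standard form (generic notation, not taken from the paper): the design matrix X is rotated onto its principal component scores, only the first k components are retained for the regression, and the coefficients are mapped back to the original variables,

\[
Z_k = X V_k, \qquad
\hat{\gamma}_k = (Z_k^{\top}Z_k)^{-1}Z_k^{\top}y, \qquad
\hat{\beta}_{\mathrm{PCR}} = V_k\,\hat{\gamma}_k ,
\]

where the columns of V_k are the eigenvectors of X'X associated with the k largest eigenvalues.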
In this research, fuzzy nonparametric methods based on several smoothing techniques were applied to real data from the Iraqi stock market, specifically the data of the Baghdad Soft Drinks Company for the year 2016 over the period 1/1/2016–31/12/2016. A sample of 148 observations was used to construct a model of the relationship between the stock prices (low, high, modal) and the traded value. Comparing the results of the goodness-of-fit (G.O.F.) criterion for the three techniques, the lowest value of this criterion was obtained for the k-nearest neighbour smoother with the Gaussian function.
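For orientation, the crisp kernel smoother underlying such techniques can be written as follows (generic notation, not taken from the paper); with a Gaussian kernel K(u) = (1/√(2π)) e^{-u²/2}, the k-nearest-neighbour variant simply sets the bandwidth h(x) equal to the distance from x to its k-th nearest observation:

\[
\hat{m}(x) \;=\; \frac{\sum_{i=1}^{n} K\!\left(\frac{x-x_i}{h(x)}\right) y_i}{\sum_{j=1}^{n} K\!\left(\frac{x-x_j}{h(x)}\right)} .
\]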
There is an implicit but fundamental assumption behind regression on time series, namely that the series are stationary or, in the language of Engle and Granger, integrated of order zero, denoted I(0). It is well known, for example, that the t-statistic tables are designed primarily to deal with the results of regressions that use stationary series. This assumption was treated as an axiom until the mid-seventies, when researchers conducted applied studies without taking into account the properties of the time series used prior to estimation, accepting the significance of these tests and the estimated coefficients on the basis of the applicability of the theory …
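As a concrete illustration of how stationarity is usually checked in this setting (the abstract is truncated before naming a specific test, so the Dickey–Fuller regression below is offered only as the standard example, in generic notation): the null hypothesis ρ = 0 corresponds to a unit root, i.e. the series is I(1), while ρ < 0 corresponds to a stationary I(0) series.

\[
\Delta y_t = \alpha + \rho\, y_{t-1} + \varepsilon_t .
\]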
A condensed study was carried out to compare the ordinary estimators, in particular the maximum likelihood estimator, with the robust estimator for estimating the parameters of the mixed model of first order, namely the ARMA(1,1) model.
A simulation study was conducted for several variants of the model, using small, moderate, and large sample sizes, and some new results were obtained. MAPE was used as the statistical criterion for comparison.
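For reference, the ARMA(1,1) model referred to above has the standard form (generic notation, not taken from the paper), with one autoregressive parameter φ and one moving-average parameter θ:

\[
y_t = \phi\, y_{t-1} + \varepsilon_t + \theta\, \varepsilon_{t-1}, \qquad \varepsilon_t \sim \mathrm{WN}(0,\sigma^2) .
\]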
Regression models are among the most important models used in modern studies, especially in research and health studies, because of the important results they achieve. Two regression models were used, the Poisson regression model and the Conway–Maxwell–Poisson regression model. This study aimed to compare the two models and choose the better one using simulation, with different sample sizes (n = 25, 50, 100) and r = 1000 replications. The MATLAB program was adopted to conduct the simulation experiment. The results showed the superiority of the Poisson model according to the mean square error criterion (MSE) and also according to the Akaike information criterion (AIC) for the same distribution.
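As background, the two count distributions being compared differ only in a dispersion parameter (standard notation, not taken from the paper): the Conway–Maxwell–Poisson distribution adds a parameter ν that recovers the Poisson when ν = 1 and allows over-dispersion (ν < 1) or under-dispersion (ν > 1).

\[
P(Y=y) = \frac{e^{-\lambda}\lambda^{y}}{y!}
\qquad\text{versus}\qquad
P(Y=y) = \frac{\lambda^{y}}{(y!)^{\nu}\,Z(\lambda,\nu)},
\quad
Z(\lambda,\nu)=\sum_{j=0}^{\infty}\frac{\lambda^{j}}{(j!)^{\nu}} .
\]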
In this research, kernel (nonparametric density) estimation methods were relied upon in estimating the binary-response logistic regression, comparing the Nadaraya–Watson method with the local scoring algorithm. The optimal smoothing parameter λ was estimated by cross-validation and generalized cross-validation; the optimal bandwidth λ has a clear effect on the estimation process and plays a key role in smoothing the curve so that it approaches the real curve. The goal of using the kernel estimator is to adjust the observations so that we obtain estimators whose characteristics are close to the properties of the real parameters. The methods were applied to medical data for patients with chronic …
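For context, the Nadaraya–Watson estimator and the cross-validation criterion used to pick the smoothing parameter have the standard forms below (generic notation, not taken from the paper); m̂_{λ,−i} denotes the fit computed with the i-th observation left out.

\[
\hat{m}_\lambda(x)=\frac{\sum_{i=1}^{n}K\!\left(\frac{x-x_i}{\lambda}\right)y_i}{\sum_{j=1}^{n}K\!\left(\frac{x-x_j}{\lambda}\right)},
\qquad
\mathrm{CV}(\lambda)=\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i-\hat{m}_{\lambda,-i}(x_i)\bigr)^{2} .
\]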
The grey system model GM(1,1) is a time-series prediction model and the basis of grey theory. This research presents methods for estimating the parameters of the grey model GM(1,1): the accumulative method (ACC), the exponential method (EXP), the modified exponential method (Mod EXP), and the particle swarm optimization method (PSO). These methods were compared on the basis of the mean square error (MSE) and the mean absolute percentage error (MAPE), and simulation was adopted to identify the best of the four methods. The best method was then applied to real data representing the consumption rate of two types of oils …
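For reference, the GM(1,1) model named above is built in its standard form as follows (standard grey-theory notation, not specific to this paper): the raw series x^(0) is first accumulated, the grey difference equation is fitted to obtain the development coefficient a and grey input b, and predictions come from the time-response function; the ACC, EXP, Mod EXP, and PSO methods are alternative ways of estimating a and b.

\[
x^{(1)}(k)=\sum_{i=1}^{k}x^{(0)}(i), \qquad
x^{(0)}(k)+a\,z^{(1)}(k)=b, \qquad
z^{(1)}(k)=\tfrac{1}{2}\bigl(x^{(1)}(k)+x^{(1)}(k-1)\bigr),
\]
\[
\hat{x}^{(1)}(k+1)=\Bigl(x^{(0)}(1)-\tfrac{b}{a}\Bigr)e^{-ak}+\tfrac{b}{a}, \qquad
\hat{x}^{(0)}(k+1)=\hat{x}^{(1)}(k+1)-\hat{x}^{(1)}(k) .
\]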