This research studies panel data models with mixed random parameters, which contain two types of parameters: the first random and the other fixed. The random parameters arise from differences in the marginal slopes across the cross-sections, while the fixed parameters arise from differences in the fixed intercepts, and the random errors of each cross-section exhibit heteroscedasticity as well as first-order serial correlation. The main objective of this research is to use estimation methods that are efficient for panel data with small samples. To achieve this goal, the feasible generalized least squares (FGLS) method and the mean group (MG) method were used; the efficiency of the resulting estimators was then compared under mixed random parameters, and the method that gives the more efficient estimator was chosen. The methods were applied to real data on per capita consumption of electric energy (Y) for five countries, representing N = 5 cross-sections over T = 9 years, for n = 45 observations in total; the explanatory variables are the consumer price index (X1) and per capita GDP (X2). To evaluate the performance of the FGLS and MG estimators of the general model, the mean absolute percentage error (MAPE) was used to compare estimator efficiency. The results showed that the mean group (MG) method estimates the parameters better than the FGLS method, and MG also proved the better method for estimating the sub-parameters of each cross-section (country).
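As a rough illustration of the MG-versus-FGLS comparison above, the following sketch implements a minimal mean group estimator (per-country OLS, with the coefficients then averaged) and scores the fit with MAPE. The panel layout and the simulated variables standing in for Y, X1, and X2 are hypothetical, not the study's data.

```python
import numpy as np

def mean_group(y, X, groups):
    """Mean Group estimator: fit OLS per cross-section, then average the coefficients."""
    betas = []
    for g in np.unique(groups):
        m = groups == g
        Xg = np.column_stack([np.ones(m.sum()), X[m]])  # intercept + regressors
        betas.append(np.linalg.lstsq(Xg, y[m], rcond=None)[0])
    return np.mean(betas, axis=0), np.array(betas)      # averaged and per-country estimates

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y - yhat) / y))

# Hypothetical panel: N = 5 countries observed for T = 9 years (n = 45).
rng = np.random.default_rng(0)
N, T = 5, 9
groups = np.repeat(np.arange(N), T)
X = rng.normal(size=(N * T, 2))                         # stand-ins for X1, X2
slopes = rng.normal([1.0, 0.5], 0.2, size=(N, 2))       # heterogeneous slopes per country
y = 10.0 + np.einsum("ij,ij->i", X, slopes[groups]) + rng.normal(0, 0.1, N * T)

beta_mg, beta_i = mean_group(y, X, groups)
yhat = np.column_stack([np.ones(N * T), X]) @ beta_mg
print("MG coefficients:", np.round(beta_mg, 3))
print("MAPE: %.2f%%" % mape(y, yhat))
```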
Classical principal component analysis is sensitive to outliers because it is computed from the eigenvalues and eigenvectors of a non-robust correlation or covariance matrix, which yields incorrect results when the data contain outlying values. To treat this problem we resort to robust methods, of which there are many; some of them are touched on here. The robust estimators considered include direct robust estimation of the eigenvalues through the eigenvectors, without relying on robust estimators of the variance-covariance matrices. Also, the analysis of the principal…
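As one common robust variant (a sketch for illustration, not necessarily the exact estimators studied in this work), principal components can be computed from a robust covariance estimate such as the minimum covariance determinant (MCD) rather than the classical covariance matrix:

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0, 0], np.diag([3.0, 1.0, 0.5]), size=200)
X[:10] += 25.0                                   # contaminate with a few gross outliers

# Classical PCA: eigen-decomposition of the ordinary covariance matrix.
evals_c, evecs_c = np.linalg.eigh(np.cov(X, rowvar=False))

# Robust PCA: same decomposition, but of the MCD covariance estimate,
# which downweights the outlying rows.
mcd = MinCovDet(random_state=0).fit(X)
evals_r, evecs_r = np.linalg.eigh(mcd.covariance_)

print("classical eigenvalues:   ", np.round(evals_c[::-1], 2))
print("robust (MCD) eigenvalues:", np.round(evals_r[::-1], 2))
```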
This paper describes a number of new interleaving strategies based on the golden section. The new interleavers are called golden relative prime interleavers, golden interleavers, and dithered golden interleavers. The latter two approaches involve sorting a real-valued vector derived from the golden section. Random and so-called “spread” interleavers are also considered. Turbo-code performance results are presented and compared for the various interleaving strategies. Of the interleavers considered, the dithered golden interleaver typically provides the best performance, especially for low code rates and large block sizes. The golden relative prime interleaver is shown to work surprisingly well for high puncture rates. These interleavers…
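A minimal sketch of the golden relative prime idea, assuming the simplest construction: step through the block with an increment derived from the golden section g = (√5 − 1)/2, adjusted to be relatively prime to the block length (the published construction, with its dithering and index offsets, is more elaborate):

```python
from math import gcd, sqrt

def golden_relative_prime_interleaver(N, start=0):
    """Permute indices 0..N-1 by stepping with an increment p derived from
    the golden section, nudged so that gcd(p, N) == 1 (hence a permutation)."""
    g = (sqrt(5) - 1) / 2                 # golden section value, about 0.618
    p = max(1, round(N * g))              # ideal real-valued increment, rounded
    while gcd(p, N) != 1:                 # adjust until relatively prime to N
        p += 1
    return [(start + i * p) % N for i in range(N)]

pi = golden_relative_prime_interleaver(16)
print(pi)                                 # a permutation of 0..15
assert sorted(pi) == list(range(16))
```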
In this research we investigate the contents of the electronic cigarette (vape) and discuss the emergence of the phenomenon of electronic smoking (vaping). Although smoking is one of the oldest topics on which many articles and studies have been written, electronic smoking has not been studied through statistical scientific research; in this research we try to define the concept of electronic smoking for the sampled data and to treat it in a scientific way. The research includes a statistical analysis using factor analysis on a sample taken randomly from some colleges in Bab Al-Muadham in Baghdad, with a size of 70 observations, where the KMO and Bartlett tests…
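As a sketch of the two sampling-adequacy checks named above, Bartlett's test of sphericity and the overall KMO measure can be computed directly from their standard formulas; the 70 × 6 data matrix below is hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's test that the correlation matrix is the identity."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stat, chi2.sf(stat, df)

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (overall value)."""
    R = np.corrcoef(X, rowvar=False)
    A = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(A), np.diag(A)))
    Q = -A / d                                    # partial correlation matrix
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(Q, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (Q ** 2).sum())

rng = np.random.default_rng(2)                    # hypothetical 70 x 6 survey matrix
F = rng.normal(size=(70, 2))                      # two latent factors
X = F @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(70, 6))

stat, pval = bartlett_sphericity(X)
print("Bartlett chi2 = %.1f, p = %.3g, KMO = %.2f" % (stat, pval, kmo(X)))
```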
We present the exponentiated expanded power function (EEPF) distribution with four parameters. This distribution is created by the exponentiation method introduced by Gupta, which expands a distribution by adding a new shape parameter to its cumulative distribution function, resulting in a new distribution; the method is characterized by producing a distribution that belongs to the exponential family. We also obtain the survival and failure-rate functions for this distribution and derive some of its mathematical properties, then we use the maximum likelihood (ML) method and the developed least squares (LSD) method…
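The four-parameter EEPF form itself is not reproduced in this excerpt, but Gupta's exponentiation device is simple to state in general. Assuming a baseline CDF F(x) with density f(x), adding the shape parameter θ gives the exponentiated family, whose survival and failure-rate functions follow immediately:

```latex
G(x) = \bigl[F(x)\bigr]^{\theta}, \qquad \theta > 0,
\qquad
g(x) = \theta \bigl[F(x)\bigr]^{\theta - 1} f(x),

S(x) = 1 - \bigl[F(x)\bigr]^{\theta},
\qquad
h(x) = \frac{g(x)}{S(x)}
     = \frac{\theta \bigl[F(x)\bigr]^{\theta - 1} f(x)}{1 - \bigl[F(x)\bigr]^{\theta}}.
```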
Volterra-Fredholm integral equations (VFIEs) have attracted massive interest from researchers recently. The current study suggests a collocation method for mixed Volterra-Fredholm integral equations (MVFIEs): a point interpolation collocation method built by combining radial and polynomial basis functions over the collocation points. The main purpose of combining the radial and polynomial basis functions is to overcome the singularity that can be associated with collocation methods. The resulting interpolation function passes through every scattered point in the domain, so the shape functions possess the delta function (Kronecker delta) property. The exact solutions of selected examples were compared with the results obtained…
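A minimal sketch of the combined radial-polynomial point interpolation underlying such shape functions, using a multiquadric radial basis plus a linear polynomial with the standard constraint block (the kernel, its shape constant, and the nodes are assumptions for illustration); the final line checks the delta-function property, i.e. the interpolant reproduces the nodal values exactly:

```python
import numpy as np

def rbf_poly_interpolate(x_nodes, u_nodes, x_eval, c=1.0):
    """Point interpolation with multiquadric RBFs augmented by a linear
    polynomial; the saddle-point system enforces polynomial reproduction."""
    r = x_nodes[:, None] - x_nodes[None, :]
    R = np.sqrt(r ** 2 + c ** 2)                 # multiquadric moment matrix
    P = np.column_stack([np.ones_like(x_nodes), x_nodes])
    n, m = len(x_nodes), P.shape[1]
    G = np.block([[R, P], [P.T, np.zeros((m, m))]])
    coef = np.linalg.solve(G, np.concatenate([u_nodes, np.zeros(m)]))
    a, b = coef[:n], coef[n:]
    Re = np.sqrt((x_eval[:, None] - x_nodes[None, :]) ** 2 + c ** 2)
    Pe = np.column_stack([np.ones_like(x_eval), x_eval])
    return Re @ a + Pe @ b

x = np.linspace(0.0, 1.0, 9)                     # scattered nodes (here uniform)
u = np.sin(2 * np.pi * x)
print(np.round(rbf_poly_interpolate(x, u, x, c=0.3) - u, 10))  # ~0 at every node
```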
The Dagum regression model, introduced to address limitations in traditional econometric models, provides enhanced flexibility for analyzing data characterized by heavy tails and asymmetry, which are common in income and wealth distributions. This paper develops and applies the Dagum model, demonstrating its advantages over other distributions such as the log-normal and gamma distributions. The model's parameters are estimated using maximum likelihood estimation (MLE) and the method of moments (MoM). A simulation study evaluates both methods' performance across various sample sizes, showing that MoM tends to offer more robust and precise estimates, particularly in small samples. These findings provide valuable insights into the analysis…
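As a sketch of the MLE step, assuming the three-parameter Dagum density f(x) = a p x^{ap−1} / (b^{ap} (1 + (x/b)^a)^{p+1}) (the paper's regression structure and MoM estimator are omitted), the likelihood can be maximized numerically; note that SciPy's `burr` distribution is Burr Type III, which coincides with the standardized Dagum:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import burr                      # Burr III == standardized Dagum

def dagum_negloglik(theta, x):
    a, b, p = np.exp(theta)                       # log-parametrization keeps a, b, p > 0
    z = x / b
    ll = (np.log(a * p) + (a * p - 1) * np.log(x) - a * p * np.log(b)
          - (p + 1) * np.log1p(z ** a))
    return -ll.sum()

rng = np.random.default_rng(3)
x = burr.rvs(c=3.0, d=2.0, scale=1.5, size=500, random_state=rng)  # a=3, p=2, b=1.5

res = minimize(dagum_negloglik, x0=np.zeros(3), args=(x,), method="Nelder-Mead")
print("MLE estimates (a, b, p):", np.round(np.exp(res.x), 3))
```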
The study aims at building a mathematical model for aggregate production planning at the Baghdad Soft Drinks Company. The study is based on a set of aggregate planning strategies (control of working hours, and a storage-level control strategy) for the purpose of exploiting the available resources and productive capacities in an optimal manner and minimizing production costs, using the Matlab program. The most important finding of the research is the importance of exploiting the available production capacity by storing in the months when demand is below the available capacity, for use in the subsequent months when demand exceeds the available capacity, and of minimizing the use of overtime…
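A minimal sketch of the storage-level strategy as a linear program, written in Python with scipy rather than Matlab; the demand, capacity, and cost figures are hypothetical, not the company's. Regular production, overtime, and end-of-month inventory are chosen to minimize total cost subject to the inventory-balance equations:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 6-month instance: demand d_t, regular capacity, and unit costs.
d = np.array([80, 90, 120, 150, 110, 70], dtype=float)
T, cap, ot_cap = len(d), 100.0, 30.0
c_reg, c_ot, c_inv = 5.0, 8.0, 1.0            # regular, overtime, holding cost per unit

# Variables per month: [P_t (regular), O_t (overtime), I_t (inventory)], stacked.
c = np.tile([c_reg, c_ot, c_inv], T)
A_eq = np.zeros((T, 3 * T))
b_eq = d.copy()
for t in range(T):                             # balance: I_{t-1} + P_t + O_t - I_t = d_t
    A_eq[t, 3 * t:3 * t + 3] = [1.0, 1.0, -1.0]
    if t > 0:
        A_eq[t, 3 * (t - 1) + 2] = 1.0
bounds = [(0, cap), (0, ot_cap), (0, None)] * T

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("columns: regular, overtime, inventory")
print(np.round(res.x.reshape(T, 3), 1))        # builds stock before the demand peak
```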
The research took up the spatial autoregressive model (SAR) and the spatial error model (SEM) in an attempt to provide practical evidence of the importance of spatial analysis, with a particular focus on the importance of using spatial regression models, which include the spatial dependence whose presence or absence can be tested with Moran's test. Ignoring this dependence may lead to the loss of important information about the phenomenon under study, which is ultimately reflected in the power of the statistical estimation, as these models are the link between the usual regression models and time-series models. The spatial analysis was applied to the Iraq Household Socio-Economic Survey (IHS…)
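A minimal sketch of the Moran statistic used to test for spatial dependence, I = (n/S0)·(eᵀWe)/(eᵀe) for demeaned residuals e and spatial weights W; the 5 × 5 lattice and rook-contiguity weights are hypothetical:

```python
import numpy as np

def morans_i(e, W):
    """Moran's I for a residual vector e and spatial weight matrix W."""
    e = e - e.mean()
    return (len(e) / W.sum()) * (e @ W @ e) / (e @ e)

rng = np.random.default_rng(4)
n = 25                                          # hypothetical 5 x 5 lattice of regions
W = np.zeros((n, n))
for i in range(n):                              # rook-contiguity neighbours on the grid
    r, c = divmod(i, 5)
    for rr, cc in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
        if 0 <= rr < 5 and 0 <= cc < 5:
            W[i, 5 * rr + cc] = 1.0

e = rng.normal(size=n)                          # residuals with no spatial structure
e_sp = e + 0.8 * (W / W.sum(axis=1, keepdims=True)) @ e  # induce spatial dependence
print("Moran's I, independent residuals: %.3f" % morans_i(e, W))
print("Moran's I, spatially lagged:      %.3f" % morans_i(e_sp, W))
```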
This research aims to analyze and simulate real biochemical test data in order to uncover the relationships among the tests and how each of them impacts the others. The data were acquired from an Iraqi private biochemical laboratory; however, they have many dimensions, a high rate of null values, and a large number of patients. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as linear discriminant analysis (LDA), classification and regression trees (CART), logistic regression (LR), k-nearest neighbors (K-NN), and naïve Bayes (NB)…
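A sketch of the supervised comparison step with scikit-learn; the laboratory data are private, so a bundled dataset stands in, and the imputation step mirrors the null-value preprocessing described:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(),
    "NB": GaussianNB(),
}

X, y = load_breast_cancer(return_X_y=True)       # stand-in for the lab data
for name, model in models.items():
    # Impute nulls and standardize, mirroring the preprocessing step described.
    pipe = make_pipeline(SimpleImputer(strategy="median"), StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print("%-5s accuracy: %.3f +/- %.3f" % (name, scores.mean(), scores.std()))
```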