Chemical pollution is a pressing issue that harms public health and threatens the health of future generations. Consequently, it must be studied in order to build suitable models and find descriptions that predict its behavior in the coming years. Chemical pollution data in Iraq cover a wide scope with manifold sources and kinds, which characterizes them as Big Data requiring novel statistical methods. This research focuses on using a proposed nonparametric (NP) procedure to develop an (OCMT) test procedure for estimating the parameters of a linear regression model on a large dataset (Big Data) comprising many indicators associated with chemical pollution that profoundly affect the lives of the Iraqi people. The SICA estimator was chosen to analyze the data, and the mean squared error (MSE) was used to compare the two methods; we conclude that the NP estimator is more effective than the other estimators under Big Data conditions.
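The MSE comparison of coefficient estimators described above can be sketched on simulated data. This is a minimal illustration only: plain OLS stands in for the paper's NP and SICA procedures, which are not reproduced here, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear regression data (stand-in for the pollution indicators).
n, p = 1000, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.0, 0.7, 3.1])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Ordinary least squares estimate (illustrative baseline estimator).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Mean squared error of the coefficient estimates against the truth --
# the same criterion the study uses to rank competing estimators.
mse = np.mean((beta_hat - beta_true) ** 2)
print(f"MSE of OLS coefficients: {mse:.6f}")
```

In the study's setting the same MSE criterion would be computed for each candidate estimator and the smallest value would indicate the preferred method.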
Multicollinearity is one of the most common problems in regression analysis; it concerns, to a large extent, the internal correlation between explanatory variables and appears especially in economics and applied research. Multicollinearity has a negative effect on the regression model, such as inflated variances and unstable parameter estimates, when the ordinary least squares (OLS) method is used. Therefore, other methods were used to estimate the parameters of the negative binomial model, including the Ridge Regression estimator and the Liu-type estimator. The negative binomial regression model is a nonlinear model.
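How a ridge-type penalty stabilizes estimates under multicollinearity can be sketched with plain linear ridge regression; note this is a generic stand-in, not the paper's negative binomial ridge or Liu-type estimators, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two nearly collinear explanatory variables.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # almost a copy of x1
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

# OLS: solve (X'X) b = X'y -- unstable because X'X is nearly singular,
# so the individual coefficients can land far from (2, 1).
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: solve (X'X + kI) b = X'y -- the penalty k shrinks the
# coefficients toward each other and stabilizes the estimate.
k = 1.0
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)

print("OLS:  ", beta_ols)
print("Ridge:", beta_ridge)
```

Because x1 and x2 are nearly identical, only their coefficient sum (about 3) is well identified; ridge splits it into two moderate, stable values, illustrating why penalized estimators are preferred here.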
The logistic regression model is an important statistical model that describes the relationship between a binary response variable and explanatory variables. The large number of explanatory variables usually used to explain the response gives rise to multicollinearity among them, which makes the estimation of the model's parameters inaccurate.
This article aims to estimate the partially linear model using two methods: the wavelet smoother and the kernel smoother. Simulation experiments are used to study the small-sample behavior under different functions, sample sizes, and variances. The results show that the wavelet smoother is best according to the mean squared error criterion in all cases considered.
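A kernel smoother of the kind compared in the article can be sketched with a simple Nadaraya-Watson estimator; this is a generic illustration on synthetic data, not the authors' exact implementation, and the bandwidth and test function are arbitrary choices.

```python
import numpy as np

def kernel_smooth(x_train, y_train, x_eval, bandwidth=0.05):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    # Scaled pairwise distances between evaluation and training points.
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)                  # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)     # locally weighted average

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=200)

grid = np.linspace(0.05, 0.95, 50)
y_hat = kernel_smooth(x, y, grid)

# Mean squared error against the true regression function on the grid --
# the criterion used in the article to compare the two smoothers.
mse = np.mean((y_hat - np.sin(2 * np.pi * grid)) ** 2)
print(f"MSE of kernel smoother: {mse:.4f}")
```

In the article's simulations, the same MSE criterion is evaluated for both the kernel and wavelet smoothers across functions, sample sizes, and variances.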
The study was conducted from November 2021 to May 2022 at three study sites within the Baghdad governorate. The study aims to identify the impact of human activities on the Tigris River, so an area free of human activities was chosen as the first site. A total of 48 taxa were identified, with 6204 ind/m3 spread over the three sites. The following environmental indices were evaluated: Constancy Index (S), Relative Abundance Index (Ra), Richness Index (between 17.995 and 23.251), Shannon-Wiener Index (0.48-1.25 bit/ind.), and Uniformity Index (0.124-0.323). The study showed that the highest percentage recorded was for the phylum Annelida, 34%; and the constancy index shows that the taxa (Stylaria sp., Aeolosoma sp., Branchiura sowerbyi, Ch
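The diversity indices reported above can be computed directly from taxon counts: the Shannon-Wiener index H' = -sum(p_i * log2 p_i), in bits per individual as reported here, and the uniformity (evenness) index H'/log2(S). A minimal sketch, using made-up counts rather than the study's actual data:

```python
import numpy as np

def shannon_wiener(counts):
    """Shannon-Wiener diversity H' in bits per individual (log base 2)."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()          # relative abundance of each taxon
    p = p[p > 0]                       # absent taxa contribute 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

def evenness(counts):
    """Uniformity index: H' divided by its maximum possible value log2(S)."""
    s = np.count_nonzero(counts)       # number of taxa present
    return shannon_wiener(counts) / np.log2(s)

# Hypothetical counts for four taxa at one site (illustrative only).
counts = [120, 40, 25, 15]
print(f"H' = {shannon_wiener(counts):.3f} bit/ind")
print(f"E  = {evenness(counts):.3f}")
```

Low H' and evenness values, like the 0.48-1.25 bit/ind range above, indicate a community dominated by a few taxa rather than an even spread of individuals.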
The purpose of this study is mainly to improve the competitive position of economic units' products using the target cost technique and the reverse engineering method, by applying them to one of the public sector companies (the General Company for Vegetable Oils). These tools are important for detecting the prices accepted in the market for similar products and for addressing the problem of high cost, drawing the attention of managerial and technical leadership to weaknesses that need to be improved through the introduction of new, innovative solutions that make appropriate changes to satisfy consumers' needs more cheaply and influence customers' decisions to buy, especially purchases by private economic units
A new time series model is derived, and the methodology is given in detail. The model is constructed based on measurement criteria, namely the Akaike and Bayesian information criteria. A new algorithm has been generated for the new time series model. The forecasting process, one and two steps ahead, is discussed in detail. Some exploratory data analysis is given at the beginning. The best model is selected based on these criteria and compared with some naïve models. The modified model is applied to a monthly chemical sales dataset (January 1992 to December 2019), downloaded from the United States census website (www.census.gov). Ultimately, the forecasted sales
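Model selection by information criteria, as used above, can be sketched by fitting autoregressive models of increasing order and choosing the lowest AIC. This is a generic illustration on a simulated AR(2) series, not the authors' model or the census sales data.

```python
import numpy as np

def ar_aic(y, p):
    """AIC of a least-squares AR(p) fit with intercept."""
    n = len(y) - p
    # Design matrix: intercept plus lags 1..p of the series.
    X = np.column_stack(
        [np.ones(n)] + [y[p - 1 - j : len(y) - 1 - j] for j in range(p)]
    )
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    sigma2 = np.mean((y[p:] - X @ beta) ** 2)   # residual variance
    # Penalty counts p lag coefficients, the intercept, and the variance.
    return n * np.log(sigma2) + 2 * (p + 2)

# Simulate an AR(2) series as a stand-in for the monthly sales data.
rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

aics = {p: ar_aic(y, p) for p in range(1, 6)}
best = min(aics, key=aics.get)          # order with the lowest AIC
print("AIC by order:", {p: round(a, 1) for p, a in aics.items()})
print("Selected order:", best)
```

BIC works the same way with the penalty 2*(p + 2) replaced by (p + 2) * log(n), which penalizes extra parameters more heavily and tends to pick more parsimonious models.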