Chemical pollution is a pressing issue that affects public health and the health of future generations. It must therefore be studied in order to build suitable models and find descriptions that predict its behavior in the coming years. Chemical pollution data in Iraq cover a great scope and have manifold sources and kinds, which marks them as Big Data that need to be studied with novel statistical methods. This research focuses on using a proposed nonparametric procedure (the NP method) to develop an OCMT test procedure for estimating the parameters of a linear regression model from a large dataset (Big Data) comprising many indicators that are associated with chemical pollution and profoundly affect the lives of the Iraqi people. The SICA estimator was chosen to analyze the data, and the MSE was used to compare the two methods; we conclude that the NP estimator is more effective than the other estimators under Big Data circumstances.
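The MSE comparison used above can be illustrated with a small simulation sketch. This is purely illustrative and does not reproduce the NP or SICA estimators: it compares ordinary least squares against a toy ridge-like shrinkage estimator of a single slope, using the same mean squared error criterion.

```python
import random
import statistics

# Illustrative only: compare two hypothetical slope estimators by MSE,
# the criterion used in the study. TRUE_BETA and the shrinkage constant
# are arbitrary choices for the sketch.
random.seed(1)
TRUE_BETA = 2.0

def simulate_once(n=200):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [TRUE_BETA * xi + random.gauss(0, 1) for xi in x]
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b_ols = sxy / sxx              # ordinary least squares slope
    b_shrunk = sxy / (sxx + 5.0)   # toy shrinkage (ridge-like) estimator
    return b_ols, b_shrunk

def mse(estimates):
    return statistics.fmean((b - TRUE_BETA) ** 2 for b in estimates)

reps = [simulate_once() for _ in range(500)]
mse_ols = mse(b for b, _ in reps)
mse_shrunk = mse(b for _, b in reps)
print(mse_ols, mse_shrunk)
```

The estimator with the smaller MSE over the replications is judged more effective, which is the logic of the comparison reported in the abstract.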
The logistic regression model is an important statistical model that describes the relationship between a binary response variable and a set of explanatory variables. The large number of explanatory variables typically used to explain the response gives rise to multicollinearity among them, which makes the estimates of the model's parameters inaccurate.
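The multicollinearity problem described above is commonly diagnosed with the variance inflation factor (VIF). The sketch below is not from the article; it builds two deliberately correlated predictors and computes the two-predictor VIF, 1 / (1 - r^2), where r is their correlation.

```python
import random

# Sketch (illustrative data): VIF as a multicollinearity diagnostic.
random.seed(0)
n = 500
x1 = [random.gauss(0, 1) for _ in range(n)]
# x2 is built to be almost collinear with x1
x2 = [0.95 * a + 0.05 * random.gauss(0, 1) for a in x1]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = corr(x1, x2)
vif = 1.0 / (1.0 - r ** 2)
print(round(vif, 1))   # a VIF far above 10 signals severe multicollinearity
```

When VIFs are large, coefficient estimates become unstable, which is exactly the inaccuracy the abstract refers to.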
This article aims to estimate the partially linear model using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study small-sample behavior under different functions, sample sizes, and variances. The results show that the wavelet smoother is the best according to the mean average squared error criterion in all the cases considered.
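A minimal sketch of the kernel smoother, one of the two methods compared above (the wavelet smoother is not sketched here), is the Nadaraya-Watson estimator with a Gaussian kernel. The bandwidth, test function, and noise level below are illustrative choices, not the article's simulation settings.

```python
import math
import random

# Nadaraya-Watson kernel smoother with a Gaussian kernel (illustrative).
def kernel_smooth(x, y, x0, h=0.05):
    weights = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    s = sum(weights)
    return sum(w * yi for w, yi in zip(weights, y)) / s

random.seed(2)
x = [i / 100 for i in range(100)]
y = [math.sin(2 * math.pi * xi) + random.gauss(0, 0.2) for xi in x]
fitted = [kernel_smooth(x, y, xi) for xi in x]
# average squared error against the true function, mirroring the criterion
mase = sum((f - math.sin(2 * math.pi * xi)) ** 2
           for f, xi in zip(fitted, x)) / len(x)
print(round(mase, 4))
```

In a simulation study like the one described, this error would be averaged over many replications and compared against the wavelet smoother's error.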
Artificial intelligence algorithms have been used in many scientific fields in recent years. We suggest employing the Tabu Search algorithm to find the best estimate of the semiparametric regression function when there are measurement errors in the explanatory variables and the dependent variable; such errors, rather than exact measurements, appear frequently in fields such as sport, chemistry, the biological sciences, medicine, and epidemiological studies.
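The core idea of Tabu Search can be shown on a toy problem. This generic sketch is not the article's semiparametric estimator: it minimizes a one-dimensional objective over integers, keeping a short tabu list of recently visited points so the search is forced past a local minimum.

```python
from collections import deque

def f(x):
    # toy objective: global minimum at x = 2, a local minimum near x = 8
    return (x - 2) ** 2 * (x - 8) ** 2 / 10 + 0.1 * (x - 2) ** 2

def tabu_search(start, iters=50, tabu_size=5):
    current = best = start
    tabu = deque([start], maxlen=tabu_size)
    for _ in range(iters):
        # move to the best neighbor not on the tabu list, even if it is
        # worse; this is what lets the search climb out of local minima
        candidates = [n for n in (current - 1, current + 1) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=f)
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best

print(tabu_search(10))   # escapes the local minimum near 8 and reaches 2
```

In the estimation setting described above, the "points" would be candidate parameter values and f would be a fit criterion for the semiparametric model.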
Recently, Tobit quantile regression (TQR) has emerged as an important tool in statistical analysis. To improve parameter estimation in TQR, we propose a Bayesian hierarchical model with a double adaptive elastic net technique and a Bayesian hierarchical model with an adaptive ridge regression technique.
In the double adaptive elastic net technique, we assume different penalization parameters for penalizing different regression coefficients in both parameters λ1 and λ2; likewise, in the adaptive ridge regression technique, we assume different penalization parameters for penalizing different regression coefficients.
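The idea of a separate penalty per coefficient can be sketched in the non-Bayesian ridge case (the Bayesian hierarchical formulation itself is not reproduced here). With a penalty matrix Lambda = diag(lambda_1, ..., lambda_p), the adaptive ridge estimator has the closed form beta_hat = (X'X + Lambda)^{-1} X'y; all data and penalty values below are illustrative.

```python
import numpy as np

# Sketch: ridge regression with a separate penalty per coefficient.
rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, 0.0, -2.0])
y = X @ beta_true + rng.standard_normal(n)

# heavier penalty on the coefficient believed to be null (illustrative)
lambdas = np.array([0.1, 50.0, 0.1])
beta_hat = np.linalg.solve(X.T @ X + np.diag(lambdas), X.T @ y)
print(np.round(beta_hat, 2))
```

Allowing the penalties to differ shrinks weak coefficients strongly while leaving strong ones nearly unpenalized, which is the motivation for the adaptive schemes described above.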
The study was conducted from November 2021 to May 2022 at three study sites within the Baghdad governorate. The study aims to identify the impact of human activities on the Tigris River, so an area free of human activities was chosen to represent the first site. A total of 48 taxa were identified, with 6204 ind/m3 spread over the three sites. The following environmental indicators were evaluated: the constancy index (S), the relative abundance index (Ra), the richness index (between 17.995 and 23.251), the Shannon-Weiner index (0.48-1.25 bit/ind.), and the uniformity index (0.124-0.323). The study showed that the highest percentage recorded was for the phylum Annelida, 34%; and the constancy index shows that the taxa (Stylaria sp., Aoelosoma sp., Branchiura sowerbyi, Ch
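The Shannon-Weiner index reported above in bit/ind. uses base-2 logarithms: H = -sum(p_i * log2(p_i)) over the proportions of individuals per taxon, and uniformity (evenness) divides H by its maximum, log2(S) for S taxa. The counts below are hypothetical, not the study's data.

```python
import math

# Sketch with hypothetical counts of individuals per taxon.
counts = [120, 60, 30, 20, 10]
total = sum(counts)
# Shannon-Weiner diversity in bits per individual
H = -sum((c / total) * math.log2(c / total) for c in counts if c > 0)
# Uniformity (evenness): H relative to its maximum log2(S)
E = H / math.log2(len(counts))
print(round(H, 3), round(E, 3))
```

Low uniformity values like those reported (0.124-0.323) indicate that a few taxa, such as the dominant Annelida, account for most individuals.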
The logistic regression model is one of the oldest and most common regression models, and it is a statistical method used to describe and estimate the relationship between a dependent random variable and explanatory random variables. Several methods are used to estimate this model, including the bootstrap method, an estimation method based on the principle of sampling with replacement: a resample of n elements is drawn, with replacement, from the N original observations. It is a computational method used to measure the accuracy of estimated statistics, and for this reason it was used here to obtain more accurate estimates. The ma
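The resampling principle described above can be sketched as follows: draw n observations with replacement from the original data, refit the logistic model on each resample, and use the spread of the refitted coefficients as a measure of estimation accuracy. The simulated data, the one-predictor model, and the Newton (IRLS) fitting routine below are all illustrative choices, not the article's.

```python
import numpy as np

# Sketch: bootstrap standard error of a logistic regression slope.
rng = np.random.default_rng(1)

def fit_logistic(x, y, steps=25):
    # one-predictor logistic fit via a few Newton (IRLS) iterations
    beta = np.zeros(2)
    X = np.column_stack([np.ones_like(x), x])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

n = 300
x = rng.standard_normal(n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))).astype(float)
beta_full = fit_logistic(x, y)

B = 200
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)   # sampling with replacement
    boot[b] = fit_logistic(x[idx], y[idx])[1]
print(round(beta_full[1], 2), round(boot.std(ddof=1), 2))
```

The standard deviation of the bootstrap slopes estimates the standard error of the coefficient, which is the accuracy measure the abstract refers to.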