Grey system theory is a multidisciplinary scientific approach that deals with systems having partially unknown information (small samples and uncertain information). Grey modeling, an important component of the theory, gives successful results with a limited amount of data. Grey models are divided into two types: univariate and multivariate. The univariate grey model with a first-order differential equation, GM(1,1), is the cornerstone of the theory; it is considered a time-series prediction model, but it does not take relevant factors into account. The traditional multivariate grey model GM(1,M) does take those factors into account, but it has a complex structure and defects in its modeling mechanism, parameter estimation, and model structure, so it has undergone many optimization attempts aimed at removing these defects. This research presents the characteristics of the traditional GM(1,M), the problems it suffers from, and the methods for eliminating them, and then introduces two optimized multivariate grey models based on a first-order differential equation. The first, the Optimized Grey Model OGM(1,M), is obtained by adding a linear correction term h1(M−1) and a grey action quantity h2 to the traditional GM(1,M); the second, the Optimized Background value Grey Model OBGM(1,M), is obtained by optimizing the background value of OGM(1,M). Realistic data representing water consumption in Baghdad over the period 2016–2022 are used to compare the two optimized models with the traditional one, with the mean absolute percentage error (MAPE) and the coefficient of determination R² as comparison criteria. The results show that the two optimized models achieve lower error values than the traditional GM(1,M), which verifies the correctness of the defect analysis of GM(1,M).
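For reference, a sketch of the model forms involved, assuming the OGM(1,N)-style formulation found in the grey-modeling literature (we write the linear correction term as h_1(k−1), with k the time index, which is an assumption on our part; x^{(1)} denotes the first-order accumulated (1-AGO) series):

\[ \text{GM}(1,M):\quad \frac{dx_1^{(1)}(t)}{dt} + a\,x_1^{(1)}(t) = \sum_{i=2}^{M} b_i\,x_i^{(1)}(t), \]

\[ \text{OGM}(1,M):\quad x_1^{(0)}(k) + a\,z_1^{(1)}(k) = \sum_{i=2}^{M} b_i\,x_i^{(1)}(k) + h_1(k-1) + h_2, \]

where \(z_1^{(1)}(k) = \tfrac{1}{2}\big(x_1^{(1)}(k) + x_1^{(1)}(k-1)\big)\) is the background value; OBGM(1,M) replaces the fixed coefficient 1/2 with an optimized background-value coefficient. The comparison criterion MAPE is \(\mathrm{MAPE} = \frac{100\%}{n}\sum_{k=1}^{n} \big|\hat{x}(k)-x(k)\big| / x(k)\).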
Researchers have shown increasing interest in recent years in determining the optimum sample size that yields sufficient accuracy of estimation and high-precision parameters when evaluating a large number of tests in the field of diagnosis at the same time. In this research, two methods were used to determine the optimum sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is estimated from the sample size of each method in high-dimensional data using artificial intelligence, namely an artificial neural network (ANN), as it gives a high-precision estimate commensurate with the data.
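For background, the standard statement of the inequality behind the first method (the abstract does not give the exact form used): for independent random variables X_1, …, X_n with E[X_i] = 0, |X_i| ≤ a almost surely, and \(\sigma^2 = \frac{1}{n}\sum_i \operatorname{Var}(X_i)\),

\[ P\Big(\sum_{i=1}^{n} X_i \ge t\Big) \le \exp\Big(-\frac{n\sigma^2}{a^2}\, h\Big(\frac{at}{n\sigma^2}\Big)\Big), \qquad h(u) = (1+u)\ln(1+u) - u. \]

Setting the right-hand side to a target risk level and solving for n gives a minimum sample size for a prescribed estimation accuracy t.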
The log-logistic distribution is one of the important statistical distributions, as it can be applied in many fields, biological experiments, and other experiments, and its importance comes from the importance of determining the survival function of those experiments. This research makes a comparison between the maximum likelihood method, the least squares method, and the weighted least squares method for estimating the parameters and survival function of the log-logistic distribution, using the comparison criteria MSE, MAPE, and IMSE; the research was applied to real data for breast cancer patients. The results showed that the maximum likelihood method is best in the case of estimating the parameters.
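For reference, the survival function being estimated, in the standard scale–shape parametrization (α, β) (the paper's parametrization may differ):

\[ S(t) = 1 - F(t) = \big[1 + (t/\alpha)^{\beta}\big]^{-1}, \qquad f(t) = \frac{(\beta/\alpha)(t/\alpha)^{\beta-1}}{\big[1+(t/\alpha)^{\beta}\big]^{2}}, \quad t > 0. \]

The maximum likelihood method maximizes \(\prod_i f(t_i;\alpha,\beta)\) over the observed survival times, while the (weighted) least squares methods fit F to empirical plotting positions, as sketched after the next abstract.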
In this research, we study the inverse Gompertz distribution (IG) and estimate its survival function. The survival function was estimated using three methods (maximum likelihood, least squares, and percentile estimators) and the best method was chosen; the least squares method was found to be the best for estimating the survival function because it has the lowest IMSE for all sample sizes.
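A sketch of the least squares criterion in question, in the usual plotting-position form for distribution fitting (F is the model CDF, here that of the inverse Gompertz, and \(x_{(1)} \le \dots \le x_{(n)}\) are the order statistics):

\[ \hat{\theta} = \arg\min_{\theta} \sum_{i=1}^{n} \Big[ F\big(x_{(i)};\theta\big) - \frac{i}{n+1} \Big]^2 . \]

The percentile method instead matches theoretical quantiles to the order statistics, solving \(F^{-1}\big(\frac{i}{n+1};\theta\big) \approx x_{(i)}\) in the least squares sense.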
Nitrogen (N) is a key growth- and yield-limiting factor in cultivated rice areas. This study was conducted to evaluate the effects of different conditions of N application on rice yield and yield components (Shiroudi cultivar) in Babol (Mazandaran, Iran) during the 2015–2016 season. A factorial experiment was executed in a Randomized Complete Block Design (RCBD) with three replications. The first factor comprised four N amounts (50, 90, 130, and 170 kg N ha−1), while the second factor consisted of four different fertilizer splitting methods, including T1: 70 % at the basal stage + 30 % at the maximum tillering stage, T2: 1/3 at the basal stage + 1/3 at the maximum tillering stage …
Acceptance sampling plans for the generalized exponential distribution, when the life test is truncated at a pre-determined time, are provided in this article. The two parameters (α, λ) (the shape and scale parameters) are estimated by LSE, WLSE, and the best estimators for various sample sizes; the estimates are used to find the ratio of the true mean time to the pre-determined time, and to find the smallest possible sample size required to ensure the producer's risk, with a pre-fixed probability (1 − P*). The results of the estimations and of the sampling plans are provided in tables.
Keywords: Generalized Exponential Distribution, Acceptance Sampling Plan, Consumer's and Producer's Risks
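A minimal sketch of how such a plan can be computed, assuming the usual single-sampling formulation for truncated life tests: the generalized exponential CDF F(x) = (1 − e^{−λx})^α gives the probability p of failure by the truncation time, and the smallest n is sought whose lot-acceptance probability does not exceed 1 − P*. Function and parameter names here are ours, not the paper's:

```python
import math
from scipy.stats import binom

def ge_cdf(x, alpha, lam):
    # CDF of the generalized exponential distribution: (1 - e^{-lam*x})^alpha
    return (1.0 - math.exp(-lam * x)) ** alpha

def min_sample_size(p_star, c, t, alpha, lam, n_max=10_000):
    # Smallest n such that the probability of accepting a lot
    # (at most c failures by the truncation time t) is <= 1 - P*.
    p = ge_cdf(t, alpha, lam)          # failure probability by time t
    for n in range(c + 1, n_max + 1):
        if binom.cdf(c, n, p) <= 1.0 - p_star:
            return n
    return None

# Example: confidence P* = 0.95, acceptance number c = 2.
print(min_sample_size(p_star=0.95, c=2, t=0.5, alpha=2.0, lam=2.0))
```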
Dust is a common cause of health risks and also a cause of climate change, one of the most threatening problems to humans. In the recent decade, climate change in Iraq, typified by increased droughts and desertification, has generated numerous environmental issues. This study forecasts dust in five central Iraqi districts using machine learning, in a supervised-learning framework of five regression algorithms. It was assessed using a dataset from the Iraqi Meteorological Organization and Seismology (IMOS). Simulation results show that the gradient boosting regressor (GBR) has a mean square error of 8.345 and a total accuracy ratio of 91.65 %. Moreover, the results show that the decision tree (DT), whose mean square error is 8.965, …
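A minimal sketch of this kind of pipeline, with synthetic data standing in for the IMOS records (the feature layout, target relationship, and train/test split are illustrative, not the IMOS schema):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in: four weather features (e.g. temperature, humidity,
# wind speed, pressure) driving a dust-level target.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))
y = 10 * X[:, 0] - 5 * X[:, 1] + 3 * X[:, 2] * X[:, 3] + rng.normal(0, 0.5, 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MSE:", mean_squared_error(y_test, gbr.predict(X_test)))
```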
This paper presents the grey model GM(1,1), a first-order model in one variable that is the basis of grey system theory. The research addresses the properties of the grey model and a set of methods for estimating the parameters of GM(1,1): the least squares method (LS), the weighted least squares method (WLS), the total least squares method (TLS), and the gradient descent method (GD). These methods were compared on the basis of two criteria, the mean square error (MSE) and the mean absolute percentage error (MAPE); after comparison using simulation, the best method was applied to real data represented by the rate of consumption of two types of oil, heavy fuel oil (HFO) and diesel fuel (D.O), and several tests were applied to …
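A minimal sketch of GM(1,1) with ordinary least squares estimation of the parameters (a, b) (the WLS, TLS, and gradient descent variants replace only the estimation step; variable names are ours):

```python
import numpy as np

def gm11_fit_predict(x0, horizon=3):
    # GM(1,1): fit on the original series x0 and forecast `horizon` steps.
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # LS estimate of (a, b)
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    return np.diff(x1_hat, prepend=0.0)          # back to the original scale

# Fitted values followed by two forecast steps for a short series.
print(gm11_fit_predict([2.87, 3.28, 3.34, 3.39, 3.68], horizon=2))
```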
Infrastructure, especially wastewater projects, plays an important role in the life of residential communities. Due to increasing population growth, there is also a significant increase in residential and commercial facilities. This research aims to develop two models for predicting the cost and time of wastewater projects from the independent variables affecting them. These variables were determined through a questionnaire distributed to 20 projects under construction in Al-Kut City, Wasit Governorate, Iraq. The researcher used artificial neural network techniques to develop the models. The results showed that the correlation coefficients R between actual and predicted values were 99.4 % and 99 %, and MAPE was …
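A minimal sketch of the modeling approach, with sklearn's MLPRegressor standing in for whatever network architecture the paper used and synthetic data standing in for the questionnaire variables:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the project data: a few numeric project
# features mapped to a cost target (20 projects, as in the paper).
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 4))
cost = X @ np.array([3.0, 1.5, 0.5, 2.0]) + rng.normal(0, 0.05, 20)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, cost)
pred = model.predict(X)

print("R:", np.corrcoef(cost, pred)[0, 1])
print("MAPE (%):", np.mean(np.abs((cost - pred) / cost)) * 100)
```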
In this paper, the problem of point estimation for the two parameters of the logistic distribution has been investigated using simulation. The ranked set sampling estimator method, which is a non-Bayesian procedure, and the Lindley approximation estimator method, which is a Bayesian method, were used to estimate the parameters of the logistic distribution. The two methods were compared by employing the mean square error (MSE) and the mean absolute percentage error (MAPE) measures. Finally, simulation was used to generate samples of many sizes to compare these methods.
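A minimal sketch of the ranked set sampling draw that underlies the non-Bayesian estimator, with perfect ranking assumed; scipy's ordinary MLE is used here only as a stand-in for the paper's RSS estimator, and the Lindley approximation on the Bayesian side is omitted:

```python
import numpy as np
from scipy.stats import logistic

def ranked_set_sample(loc, scale, m, rng):
    # One RSS cycle: draw m sets of m units, rank each set,
    # and keep the i-th smallest unit from the i-th set.
    sets = logistic.rvs(loc=loc, scale=scale, size=(m, m), random_state=rng)
    return np.sort(sets, axis=1)[np.arange(m), np.arange(m)]

rng = np.random.default_rng(1)
# Pool several cycles, then fit the two logistic parameters by MLE.
sample = np.concatenate([ranked_set_sample(2.0, 1.0, m=5, rng=rng)
                         for _ in range(40)])
loc_hat, scale_hat = logistic.fit(sample)
print(loc_hat, scale_hat)
```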
In this paper, we study a nonparametric model in which the response variable has missing observations (non-response) under the MCAR missing-data mechanism. We suggest kernel-based nonparametric single imputation in place of the missing values and compare it with nearest neighbor imputation by simulation over several different models and cases of sample size, variance, and rate of missing data.
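A minimal sketch of the suggested imputation, assuming a Nadaraya–Watson style smoother over a single covariate with a Gaussian kernel (the kernel and bandwidth choices are ours):

```python
import numpy as np

def kernel_impute(x, y, h=0.3):
    # Fill missing y values with a Nadaraya-Watson estimate built from
    # the observed (x, y) pairs; Gaussian kernel with bandwidth h.
    y = y.copy()
    obs = ~np.isnan(y)
    for i in np.where(~obs)[0]:
        w = np.exp(-0.5 * ((x[i] - x[obs]) / h) ** 2)
        y[i] = np.sum(w * y[obs]) / np.sum(w)
    return y

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 100)
y[rng.choice(100, 20, replace=False)] = np.nan   # ~20% MCAR non-response
print(kernel_impute(x, y)[:5])
```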