Dust is a common cause of health risks and a contributor to climate change, one of the most threatening problems facing humans. Over the recent decade, climate change in Iraq, typified by increased drought and desertification, has generated numerous environmental issues. This study forecasts dust in five central Iraqi districts using a supervised machine learning framework built on five regression algorithms, assessed on a dataset from the Iraqi Meteorological Organization and Seismology (IMOS). Simulation results show that the gradient boosting regressor (GBR) achieves a mean square error of 8.345 and an overall accuracy ratio of 91.65%. The decision tree (DT) comes second, with a mean square error of 8.965 and an overall ratio of 91%, followed by Bayesian ridge (BR), linear regression (LR), and stochastic gradient descent (SGD) with accuracy ratios of 84.365%, 84.363%, and 79%, respectively. These results characterize the predictive precision of the compared regression models. An interaction framework was designed as a straightforward tool for working with this paradigm, and the resulting model is a valuable aid in establishing strategies to counter the pace of climate change in the study area.
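As an illustration of the comparison described above, the following sketch fits the five named regressors with scikit-learn and reports test-set mean square error. The file name imos_dust.csv, the column names, and the train/test split are assumptions for illustration, not details taken from the IMOS dataset itself.

```python
# Illustrative sketch only: compares the five regressors named in the abstract on a
# generic meteorological feature matrix. File and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression, BayesianRidge, SGDRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

df = pd.read_csv("imos_dust.csv")            # hypothetical file of IMOS observations
X = df.drop(columns=["dust_level"])          # hypothetical predictor columns
y = df["dust_level"]                         # hypothetical target: observed dust level

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "GBR": GradientBoostingRegressor(random_state=0),
    "DT": DecisionTreeRegressor(random_state=0),
    "BR": make_pipeline(StandardScaler(), BayesianRidge()),
    "LR": make_pipeline(StandardScaler(), LinearRegression()),
    "SGD": make_pipeline(StandardScaler(), SGDRegressor(random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: MSE = {mse:.3f}")
```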
Abstract
The research aims to identify the level of effectiveness of the teaching practices of science and mathematics teachers in light of the national framework for future skills in Omani schools. To achieve the objectives of the study, the researchers used the descriptive approach and designed an observation card consisting of (30) statements distributed across three axes: basic skills, practical skills, and technical skills. After verifying the validity and reliability of the tool, it was applied to a sample of (116) teachers. The results of the research revealed that the level of effectiveness of the teaching practices of mathematics teachers recorded a medium degree, with a mean of (3.05). The results…
In this paper we estimate the coefficients and scale parameter of a linear regression model whose residuals follow the type-1 extreme value distribution for largest values. This can be regarded as an improvement over studies based on smallest values. We study two estimation methods (OLS and MLE), resorting to the Newton-Raphson (NR) and Fisher scoring methods to obtain the MLE because of the difficulty of applying the usual approach directly. The relative efficiency criterion is considered alongside the statistical inference procedures for the type-1 extreme value regression model for largest values: confidence intervals and hypothesis testing for both the scale parameter and the regression coefficients…
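A minimal sketch of the MLE step is given below, assuming simulated data. It maximizes the Gumbel (type-1 extreme value, largest values) log-likelihood of the regression residuals with a generic quasi-Newton optimizer standing in for the Newton-Raphson and Fisher scoring iterations used in the paper, and takes the OLS fit as the starting value.

```python
# Minimal sketch, not the paper's exact procedure: ML fit of a linear model whose
# errors follow the type-1 extreme value (Gumbel, largest values) distribution.
# BFGS is used in place of Newton-Raphson / Fisher scoring; data are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true, scale_true = np.array([2.0, 1.5]), 0.8
y = X @ beta_true + rng.gumbel(loc=0.0, scale=scale_true, size=n)

def neg_loglik(params):
    beta, log_scale = params[:-1], params[-1]
    scale = np.exp(log_scale)                # optimize log-scale to keep scale > 0
    z = (y - X @ beta) / scale
    # Gumbel (largest values) log-density: -log(scale) - z - exp(-z)
    return -np.sum(-np.log(scale) - z - np.exp(-z))

# OLS estimates, used here as starting values for the MLE
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
start = np.append(beta_ols, np.log(np.std(y - X @ beta_ols)))
fit = minimize(neg_loglik, start, method="BFGS")

beta_mle, scale_mle = fit.x[:-1], np.exp(fit.x[-1])
print("OLS coefficients:", beta_ols)
print("MLE coefficients:", beta_mle, " MLE scale:", scale_mle)
```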
Abstract
The binary logistic regression model is used in data classification and is the strongest and most flexible tool for studying cases with a binary response variable when compared to linear regression. In this research, several classical methods were used to estimate the parameters of the binary logistic regression model, including the maximum likelihood method, the minimum chi-square method, and weighted least squares, together with Bayes estimation, in order to choose the best estimation method based on default parameter values under two different general linear regression models and different…
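The maximum likelihood method mentioned above can be sketched as follows on simulated data. statsmodels maximizes the logistic log-likelihood with Newton-type iterations (equivalently, iteratively reweighted least squares), which connects it to the weighted least squares estimator; the Bayesian and minimum chi-square variants are not shown.

```python
# Minimal sketch, assuming simulated data: ML estimation of a binary logistic
# regression model via statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
X = sm.add_constant(rng.normal(size=(n, 2)))      # intercept + two covariates
beta_true = np.array([-0.5, 1.2, -0.8])
p = 1.0 / (1.0 + np.exp(-(X @ beta_true)))        # logistic link
y = rng.binomial(1, p)

model = sm.Logit(y, X).fit(disp=False)            # Newton/IRLS maximization
print(model.params)                               # ML estimates of the coefficients
print(model.bse)                                  # asymptotic standard errors
```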
Financial markets are one of the sectors whose data move continuously and change constantly, which makes their trends difficult to predict. This creates a need for methods, means, and techniques for decision making, and pushes investors and analysts in the financial markets to use various methods in order to predict the direction of market movement. To reach the goal of making decisions about different investments, the support vector machine algorithm and the CART regression tree algorithm are used to classify the stock data in order to determine…
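A rough sketch of this classification setup is shown below. The price file, the lagged-return features, and the up/down labels are hypothetical placeholders rather than the study's actual data or feature construction.

```python
# Illustrative sketch only: classify next-day stock direction with an SVM and a CART
# decision tree, as in the abstract. File name, features, and labels are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

prices = pd.read_csv("stock_prices.csv")["close"]        # hypothetical closing prices
returns = prices.pct_change()
X = pd.concat([returns.shift(i) for i in range(1, 6)], axis=1).dropna()
X.columns = [f"lag_{i}" for i in range(1, 6)]             # 5 lagged daily returns
y = (returns.loc[X.index] > 0).astype(int)                # 1 = price moved up

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cart = DecisionTreeClassifier(max_depth=5, random_state=0)

for name, clf in [("SVM", svm), ("CART", cart)]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```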
Abstract
Due to the lack of previous statistical studies of the behavior of payments, specifically health insurance payments, which represent the largest proportion of payments in the general insurance companies in Iraq, this study was selected and applied in the Iraqi Insurance Company.
In order to find a suitable model to represent the health insurance payments, we initially identified two candidate probability models using the EasyFit software:
first, a single lognormal distribution for the whole sample, and second, a compound Weibull distribution for the two sub-samples (small payments and large payments); we focused on the compound…
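The model comparison can be sketched as follows with SciPy rather than EasyFit. The claims file, payment column, and the 90th-percentile threshold used to separate small from large payments are assumptions for illustration.

```python
# Minimal sketch, assuming a simple threshold split: one lognormal fitted to all
# payments versus separate Weibull fits to the small and large sub-samples,
# compared by Kolmogorov-Smirnov distance. Data file and threshold are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

payments = pd.read_csv("health_claims.csv")["payment"].to_numpy()   # hypothetical data
threshold = np.quantile(payments, 0.90)                  # split small vs large payments
small, large = payments[payments <= threshold], payments[payments > threshold]

# Single lognormal for the whole sample
shape, loc, scale = stats.lognorm.fit(payments, floc=0)
print("Lognormal KS:", stats.kstest(payments, "lognorm", args=(shape, loc, scale)).statistic)

# Compound model: a Weibull fitted to each sub-sample
for name, part in [("small", small), ("large", large)]:
    c, loc, scale = stats.weibull_min.fit(part, floc=0)
    ks = stats.kstest(part, "weibull_min", args=(c, loc, scale)).statistic
    print(f"Weibull ({name} payments) KS:", ks)
```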
Abstract
The non-homogeneous Poisson process is considered one of the statistical subjects of importance in other sciences, with wide application in areas such as queueing systems, repairable systems, computer and communication systems, and reliability theory, among many others. It is also used to model phenomena that occur in a non-constant manner over time (events whose rate changes with time).
This research deals with some of the basic concepts related to the non-homogeneous Poisson process. It applies two models of the non-homogeneous Poisson process, the power law model and the Musa-Okumoto model, to estimate…
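The following sketch illustrates maximum-likelihood fitting of the power law NHPP intensity on simulated, time-truncated event data; the Musa-Okumoto model can be fitted the same way by swapping in its intensity and mean functions. The simulated events and starting values are assumptions, not the research's data.

```python
# Minimal sketch, assuming events observed on (0, T]: numerical ML fit of the
# power-law NHPP intensity lambda(t) = (b/a) * (t/a)**(b - 1), mean m(t) = (t/a)**b.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T = 100.0
a_true, b_true = 20.0, 1.5

# Simulate: N(T) ~ Poisson(m(T)); given N(T)=n, event times are iid with cdf (t/T)**b
n_events = rng.poisson((T / a_true) ** b_true)
t = np.sort(T * rng.uniform(size=n_events) ** (1.0 / b_true))

def neg_loglik(params):
    a, b = np.exp(params)                         # keep both parameters positive
    log_intensity = np.log(b / a) + (b - 1) * np.log(t / a)
    return -(log_intensity.sum() - (T / a) ** b)  # sum log lambda(t_i) - m(T)

fit = minimize(neg_loglik, x0=np.log([10.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"estimated scale a = {a_hat:.2f}, shape b = {b_hat:.2f} (true: {a_true}, {b_true})")
```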
Massive multiple-input multiple-output (massive-MIMO) is considered the key technology for meeting the huge data-rate demands of future wireless communication networks. However, for massive-MIMO systems to realize their maximum potential gain, sufficiently accurate downlink (DL) channel state information (CSI) with low overhead is required to meet the short coherence time (CT). Therefore, this article aims to overcome the technical challenge of DL CSI estimation in a frequency-division-duplex (FDD) massive-MIMO system with short CT, considering five different physical correlation models. To this end, the statistical structure of the massive-MIMO channel, which is captured by the physical correlation, is exploited to find sufficiently…
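As a simplified illustration of why exploiting spatial correlation helps (not the article's estimator), the sketch below compares a least-squares and an LMMSE channel estimate under an assumed exponential correlation model, which stands in for the five physical correlation models studied in the article.

```python
# Illustrative sketch only: single-user, single-pilot DL estimation with an assumed
# exponential spatial correlation model R[i, j] = rho**|i - j|.
import numpy as np

rng = np.random.default_rng(3)
M = 64                    # base-station antennas
rho = 0.7                 # exponential correlation coefficient (assumed)
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)

idx = np.arange(M)
R = rho ** np.abs(idx[:, None] - idx[None, :])

# Correlated Rayleigh channel h = R^(1/2) g, with g ~ CN(0, I)
L = np.linalg.cholesky(R + 1e-12 * np.eye(M))
g = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
h = L @ g

# Noisy pilot observation y = h + n, then two estimators
n = np.sqrt(noise_var / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))
y = h + n
h_ls = y                                                      # ignores the correlation
h_lmmse = R @ np.linalg.solve(R + noise_var * np.eye(M), y)   # exploits R

nmse = lambda est: np.linalg.norm(est - h) ** 2 / np.linalg.norm(h) ** 2
print(f"LS NMSE:    {nmse(h_ls):.3f}")
print(f"LMMSE NMSE: {nmse(h_lmmse):.3f}")
```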
This research sought to present the concept of cross-sectional (panel) data models: a crucial form of two-dimensional data that captures the impact of change over time and is obtained by repeated observations of the measured phenomenon in different time periods. The panel data models were defined in their different types (fixed, random, and mixed) and compared by studying and analyzing the mathematical relationship between the effect of time and a set of basic variables, which are the main axes on which the research is based. These are represented by the monthly revenue of the working individual and the profits it generates, which constitute the response variable, and its relationship to a set of explanatory variables represented by the…
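A minimal sketch of the fixed-effects (within) estimator, one of the panel specifications compared above, is given below; the long-format file income_panel.csv and its column names are hypothetical. Random- and mixed-effects estimates would usually be obtained with a dedicated panel or mixed-model package.

```python
# Minimal sketch, assuming a long-format panel with hypothetical columns
# id, month, revenue, x1, x2: the fixed-effects (within) estimator obtained by
# demeaning each individual's observations over time before least squares.
import numpy as np
import pandas as pd

panel = pd.read_csv("income_panel.csv")       # hypothetical panel data file
y_col, x_cols = "revenue", ["x1", "x2"]

# Within transformation: subtract each individual's time average
cols = [y_col] + x_cols
demeaned = panel[cols] - panel.groupby("id")[cols].transform("mean")

X = demeaned[x_cols].to_numpy()
y = demeaned[y_col].to_numpy()
beta_fe, *_ = np.linalg.lstsq(X, y, rcond=None)   # fixed-effects slope estimates
print(dict(zip(x_cols, beta_fe)))
```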