This paper deals with estimating unmeasured points in spatial data when the spatial sample is small, a situation unfavourable for estimation, since the larger the data set, the better the estimates at unmeasured points and the smaller the estimation variance. The idea of this paper is to take advantage of secondary (auxiliary) data that have a strong correlation with the primary (basic) data in order to estimate the values at unmeasured points and to measure the estimation variance. The co-kriging technique is used in this setting to build the spatial predictors, and the idea is then applied to real data on wheat cultivation in Iraq, where the quantity of production is taken as the primary variable whose unmeasured points are to be estimated and the cultivated area as the secondary variable. All calculations were programmed in Matlab.
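A minimal sketch of how ordinary co-kriging combines a sparse primary sample with a correlated secondary sample is given below, in Python rather than the paper's Matlab. The exponential covariance models, their sills and ranges, and the toy one-dimensional data are illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

def cov(h, sill, rng):
    """Isotropic exponential covariance model (assumed for illustration)."""
    return sill * np.exp(-np.abs(h) / rng)

def cokrige(x1, z1, x2, z2, x0,
            c11=(1.0, 3.0), c22=(1.5, 3.0), c12=(0.8, 3.0)):
    """Ordinary co-kriging of the primary variable at location x0.

    x1, z1 : locations and values of the primary variable
    x2, z2 : locations and values of the secondary variable
    c11, c22, c12 : (sill, range) of the direct and cross covariances
    Returns the estimate and the co-kriging variance.
    """
    n1, n2 = len(x1), len(x2)
    x = np.concatenate([x1, x2])
    n = n1 + n2
    # Left-hand side: block covariance matrix plus two unbiasedness rows
    # (primary weights sum to 1, secondary weights sum to 0).
    A = np.zeros((n + 2, n + 2))
    for i in range(n):
        for j in range(n):
            h = x[i] - x[j]
            if i < n1 and j < n1:
                A[i, j] = cov(h, *c11)
            elif i >= n1 and j >= n1:
                A[i, j] = cov(h, *c22)
            else:
                A[i, j] = cov(h, *c12)
    A[:n1, n] = A[n, :n1] = 1.0            # constraint on primary weights
    A[n1:n, n + 1] = A[n + 1, n1:n] = 1.0  # constraint on secondary weights
    # Right-hand side: covariances between the data locations and x0.
    b = np.zeros(n + 2)
    b[:n1] = cov(x1 - x0, *c11)
    b[n1:n] = cov(x2 - x0, *c12)
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n:]
    estimate = w[:n1] @ z1 + w[n1:] @ z2
    variance = c11[0] - b[:n] @ w - mu[0]  # sigma^2 - sum(w*c0) - mu1
    return estimate, variance

# Toy usage: a sparse primary sample and a denser secondary one.
x1 = np.array([0.0, 4.0, 9.0]);  z1 = np.array([10.0, 14.0, 11.0])
x2 = np.array([0.0, 2.0, 4.0, 6.0, 9.0]);  z2 = np.array([5.0, 6.0, 7.5, 6.8, 5.5])
est, var = cokrige(x1, z1, x2, z2, x0=5.0)
print(f"estimate at x0=5: {est:.3f}, co-kriging variance: {var:.3f}")
```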
The current study aimed to determine the relation between the blood lead levels of traffic policemen and the nature of their traffic work in Baghdad governorate. Blood samples were collected from 10 traffic policemen aged 20-39 years from the Directorate of Traffic, Al-Rusafa/Baghdad, and 10 further control samples from traffic policemen aged 30-49 years who lived in relatively clean areas or areas with very little traffic. Blood lead levels were estimated using atomic absorption spectrometry.
The results indicated no marked rise in the blood lead levels of the traffic policemen; lead concentrations did not exceed 14 ppm in the traffic police, which is not considered a health hazard and is within the permissible limits. Ap…
In this study, we briefly review the ARIMA(p, d, q), EWMA, and DLM (dynamic linear modelling) procedures in order to accommodate the autocorrelation structure of the data. We consider recursive estimation and prediction algorithms based on Bayes and Kalman filtering (KF) techniques for correlated observations. We investigate the effect on the MSE of these procedures and compare them using generated data.
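As an illustration of the recursive Kalman-filter estimation the study reviews, the following sketch filters a local-level DLM; the observation and state variances V and W are assumed known here, whereas in practice they would be estimated from the data.

```python
import numpy as np

# Local-level DLM: y_t = theta_t + v_t,  theta_t = theta_{t-1} + w_t,
# with v_t ~ N(0, V) and w_t ~ N(0, W) (variances assumed known).

def kalman_local_level(y, V=1.0, W=0.1, m0=0.0, C0=1e6):
    """One-step-ahead prediction and filtering for the local-level model."""
    m, C = m0, C0                 # posterior mean/variance of theta
    preds, filt = [], []
    for obs in y:
        a, R = m, C + W           # prior at time t (random-walk state)
        f, Q = a, R + V           # one-step-ahead forecast and its variance
        preds.append(f)
        K = R / Q                 # Kalman gain
        m = a + K * (obs - f)     # filtered (posterior) mean
        C = R - K * R             # filtered (posterior) variance
        filt.append(m)
    return np.array(preds), np.array(filt)

# Generated autocorrelated data, as in the comparison on simulated series.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.3, 200)) + rng.normal(0, 1.0, 200)
pred, filt = kalman_local_level(y, V=1.0, W=0.09)
print(f"one-step-ahead MSE: {np.mean((y - pred) ** 2):.3f}")
```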
The study examined three types of orange juice of different origins, with replicate measurements per sample. The results showed that the highest acidity value (pH) was 4, recorded in juice A. Calcium salts reached 120 ppm in juice C and magnesium 86 ppm in juice B. For heavy metals, the highest recorded levels were 0.18 ppm of lead in juice B, 1.32 ppm of copper in juice A, 5 ppm of iron in juice B, 1.3 ppm of zinc in juice B, 0.05 ppm of aluminium in both juices B and A, 0.02 ppm of cobalt in juice B, 0.3 ppm of nickel in juice B, and 170.6 ppm of sodium in juice C. As for organic acids, the highest levels were 3.2 ppm of acid in juice A and 260 ppm of ascorbic acid in the juice…
In recent years many researchers have developed methods to estimate the self-similarity and long-memory parameter, best known as the Hurst parameter. In this paper, we compare nine different methods. Most of them use the slope of the deviations to estimate the Hurst parameter, such as Rescaled Range (R/S), Aggregated Variance (AV), and Absolute Moments (AM), while some depend on filtration techniques, such as Discrete Variations (DV), Variance Versus Level using wavelets (VVL), and Second-Order Discrete Derivative using Wavelets (SODDW). The comparison was set up as a simulation study to find the most efficient method in terms of MASE. The results of the simulation experiments showed that the performance of the meth…
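As an illustration of the slope-based estimators compared in the paper, the sketch below implements the Rescaled Range (R/S) method in Python; the block sizes and the white-noise test series (for which H = 0.5) are illustrative assumptions.

```python
import numpy as np

# R/S estimator: E[R/S](n) ~ c * n^H, so H is the slope of the
# log-log regression of the mean R/S statistic on the block size n.

def rs_statistic(x):
    """R/S of one block: range of cumulative deviations / sample std."""
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std(ddof=1)
    return r / s if s > 0 else np.nan

def hurst_rs(x, block_sizes=(8, 16, 32, 64, 128, 256)):
    """Estimate H as the regression slope of log(mean R/S) on log(n)."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in block_sizes:
        blocks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs = np.nanmean([rs_statistic(b) for b in blocks])
        log_n.append(np.log(n))
        log_rs.append(np.log(rs))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

# White noise has H = 0.5; the R/S estimate should be near that value.
rng = np.random.default_rng(1)
print(f"H estimate for white noise: {hurst_rs(rng.normal(size=4096)):.3f}")
```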
Business organizations have faced many challenges in recent times, the most important of which is information technology, because it is widely spread and easy to use. Its use has led to an unprecedented increase in the amount of data that business organizations deal with. The amount of data available through the internet is a problem that many parties seek to find solutions for: why is it available there in such a huge amount, randomly? Many forecasts revealed that in 2017 there would be devices connected to the internet estimated at three times the population of the Earth, and in 2015 more than one and a half billion gigabytes of data were transferred every minute globally. Thus, the so-called data mining emerged as a…
Variable selection is an essential and necessary task in the statistical modelling field. Several studies have tried to develop and standardize the variable selection process, but it is difficult to do so. The first question researchers need to ask themselves is what the most significant variables are that should be used to describe a given dataset's response. In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined and the posterior distributions of all the parameters are derived. The new variable selection method is tested using four simulated datasets and compared with some existing techniques: Ordinary Least Squares (OLS), Least Absolute Shrinkage and Selection Operator (LASSO)…
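The abstract does not specify the paper's exact sampler, so the sketch below illustrates the general idea with a standard spike-and-slab (SSVS-style) Gibbs sampler for linear regression; the prior scales tau0 and tau1, the prior inclusion probability pi0, and the inverse-gamma hyperparameters are assumptions, not the paper's priors.

```python
import numpy as np
from scipy import stats

def ssvs_gibbs(X, y, n_iter=3000, burn=1000,
               tau0=0.01, tau1=10.0, pi0=0.5, a0=2.0, b0=2.0, seed=0):
    """Spike-and-slab Gibbs sampler; returns posterior inclusion probs."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, gamma, sigma2 = np.zeros(p), np.ones(p, dtype=int), 1.0
    XtX, Xty, keep = X.T @ X, X.T @ y, []
    for it in range(n_iter):
        # 1) beta | gamma, sigma2 : normal, spike/slab prior variances
        d = np.where(gamma == 1, tau1**2, tau0**2)
        cov = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / d))
        beta = rng.multivariate_normal(cov @ (Xty / sigma2), cov)
        # 2) gamma_j | beta_j : Bernoulli from spike vs slab densities
        p1 = pi0 * stats.norm.pdf(beta, 0, tau1)
        p0 = (1 - pi0) * stats.norm.pdf(beta, 0, tau0)
        gamma = rng.binomial(1, p1 / (p0 + p1))
        # 3) sigma2 | beta : inverse-gamma
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
        if it >= burn:
            keep.append(gamma)
    return np.mean(keep, axis=0)

# Toy data: only the first two of five predictors are active.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=100)
print("posterior inclusion probabilities:", ssvs_gibbs(X, y).round(2))
```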
This paper deals with constructing a fuzzy linear programming model with an application to the fuel products of the Dura refinery, which consist of seven products that have a direct effect on daily consumption. After building the model, which consists of an objective function representing the selling prices of the products, fuzzy production constraints, fuzzy demand constraints, and production requirements constraints, the WIN QSB program was used to find the optimal solution.
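As an illustration of how a fuzzy linear program of this kind can be solved, the sketch below applies Zimmermann's symmetric approach to a toy two-product problem in Python (the paper itself used WIN QSB); the prices, resource matrix, fuzzy limits b, and tolerances p are invented for the example, not the refinery's seven-product data.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 4.0])            # selling prices (maximize c @ x)
A = np.array([[6.0, 4.0],           # resource usage per unit of product
              [1.0, 2.0]])
b = np.array([24.0, 6.0])           # fuzzy right-hand sides
p = np.array([3.0, 1.0])            # admissible tolerances on b

def crisp_max(rhs):
    """Crisp LP: maximize c @ x subject to A x <= rhs, x >= 0."""
    res = linprog(-c, A_ub=A, b_ub=rhs)
    return -res.fun

z_low, z_high = crisp_max(b), crisp_max(b + p)   # aspiration interval

# Symmetric model: maximize the satisfaction level lambda subject to
#   c @ x >= z_low + lambda * (z_high - z_low)
#   A x  <= b + (1 - lambda) * p
# Decision vector: (x1, x2, lambda).
A_ub = np.vstack([
    np.hstack([-c, [z_high - z_low]]),           # objective satisfaction
    np.hstack([A, p.reshape(-1, 1)]),            # fuzzy resource limits
])
b_ub = np.concatenate([[-z_low], b + p])
res = linprog([0, 0, -1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x_opt, lam = res.x[:2], res.x[2]
print(f"x = {x_opt.round(3)}, satisfaction level = {lam:.3f}")
```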
This book is the second edition of a textbook intended for undergraduate/postgraduate courses in mathematical statistics. To achieve its goals, the book is divided into the following chapters. Chapter One introduces events and a review of probability. Chapter Two is devoted to random variables of the two types, discrete and continuous, with definitions of the probability mass function, probability density function, and cumulative distribution function. Chapter Three discusses mathematical expectation and its special types, such as moments, the moment generating function, and other related topics. Chapter Four deals with some special discrete distributions: Discrete Uniform, Bernoulli, Binomial, Poisson, Geometric, Negative Binomial…
The grey system model GM(1,1) is a time-series prediction model and the basis of grey theory. This research presents four methods for estimating the parameters of the grey model GM(1,1): the accumulative method (ACC), the exponential method (EXP), the modified exponential method (Mod EXP), and the Particle Swarm Optimization method (PSO). These methods were compared on the basis of the mean square error (MSE) and the mean absolute percentage error (MAPE), and simulation was adopted to select the best of the four methods. The best method was obtained and then applied to real data representing the consumption rate of two types of oils…
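For reference, the sketch below fits GM(1,1) with the classical least-squares estimate of the parameters (a, b); the ACC, EXP, Mod EXP, and PSO methods compared in the paper replace this estimation step. The toy series is illustrative.

```python
import numpy as np

# GM(1,1) on the accumulated series x1:
#   x0(k) + a * z1(k) = b,  z1(k) = 0.5 * (x1(k) + x1(k-1)).

def gm11(x0, n_forecast=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulating generation
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    y = x0[1:]
    a, b = np.linalg.lstsq(B, y, rcond=None)[0]  # developing coeff., grey input
    k = np.arange(len(x0) + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)        # de-accumulate
    x0_hat[0] = x0[0]
    return x0_hat[:len(x0)], x0_hat[len(x0):]

# Toy monotone consumption-like series.
series = [2.87, 3.28, 3.34, 3.52, 3.68, 3.77]
fitted, forecast = gm11(series, n_forecast=2)
mse = np.mean((np.array(series) - fitted) ** 2)
print(f"MSE = {mse:.4f}, forecasts = {forecast.round(3)}")
```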
The aim of this research is to explore the time and space distribution of traffic volume demand and investigate its vehicle composition. The four selected links represent the activity of transportation facilities and different congestion points according to direction. The study area belongs to the Al-Rusafa sector in Baghdad city, which exhibits a high rate of traffic congestion on working days in the morning and evening peak periods due to its mixed land uses. The obtained results showed that Link (1), from the Medical City intersection to the Sarafiya intersection, demonstrated the highest traffic volume in both the morning (AM) and afternoon (PM) peak periods, where demand exceeds capacity along the link corridor. Also, higher values f…