This paper addresses how to estimate spatial data at unmeasured points when the spatial sample contains only a few observations, a situation that is unfavourable for estimation: the larger the data set, the better the estimates at unmeasured points and the smaller the estimation variance. The idea of this paper is to take advantage of secondary (auxiliary) data that are strongly correlated with the primary (basic) data in order to estimate the primary variable at unmeasured points and to measure the estimation variance. The co-kriging technique is used in this setting to build the spatial predictors. The idea is then applied to real data on wheat cultivation in Iraq, where the amount of production is taken as the primary variable to be estimated at unmeasured points and the cultivated area as the secondary variable. All calculations were programmed in Matlab.
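As a rough illustration of the ordinary co-kriging predictor referred to above, the sketch below assembles and solves the co-kriging system for a single unmeasured point with numpy. All coordinates, values, and covariance parameters (exponential models with assumed sills and ranges) are hypothetical placeholders, not the paper's fitted wheat-production model.

```python
# Illustrative sketch of ordinary co-kriging at one unmeasured point.
# Locations, values, and covariance parameters below are hypothetical.
import numpy as np

def cov(h, sill, length):
    """Exponential covariance model C(h) = sill * exp(-h / length)."""
    return sill * np.exp(-h / length)

def dist(A, B):
    """Pairwise Euclidean distances between two sets of 2-D points."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

# Hypothetical sample locations and values
x1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])              # primary (production) sites
z1 = np.array([2.1, 2.6, 2.4])
x2 = np.array([[0.5, 0.5], [1.0, 1.0], [0.2, 0.8], [0.9, 0.1]])  # secondary (area) sites
z2 = np.array([5.0, 5.8, 5.2, 5.5])
x0 = np.array([0.6, 0.4])                                        # unmeasured target point

p11, p22, p12 = (1.0, 1.5), (2.0, 1.5), (0.9, 1.5)   # (sill, range) of C11, C22, C12

n1, n2 = len(z1), len(z2)
# Co-kriging system: block covariance matrix plus two unbiasedness constraints
# (primary weights sum to 1, secondary weights sum to 0).
A = np.zeros((n1 + n2 + 2, n1 + n2 + 2))
A[:n1, :n1] = cov(dist(x1, x1), *p11)
A[:n1, n1:n1 + n2] = cov(dist(x1, x2), *p12)
A[n1:n1 + n2, :n1] = cov(dist(x2, x1), *p12)
A[n1:n1 + n2, n1:n1 + n2] = cov(dist(x2, x2), *p22)
A[:n1, n1 + n2] = 1.0
A[n1 + n2, :n1] = 1.0                    # sum(lambda) = 1
A[n1:n1 + n2, n1 + n2 + 1] = 1.0
A[n1 + n2 + 1, n1:n1 + n2] = 1.0         # sum(mu) = 0

b = np.zeros(n1 + n2 + 2)
b[:n1] = cov(np.linalg.norm(x1 - x0, axis=1), *p11)
b[n1:n1 + n2] = cov(np.linalg.norm(x2 - x0, axis=1), *p12)
b[n1 + n2] = 1.0

w = np.linalg.solve(A, b)
lam, mu, nu1 = w[:n1], w[n1:n1 + n2], w[n1 + n2]
estimate = lam @ z1 + mu @ z2
variance = p11[0] - lam @ b[:n1] - mu @ b[n1:n1 + n2] - nu1   # co-kriging variance
print(f"estimate at x0: {estimate:.3f}, estimation variance: {variance:.3f}")
```

The secondary weights are constrained to sum to zero so that the secondary variable only adjusts the estimate through its correlation with the primary variable, which is the usual ordinary co-kriging convention.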
Today, the role of cloud computing in our day-to-day lives is very prominent. The cloud computing paradigm makes it possible to provide resources on demand. Cloud computing has changed the way organizations manage resources owing to its robustness, low cost, and pervasive nature. Data security is usually realized using methods such as encryption. However, data privacy is another important challenge that should be considered when transporting, storing, and analyzing data in the public cloud. In this paper, a new method is proposed to track malicious users who use their private key to decrypt data in a system, share it with others, and cause leakage of system information. Security policies are also considered to be int…
The current study aimed to determine the relation between the blood lead levels of traffic policemen and the nature of their traffic work in Baghdad governorate. Blood samples were collected from 10 traffic policemen aged 20-39 years from the Directorate of Traffic, Al-Rusafa/Baghdad, and from a control group of 10 traffic policemen aged 30-49 years who lived in relatively clean areas or areas with very little traffic. Blood lead levels were estimated by atomic absorption spectrometry.
The results showed no rise in the blood lead levels of the traffic policemen; lead concentrations reached about 14 ppm in traffic police working in unhealthy conditions but remained within the permissible limits, Ap…
Variable selection is an essential and necessary task in the statistical modeling field. Several studies have tried to develop and standardize the process of variable selection, but it is difficult to do so. The first question a researcher needs to ask is: what are the most significant variables that should be used to describe a given dataset's response? In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined and the posterior distributions for all the parameters are derived. The new variable selection method is tested using four simulation datasets. The new approach is compared with some existing techniques: Ordinary Least Squares (OLS), Least Absolute Shrinkage…
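The abstract does not spell out the sampler itself; as a hedged sketch of one standard Gibbs-sampling formulation of variable selection (a George-and-McCulloch-style spike-and-slab model, which may differ from the paper's actual method), the following uses hypothetical prior settings and simulated data.

```python
# Hedged sketch of Gibbs-sampler variable selection with spike-and-slab priors.
# Prior settings (tau, c, p_incl) and the simulated data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: only the first two of five predictors are truly active.
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

tau, c, p_incl = 0.1, 10.0, 0.5      # spike sd, slab multiplier, prior inclusion prob
a0, b0 = 2.0, 2.0                    # inverse-gamma prior on sigma^2
n_iter, burn = 3000, 1000

beta, gamma, sigma2 = np.zeros(p), np.ones(p, dtype=int), 1.0
incl_counts = np.zeros(p)

def normal_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

XtX, Xty = X.T @ X, X.T @ y
for it in range(n_iter):
    # 1) beta | gamma, sigma2, y  ~  multivariate normal
    prior_var = np.where(gamma == 1, (c * tau) ** 2, tau ** 2)
    prec = XtX / sigma2 + np.diag(1.0 / prior_var)
    cov = np.linalg.inv(prec)
    cov = (cov + cov.T) / 2            # symmetrize against round-off
    beta = rng.multivariate_normal(cov @ (Xty / sigma2), cov)

    # 2) gamma_j | beta_j  ~  Bernoulli (slab vs. spike density ratio)
    w1 = p_incl * normal_pdf(beta, c * tau)
    w0 = (1 - p_incl) * normal_pdf(beta, tau)
    gamma = (rng.uniform(size=p) < w1 / (w0 + w1)).astype(int)

    # 3) sigma2 | beta, y  ~  inverse gamma
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))

    if it >= burn:
        incl_counts += gamma

print("posterior inclusion probabilities:", incl_counts / (n_iter - burn))
```

Variables with high posterior inclusion probability are retained, which is the usual way a Gibbs-based selection rule is read off.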
In recent years many researchers have developed methods to estimate the self-similarity and long-memory parameter, best known as the Hurst parameter. In this paper, we compare nine different methods. Most of them use the slope of deviations to estimate the Hurst parameter, such as the rescaled range (R/S), Aggregate Variance (AV), and Absolute Moments (AM) methods, while some depend on filtration techniques, such as Discrete Variations (DV), Variance versus level using wavelets (VVL), and the Second-order discrete derivative using wavelets (SODDW). The comparison was carried out through a simulation study to find the most efficient method in terms of MASE. The results of the simulation experiments show that the performance of the meth…
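As a concrete illustration of the first method named above, a minimal rescaled-range (R/S) estimator of the Hurst parameter might look like the following; the block sizes and the simulated white-noise series are illustrative choices, not the paper's simulation design.

```python
# Minimal sketch of the rescaled-range (R/S) estimator of the Hurst parameter.
import numpy as np

def hurst_rs(x, block_sizes):
    """Estimate H as the slope of log E[R/S] versus log block size."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in block_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            block = x[start:start + n]
            dev = np.cumsum(block - block.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = block.std(ddof=1)              # block standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)    # slope of the log-log line is H
    return slope

# Example: white noise should give an estimate close to H = 0.5
rng = np.random.default_rng(1)
series = rng.normal(size=4096)
print("estimated H:", round(hurst_rs(series, [16, 32, 64, 128, 256, 512]), 3))
```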
In this study, we briefly review the ARIMA(p, d, q), EWMA, and DLM (dynamic linear modelling) procedures in order to accommodate the autocorrelation structure of the data. We consider recursive estimation and prediction algorithms based on Bayes and Kalman filtering (KF) techniques for correlated observations. We investigate the effect on the MSE of these procedures and compare them using generated data.
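For readers unfamiliar with the recursive estimation mentioned here, a minimal Kalman-filter recursion for the simplest DLM (the local-level model) is sketched below; the noise variances and the simulated series are hypothetical stand-ins for the generated data used in the study.

```python
# Hedged sketch of recursive Kalman filtering for a local-level DLM:
#   y_t = mu_t + v_t,   mu_t = mu_{t-1} + w_t,
#   v_t ~ N(0, sigma_obs2),  w_t ~ N(0, sigma_state2).
import numpy as np

def local_level_kalman(y, sigma_obs2, sigma_state2, m0=0.0, c0=1e6):
    m, c = m0, c0
    predicted, filtered = [], []
    for yt in y:
        # predict step
        a, r = m, c + sigma_state2
        predicted.append(a)
        # update step
        k = r / (r + sigma_obs2)          # Kalman gain
        m = a + k * (yt - a)
        c = (1 - k) * r
        filtered.append(m)
    return np.array(predicted), np.array(filtered)

# Example on autocorrelated data generated from the same model
rng = np.random.default_rng(2)
level = np.cumsum(rng.normal(scale=0.3, size=200))
y = level + rng.normal(scale=1.0, size=200)
pred, filt = local_level_kalman(y, sigma_obs2=1.0, sigma_state2=0.09)
print("one-step-ahead MSE:", round(np.mean((y - pred) ** 2), 3))
```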
The study examined three types of orange juice of different origin, with replicate measurements per sample. The results showed that the highest acidity (pH) value, 4, was recorded in juice A. For salts, calcium was highest at 120 ppm in juice C and magnesium at 86 ppm in juice B. For heavy metals, the highest levels recorded were 0.18 ppm of lead in juice B, 1.32 ppm of copper in juice A, 5 ppm of iron in juice B, 1.3 ppm of zinc in juice B, 0.05 ppm of aluminium in both juices B and A, 0.02 ppm of cobalt in juice B, 0.3 ppm of nickel in juice B, and 170.6 ppm of sodium in juice C. As for organic acids, the highest levels were 3.2 ppm of acid in juice A and 260 ppm of ascorbic acid in juice…
This paper deals with constructing a fuzzy linear programming model with an application to the fuel products of the Dura refinery, which consist of seven products that have a direct effect on daily consumption. After building the model, which consists of an objective function representing the selling prices of the products, fuzzy production constraints, fuzzy demand constraints, and production-requirement constraints, the WIN QSB program was used to find the optimal solution.
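The paper itself solves its seven-product model with WIN QSB; purely as an illustration of how fuzzy constraints can be reduced to a crisp linear program, the sketch below applies a Zimmermann-style max-lambda formulation (an assumption, not necessarily the paper's formulation) to a two-variable toy problem with hypothetical coefficients and tolerances, using scipy.

```python
# Illustrative Zimmermann-style reduction of a fuzzy LP to a crisp LP.
# The two-product objective, constraints, and tolerances are toy numbers,
# not the seven-product refinery model solved in the paper.
import numpy as np
from scipy.optimize import linprog

c_obj = np.array([3.0, 5.0])        # maximize 3*x1 + 5*x2 (selling prices, hypothetical)
A = np.array([[1.0, 2.0],           # fuzzy production constraint:  x1 + 2*x2 <~ 40
              [3.0, 1.0]])          # fuzzy demand constraint:      3*x1 + x2 <~ 30
b = np.array([40.0, 30.0])
tol = np.array([10.0, 5.0])         # admissible violations of the fuzzy constraints

# Aspiration levels: objective with tight constraints (b) and relaxed ones (b + tol)
z_low = -linprog(-c_obj, A_ub=A, b_ub=b).fun
z_high = -linprog(-c_obj, A_ub=A, b_ub=b + tol).fun

# Max-lambda model over (x1, x2, lam): maximize lam subject to
#   c'x >= z_low + lam*(z_high - z_low)   and   A x <= b + (1 - lam)*tol
c_lp = np.array([0.0, 0.0, -1.0])                       # minimize -lambda
A_ub = np.vstack([np.hstack([-c_obj, [z_high - z_low]]),
                  np.hstack([A, tol.reshape(-1, 1)])])
b_ub = np.concatenate([[-z_low], b + tol])
res = linprog(c_lp, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x
print(f"x = ({x1:.2f}, {x2:.2f}), satisfaction level lambda = {lam:.3f}")
```

The satisfaction level lambda measures how well the fuzzy constraints and the fuzzy objective aspiration are met simultaneously; lambda = 1 means the relaxed targets are fully achieved.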
The grey system model GM(1,1) is a time-series prediction model and the basis of grey theory. This research presents methods for estimating the parameters of the grey model GM(1,1): the accumulative method (ACC), the exponential method (EXP), the modified exponential method (Mod EXP), and the Particle Swarm Optimization method (PSO). These methods were compared on the basis of the mean square error (MSE) and the mean absolute percentage error (MAPE), and a simulation study was used to identify the best of the four methods, which was then applied to real data. These data represent the consumption rate of two types of oils a he…
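For orientation, a minimal sketch of the basic GM(1,1) model is given below, with the development coefficient a and grey input b estimated by ordinary least squares; the input series is hypothetical, and the paper's four estimators (ACC, EXP, Mod EXP, PSO) are not reproduced here.

```python
# Minimal sketch of the basic GM(1,1) grey prediction model.
# The input series is hypothetical consumption data.
import numpy as np

def gm11_fit_predict(x0, n_ahead=2):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series (1-AGO)
    z1 = 0.5 * (x1[:-1] + x1[1:])                        # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # least-squares estimates
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO
    return a, b, x0_hat

series = [2.87, 3.28, 3.34, 3.79, 3.72, 3.89]            # hypothetical consumption rates
a, b, fitted = gm11_fit_predict(series)
print("a =", round(a, 4), "b =", round(b, 4))
print("fitted values and 2-step forecast:", np.round(fitted, 3))
```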
Business organizations have faced many challenges in recent times, the most important of which is information technology, because it is widespread and easy to use. Its use has led to an increase in the amount of data that business organizations deal with, to an unprecedented extent. The amount of data available through the internet, and the random, huge form in which it is made available, is a problem that many parties seek to solve. Many forecasts indicated that in 2017 the number of devices connected to the internet would be an estimated three times the population of the Earth, and that in 2015 more than one and a half billion gigabytes of data were transferred every minute globally. Thus, so-called data mining emerged as a…
Background: Revascularization therapy for patients with left main (LM) and/or three-vessel coronary disease has long been a matter of debate, whether performed by percutaneous coronary intervention or by coronary artery bypass grafting. The SYNTAX trial was designed to assess the optimal revascularization strategy, percutaneous coronary intervention versus coronary artery bypass grafting, for patients with left main stem coronary artery disease and/or 3-vessel coronary disease.
Aim: To estimate the complexity of coronary artery disease in patients referred to a tertiary Iraqi cardiac center and its effect on the mode of revascularization.
Patients and Method: Ninety-nine patients who w…