Diabetes is one of the fastest-growing chronic diseases, affecting millions of people around the world. Early diagnosis, prediction, proper treatment, and management of diabetes are therefore essential. Machine learning-based prediction techniques for diabetes data analysis can help in the early detection of the disease and of its consequences, such as hypo- and hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, K-nearest neighbors (KNN), and the random forest. We conducted two experiments: the first used all 12 features of the dataset, where the random forest outperformed the others with 98.8% accuracy. The second experiment used only five attributes for training. Its results showed improved performance for the KNN and the multilayer perceptron, and a slight decrease for the random forest, to 97.5% accuracy.
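The KNN step of the comparison above can be sketched as follows. Since the Iraqi patient records are not reproduced here, the example uses synthetic two-cluster data with five features (mirroring the second experiment); the cluster parameters and function names are illustrative, not the paper's.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest
    training points (Euclidean distance)."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        # majority vote among the k nearest labels
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Hypothetical stand-in for the diabetes dataset: two well-separated
# clusters representing "non-diabetic" (0) and "diabetic" (1) patients.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 5))  # five features, as in experiment 2
X1 = rng.normal(5.0, 1.0, size=(50, 5))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)

y_hat = knn_predict(X_train, y_train, X_train, k=3)
accuracy = (y_hat == y_train).mean()
print(accuracy)
```

On real data one would of course evaluate on a held-out split rather than on the training points themselves.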
In any natural area or water body, evapotranspiration is one of the main outputs in the water-balance equation. It is also a crucial component of the hydrologic cycle and is considered a main requirement in the planning and design of any irrigation project. The climatic parameters for the Ishaqi area are calculated from the available data of the Samarra and Al-Khlais meteorological stations for the period 1982–2017 according to the Fetter method. The mean rainfall, relative humidity, temperature, evaporation, sunshine, and wind speed of the Ishaqi area are 171.96 mm, 49.67%, 24.86 °C, 1733.61 mm, 8.34 h/day, and 2.3 m/sec, respectively. Values of potential evapotranspiration are determined by
Breast cancer has received much attention in recent years, as it is one of the complex diseases that can threaten people's lives. It can be detected from the levels of secreted proteins in the blood. In this project, we developed a method of finding a threshold to classify the probability of being affected in a population, based on the levels of the related proteins in relatively small case-control samples. We applied our method to simulated and real data. The results showed that the method was accurate in estimating the probability of being diseased in both the simulated and the real data. Moreover, we were able to calculate the sensitivity and specificity under the null hypothesis of our research question of being diseased.
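A minimal sketch of the threshold idea on simulated protein levels only; the real case-control data and the paper's exact estimator are not reproduced, and the midpoint rule and all numbers here are assumptions for illustration.

```python
import numpy as np

# Hypothetical protein-level samples for controls and cases.
rng = np.random.default_rng(1)
controls = rng.normal(1.0, 0.3, size=200)  # levels in healthy subjects
cases = rng.normal(2.0, 0.3, size=200)     # elevated levels in affected subjects

# Simple midpoint rule for the classification threshold.
threshold = (controls.mean() + cases.mean()) / 2

# Sensitivity: fraction of cases above the threshold.
sensitivity = (cases > threshold).mean()
# Specificity: fraction of controls at or below the threshold.
specificity = (controls <= threshold).mean()
print(round(sensitivity, 3), round(specificity, 3))
```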
Fractal geometry is receiving increasing attention as a quantitative and qualitative model for describing natural phenomena, and it can establish an effective classification technique when applied to satellite images. In this paper, a satellite image taken by QuickBird, containing different visible classes, is used. After pre-processing, the image passes through two stages: segmentation and classification. The segmentation is carried out by hybridizing two methods to produce effective results: the Quadtree method operating inside the Horizontal-Vertical method. The hybrid method segments the image into two rectangular blocks, either horizontally or vertically, depending on a spectral uniformity criterion.
In this paper, discriminant analysis is used to classify the most widespread heart disease, coronary heart disease, into two groups (patient, not patient) based on changes in the discriminating features of ten predictor variables believed to cause the disease. A random sample for each group is employed, and stepwise procedures are performed in order to delete those variables that are not important for separating the groups. Tests of significance of the discriminant analysis are carried out, and the misclassification rates are estimated.
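Fisher's linear discriminant for two groups can be sketched as follows. The ten clinical predictors are not available here, so the example uses four hypothetical features and simulated group data; the group means and sizes are assumptions.

```python
import numpy as np

def fisher_discriminant(X1, X2):
    """Fisher's linear discriminant for two groups: w = Sw^{-1}(m1 - m2),
    with the cutoff at the midpoint of the projected group means."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)  # pooled scatter
    w = np.linalg.solve(Sw, m1 - m2)
    c = 0.5 * (X1 @ w).mean() + 0.5 * (X2 @ w).mean()
    return w, c

rng = np.random.default_rng(2)
patients = rng.normal(1.0, 1.0, size=(80, 4))   # hypothetical "patient" group
healthy = rng.normal(-1.0, 1.0, size=(80, 4))   # hypothetical "not patient" group
w, c = fisher_discriminant(patients, healthy)

# Apparent (resubstitution) misclassification rate on the two groups.
errors = ((patients @ w) < c).sum() + ((healthy @ w) >= c).sum()
rate = errors / 160
print(rate)
```

The apparent rate computed this way is optimistic; cross-validation gives a less biased estimate of the misclassification rate.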
The intensity distribution of comet ISON (C/2013) is studied by taking its histogram. This distribution reveals four distinct regions, related to the background, tail, coma, and nucleus. A one-dimensional temperature-distribution fit is achieved using two mathematical equations related to the coordinates of the comet's center. The quiver plot of the comet's intensity gradient shows clearly that the arrows head towards the maximum intensity of the comet.
A new distribution, the Epsilon Skew Gamma (ESΓ) distribution, first introduced by Abdulah [1], is fitted to near-Gamma data. We first restate the ESΓ distribution, its properties, and its characteristics; we then estimate its parameters using the maximum likelihood and moment estimators. Finally, we use these estimators to fit the data with the ESΓ distribution.
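As a small illustration of the moment-estimation step, here is a method-of-moments fit for an ordinary Gamma sample; the ESΓ-specific likelihood and estimators from the paper are not reproduced, and the parameter values are illustrative.

```python
import numpy as np

# Moment estimators for a Gamma(shape k, scale theta) sample:
# mean = k*theta and variance = k*theta^2, so
#   theta_hat = var / mean,   k_hat = mean^2 / var.
rng = np.random.default_rng(3)
sample = rng.gamma(shape=2.0, scale=1.5, size=100_000)

mean, var = sample.mean(), sample.var()
theta_hat = var / mean
k_hat = mean ** 2 / var
print(round(k_hat, 2), round(theta_hat, 2))
```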
This research deals with a shrinkage method for principal components similar to the one used in multiple regression, the "Least Absolute Shrinkage and Selection Operator" (LASSO). The goal is to construct uncorrelated linear combinations from only a subset of the explanatory variables, which may suffer from a multicollinearity problem, instead of taking all (K) of them. The shrinkage forces some coefficients to equal exactly zero by placing a restriction on them through a "tuning parameter" (t), which balances the amounts of bias and variance on the one hand, and keeps the percentage of variance explained by these components acceptable on the other. This is shown by the MSE criterion in the regression case and the percentage of explained variance.
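The coefficient shrinkage that the tuning parameter (t) performs can be illustrated with the LASSO soft-thresholding operator; this is a generic sketch of the shrinkage mechanism, not the paper's exact component-selection procedure.

```python
import numpy as np

def soft_threshold(beta, t):
    """LASSO-style shrinkage: pull each coefficient toward zero by t,
    setting coefficients with |beta| <= t exactly to zero."""
    return np.sign(beta) * np.maximum(np.abs(beta) - t, 0.0)

coeffs = np.array([2.5, -0.3, 0.8, -1.7, 0.1])
shrunk = soft_threshold(coeffs, t=0.5)
print(shrunk)  # -> [ 2.   0.   0.3 -1.2  0. ]
```

Larger values of t zero out more coefficients, trading increased bias for reduced variance.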
In this study, we briefly review the ARIMA(p, d, q), EWMA, and DLM (dynamic linear modelling) procedures in order to accommodate the autocorrelation structure of the data. We consider recursive estimation and prediction algorithms based on Bayesian and Kalman filtering (KF) techniques for correlated observations. We investigate the effect of these procedures on the MSE and compare them using generated data.
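A scalar Kalman filter, the recursive building block mentioned above, can be sketched as follows; this is a textbook local-level model on generated data, and the noise variances q and r are illustrative assumptions.

```python
import numpy as np

def kalman_1d(observations, q=1e-4, r=0.25):
    """Scalar Kalman filter for a local-level model observed with noise:
    state x_t = x_{t-1} + w_t (var q), observation y_t = x_t + v_t (var r)."""
    x, p = observations[0], 1.0      # initial state estimate and variance
    estimates = [x]
    for y in observations[1:]:
        p = p + q                    # predict: variance grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (y - x)          # update with the new observation
        p = (1 - k) * p              # posterior variance
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(4)
truth = 5.0
ys = truth + rng.normal(0.0, 0.5, size=500)  # noisy observations of a constant
xs = kalman_1d(ys)
print(round(xs[-1], 2))
```

The recursion needs only the previous estimate and its variance, which is what makes it attractive for the on-line prediction of correlated observations.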
Abstract:
Research topic: the ruling on the sale of big data.
Objectives: a statement of what big data is, its importance, its sources, and its governance.
Methodology: inductive, comparative, and critical.
One of the most important results: big data is valuable property that may not be infringed upon, and selling it is permissible as long as it does not contain data of users who have not consented to its sale.
Recommendation: follow-up studies dealing with the rulings on this issue.
Subject Terms
Ruling, Sale, Data, Big Data, Opinions, Jurists
This research aims to shed light on the concept of corporate failure, to present and analyze the most distinctive models used to predict it, and to suggest a model for revealing the probability of corporate failure that includes internal and external, financial and non-financial indicators. The objectivity of the research and the weights of its indicators were tested by a number of academic and professional experts, in addition to financial analysts. A set of conclusions was reached, the most distinctive being that failure is not a sudden phenomenon for the company and its stakeholders; rather, it is an event that passes through numerous stages, each with its own symptoms.