Diabetes is an increasingly prevalent chronic disease, affecting millions of people worldwide. Diagnosis, prediction, proper treatment, and management of diabetes are essential. Machine learning-based prediction techniques for diabetes data analysis can support early detection of the disease and of its consequences, such as hypo- and hyperglycemia. In this paper, we explore a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron (MLP), k-nearest neighbors (KNN), and random forest. We conducted two experiments. The first used all 12 features of the dataset, where the random forest outperformed the others with 98.8% accuracy. The second used only five attributes for training; its results showed improved performance for KNN and the MLP, and a slight decrease for the random forest, to 97.5% accuracy.
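As a rough illustration of the two experiments, here is a minimal scikit-learn sketch; the file name, feature-column names, and split settings are assumptions, not details from the paper:

```python
# Sketch of the two experiments, assuming the dataset is a CSV with a
# "CLASS" label column; all column names here are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("iraqi_diabetes.csv")          # hypothetical file name
X_full = df.drop(columns=["CLASS"])             # all 12 features
y = df["CLASS"]

# Experiment 2 keeps only five attributes (an illustrative subset).
subset = ["HbA1c", "BMI", "Chol", "TG", "AGE"]  # assumed names
X_sub = df[subset]

for name, X in [("all 12 features", X_full), ("5 features", X_sub)]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)
    for clf in (RandomForestClassifier(random_state=42),
                KNeighborsClassifier(),
                MLPClassifier(max_iter=1000, random_state=42)):
        acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
        print(f"{name}: {type(clf).__name__} accuracy = {acc:.3f}")
```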
The study area, the Baghdad region and nearby areas, lies within the central part of the Mesopotamia Plain and covers about 5700 km². Remote sensing techniques were used to produce a Land Use-Land Cover (LULC) map of the Baghdad region and nearby areas based on a 2007 Landsat TM satellite image. The classification scheme developed by the USGS was followed, together with field checking in 2010. A digital LULC map was created using maximum likelihood (ML) classification of the TM image in ERDAS 9.2. The LULC raster image was then converted to vector structure using the ArcGIS 9.3 program in order to create a digital LULC map. This study showed that it is possible to produce a digital map of LULC and that it can be …
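The maximum likelihood rule underlying this classification can be sketched outside ERDAS as a per-pixel Gaussian classifier; the array shapes and class samples below are hypothetical, not the paper's workflow:

```python
# Per-pixel Gaussian maximum likelihood (ML) classification, the decision
# rule named in the abstract (not the ERDAS 9.2 implementation).
# `pixels` is an (n_pixels, n_bands) array of TM band values; `training`
# maps each LULC class name to an (n_samples, n_bands) sample array.
import numpy as np
from scipy.stats import multivariate_normal

def ml_classify(pixels, training):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = list(training)
    log_liks = []
    for c in classes:
        samples = training[c]
        mvn = multivariate_normal(mean=samples.mean(axis=0),
                                  cov=np.cov(samples, rowvar=False))
        log_liks.append(mvn.logpdf(pixels))
    return np.array(classes)[np.argmax(log_liks, axis=0)]
```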
Necessary and sufficient conditions for the operator equation AXAX^n = I to have a real positive definite solution X are given. Based on these conditions, some properties of the operator A, as well as relations between the solutions X and A, are given.
In this paper, a method to determine whether an image is forged (spliced) is presented. The proposed method is based on a classification model that determines the authenticity of a tested image. Image splicing introduces many sharp edges (high frequencies) and discontinuities into the spliced image. Capturing these high frequencies in the wavelet domain, rather than in the spatial domain, is investigated in this paper. The correlation between high-frequency sub-band coefficients of the Discrete Wavelet Transform (DWT) is described using a co-occurrence matrix, which serves as the input feature vector to a classifier. The best accuracies achieved were 92.79% and 94.56% on the CASIA v1.0 and CASIA v2.0 datasets, respectively. …
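A hedged sketch of that feature pipeline, using PyWavelets and scikit-image as stand-ins for whatever tooling the paper used (the wavelet choice, quantization levels, and classifier settings are assumptions):

```python
# DWT high-frequency sub-bands -> co-occurrence matrix -> feature vector
# for a classifier; an illustrative pipeline, not the paper's exact one.
import numpy as np
import pywt
from skimage.feature import graycomatrix
from sklearn.svm import SVC

def splice_features(gray_img):
    """Co-occurrence features of the DWT detail (high-frequency) sub-bands."""
    _, (cH, cV, cD) = pywt.dwt2(gray_img.astype(float), "db4")
    feats = []
    for band in (cH, cV, cD):
        # Quantize coefficients to 8 levels so the co-occurrence matrix stays small.
        edges = np.quantile(band, np.linspace(0, 1, 9)[1:-1])
        q = np.digitize(band, edges).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0],
                            levels=8, symmetric=True, normed=True)
        feats.append(glcm.ravel())
    return np.concatenate(feats)

# clf = SVC().fit([splice_features(im) for im in train_imgs], train_labels)
```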
This paper presents results about the existence of best approximations via nonexpansive-type maps defined on modular spaces.
Nowadays, people's expression on the Internet is no longer limited to text; with the rise of short video in particular, large amounts of multi-modal data such as text, pictures, audio, and video have emerged. Compared with single-modal data, multi-modal data carries far more information, and mining it can help computers better understand human emotional characteristics. However, because multi-modal data exhibits clear dynamic time-series features, the fusion process must resolve the dynamic correlations both within a single modality and between different modalities in the same application scene. To solve this problem, this paper proposes a feature extraction framework of …
The dependable and efficient identification of Qin seal script characters is pivotal to the discovery, preservation, and inheritance of the distinctive cultural values embodied by these artifacts. This paper uses histogram of oriented gradients (HOG) image features and an SVM model to build a character recognition model for identifying partial and blurred Qin seal script characters. The model achieves accurate recognition on a small, imbalanced dataset. First, a dataset of Qin seal script image samples is established, and Gaussian filtering is employed to remove image noise. Subsequently, a gamma transformation adjusts the image brightness and enhances the contrast between font structures and image backgrounds. After a …
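The preprocessing and feature steps named in this abstract can be sketched as follows; all parameter values (kernel size, gamma, image size, HOG cells) are assumptions rather than the paper's settings:

```python
# Gaussian denoising + gamma transform + HOG features + SVM, as an
# illustrative pipeline only; parameters are assumed, not the paper's.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def preprocess(img, gamma=0.8):
    """Gaussian filtering to remove noise, then a gamma transform for contrast."""
    img = cv2.GaussianBlur(img, (5, 5), 0)
    return np.uint8(255 * (img / 255.0) ** gamma)

def hog_features(img):
    """HOG descriptor of the preprocessed, resized character image."""
    return hog(cv2.resize(preprocess(img), (64, 64)),
               orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# For an imbalanced character set, class weighting is one common remedy:
# clf = SVC(class_weight="balanced").fit(X_train, y_train)
```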
Derivative spectrophotometry is one of the analytical chemistry techniques used in the analysis and determination of chemicals and pharmaceuticals. The method is characterized by simplicity, sensitivity, and speed. Derivatives of spectra can be obtained in several ways, including optical, electronic, and mathematical; the operation is usually performed within the spectrophotometer. This paper presents a new program written in Visual Basic within Microsoft Excel. The program can compute the first, second, third, and fourth derivatives of the data and return these derivatives to zero order (the normal plot). The program was applied to experimental (trial) and real values of …
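As an illustrative numerical analogue of what such a program computes, here is a Savitzky-Golay sketch of the first-to-fourth derivatives of a spectrum; the synthetic absorbance band and filter settings are assumptions, and the paper's VBA implementation may differ:

```python
# First- to fourth-order derivatives of an absorbance spectrum via
# Savitzky-Golay smoothing-differentiation (illustrative only).
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(400, 700, 301)                # assumed 1 nm grid
absorbance = np.exp(-((wavelengths - 550) / 30) ** 2)   # synthetic band

derivs = {order: savgol_filter(absorbance, window_length=15, polyorder=5,
                               deriv=order, delta=1.0)
          for order in (1, 2, 3, 4)}
```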
In this study, we compare the LASSO and SCAD methods, two penalization approaches for models in partial quantile regression. The Nadaraya-Watson kernel estimator was used to estimate the non-parametric part, and the rule-of-thumb method was used to estimate the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but SCAD was the best according to the mean squared error (MSE) criterion, after the missing data were estimated using mean imputation.
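The two nonparametric ingredients named above, the Nadaraya-Watson estimator and a rule-of-thumb bandwidth, can be sketched as follows (the penalized SCAD/LASSO fitting itself is model-specific and omitted):

```python
# Nadaraya-Watson kernel regression with a rule-of-thumb bandwidth;
# a generic sketch with a Gaussian kernel, not the study's exact code.
import numpy as np

def rule_of_thumb_h(x):
    """Silverman-style bandwidth: h = 1.06 * sigma * n^(-1/5)."""
    return 1.06 * x.std(ddof=1) * len(x) ** (-0.2)

def nadaraya_watson(x0, x, y, h):
    """m_hat(x0) = sum_i K((x0 - x_i)/h) y_i / sum_i K((x0 - x_i)/h)."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)
```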
The huge evolution of information technologies, especially in the last few decades, has produced an increase in the volume of data on the World Wide Web, which is still growing significantly. Retrieving the relevant information from the Internet, or from any data source, with a query of only a few words has become a big challenge. To overcome this, query expansion (QE) plays an important role in improving information retrieval (IR): the user's original query is recreated as a new query by appending new related terms of the same importance. One of the problems of query expansion is choosing suitable terms; this leads to the further challenge of retrieving the important documents with high precision and high recall …
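One standard way to choose expansion terms, shown here only as a generic illustration and not as this paper's method, is pseudo-relevance feedback: take the highest-weight terms from the top-ranked documents and append them to the query.

```python
# Pseudo-relevance feedback query expansion: rank documents by TF-IDF
# cosine similarity, then append the top terms of the best documents.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_query(query, docs, top_docs=3, new_terms=5):
    vec = TfidfVectorizer(stop_words="english")
    D = vec.fit_transform(docs)
    q = vec.transform([query])
    best = np.argsort(-cosine_similarity(q, D).ravel())[:top_docs]
    weights = np.asarray(D[best].sum(axis=0)).ravel()
    terms = np.array(vec.get_feature_names_out())[np.argsort(-weights)]
    extra = [t for t in terms if t not in query.split()][:new_terms]
    return query + " " + " ".join(extra)
```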