Diabetes is a chronic disease of increasing prevalence, affecting millions of people around the world. Diagnosis, prediction, proper treatment, and management of diabetes are essential. Machine learning-based prediction techniques for diabetes data analysis can help in the early detection of the disease and its consequences, such as hypo/hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron (MLP), k-nearest neighbors (KNN), and the random forest. We conducted two experiments: the first used all 12 features of the dataset, with the random forest outperforming the others at 98.8% accuracy; the second used only five attributes for training. The results of the second experiment showed improved performance for the KNN and the MLP, and a slight decrease for the random forest, to 97.5% accuracy.
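The classifier comparison described in this abstract can be sketched as follows. This is an illustrative reconstruction only: it uses a stand-in scikit-learn dataset rather than the Iraqi patient records, and default hyperparameters the authors may not have used.

```python
# Hypothetical sketch of the two-experiment comparison: three classifiers
# evaluated by held-out accuracy. The dataset here is a scikit-learn
# stand-in, NOT the diabetes dataset from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {acc:.3f}")
```

Repeating the loop on a reduced feature subset (e.g. `X[:, :5]`) reproduces the structure of the paper's second experiment.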
A new human-based heuristic optimization method, named the Snooker-Based Optimization Algorithm (SBOA), is introduced in this study. The inspiration for this method is drawn from the traits of sales elites—those qualities every salesperson aspires to possess. Typically, salespersons strive to enhance their skills through autonomous learning or by seeking guidance from others. Furthermore, they engage in regular communication with customers to gain approval for their products or services. Building upon this concept, SBOA aims to find the optimal solution within a given search space, traversing all positions to obtain all possible values. To assess the feasibility and effectiveness of SBOA in comparison to other algorithms, we conducte
In this research, a comparison is made between the robust M-estimators for the cubic smoothing splines technique, used to avoid the problem of non-normal or contaminated errors, and the traditional estimation method for cubic smoothing splines, using two comparison criteria (MADE, WASE) for different sample sizes and disparity levels, to estimate the time-varying coefficient functions for balanced longitudinal data. Such data are characterized by observations obtained from (n) independent subjects, each measured repeatedly at a set of specific time points (m), since the repeated measurements within subjects are almost correlated an
This paper discusses estimating the two scale parameters of the Exponential-Rayleigh distribution for singly Type-I censored data, one of the most important forms of right-censored data, using the maximum likelihood estimation method (MLEM), one of the most popular and widely used classical methods, based on an iterative procedure such as Newton-Raphson to find estimates of these two scale parameters. Real COVID-19 data were taken from the Iraqi Ministry of Health and Environment, AL-Karkh General Hospital. The study covered the interval 4/5/2020 to 31/8/2020, equivalent to 120 days, with a sample size of (n=785) patients who entered the study hospital. The number o
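The Newton-Raphson iteration mentioned in this abstract can be illustrated on a simpler censored-data model. The sketch below fits a one-parameter exponential rate under Type-I right censoring; the paper's two-parameter Exponential-Rayleigh case follows the same scheme with a 2-D score vector and Hessian matrix. The rate and cutoff are made-up values, with only the sample size n=785 echoing the abstract.

```python
# Hedged illustration: Newton-Raphson MLE for an exponential rate with
# Type-I right censoring. Synthetic data, not the paper's COVID-19 records.
import numpy as np

rng = np.random.default_rng(0)
true_rate, cutoff = 0.05, 30.0            # assumed rate and fixed censoring time
t = rng.exponential(1 / true_rate, 785)   # n = 785, matching the abstract's sample size
observed = t <= cutoff
times = np.where(observed, t, cutoff)     # censored subjects recorded at the cutoff

d, T = observed.sum(), times.sum()        # number of failures, total time at risk
lam = 1.0 / times.mean()                  # moment-based starting value
for _ in range(50):
    score = d / lam - T                   # dl/d(lambda)
    hess = -d / lam**2                    # d2l/d(lambda)2
    step = score / hess
    lam -= step                           # Newton-Raphson update
    if abs(step) < 1e-12:
        break

print(lam, d / T)  # the iteration converges to the closed-form MLE d/T
```

For this toy model the MLE has the closed form d/T, which makes the convergence of the iteration easy to verify; the Exponential-Rayleigh likelihood has no such closed form, which is why the iterative procedure is needed.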
This paper presents a method to organize memory chips when they are used to build memory systems with a word size wider than 8 bits; most memory chips have an 8-bit word size. When the memory system has to be built from several memory chips of various sizes, this method gives all possible organizations of these chips in the memory system. This paper also suggests a precise definition of the term “memory bank”, which is commonly used in memory systems. Finally, an illustrative design problem demonstrates the presented method in practice.
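The kind of chip-organization calculation this paper systematizes can be sketched as below. The function name and the simple width-by-depth decomposition are illustrative assumptions, not the paper's formal method or its definition of a memory bank.

```python
# Illustrative sketch: how many 8-bit chips are needed to build a memory
# system with a wider word? Chips placed side by side widen the word
# (one "bank"); banks stacked in address space deepen the memory.
def organize(system_words: int, system_word_bits: int,
             chip_words: int, chip_word_bits: int = 8):
    assert system_word_bits % chip_word_bits == 0
    chips_per_bank = system_word_bits // chip_word_bits
    banks = -(-system_words // chip_words)          # ceiling division
    return {"chips_per_bank": chips_per_bank,
            "banks": banks,
            "total_chips": chips_per_bank * banks}

# e.g. a 64K x 32-bit system built from 16K x 8-bit chips:
print(organize(64 * 1024, 32, 16 * 1024))
# -> 4 chips per bank, 4 banks, 16 chips in total
```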
This research is a theoretical study presenting the literature of statistical analysis from the perspective of gender, or what is called engendering statistics. The researcher relied on a number of UN reports as well as some foreign sources to conduct the current study. Gender statistics are defined as statistics that reflect the differences and inequality in the status of women and men across all domains of life. Their importance stems from the fact that they are an important tool in promoting equality, as a necessity for sustainable development and for the formulation of effective national development policies and programs. The empowerment of women and the achievement of equality between men and wome
Efficient sequencing techniques have significantly increased the number of genomes that are now available, including the genome of the crenarchaeon Sulfolobus solfataricus P2. The genome-scale metabolic pathways in Sulfolobus solfataricus P2 were predicted by running the “Pathway Tools” software with the MetaCyc database as the reference knowledge base. A Pathway/Genome Database (PGDB) specific to Sulfolobus solfataricus P2 was created. A curation approach was carried out for all the amino acid biosynthetic pathways. Experimental literature as well as homology-, orthology- and context-based protein function prediction methods were followed for the curation process. The “PathoLogic”
Background/Objectives: The current research aims at a modified image representation framework for Content-Based Image Retrieval (CBIR) based on a gray-scale input image, Zernike moments (ZMs) properties, local binary patterns (LBP), the Y color space, the slantlet transform (SLT), and the discrete wavelet transform (DWT). Methods/Statistical analysis: This study surveyed and analysed three standard datasets: WANG V1.0, WANG V2.0, and Caltech 101, the last containing images of objects belonging to 101 classes, with approximately 40-800 images per category. The suggested infrastructure seeks to present a description and operationalization of the CBIR system through an automated attribute extraction system premised on CN
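One of the descriptors listed in this abstract, the local binary pattern, can be sketched in a few lines of NumPy. This is a minimal 3x3 LBP on a synthetic image, not the exact variant or parameters used in the study.

```python
# Minimal local binary pattern (LBP): compare each interior pixel with its
# 8 neighbours and pack the comparison bits into one code per pixel.
import numpy as np

def lbp_3x3(img):
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],   img[2:, 1:-1],
                  img[2:, :-2],  img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint16)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(np.uint16) << bit
    return codes

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
codes = lbp_3x3(img)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # LBP feature vector
```

The 256-bin histogram of the codes is the texture feature vector that a CBIR system would combine with the other descriptors mentioned above.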
Deepfake is a type of artificial-intelligence technique used to create convincing image, audio, and video hoaxes; it concerns celebrities and everyone else because deepfakes are easy to manufacture. Deepfakes are hard for people and current approaches to recognize, especially high-quality ones. As a defense against deepfake techniques, various methods to detect deepfakes in images have been suggested. Most of them have limitations, such as only working with one face in an image, or requiring the face to be facing forward with both eyes and the mouth open, depending on which part of the face they work on. Moreover, few focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this asp
The deployment of UAVs is one of the key challenges in UAV-based communications for IoT applications. In this article, a new scheme for energy-efficient data collection with a deadline time for the Internet of Things (IoT) using Unmanned Aerial Vehicles (UAVs) is presented. We provide a new data collection method designed to collect IoT node data through efficient deployment and mobility of multiple UAVs, used to collect data from ground IoT devices within a given deadline. In the proposed method, data collection is done with minimum energy consumption of the IoT nodes as well as the UAVs. In order to find an optimal solution to this problem, we first provide a mixed integer linear programming m
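A toy version of the kind of MILP formulation mentioned in this abstract can be sketched with SciPy's `milp` solver (SciPy >= 1.9). The cost matrix, the per-UAV capacity of 3 nodes, and the binary node-to-UAV assignment variables are all illustrative assumptions, not the paper's model.

```python
# Toy MILP: assign each IoT node to exactly one UAV so that total collection
# energy is minimized, with a per-UAV capacity standing in for the deadline.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

E = np.array([[2., 5.],            # made-up energy cost: node i served by UAV j
              [4., 1.],
              [3., 3.],
              [6., 2.]])
n, m = E.shape
c = E.ravel()                      # binary variables x[i, j], flattened row-major

# each node is collected by exactly one UAV
assign = LinearConstraint(np.kron(np.eye(n), np.ones((1, m))), 1, 1)
# each UAV serves at most 3 nodes before the deadline (stand-in constraint)
capacity = LinearConstraint(np.tile(np.eye(m), n), 0, 3)

res = milp(c, constraints=[assign, capacity],
           integrality=np.ones(n * m), bounds=Bounds(0, 1))
print(res.fun)  # minimum total collection energy (8.0 for this cost matrix)
```

A full model would add UAV trajectory and deadline variables, but the assignment core above shows why integer programming fits the problem.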
Abstract
The research compared two methods for estimating the four parameters of the compound exponential Weibull-Poisson distribution: the maximum likelihood method and the Downhill Simplex algorithm. Two data cases were considered: the first assumed the original (uncontaminated) data, while the second assumed data contamination. Simulation experiments were conducted for different sample sizes, initial parameter values, and levels of contamination. The Downhill Simplex algorithm was found to be the best method for estimating the parameters, the probability function, and the reliability function of the compound distribution in cases of both natural and contaminated data.
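The Downhill Simplex (Nelder-Mead) likelihood estimation described above can be sketched as follows; for brevity a plain two-parameter Weibull stands in for the four-parameter compound exponential Weibull-Poisson distribution, and the data are synthetic.

```python
# Sketch: maximum likelihood estimation with the Downhill Simplex
# (Nelder-Mead) algorithm minimizing the negative log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
data = weibull_min.rvs(c=1.5, scale=2.0, size=500, random_state=rng)

def neg_log_lik(theta):
    shape, scale = theta
    if shape <= 0 or scale <= 0:
        return np.inf                      # keep the simplex in the valid region
    return -weibull_min.logpdf(data, c=shape, scale=scale).sum()

res = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)  # estimates close to the true (1.5, 2.0)
```

Because Nelder-Mead needs no derivatives, it tolerates the flat or ill-behaved likelihood surfaces that contaminated data can produce, which is consistent with the abstract's finding in its favor.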