Most medical datasets suffer from missing data, owing to the expense of some tests or to human error when recording them. This degrades the performance of machine learning models because the values of some features are unavailable, so dedicated methods are needed to impute the missing data. In this research, the salp swarm algorithm (SSA) is used to generate and impute the missing values in the Pima Indian Diabetes Dataset (PIDD); the proposed algorithm is called ISSA. The results showed that the classification performance of three different classifiers, support vector machine (SVM), K-nearest neighbour (KNN), and naïve Bayesian classifier (NBC), was enhanced compared with the dataset before the proposed method was applied. Moreover, the results indicated that ISSA performed better than statistical imputation techniques such as deleting the samples with missing values or replacing the missing values with zeros, the mean, or random values.
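As a rough illustration of how an SSA can drive imputation, the sketch below treats every missing cell as one decision variable and lets the swarm search for values that minimize a simple fitness proxy. The fitness function (distance of imputed rows to their nearest complete row), bounds, population size, and iteration count are illustrative assumptions, not the paper's exact ISSA setup.

```python
# Minimal SSA-based imputation sketch (fitness proxy and parameters are assumptions).
import numpy as np

def ssa_impute(X, n_salps=20, n_iter=100, seed=0):
    """Fill NaNs in X by searching candidate values with a Salp Swarm Algorithm."""
    rng = np.random.default_rng(seed)
    miss = np.isnan(X)
    complete = X[~miss.any(axis=1)]               # rows with no missing values
    lb, ub = np.nanmin(X, axis=0), np.nanmax(X, axis=0)
    dim = miss.sum()                              # one decision variable per missing cell

    def fitness(vals):
        Xf = X.copy()
        Xf[miss] = vals
        # Illustrative proxy: distance of each imputed row to its nearest complete row.
        d = ((Xf[miss.any(axis=1)][:, None, :] - complete[None, :, :]) ** 2).sum(-1)
        return d.min(axis=1).sum()

    # column-wise bounds repeated for every missing cell
    cell_lb = np.repeat(lb[None, :], X.shape[0], 0)[miss]
    cell_ub = np.repeat(ub[None, :], X.shape[0], 0)[miss]
    salps = rng.uniform(cell_lb, cell_ub, size=(n_salps, dim))
    food = salps[np.argmin([fitness(s) for s in salps])].copy()

    for t in range(n_iter):
        c1 = 2 * np.exp(-(4 * (t + 1) / n_iter) ** 2)      # standard SSA coefficient
        for i in range(n_salps):
            if i == 0:                                      # leader moves around the food source
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((cell_ub - cell_lb) * c2 + cell_lb)
                salps[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                                           # followers average with their predecessor
                salps[i] = (salps[i] + salps[i - 1]) / 2
            salps[i] = np.clip(salps[i], cell_lb, cell_ub)
            if fitness(salps[i]) < fitness(food):
                food = salps[i].copy()

    X_imputed = X.copy()
    X_imputed[miss] = food
    return X_imputed
```

In the paper's setting the fitness would instead be tied to downstream classifier performance on PIDD; the proxy above only keeps the sketch self-contained.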
People with diabetes can develop different foot problems. In the bloodstream, glucose reacts with hemoglobin to form a glycosylated hemoglobin molecule called hemoglobin A1c (HbA1c); the more glucose in the blood, the more HbA1c is present. The HbA1c test is currently one of the best ways to check whether diabetes is under control. The aim of this study is to compare blood investigations, namely fasting blood sugar and HbA1c (glycosylated hemoglobin), and to evaluate the benefit of HbA1c measurement for diabetic patients with foot ulcers as an indicator of blood glucose control. Sixty patients with type 2 diabetes mellitus from the outpatient clinic of Baghdad Teachin
This paper presents a hybrid genetic algorithm (hGA) for optimizing the maximum likelihood function ln L(phi1, theta1) of the mixed ARMA(1,1) model. The hGA couples two processes: the canonical genetic algorithm (cGA), composed of three main steps (selection, local recombination, and mutation), and a local search represented by the steepest descent algorithm (sDA), which is defined by three basic parameters: frequency, probability, and number of local search iterations. The experimental design is based on simulating the cGA, hGA, and sDA algorithms with different values of the model parameters and sample size (n). The study compares these algorithms on the basis of the MSE value. One can conc
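The sketch below shows one plausible shape for such a hybrid: a GA over (phi1, theta1) whose offspring are occasionally refined by a few steepest-ascent steps on a conditional log-likelihood. The conditional sum-of-squares likelihood, the arithmetic recombination, and the local-search settings are illustrative choices, not necessarily the paper's exact algorithm.

```python
# Hedged sketch of a hybrid GA for maximizing the ARMA(1,1) log-likelihood.
import numpy as np

def cond_loglik(params, y):
    """Conditional (sum-of-squares) log-likelihood of ARMA(1,1) with Gaussian errors."""
    phi, theta = params
    e = np.zeros_like(y)
    for t in range(1, len(y)):
        e[t] = y[t] - phi * y[t - 1] - theta * e[t - 1]
    s2 = np.mean(e[1:] ** 2)
    return -0.5 * (len(y) - 1) * (np.log(2 * np.pi * s2) + 1)

def hybrid_ga(y, pop_size=30, gens=60, p_local=0.2, local_steps=5, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-0.95, 0.95, size=(pop_size, 2))      # (phi, theta) inside the stable region

    def grad(p):                                             # numerical gradient for steepest ascent
        eps, g = 1e-5, np.zeros(2)
        for j in range(2):
            d = np.zeros(2); d[j] = eps
            g[j] = (cond_loglik(p + d, y) - cond_loglik(p - d, y)) / (2 * eps)
        return g

    for _ in range(gens):
        fit = np.array([cond_loglik(p, y) for p in pop])
        # tournament selection
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # local (arithmetic) recombination and Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * mates
        children += rng.normal(0, 0.02, children.shape)
        # steepest-ascent local search applied with probability p_local
        for i in range(pop_size):
            if rng.random() < p_local:
                for _ in range(local_steps):
                    children[i] = np.clip(children[i] + lr * grad(children[i]), -0.99, 0.99)
        pop = np.clip(children, -0.99, 0.99)

    return pop[np.argmax([cond_loglik(p, y) for p in pop])]   # estimated (phi, theta)
```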
Credit risk assessment has become an important topic in financial risk administration. Fuzzy clustering analysis has been applied to credit scoring, and the Gustafson-Kessel (GK) algorithm has been utilised to separate creditworthy customers from non-creditworthy ones. A good clustering analysis depends on well-chosen initial cluster centres. To overcome this problem in the Gustafson-Kessel (GK) algorithm, we proposed a modified version of the Kohonen Network (KN) algorithm to select the initial centres. By using the similarity degree between points to obtain a similarity density, and then selecting the maximum-density points, the modified Kohonen Network generates the initial clustering centres and yields more reasonable clustering res
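For orientation, the sketch below seeds initial centres at high-density points that are far apart and then runs standard Gustafson-Kessel iterations. The density radius and the seeding rule stand in for the modified Kohonen Network step described above and are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: density-based seeding of initial centres + Gustafson-Kessel clustering.
import numpy as np

def density_seeds(X, c, radius=None):
    """Pick c initial centres at high-density points that are mutually distant."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    radius = radius or np.median(D)
    density = (D < radius).sum(axis=1).astype(float)
    seeds = [int(np.argmax(density))]
    for _ in range(c - 1):
        # favour dense points that are far from the centres already chosen
        score = density * D[:, seeds].min(axis=1)
        seeds.append(int(np.argmax(score)))
    return X[seeds].copy()

def gustafson_kessel(X, c=2, m=2.0, n_iter=50, eps=1e-6):
    n, d = X.shape
    V = density_seeds(X, c)                         # initial centres
    U = np.full((c, n), 1.0 / c)                    # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        V = Um @ X / Um.sum(axis=1, keepdims=True)
        D2 = np.zeros((c, n))
        for i in range(c):
            diff = X - V[i]
            F = (Um[i, :, None] * diff).T @ diff / Um[i].sum()   # fuzzy covariance
            F += eps * np.eye(d)                                  # regularise
            A = (np.linalg.det(F) ** (1.0 / d)) * np.linalg.inv(F)
            D2[i] = np.einsum('nd,de,ne->n', diff, A, diff)       # Mahalanobis-type distances
        D2 = np.maximum(D2, 1e-12)
        U = 1.0 / (D2 ** (1 / (m - 1)) * (1.0 / D2 ** (1 / (m - 1))).sum(axis=0))
    return V, U
```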
The purpose of this paper is to apply different transportation models, in both their minimization and maximization forms, by finding a starting basic feasible solution and then the optimal solution. The requirements of transportation models were presented together with one of their applications to minimizing the objective function, carried out by the researcher on real data collected over one month in 2015 at one of the poultry farms for the production of eggs.
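One common way to obtain a starting basic feasible solution is the least-cost method, sketched below; the paper does not specify which rule it used, so this choice and the small balanced instance are purely illustrative.

```python
# Hedged sketch of a starting basic feasible solution via the least-cost method.
import numpy as np

def least_cost_bfs(cost, supply, demand):
    """Allocate shipments greedily at the cheapest remaining cell."""
    cost = cost.astype(float).copy()
    supply, demand = supply.astype(float), demand.astype(float)
    alloc = np.zeros_like(cost)
    while supply.sum() > 1e-9 and demand.sum() > 1e-9:
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        q = min(supply[i], demand[j])
        alloc[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] <= 1e-9:
            cost[i, :] = np.inf        # row exhausted
        if demand[j] <= 1e-9:
            cost[:, j] = np.inf        # column exhausted
    return alloc

# illustrative balanced instance (total supply == total demand); not the paper's data
c = np.array([[4, 8, 8], [16, 24, 16], [8, 16, 24]])
s = np.array([76, 82, 77])
d = np.array([72, 102, 61])
x0 = least_cost_bfs(c, s, d)
print("starting allocation:\n", x0, "\ncost =", (c * x0).sum())
```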
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have too little data to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Usually, every DL framework must be fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data would generate a better DL model, and its performance is also application dependent. This issue is the main barrier for
Nowadays, it is quite usual to transmit data through the internet, making safe online communication essential; transmitting data over internet channels requires maintaining its confidentiality and protecting its integrity from unauthorized individuals. The two most common techniques for supplying security are cryptography and steganography. Cryptography converts data from a readable format into an unreadable one. Steganography is the technique of hiding sensitive information in digital media, including images, audio, and video. In our proposed system, both encryption and hiding techniques will be utilized. This study presents encryption using the S-DES algorithm, which generates a new key in each cyc
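For reference, the sketch below shows the textbook S-DES key schedule (P10, circular left shifts, P8), which derives one subkey per round; how the proposed system re-derives a new key in each cycle is not detailed above, so only the standard schedule is shown.

```python
# Hedged sketch of the standard S-DES key schedule (not the paper's modified key generation).
def permute(bits, table):
    return [bits[i - 1] for i in table]            # permutation tables are 1-indexed

def left_shift(half, n):
    return half[n:] + half[:n]

P10 = [3, 5, 2, 7, 4, 10, 1, 9, 8, 6]
P8  = [6, 3, 7, 4, 8, 5, 10, 9]

def sdes_subkeys(key10):
    """key10: list of 10 bits (0/1). Returns the round subkeys K1 and K2 (8 bits each)."""
    bits = permute(key10, P10)
    left, right = bits[:5], bits[5:]
    left, right = left_shift(left, 1), left_shift(right, 1)
    k1 = permute(left + right, P8)
    left, right = left_shift(left, 2), left_shift(right, 2)
    k2 = permute(left + right, P8)
    return k1, k2

# classic textbook example key
k1, k2 = sdes_subkeys([1, 0, 1, 0, 0, 0, 0, 0, 1, 0])
print("K1 =", k1)   # [1, 0, 1, 0, 0, 1, 0, 0]
print("K2 =", k2)   # [0, 1, 0, 0, 0, 0, 1, 1]
```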
The emergence of COVID-19 has resulted in an unprecedented escalation in different aspects of human activities, including medical education. Students and educators across academic institutions have confronted various challenges in following the guidelines for protection against the disease on one hand and completing learning curricula on the other. In this short view, we present our experience in implementing e-learning for undergraduate nursing students during the present COVID-19 pandemic, emphasizing the learning content, barriers, and feedback of students and educators. We hope that this view will trigger the preparedness of nursing faculties in Iraq to deal with this new modality of learning and improve it should t
The gas-lift technique plays an important role in sustaining oil production, especially from a mature field whose reservoirs' natural energy has become insufficient. However, optimally allocating the gas injection rate across a large field's gas-lift network system to maximize the oil production rate is a challenging task. Conventional gas-lift optimization may become inefficient and incapable of modelling gas-lift optimization in a large network system with problems that are multi-objective and multi-constrained and have a limited gas injection rate. The key objective of this study is to assess the feasibility of utilizing the Genetic Algorithm (GA) technique to optimize t
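As a rough illustration of a GA for constrained gas-lift allocation, the sketch below maximizes total oil rate subject to a cap on total injected gas. The quadratic gas-lift performance curves, the repair rule, and the GA settings are purely illustrative assumptions; a field study would use measured or simulated well-performance data and the operator's actual constraints.

```python
# Hedged sketch of a GA allocating a limited gas-injection rate across wells.
import numpy as np

rng = np.random.default_rng(1)
n_wells, total_gas = 5, 6.0                        # assumed available gas (MMscf/d)
a = rng.uniform(0.8, 1.5, n_wells)                 # hypothetical performance-curve coefficients
b = rng.uniform(0.15, 0.35, n_wells)

def oil_rate(q_gas):
    """Illustrative concave gas-lift performance: oil = a*q - b*q^2 per well."""
    return np.sum(a * q_gas - b * q_gas ** 2, axis=-1)

def repair(pop):
    """Scale each candidate so its total injection never exceeds the gas limit."""
    totals = pop.sum(axis=1, keepdims=True)
    return np.where(totals > total_gas, pop * total_gas / totals, pop)

pop = repair(rng.uniform(0, 2 * total_gas / n_wells, (40, n_wells)))
for _ in range(200):
    fit = oil_rate(pop)
    # tournament selection
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = pop[np.where(fit[i] > fit[j], i, j)]
    # uniform crossover and Gaussian mutation, then constraint repair
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, parents[rng.permutation(len(pop))])
    children = np.clip(children + rng.normal(0, 0.05, pop.shape), 0, None)
    pop = repair(children)

best = pop[np.argmax(oil_rate(pop))]
print("allocation per well:", np.round(best, 2), "total oil:", round(float(oil_rate(best)), 2))
```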