Most medical datasets suffer from missing data, whether because some tests are expensive or because of human error while recording the results. This issue degrades the performance of machine learning models because the values of some features are missing, so dedicated methods are needed to impute these missing data. In this research, the salp swarm algorithm (SSA) is used to generate and impute the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The results show that the classification performance of three different classifiers, support vector machine (SVM), K-nearest neighbour (KNN), and naïve Bayesian classifier (NBC), is enhanced compared with the dataset before applying the proposed method. Moreover, the results indicate that ISSA performs better than statistical imputation techniques such as deleting the samples with missing values or replacing the missing values with zeros, the mean, or random values.
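As an illustration of the statistical baselines the paper compares against (not the ISSA method itself), a minimal scikit-learn sketch of evaluating simple imputation strategies on the PIDD data with the same three classifiers is given below; the file name and column names are assumptions based on the commonly distributed version of the dataset.

```python
# Hedged sketch: evaluating simple imputation baselines on the Pima Indians
# Diabetes dataset (PIDD). This is NOT the paper's ISSA method, only the
# statistical baselines it is compared against. File name and column names
# are assumptions based on the commonly distributed CSV.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("pima_diabetes.csv")  # assumed file name

# In PIDD, physiologically impossible zeros in these columns usually encode
# missing values, so they are converted to NaN before imputation.
zero_means_missing = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]
df[zero_means_missing] = df[zero_means_missing].replace(0, np.nan)

X = df.drop(columns="Outcome").values
y = df["Outcome"].values

classifiers = {"SVM": SVC(), "KNN": KNeighborsClassifier(), "NBC": GaussianNB()}
imputers = {"mean": SimpleImputer(strategy="mean"),
            "zero": SimpleImputer(strategy="constant", fill_value=0.0)}

for imp_name, imputer in imputers.items():
    for clf_name, clf in classifiers.items():
        pipe = make_pipeline(imputer, StandardScaler(), clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{imp_name:>4} imputation + {clf_name}: accuracy = {acc:.3f}")
```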
Several recent approaches have focused on developing traditional costing systems to measure costs in a way that meets new environmental requirements, including Attributes Based Costing (ABCII). This accounting method measures costs according to the attributes around which the product is designed and according to the achievement level of each of the product's attributes. This research presents the conceptual foundations of this approach and its role in market orientation compared with activity-based costing, together with the steps to be followed to apply it. The research problem lies in the attempt to reach the most accurate approach for measuring the cost of products from th
The behavior and shear strength of full-scale T-section reinforced concrete deep beams with various large web openings, designed according to the strut-and-tie approach of the ACI 318-19 code, were investigated in this paper. A total of 7 deep beam specimens with identical shear span-to-depth ratios were tested under a mid-span concentrated load applied monotonically until failure. The main variables studied were the effects of the width and depth of the web openings on deep beam performance. The experimental results were calibrated against the strut-and-tie approach adopted by the ACI 318-19 code for the design of deep beams. The strut-and-tie design model provided in the ACI 318-19 code was assessed and found to be u
The research aims to determine the impact of a training program, based on integrating future thinking skills and classroom interaction patterns, for mathematics teachers on providing their students with creative solution skills. To achieve this aim, the following hypothesis was formulated: there is no statistically significant difference at the (0.05) level between the mean scores, on a pre-post creative solution skills test, of students whose mathematics teachers were trained according to the proposed training program (the experimental group) and those whose teachers were not (the control group). The research sample consisted of (31) teachers, and schools were distribut
There are many methods of searching a large amount of data to find one particular piece of information, such as finding a person's name in a mobile phone's contact records. Certain ways of organizing data make the search process more efficient; the objective of these methods is to find the element at the least cost (least time). The binary search algorithm is faster than sequential search and other commonly used search algorithms. This research develops the binary search algorithm using a new structure called Triple, in which data are represented as triples; each triple consists of three locations (1-Top, 2-Left, and 3-Right). Binary search divides the search interval in half, and this process makes the maximum number of comparisons (average-case com
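A minimal sketch of the classical binary search that the paper takes as its starting point is given below; the proposed Triple structure (Top, Left, Right) is only partially described in the excerpt, so it is not reproduced here.

```python
# Hedged sketch: classical binary search over a sorted list, the baseline the
# paper improves on. The proposed Triple structure (Top, Left, Right) is not
# fully specified in the excerpt, so only the standard algorithm is shown.
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2          # halve the search interval
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1                # discard the left half
        else:
            high = mid - 1               # discard the right half
    return -1

# Example: the worst case needs about log2(n) comparisons.
print(binary_search([2, 3, 5, 7, 11, 13, 17], 11))   # -> 4
```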
An easy, simple, and accurate method for determining ciprofloxacin in the presence of cephalexin, or vice versa, in a mixture of the two. The proposed method was applied successfully, using the point standard addition method, to determine ciprofloxacin with cephalexin as an interferent at wavelengths of 240-272.3 nm and at different ciprofloxacin concentrations of 4-18 µg·mL-1, and likewise to determine cephalexin in the presence of the interfering ciprofloxacin at wavelengths of 262-285.7 nm and at different conc
This research studies the linear regression model with the problem of autocorrelated random errors that follow a normal distribution, as used in linear regression analysis of the relationship between variables; through this relationship, the value of one variable can be predicted from the values of other variables. Four methods were compared (the least squares method, the un-weighted average method, the Theil method, and the Laplace method) using the mean squared error (MSE) and simulation, and the study included four sample sizes (15, 30, 60, 100). The results showed that the least squares method is best. The four methods were then applied to data on buckwheat production and cultivated area for the provinces of Iraq for the years (2010), (2011), (2012),
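A minimal simulation sketch in the spirit of this comparison is given below; it contrasts only ordinary least squares with scikit-learn's Theil-Sen estimator (the un-weighted average and Laplace methods are not reproduced), and the true coefficients, autocorrelation level, and replication count are assumptions rather than the paper's settings.

```python
# Hedged simulation sketch: comparing OLS with the Theil-Sen estimator by the
# mean squared error (MSE) of the estimated slope when regression errors
# follow an AR(1) process. The sample sizes mirror the study; the error
# model, true coefficients, and replication count are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, TheilSenRegressor

rng = np.random.default_rng(0)
true_slope, rho, n_rep = 2.0, 0.7, 200

def ar1_errors(n, rho, rng):
    """Autocorrelated errors e_t = rho * e_{t-1} + u_t with normal u_t."""
    u = rng.normal(size=n)
    e = np.empty(n)
    e[0] = u[0]
    for t in range(1, n):
        e[t] = rho * e[t - 1] + u[t]
    return e

for n in (15, 30, 60, 100):
    sq_err = {"OLS": [], "Theil-Sen": []}
    for _ in range(n_rep):
        x = rng.uniform(0, 10, size=n)
        y = 1.0 + true_slope * x + ar1_errors(n, rho, rng)
        X = x.reshape(-1, 1)
        sq_err["OLS"].append((LinearRegression().fit(X, y).coef_[0] - true_slope) ** 2)
        sq_err["Theil-Sen"].append((TheilSenRegressor().fit(X, y).coef_[0] - true_slope) ** 2)
    mse = {name: round(float(np.mean(v)), 4) for name, v in sq_err.items()}
    print(f"n = {n:3d}  MSE: {mse}")
```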
The logistic regression model is one of the oldest and most common regression models. It is a statistical method used to describe and estimate the relationship between a dependent random variable and explanatory random variables. Several methods are used to estimate this model, including the bootstrap method, an estimation method based on the principle of sampling with replacement: a resample of (n) elements is drawn randomly, with replacement, from the (N) original data points. It is a computational method used to determine a measure of accuracy for statistical estimates, and for this reason it was used here to find more accurate estimates. The ma
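A minimal sketch of bootstrap resampling for a logistic regression model is given below; the data are simulated and the number of replicates is an assumption, so it illustrates the resampling-with-replacement principle rather than the paper's application.

```python
# Hedged sketch: bootstrap estimation for a logistic regression model.
# Each replicate draws n observations with replacement from the original
# data, refits the model, and stores the coefficients; their spread gives a
# measure of accuracy for the estimates. Data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, B = 200, 500

# Simulated data: one explanatory variable, true intercept -1.0 and slope 1.5.
x = rng.normal(size=(n, 1))
p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * x[:, 0])))
y = rng.binomial(1, p)

boot_coefs = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)          # sample with replacement
    model = LogisticRegression().fit(x[idx], y[idx])
    boot_coefs.append([model.intercept_[0], model.coef_[0, 0]])

boot_coefs = np.asarray(boot_coefs)
print("bootstrap means:", boot_coefs.mean(axis=0))
print("bootstrap standard errors:", boot_coefs.std(axis=0, ddof=1))
```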
Haplotype association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, instead of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. It starts by inferring haplotypes from genotypes, followed by haplotype co-classification and marginal screening for disease-associated haplotypes. Unfortunately, phasing uncertainty may have a strong effect on the haplotype co-classification and therefore on the accuracy of predicting risk haplotypes. Here, to address the issue, we propose an alternative approach: in Stage 1, we select potential risk genotypes inste
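A minimal sketch of the marginal screening step alone is given below; it assumes haplotype carrier counts are already available (phasing is not shown), uses purely illustrative numbers, and does not reproduce the authors' genotype-based two-stage procedure.

```python
# Hedged sketch: marginal screening for disease-associated haplotypes.
# Assumes haplotypes have already been inferred (phasing is not shown) and
# simply tests each haplotype's carrier counts in cases vs. controls with a
# chi-square test. The counts below are illustrative only.
from scipy.stats import chi2_contingency

# haplotype -> (carriers, non-carriers) among cases and among controls
counts = {
    "h1": ((120, 380), (80, 420)),
    "h2": ((60, 440), (55, 445)),
}

for hap, (cases, controls) in counts.items():
    chi2, p, dof, expected = chi2_contingency([list(cases), list(controls)])
    print(f"{hap}: chi2 = {chi2:.2f}, p = {p:.4f}")
```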