Data mining is a data analysis process that uses software to find patterns or rules in large amounts of data, with the aim of providing knowledge to support decisions. However, missing values in data mining often lead to a loss of information. The purpose of this study is to improve the performance of data classification with missing values, precisely and accurately. Testing was carried out on the Car Evaluation dataset from the UCI Machine Learning Repository, using RStudio and RapidMiner. The study analyzes the tested parameters to measure the performance of the algorithms across four test variations: C5.0, C4.5, and k-NN at a 0% missing rate; C5.0, C4.5, and k-NN at 5–50% missing rates; the C5.0 + k-NNI, C4.5 + k-NNI, and k-NN + k-NNI classifiers at 5–50% missing rates; and the C5.0 + CMI, C4.5 + CMI, and k-NN + CMI classifiers at 5–50% missing rates. The results show that C5.0 with k-NNI produces better classification accuracy than the other tested imputation and classification algorithms. For example, with 35% of the dataset missing, this method obtains 93.40% validation accuracy and 92% test accuracy. C5.0 with k-NNI also offers fast processing times compared with the other methods.
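A minimal sketch of the imputation-then-classification pipeline evaluated above, written in Python rather than the RStudio/RapidMiner setup used in the study: scikit-learn's KNNImputer stands in for k-NNI, a generic DecisionTreeClassifier stands in for C5.0 (which scikit-learn does not provide), and "car.data" is assumed to be a local copy of the UCI Car Evaluation file.

```python
# Sketch only: assumed stand-ins for the tools named in the abstract.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Car Evaluation attributes (file assumed to be downloaded locally).
cols = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]
df = pd.read_csv("car.data", names=cols)

# Encode the categorical attributes as integers so distances can be computed.
X = OrdinalEncoder().fit_transform(df[cols[:-1]])
y = df["class"]

# Simulate a 35% missing rate by blanking cells completely at random.
rng = np.random.default_rng(0)
X_missing = np.where(rng.random(X.shape) < 0.35, np.nan, X)

# k-NN imputation (k-NNI): each missing cell is filled from the k nearest rows.
X_imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)

# Classify the imputed data and report held-out accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X_imputed, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("test accuracy: %.4f" % accuracy_score(y_te, clf.predict(X_te)))
```

The same pattern covers the other combinations in the study: swap the imputer or the classifier and re-measure accuracy at each missing rate.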
Purpose – To test the effect of strategic supremacy on strategic success: a case study in Thi Qar Governorate.
Design/methodology/approach – A case study was used and applied to the department managers of the government of Thi Qar province.
Research limitations/implications – It is clear that the strategic supremacy variable is not being used effectively to achieve strategic success.
Practical implications – Use strategic supremacy positively to support strategic success. Implementing and monitoring ignorance of them in how to use thi
A 3D velocity model was created using the stacking velocities of 9 seismic lines and the average velocities of 6 wells drilled in Iraq. The model was achieved by creating a time model for 25 surfaces, with an interval time between each two successive surfaces of about 100 msec. The total time of all surfaces reached about 2400 msec, adopted according to the West Kifl-1 well, which penetrated to a depth of 6000 m and is the deepest well in the study area. The seismic line and well data were converted to build a 3D time cube model, and the velocity was distributed over the model. Seismic inversion modeling of the elastic properties of the horizon and well data was applied to achieve a corrected veloci
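For reference only (the abstract does not state which conversion was applied): stacking (RMS) velocities are commonly related to interval velocities through the Dix equation before such a velocity cube is built,

\[
V_{\mathrm{int}}^{2} = \frac{V_{\mathrm{rms},2}^{2}\,t_{2} - V_{\mathrm{rms},1}^{2}\,t_{1}}{t_{2} - t_{1}},
\]

where \(V_{\mathrm{rms},1}\) and \(V_{\mathrm{rms},2}\) are the stacking velocities to the upper and lower reflectors and \(t_{1}\), \(t_{2}\) the corresponding two-way travel times.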
This study examines the causes of time delays and cost overruns in a selection of thirty post-disaster reconstruction projects in Iraq. Although delay factors have been studied in many countries and contexts, little data exists from countries under the conditions characterizing Iraq during the last 10–15 years. A case study approach was used, with thirty construction projects of different types and sizes selected from the Baghdad region. Project data was gathered from a survey, which was used to build statistical relationships between time and cost delay ratios and delay factors in post-disaster projects. The most important delay factors identified were contractor failure, redesign of plans and change orders, security is
The depth conversion process is a significant task in seismic interpretation, establishing the link between seismic data in the time domain and drilled wells in the depth domain. To promote the exploration and development of the Subba oilfield, a more accurate depth conversion is required. In this paper, three depth-conversion approaches (Models 1, 2, and 3) are applied, from the simplest to the most complex, to the Nahr Umr reservoir in the Subba oilfield. The aim is to obtain the best approach, giving the smallest errors against the actual depths at well locations and good inter-/extrapolation between or away from well control. The results of these approaches, together with the uncertainty analysis, provide a reliable velocity model
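As a point of reference (the abstract does not spell out Models 1–3), the simplest depth-conversion approach maps two-way time to depth with an average velocity,

\[
Z = \frac{V_{\mathrm{avg}} \cdot \mathrm{TWT}}{2},
\]

where TWT is the two-way travel time and the factor 2 accounts for the down-and-up travel path; more complex approaches typically replace \(V_{\mathrm{avg}}\) with a velocity that varies laterally and with depth.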
The dramatic decrease in the cost of genome sequencing over the last two decades has led to an abundance of genomic data. These data have been used in research related to the discovery of genetic diseases and the production of medicines. At the same time, the large storage requirement of a genome (2–3 GB) has led to it being considered one of the most important sources of big data, which has prompted research centers concerned with genetic research to take advantage of the cloud and its services for storing and managing these data. The cloud is a shared storage environment, which makes data stored in it vulnerable to unwanted tampering or disclosure. This leads to serious concerns about securing such data from tampering and unauthoriz
Objectives: The purpose of the study is to ascertain the relationship between the training program and the socio-demographic characteristics of patients with peptic ulcers, in order to assess the effectiveness of the program on patients' nutritional habits.
Methodology: A quasi-experimental study was conducted at the Center of Gastrointestinal Medicine and Surgery at Al-Diwanyiah Teaching Hospital between January 17 and October 30, 2022. A non-probability sample of 30 patients for the case group and 30 patients for the control group was selected based on the study's criteria. The study instrument was divided into four sections: the first contained 7 questions about demographic information, the second sect
In this paper, three approximate methods, namely the Bernoulli, Bernstein, and shifted Legendre polynomial operational matrices, are presented to solve two important nonlinear ordinary differential equations that appear in engineering and applied science. The Riccati and the Darcy-Brinkman-Forchheimer moment equations are solved and approximate solutions are obtained. The methods work by converting the nonlinear differential equations into a nonlinear system of algebraic equations that is solved using Mathematica® 12. The efficiency of these methods was investigated by calculating the root mean square error (RMS) and the maximum error remainder (MER_n), and it was found that the accuracy increases with increasi
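For orientation (the specific forms solved in the paper are not given in the abstract), the Riccati equation in its general form and the root-mean-square error measure typically used to assess such approximations are

\[
y'(x) = q_0(x) + q_1(x)\,y(x) + q_2(x)\,y^{2}(x),
\qquad
\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(y_{\mathrm{approx}}(x_i) - y_{\mathrm{exact}}(x_i)\bigr)^{2}},
\]

and the maximum error remainder MER_n is usually taken as the maximum absolute residual obtained when the n-term approximation is substituted back into the differential equation.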
In data transmission, a change in a single bit of the received data may lead to misunderstanding or to a disaster. Every bit in the transmitted information has high priority, especially bits carrying information such as the address of the receiver. Detecting every single-bit change is therefore a key issue in the data transmission field.
The ordinary single-parity detection method can detect an odd number of errors efficiently, but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, showed better results but still failed to cope with an increasing number of errors.
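A small Python sketch (not from the paper) illustrates why single parity behaves this way: flipping an even number of bits leaves the word's parity unchanged, so the check passes even though the data is corrupted.

```python
# Single parity check: catches an odd number of bit errors, misses an even number.
def parity_bit(bits):
    """Even-parity bit: 1 if the number of set data bits is odd, else 0."""
    return sum(bits) % 2

def detect_error(bits, received_parity):
    """Return True if the received word fails the parity check."""
    return parity_bit(bits) != received_parity

data = [1, 0, 1, 1, 0, 0, 1, 0]
p = parity_bit(data)

one_error = data.copy()
one_error[2] ^= 1                      # flip 1 bit (odd number of errors)

two_errors = data.copy()
two_errors[2] ^= 1
two_errors[5] ^= 1                     # flip 2 bits (even number of errors)

print(detect_error(one_error, p))      # True  -> odd number of errors is caught
print(detect_error(two_errors, p))     # False -> even number of errors slips through
```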
Two novel methods were suggested to detect binary bit-change errors when transmitting data over a noisy medium. Those methods were: 2D-Checksum me