Heart disease is a significant health condition and ranks as the leading cause of death in many countries. Clinical datasets are available to aid physicians in diagnosing cardiovascular disease. However, with the rise of big data, medical datasets have grown to contain many irrelevant and redundant features that increase computational cost and degrade predictive accuracy. This study therefore aims to identify the most discriminative features within high-dimensional datasets, minimizing complexity and improving accuracy through an Extra Tree based feature selection technique. The study assesses the efficacy of several classification algorithms on four reputable datasets, using both the full feature set and the reduced feature subset selected by the proposed method. The results show that the feature selection technique achieves outstanding classification accuracy, precision, and recall, reaching 97% accuracy when paired with the Extra Tree classifier. The research demonstrates the potential of the feature selection method to improve classifier accuracy by focusing on the most informative features while simultaneously decreasing the computational burden.
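As an illustration of the pipeline described above, here is a minimal sketch of Extra Tree based feature selection followed by classification, using scikit-learn. The dataset, importance threshold, and hyperparameters are placeholders, not the study's actual configuration.

```python
# Sketch: Extra-Trees feature selection, then classification on the reduced subset.
# Hypothetical configuration; not the paper's datasets or parameter choices.
from sklearn.datasets import load_breast_cancer          # stand-in clinical dataset
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank features by Extra-Trees impurity importance and keep the strongest ones.
selector = SelectFromModel(
    ExtraTreesClassifier(n_estimators=200, random_state=0),
    threshold="median",                                  # keep top half (placeholder)
).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Retrain an Extra-Trees classifier on the reduced feature subset.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr_sel, y_tr)
print("features kept:", X_tr_sel.shape[1], "of", X.shape[1])
print("accuracy on reduced subset:", accuracy_score(y_te, clf.predict(X_te_sel)))
```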
The aim of the research is to examine multiple intelligence test item selection based on Howard Gardner's MI model using the Generalized Partial Credit Model. The researcher adopted Gardner's multiple intelligences scale, which consists of (102) items across eight sub-scales. The sample consisted of (550) students from the University of Baghdad, the University of Technology, Al-Mustansiriyah University, and the Iraqi University for the academic year (2019/2020). The assumptions of item response theory (unidimensionality, local independence, the item characteristic curve, the speed factor, and application) were verified, and the data were analyzed according to the Generalized Partial Credit Model, and limits
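For reference, this is the standard textbook form of the Generalized Partial Credit Model (Muraki, 1992) named above, not parameter estimates from this study. The probability that an examinee of ability \(\theta\) responds in category \(k\) of item \(j\) is:

\[
P_{jk}(\theta) \;=\; \frac{\exp\!\Big(\sum_{v=1}^{k} a_j\,(\theta - b_{jv})\Big)}{\sum_{c=0}^{m_j} \exp\!\Big(\sum_{v=1}^{c} a_j\,(\theta - b_{jv})\Big)},
\qquad \sum_{v=1}^{0} a_j\,(\theta - b_{jv}) \equiv 0,
\]

where \(a_j\) is the discrimination of item \(j\), \(b_{jv}\) are its step parameters, and \(m_j + 1\) is its number of response categories.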
Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue affects the performance of machine learning models because the values of some features will be missing. Therefore, there is a need for dedicated methods for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes (NB)
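The following is a minimal, hypothetical sketch of SSA-style imputation: missing entries are treated as decision variables, each salp's position is a candidate set of imputed values, and fitness is the cross-validated accuracy of a KNN classifier on the completed data. The fitness choice, swarm size, and iteration count are illustrative assumptions, not the paper's exact ISSA.

```python
# Hypothetical sketch of salp-swarm-based imputation (not the paper's exact ISSA).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def ssa_impute(X, y, n_salps=20, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    miss = np.isnan(X)                        # mask of missing entries
    dims = np.where(miss)                     # coordinates of missing cells
    d = dims[0].size
    lb, ub = np.nanmin(X, 0), np.nanmax(X, 0)
    low, high = lb[dims[1]], ub[dims[1]]      # per-cell bounds from each feature

    def fitness(vec):
        Xc = X.copy()
        Xc[dims] = vec                        # plug candidate values in
        return cross_val_score(KNeighborsClassifier(), Xc, y, cv=3).mean()

    # Initialise salp positions uniformly inside the feature bounds.
    salps = rng.uniform(low, high, (n_salps, d))
    fits = np.array([fitness(s) for s in salps])
    best, best_fit = salps[fits.argmax()].copy(), fits.max()

    for t in range(n_iter):
        c1 = 2 * np.exp(-(4 * (t + 1) / n_iter) ** 2)   # standard SSA coefficient
        for i in range(n_salps):
            if i == 0:                        # leader explores around the food source
                c2, c3 = rng.random(d), rng.random(d)
                step = c1 * ((high - low) * c2 + low)
                salps[i] = np.where(c3 < 0.5, best + step, best - step)
            else:                             # followers average with predecessor
                salps[i] = (salps[i] + salps[i - 1]) / 2
            salps[i] = np.clip(salps[i], low, high)
            f = fitness(salps[i])
            if f > best_fit:
                best, best_fit = salps[i].copy(), f

    X_imp = X.copy()
    X_imp[dims] = best                        # impute with the best salp found
    return X_imp
```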
The phenomenon of delayed marriage has attracted the attention of many researchers and specialists seeking to reveal the social factors associated with its spread, in order to identify the characteristics of the phenomenon and the social factors resulting from it. Thus, the current research aims to identify the social factors most related to delayed marriage age among working women at the University of Baghdad, represented by family factors, economic-professional factors, psychological-subjective factors, and environmental factors. The researcher also aims to identify the differences in the social factors associated with late marriage age for working women at the University of Baghdad in terms of the type of profession (teaching
Internet paths sharing the same congested link can be identified using several shared congestion detection techniques. The detection technique proposed in this paper builds on the earlier delay correlation with wavelet denoising (DCW) technique, replacing its denoising stage with the Discrete Multiwavelet Transform (DMWT) to separate queuing delay caused by network congestion from delay caused by various other sources of delay variation. The new technique converges 3 to 5 seconds faster than the earlier one while using approximately half as many probe packets, thereby reducing the load that probing places on the network.
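As a rough illustration of the delay-correlation idea, the sketch below denoises two flows' delay samples with a wavelet transform and then correlates them. It uses PyWavelets' standard DWT as a stand-in, since the multiwavelet transform (DMWT) used in the paper is not available in `pywt`; the wavelet choice, threshold rule, and correlation cutoff are illustrative assumptions.

```python
# Illustrative sketch: wavelet-denoise two delay series, then test their correlation.
# Standard DWT via PyWavelets as a stand-in for the paper's DMWT.
import numpy as np
import pywt

def denoise(delays, wavelet="db4", level=4):
    coeffs = pywt.wavedec(delays, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(delays)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(delays)]

def shared_congestion(delays_a, delays_b, cutoff=0.5):
    """Flows whose denoised queuing delays correlate strongly are flagged as
    sharing a congested link; `cutoff` is an illustrative threshold."""
    da = denoise(np.asarray(delays_a, dtype=float))
    db = denoise(np.asarray(delays_b, dtype=float))
    r = np.corrcoef(da, db)[0, 1]
    return r, r > cutoff
```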
Going to watch films in cinemas today has become a different experience from what it once was. For all the clarity of the cinematic image and the luster of its colors, the most important change in the structure of contemporary cinematography is sound: the surround sound environment immerses viewers in a realism of sound that reaches them from all directions. The researcher therefore found it necessary to shed light on this topic because of its importance, and the research problem was formulated in the following question: (How are modern sound systems used in the structure of contemporary feature films?) The theoretical framework included two topics, the first: the dialectic
A set of economic factors affects the rationalization of decisions on unexploited resources within the economic unit. The research problem is therefore framed by the question of which economic factors cause the emergence of asymmetric costs. The research aims to identify these factors, namely: the costs of adjusting resources, the change in the size of the economic unit's activity, the general trend of sales change in the previous period, and the economic level of the country. It then measures the impact of these factors on the economic unit so that their effect can be taken into consideration when formulating decisions.
This paper presents a new algorithm in an important research field, semantic word similarity estimation. A new feature-based algorithm is proposed for measuring word semantic similarity for the Arabic language, a highly systematic language whose words exhibit elegant and rigorous logic. The score of semantic similarity between two Arabic words is calculated as a function of their common and total taxonomical features. An Arabic knowledge source is employed for extracting the taxonomical features as the set of all concepts that subsume the concepts containing the compared words. Previously developed Arabic word benchmark datasets are used for optimizing and evaluating the proposed algorithm. In this paper,
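As an illustration of the common-versus-total feature idea described above, here is a hypothetical sketch in which each word's taxonomical features are the concepts subsuming it in a taxonomy, and the score is the ratio of common to total features (a Jaccard-style ratio; the paper's exact scoring function and knowledge source are not reproduced here, and the toy taxonomy is invented).

```python
# Hypothetical sketch of feature-based word similarity over a concept taxonomy.
# The taxonomy, word-to-concept mapping, and scoring function are assumptions.

def taxonomical_features(concept, parent):
    """The concept itself plus all concepts subsuming it, up to the root."""
    feats = {concept}
    while concept in parent:
        concept = parent[concept]
        feats.add(concept)
    return feats

def similarity(word_a, word_b, concept_of, parent):
    """Ratio of common to total taxonomical features of the two words."""
    fa = taxonomical_features(concept_of[word_a], parent)
    fb = taxonomical_features(concept_of[word_b], parent)
    return len(fa & fb) / len(fa | fb)

# Toy taxonomy: child -> parent (placeholder concepts, not the Arabic source).
parent = {"cat": "feline", "feline": "mammal", "dog": "canine",
          "canine": "mammal", "mammal": "animal", "animal": "entity"}
concept_of = {"qitt": "cat", "kalb": "dog"}   # word -> concept containing it
print(similarity("qitt", "kalb", concept_of, parent))  # shared: mammal, animal, entity
```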