Recurrent strokes can be devastating, often resulting in severe disability or death. However, nearly 90% of the causes of recurrent stroke are modifiable, which means recurrent strokes can be averted by controlling risk factors that are mainly behavioral and metabolic in nature. Previous work therefore suggests that a recurrent stroke prediction model could help in minimizing the likelihood of a recurrent stroke. Earlier studies have shown promising results in predicting first-time strokes with machine learning approaches, but there is limited work on recurrent stroke prediction using machine learning methods. Hence, this work performs an empirical analysis and investigates the implementation of machine learning algorithms in recurrent stroke prediction models. This research aims to investigate and compare the performance of machine learning algorithms on a public recurrent stroke clinical dataset. In this study, an Artificial Neural Network (ANN), a Support Vector Machine (SVM) and a Bayesian Rule List (BRL) are applied and their performance is compared in the domain of recurrent stroke prediction. The empirical experiments show that the ANN scores the highest accuracy at 80.00%, followed by the BRL with 75.91% and the SVM with 60.45%.
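The abstract does not publish the pipeline itself; a minimal sketch of how such a comparison is typically set up with scikit-learn is shown below, assuming a hypothetical tabular file "recurrent_stroke.csv" with numeric risk-factor columns and a binary "recurrent" label. The MLPClassifier and SVC stand in for the ANN and SVM; a Bayesian Rule List is not part of scikit-learn and would need a separate rule-list library added analogously.

```python
# Illustrative sketch only, not the paper's actual pipeline.
# Assumes a hypothetical "recurrent_stroke.csv" with numeric features and a binary "recurrent" label.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier   # stands in for the ANN
from sklearn.svm import SVC                         # stands in for the SVM
from sklearn.metrics import accuracy_score

df = pd.read_csv("recurrent_stroke.csv")            # hypothetical file name
X, y = df.drop(columns=["recurrent"]), df["recurrent"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))
```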
Abstract:
The aim of this research is to determine the type of relationship expected between inflation as the explanatory variable and market performance as the dependent variable. To that end, data issued and published by the Central Bank of Iraq and the Iraqi Stock Exchange were used for a sample of 159 observations, selected by purposive (intentional) sampling, covering the period from January 2010 to March 2023. The analysis draws on the Consumer Price Index (CPI), the Iraqi Stock Exchange index, the number of traded shares and the number of market capital shares to ex…
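The abstract does not show the estimation itself; a minimal sketch of how such an explanatory/dependent relationship is commonly estimated with ordinary least squares follows, assuming a hypothetical monthly file "iraq_monthly.csv" with "cpi" and "isx_index" columns covering January 2010 to March 2023.

```python
# Minimal sketch, not the paper's actual estimation.
# Assumes a hypothetical "iraq_monthly.csv" with monthly "cpi" and "isx_index" columns.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("iraq_monthly.csv")
X = sm.add_constant(df["cpi"])   # inflation proxy as explanatory variable
y = df["isx_index"]              # market performance as dependent variable
model = sm.OLS(y, X).fit()
print(model.summary())           # the slope's sign and significance describe the relationship
```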
Lung cancer is one of the most serious and prevalent diseases, causing many deaths each year. Although CT scan images are widely used in cancer diagnosis, assessing the scans is an error-prone and time-consuming task. Machine learning and AI-based models can identify and classify types of lung cancer quite accurately, which supports early-stage detection and can increase the survival rate. In this paper, a Convolutional Neural Network (CNN) is used to classify adenocarcinoma, squamous cell carcinoma and normal-case CT scan images from the Chest CT Scan Images Dataset, using different combinations of hidden layers and parameters in the CNN models. The proposed model was trained on 1000 CT scan images of cancerous and non-c…
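A minimal three-class CNN sketch in Keras is given below; it is not the paper's exact architecture, and the directory layout ("chest_ct/train" with one subfolder per class) is an assumption.

```python
# Illustrative 3-class CNN sketch, not the paper's exact architecture.
# Assumes CT images arranged in class subfolders under a hypothetical "chest_ct/train" directory.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_ct/train", image_size=(224, 224), batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),   # adenocarcinoma, squamous cell carcinoma, normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```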
Retinopathy of prematurity (ROP) can cause blindness in premature neonates. It is diagnosed when new blood vessels form abnormally in the retina. Infants at high risk of ROP can benefit significantly from early detection and treatment, so early diagnosis of ROP is vital in averting visual impairment. However, due to a lack of medical experience in detecting this condition, many people refuse treatment; this is especially troublesome given the rising number of ROP cases. To address this problem, we trained three transfer learning models (VGG-19, ResNet-50, and EfficientNetB5) and a convolutional neural network (CNN) to identify the zones of ROP in preterm newborns. The dataset to train th…
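A sketch of one of the transfer-learning setups (the EfficientNetB5 backbone) is shown below; the classification head, the number of output zones, and the data pipeline are assumptions, not the paper's published configuration.

```python
# Transfer-learning sketch with an ImageNet-pretrained EfficientNetB5 backbone.
# The three-zone output and the head are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB5

NUM_ZONES = 3  # assumption: ROP zones I-III as output classes

base = EfficientNetB5(weights="imagenet", include_top=False, input_shape=(456, 456, 3))
base.trainable = False                      # freeze ImageNet features, train only the head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_ZONES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=15)   # datasets built as in the CNN sketch above
```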
The present study investigated the relationship of critical thinking, epistemological beliefs, and learning strategies with the academic performance of first-grade high school male and female students in Yazd. For this purpose, 250 students (130 females and 120 males) were selected from among all first-grade students using multistage cluster sampling. The data were then collected using the California Critical Thinking Skills Test, Schommer's Epistemological Beliefs Questionnaire, and Biggs' Revised Two-Factor Study Process Questionnaire. The findings indicated a significant positive relationship between critical thinking and academic performance and achievement. Moreover, four fa…
Problem: Cancer is regarded as one of the world's deadliest diseases. Machine learning and its newer branch, deep learning, can facilitate how cancer is dealt with, especially in the field of cancer prevention and detection. Traditional ways of analyzing cancer data have their limits, and cancer data is growing quickly, which makes deep learning, with its powerful ability to analyze and process such data, well suited to the task. Aims: In the current study, a deep-learning medical support system for the prediction of lung cancer is presented. Methods: The study uses three deep learning models (EfficientNetB3, ResNet50 and ResNet101) with the transfer learning concept. The three models are trained using a…
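The comparison pattern the abstract describes, the same classification head placed on each of the three named backbones, can be sketched as follows; the two-class output, input size, and training setup are assumptions rather than the study's reported configuration.

```python
# Sketch of comparing several ImageNet-pretrained backbones with an identical head.
# Hyperparameters and data pipeline are illustrative assumptions only.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB3, ResNet50, ResNet101

def build_classifier(backbone_cls, num_classes=2, input_shape=(224, 224, 3)):
    """Wrap a frozen pretrained backbone with a small classification head."""
    base = backbone_cls(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])

for backbone in (EfficientNetB3, ResNet50, ResNet101):
    model = build_classifier(backbone)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # same data pipeline for all three
    print(backbone.__name__, "parameters:", model.count_params())
```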
In regression testing, test case prioritization (TCP) is a technique for ordering all the available test cases. TCP techniques can improve fault detection performance, which is measured by the average percentage of faults detected (APFD). History-based TCP is a family of TCP techniques that use historical execution data to prioritize test cases. The allocation of equal priority to several test cases is a common problem for most TCP techniques, but it has not been explored in history-based TCP; to resolve such ties, most researchers resort to randomly ordering the tied test cases. This study aims to investigate equal priority in history-based TCP techniques. The first objective is to implement…
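For reference, the standard APFD formula is APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where n is the number of test cases, m the number of faults, and TF_i the position of the first test case that reveals fault i. A small computation sketch, independent of the paper's own tooling, is:

```python
# Standard APFD computation (sketch), not the paper's own tooling.
# n = number of test cases in the prioritized suite,
# first_detect[i] = 1-based position of the first test case that reveals fault i.
def apfd(first_detect, n):
    """Average Percentage of Faults Detected for one prioritized ordering."""
    m = len(first_detect)
    return 1 - sum(first_detect) / (n * m) + 1 / (2 * n)

# Example: 5 prioritized test cases, 3 faults first detected by tests #1, #2 and #4.
print(apfd([1, 2, 4], n=5))   # ~0.633; higher is better (faults detected earlier)
```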
Abstract:
Business organizations are using technological innovations such as cloud computing (CC) as a development platform to improve the performance of their information systems. In that context, our paper discusses how public and private CC can serve as platforms for developing the evaluation system of annual employees' performance (ESAEP) at Iraqi universities. The paper therefore asks: "Is it possible to adopt innovative ICT solutions (such as public and private CC) to form a developmental vision for management information systems in business organizations?". In addition, the paper aim…
Machine learning offers significant advantages for many problems in the oil and gas industry, especially when it comes to resolving complex challenges in reservoir characterization. Permeability is one of the most difficult petrophysical parameters to predict using conventional logging techniques. This study presents the workflow methodology alongside comprehensive models. Its purpose is to provide a more robust technique for predicting permeability; previous studies on the Bazirgan field have attempted to do so, but their estimates have been vague, and the methods they propose are outdated and poorly suited to rigorous permeability computation. To…
Permeability estimation is a vital step in reservoir engineering because of its effect on reservoir characterization, perforation planning, and the economic efficiency of reservoirs. Core and well-logging data are the main sources for measuring and calculating permeability, respectively. There are multiple methods for predicting permeability, such as classical, empirical, and geostatistical methods. In this research, two statistical approaches, Multiple Linear Regression and Random Forest, have been applied and compared for permeability prediction over the (M) reservoir interval in the (BH) Oil Field in northern Iraq. The dataset was separated into training and testing subsets in order to cross-validate the accuracy…
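A minimal sketch of the comparison the abstract names (Multiple Linear Regression versus Random Forest on a train/test split) is given below; the file name, log-curve features, and target column are assumptions, not the study's actual dataset.

```python
# Illustrative comparison of the two approaches named in the abstract.
# The CSV file, feature names, and target column are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

logs = pd.read_csv("bh_well_logs.csv")                       # hypothetical well-log file
features = ["GR", "RHOB", "NPHI", "RT"]                      # assumed log curves
X_train, X_test, y_train, y_test = train_test_split(
    logs[features], logs["core_permeability"], test_size=0.3, random_state=0)

for name, model in [("Multiple Linear Regression", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_train, y_train)
    print(name, "R2 on test subset:", round(r2_score(y_test, model.predict(X_test)), 3))
```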