Lung cancer is one of the most common and dangerous diseases and, if treated late, can be fatal. It is far more treatable when discovered at an early stage, before it worsens. Examining the size, shape, and location of lymph nodes can reveal how far the disease has spread around them, so identifying lung cancer early is remarkably helpful for doctors. Lung cancer can be diagnosed successfully by expert physicians; however, limited experience may lead to misdiagnosis and harm patients. Within computer-assisted systems, many methods and strategies can be used to predict the malignancy level of a tumor, which plays a significant role in providing precise abnormality detection. In this paper, the use of modern machine learning-based approaches is explored. More than 70 state-of-the-art articles (from 2019 to 2024) were reviewed to highlight the different machine learning and deep learning (DL) techniques used for the detection, classification, and prediction of cancerous lung tumors. An efficient tiny DL model should be built to assist physicians working in rural medical centers with swift and rapid diagnosis of lung cancer. Combining lightweight Convolutional Neural Networks with limited hardware resources could produce a portable, low-computational-cost model able to substitute for the skill and experience of doctors needed in urgent cases.
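As a rough illustration of the kind of portable model described above, the sketch below builds a lightweight CNN in Keras for binary classification of CT patches. The input size, layer widths, and use of depthwise-separable convolutions are assumptions for illustration only, not an architecture taken from any of the surveyed articles.

```python
# Minimal sketch of a lightweight CNN for binary lung-nodule classification.
# Input size (64x64 grayscale patches) and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_tiny_cnn(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.SeparableConv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.SeparableConv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

Depthwise-separable convolutions keep the parameter count small, which is what makes such a model practical on limited hardware.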
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have too little data to train DL frameworks. Labeled data usually requires manual annotation, typically by human annotators with extensive domain knowledge, and this annotation process is costly, time-consuming, and error-prone. Every DL framework is fed a significant amount of labeled data to automatically learn representations; in general, more data yields a better DL model, although performance is also application dependent. This issue is the main barrier for
To expedite the learning process, a group of algorithms known as parallel machine learning algorithms can be executed simultaneously on several computers or processors. As data grows in both size and complexity, and as businesses seek efficient ways to mine that data for insights, algorithms like these will become increasingly crucial. Data parallelism, model parallelism, and hybrid techniques are just some of the methods described in this article for speeding up machine learning algorithms. We also cover the benefits and threats associated with parallel machine learning, such as data splitting, communication, and scalability. We compare how well various methods perform on a variety of machine learning tasks and datasets, and we talk abo
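The sketch below illustrates the data-parallelism idea mentioned above in its simplest form: each worker computes a gradient on its own shard of the data and the gradients are averaged before the update. The linear model, squared loss, and local process pool are illustrative assumptions, not the article's implementation.

```python
# Minimal sketch of data parallelism: each worker computes the gradient on its own
# data shard, and the shard gradients are averaged for a single update step.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def shard_gradient(args):
    w, X, y = args
    # Gradient of mean squared error for a linear model on one shard.
    return 2 * X.T @ (X @ w - y) / len(y)

def parallel_gd_step(w, X, y, lr=0.01, workers=4):
    shards = list(zip(np.array_split(X, workers), np.array_split(y, workers)))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        grads = list(pool.map(shard_gradient, [(w, Xs, ys) for Xs, ys in shards]))
    return w - lr * np.mean(grads, axis=0)
```

In a real distributed setting the averaging step would be a communication round (e.g., an all-reduce) rather than a local mean, which is exactly where the communication and scalability concerns discussed above arise.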
Breast cancer is a heterogeneous disease characterized by molecular complexity. This research utilized three omics profiles—gene expression, deoxyribonucleic acid (DNA) methylation, and micro ribonucleic acid (miRNA) expression—to deepen the understanding of breast cancer biology and contribute to the development of a reliable survival rate prediction model. During the preprocessing phase, principal component analysis (PCA) was applied to reduce the dimensionality of each dataset before computing consensus features across the three omics datasets. By integrating these datasets with the consensus features, the model's ability to uncover deep connections within the data was significantly improved. The proposed multimodal deep
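A minimal sketch of the preprocessing step described above, assuming scikit-learn: each omics matrix is reduced with PCA and the reduced blocks are concatenated as input to the downstream survival model. The component count and variable names are placeholders, not the study's actual settings.

```python
# Illustrative sketch: reduce each omics matrix with PCA, then stack the
# reduced blocks into one fused feature matrix for the multimodal model.
import numpy as np
from sklearn.decomposition import PCA

def reduce_and_fuse(gene_expr, dna_meth, mirna_expr, n_components=50):
    blocks = []
    for X in (gene_expr, dna_meth, mirna_expr):
        n = min(n_components, X.shape[0], X.shape[1])
        blocks.append(PCA(n_components=n).fit_transform(X))
    return np.hstack(blocks)  # fused features for survival prediction
```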
Diabetes is one of the increasingly prevalent chronic diseases, affecting millions of people around the world. Diagnosis, prediction, proper treatment, and management of diabetes are essential. Machine learning-based prediction techniques for diabetes data analysis can help in the early detection of the disease and its consequences, such as hypo-/hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron (MLP), k-nearest neighbors (KNN), and random forest. We conducted two experiments: the first used all 12 features of the dataset, where random forest outperformed the others with 98.8% accuracy. The second experiment used only five att
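A hedged sketch of the first experiment, assuming scikit-learn and a CSV export of the dataset: train the three classifiers on the 12-feature table and compare test accuracy. The file name, label column, and split ratio are placeholders, not the study's actual setup.

```python
# Sketch only: compare MLP, KNN and random forest on a tabular diabetes dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("diabetes_iraq.csv")           # hypothetical dataset path
X, y = df.drop(columns=["CLASS"]), df["CLASS"]  # "CLASS" label column is assumed
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, clf in [("MLP", MLPClassifier(max_iter=500)),
                  ("KNN", KNeighborsClassifier()),
                  ("Random Forest", RandomForestClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```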
Constantly and consistently improving both the quality and the reach of the educational system is essential. E-learning has emerged from the rapid cycle of change and the expansion of new technologies. Advances in information technology have increased network bandwidth and data access speed and reduced data storage costs. In recent years, the implementation of cloud computing in educational settings has garnered the interest of major companies, leading to substantial investments in this area. Cloud computing improves engineering education by providing an environment that can be accessed from anywhere and by allowing access to educational resources on demand. Cloud computing is a term used to describe the provision of hosting services
Analyzing sentiment and emotions in Arabic texts on social networking sites has gained wide interest from researchers and has been an active research topic in recent years due to its importance in analyzing reviewers' opinions. The Iraqi dialect is one of the Arabic dialects used on social networking sites; it is characterized by its complexity, which makes sentiment analysis difficult. This work presents a hybrid deep learning model consisting of a Convolutional Neural Network (CNN) and Gated Recurrent Units (GRU) to analyze sentiment and emotions in Iraqi texts. Three Iraqi datasets (the Iraqi Arab Emotions Data Set (IAEDS), the Annotated Corpus of Mesopotamian-Iraqi Dialect (ACMID), and the Iraqi Arabic Dataset (IAD)) col
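The sketch below shows one common way to combine a CNN and GRU for text classification in Keras, consistent with the hybrid described above. The vocabulary size, sequence length, and layer widths are illustrative assumptions rather than the reported model.

```python
# Minimal sketch of a CNN-GRU hybrid text classifier (sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_gru(vocab_size=20000, seq_len=100, num_classes=3):
    return models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, 128),
        layers.Conv1D(64, 5, activation="relu", padding="same"),  # local n-gram features
        layers.MaxPooling1D(2),
        layers.GRU(64),                                           # sequential context
        layers.Dense(num_classes, activation="softmax"),
    ])
```

The convolutional layer captures local n-gram patterns while the GRU models longer-range context, which is the usual motivation for this kind of hybrid on dialectal text.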
The proliferation of editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to perform actions or say words that they never did or said, so developing an algorithm for deepfake detection is very important to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of input data frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue
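A minimal sketch of the described preprocessing, assuming OpenCV: each grayscale frame is filtered with a bank of 16 Gabor kernels at evenly spaced orientations and the responses are stacked as input channels for the binary CNN. The kernel size and Gabor parameters below are illustrative assumptions, not the paper's settings.

```python
# Sketch: build a 16-orientation Gabor response stack for one grayscale frame.
import cv2
import numpy as np

def gabor_bank(gray_frame, n_orientations=16, ksize=9):
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        # Arguments: kernel size, sigma, theta, lambda (wavelength), gamma.
        kernel = cv2.getGaborKernel((ksize, ksize), 3.0, theta, 8.0, 0.5)
        responses.append(cv2.filter2D(gray_frame, cv2.CV_32F, kernel))
    # Shape (H, W, 16): texture tensor fed to the binary CNN instead of RGB channels.
    return np.stack(responses, axis=-1)
```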