Lung cancer is among the most dangerous common diseases and, if treated late, can lead to death. It is far more likely to be treated successfully if discovered at an early stage, before it worsens. Distinguishing the size, shape, and location of lymphatic nodes can reveal the spread of the disease around these nodes; identifying lung cancer at an early stage is therefore remarkably helpful for doctors. Lung cancer can be diagnosed successfully by expert doctors; however, limited experience may lead to misdiagnosis and cause medical harm to patients. Within computer-assisted systems, many methods and strategies can be used to predict the cancer malignancy level, which plays a significant role in providing precise abnormality detection. In this paper, the use of modern machine-learning-based approaches was explored. More than 70 state-of-the-art articles (from 2019 to 2024) were extensively reviewed to highlight the different machine learning and deep learning (DL) techniques and models used for the detection, classification, and prediction of cancerous lung tumors. An efficient tiny-DL model should be built to assist physicians working in rural medical centers in the rapid diagnosis of lung cancer. Combining lightweight Convolutional Neural Networks with limited computing resources could produce a portable, low-computational-cost model able to substitute for the skill and experience of doctors needed in urgent cases.
Machine learning (ML) is a key component within the broader field of artificial intelligence (AI) that employs statistical methods to empower computers with the ability to learn and make decisions autonomously, without the need for explicit programming. It is founded on the concept that computers can acquire knowledge from data, identify patterns, and draw conclusions with minimal human intervention. The main categories of ML include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Supervised learning involves training models using labelled datasets and comprises two primary forms: classification and regression. Regression is used for continuous outputs, while classification is employed for discrete (categorical) outputs.
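As an illustrative sketch (not drawn from any particular study), the two supervised forms can be contrasted on toy labelled data: a closed-form least-squares fit for the continuous case, and a simple nearest-centroid rule for the discrete case.

```python
import numpy as np

# Toy labelled data: one feature, a continuous target, and a binary label.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y_cont = 2.0 * x + 1.0                 # continuous output -> regression
y_cls = (x > 3.5).astype(int)          # categorical output -> classification

# Regression: closed-form least-squares fit of y = w*x + b.
A = np.vstack([x, np.ones_like(x)]).T
w, b = np.linalg.lstsq(A, y_cont, rcond=None)[0]

# Classification: nearest-centroid rule learned from the labelled examples.
centroids = {c: x[y_cls == c].mean() for c in (0, 1)}

def predict_class(v):
    return min(centroids, key=lambda c: abs(v - centroids[c]))
```

Here the regression recovers the slope and intercept of the generating line, while the classifier assigns a new point to whichever class mean lies closer.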
Artificial intelligence techniques reach us in several forms, some of which are useful but can be exploited in ways that harm us. One of these forms is the deepfake. Deepfakes are used to completely modify video (or image) content so that it displays something that was not in it originally. The danger of deepfake technology lies in its impact on society through the loss of confidence in everything that is published. Therefore, in this paper, we focus on deepfake-detection technology from the view of two concepts: deep learning and forensic tools. The purpose of this survey is to give the reader a deeper overview of i) the environment of deepfake creation and detection, and ii) how deep learning and forensic tools contribute to the detection of deepfakes.
Channel estimation (CE) is essential for wireless links but becomes progressively onerous as Fifth-Generation (5G) Multi-Input Multi-Output (MIMO) systems and extensive fading expand the search space and increase latency. This study redefines CE support as the process of learning to deduce the channel type and signal-to-noise ratio (SNR) directly from per-tone Orthogonal Frequency-Division Multiplexing (OFDM) observations, with blind channel state information (CSI). We trained a dual deep model that combined Convolutional Neural Networks (CNNs) with Bidirectional Recurrent Neural Networks (BRNNs), using lookup-table (LUT) labels for the channel type (class indices instead of per-tap values) and ordinal supervision for the SNR (0–20 dB, in 5-dB steps).
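The labelling scheme can be sketched as follows. The channel-type names in the lookup table below are illustrative assumptions (the abstract does not list them); the SNR grid follows the stated 0–20 dB range in 5-dB steps, with the ordinal target encoded as cumulative threshold indicators:

```python
import numpy as np

# Hypothetical label scheme mirroring the abstract: channel types as LUT
# class indices, SNR supervised ordinally over 0-20 dB in 5-dB steps.
CHANNEL_LUT = {"AWGN": 0, "Rayleigh": 1, "Rician": 2}   # illustrative names
SNR_GRID = np.arange(0, 25, 5)                          # [0, 5, 10, 15, 20] dB

def snr_to_ordinal(snr_db):
    """Ordinal target: cumulative 'SNR exceeds threshold' indicators."""
    idx = np.argmin(np.abs(SNR_GRID - snr_db))          # snap to nearest step
    return (np.arange(len(SNR_GRID) - 1) < idx).astype(float)

def ordinal_to_snr(targets):
    """Decode by counting thresholds passed (predictions > 0.5)."""
    return SNR_GRID[int(np.sum(np.asarray(targets) > 0.5))]
```

Ordinal encoding, unlike plain one-hot classes, penalizes a 15-dB error more than a 5-dB error, which suits an ordered quantity like SNR.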
This paper proposes an improved solution for EEG-based brain language signal classification using machine learning and optimization algorithms. The aim is to improve brain-signal classification for language-processing tasks by achieving higher accuracy and faster processing. Feature extraction is performed using a modified Discrete Wavelet Transform (DWT), which increases the ability to capture signal characteristics appropriately by decomposing EEG signals into significant frequency components. A Gray Wolf Optimization (GWO) algorithm is then applied to improve the results and select the optimal features, achieving more accurate results by selecting impactful features with maximum relevance
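To make the DWT step concrete, here is a minimal sketch of one decomposition level using the standard Haar wavelet (the paper's modified DWT is not specified, so this is only the textbook form): the approximation band captures slow trends of an EEG segment, the detail band captures fast components, and per-band statistics can then serve as features.

```python
import numpy as np

def haar_dwt_level(signal):
    """One standard Haar DWT level: low-pass and high-pass half-bands."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # detail (high-pass)
    return approx, detail

# Toy EEG segment (even length); band statistics as candidate features.
eeg = np.array([1.0, 1.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0])
a, d = haar_dwt_level(eeg)
features = [a.mean(), a.std(), d.mean(), d.std()]
```

Repeating the decomposition on the approximation band yields the multi-level frequency split the abstract describes; an optimizer such as GWO would then pick the most discriminative of these band statistics.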
The study examines the root causes of delays that the project manager is unable to resolve, and how the decision-maker can identify the best opportunities to overcome these obstacles, by considering the project constraints defined by the project triangle (cost, time, and quality) in post-disaster reconstruction projects, reviewing the real challenges involved in overcoming them. The methodology relied on exploratory description and the examination of qualitative data. 43 valid questionnaires were distributed to qualified, experienced engineers, and a list of 49 causal factors was compiled from previous international and local studies. The Relative Importance Index (RII) is adopted to determine the level of importance of each sub-criterion.
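The RII is commonly computed as RII = ΣW / (A × N), where W are the Likert-scale weights assigned by respondents, A is the highest possible weight, and N is the number of respondents. A sketch with made-up responses (the study's 43 questionnaires are not reproduced here):

```python
def rii(responses, highest_weight=5):
    """Relative Importance Index: sum of weights over (max weight * count)."""
    return sum(responses) / (highest_weight * len(responses))

# e.g. ten hypothetical experts rating one delay factor on a 1-5 scale
scores = [5, 4, 4, 3, 5, 4, 5, 3, 4, 4]
factor_rii = rii(scores)   # closer to 1.0 means a more important factor
```

Ranking all 49 factors by this index is what lets the study order the sub-criteria by importance.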
Diabetic retinopathy is an eye disease of diabetic patients caused by damage to the small blood vessels in the retina due to high or low blood sugar levels. Accurate detection and classification of diabetic retinopathy is an important task in computer-aided diagnosis, especially when planning diabetic retinopathy surgery. Therefore, this study aims to design an automated deep-learning model that helps ophthalmologists detect and classify diabetic retinopathy severity from fundus images. In this work, a deep convolutional neural network (CNN) with transfer learning and fine-tuning has been proposed, using the pre-trained network known as Residual Network-50 (ResNet-50).
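The transfer-learning idea can be sketched in miniature: a frozen feature extractor (here just per-image mean and standard deviation of intensity, standing in for ResNet-50's pretrained convolutional stack) feeds a small trainable classification head. The synthetic "fundus patches" and gradient-descent head below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

def backbone(imgs):
    """Frozen 'pretrained' features: per-image mean and std intensity."""
    flat = imgs.reshape(len(imgs), -1)
    return np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)

# Synthetic fundus-like patches: class 1 is brighter on average.
X0 = rng.normal(0.3, 0.1, size=(100, 8, 8))
X1 = rng.normal(0.7, 0.1, size=(100, 8, 8))
X = np.concatenate([X0, X1])
y = np.array([0] * 100 + [1] * 100)

F = backbone(X)                          # backbone stays frozen
w = np.zeros(F.shape[1])
b = 0.0
for _ in range(2000):                    # train only the new head
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid classification head
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

acc = ((p > 0.5) == y).mean()
```

In the real setting the frozen part is ResNet-50's convolutional layers and the head is a new fully connected layer over the severity grades; fine-tuning then optionally unfreezes the later convolutional blocks.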
General Background: Deep image matting is a fundamental task in computer vision, enabling precise foreground extraction from complex backgrounds, with applications in augmented reality, computer graphics, and video processing. Specific Background: Despite advances in deep-learning-based methods, preserving fine details such as hair and transparency remains a challenge. Knowledge Gap: Existing approaches struggle with accuracy and efficiency, necessitating novel techniques to enhance matting precision. Aims: This study integrates deep learning with fusion techniques to improve alpha-matte estimation, proposing a lightweight U-Net model that incorporates color-space fusion and preprocessing. Results: Experiments were conducted using the Adobe Composition-1k dataset.
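The quantity being estimated is governed by the standard compositing (matting) equation, I = αF + (1 − α)B: each pixel is a convex combination of foreground F and background B weighted by the alpha matte α. A toy single-channel example, with made-up intensities, shows the forward model and the (here trivially invertible) recovery:

```python
import numpy as np

# Toy 2x2 single-channel composite under the matting equation.
F = np.full((2, 2), 200.0)                       # foreground intensity
B = np.full((2, 2), 50.0)                        # background intensity
alpha = np.array([[1.0, 0.5], [0.25, 0.0]])      # ground-truth matte

I = alpha * F + (1 - alpha) * B                  # forward compositing
alpha_rec = (I - B) / (F - B)                    # recoverable since F != B
```

In real images F and B are unknown and vary per pixel, which is exactly why a learned model such as the proposed U-Net is needed to estimate α.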
In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbours (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MS-COCO dataset for object classification and detection; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, enhanced using the histogram-equalization method, and resized to 20 x 20. Principal Component Analysis (PCA) was used for feature extraction, and finally the four classification methods were applied.
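The pipeline's back end can be sketched on synthetic data: flattened 20×20 "images" are reduced with PCA (via the SVD of the centred data) and then classified with a simple k-NN rule, one of the four compared methods. The data below is random with an artificial class shift; it is illustrative, not the MS-COCO experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 400))                 # 60 flattened 20x20 images
X[:30] += 3.0                                  # shift class 0 for separability
y = np.array([0] * 30 + [1] * 30)

# PCA via SVD of the centred data: keep the top-10 components.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:10].T                       # 400-dim -> 10-dim features

def knn_predict(z, k=3):
    """Majority vote among the k nearest training points in PCA space."""
    d = np.linalg.norm(Z - z, axis=1)
    return np.bincount(y[np.argsort(d)[:k]]).argmax()

# Project a new sample with the same centring and components, then classify.
query = (rng.normal(size=400) + 3.0 - mu) @ Vt[:10].T
pred = knn_predict(query)
```

Swapping `knn_predict` for SGD, LR, or an MLP on the same `Z` features is what the paper's four-way comparison amounts to.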