Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to English, only a few studies have addressed categorizing and classifying Arabic text. Arabic text representation is a difficult task for a variety of applications, such as text classification and clustering, because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research from the last five years based on the dataset, year, algorithms, and the accuracy achieved. Deep Learning (DL) and Machine Learning (ML) models were used to enhance text classification for the Arabic language. Remarks for future work are drawn in conclusion.
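To make the three-phase pipeline above concrete, the following is a minimal sketch (preprocessing, TF-IDF feature extraction, linear classification) assuming scikit-learn and a tiny hypothetical two-document Arabic corpus; it does not reproduce any specific surveyed system, which typically vary the preprocessing (stemming, stop-word removal) and swap in other ML or DL classifiers.

```python
# Minimal sketch of the three-phase pipeline: preprocessing, TF-IDF
# feature extraction, and classification (scikit-learn assumed).
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

def preprocess(text: str) -> str:
    """Simplified Arabic preprocessing: drop diacritics and non-Arabic symbols."""
    text = re.sub(r"[\u064B-\u0652]", "", text)          # remove tashkeel
    return re.sub(r"[^\u0621-\u064A\s]", " ", text)      # keep Arabic letters only

# Hypothetical two-document labeled corpus, for illustration only.
docs = ["خبر رياضي عن مباراة كرة القدم", "تقرير اقتصادي عن أسواق المال"]
labels = ["sports", "economy"]

model = Pipeline([
    ("features", TfidfVectorizer(preprocessor=preprocess)),  # feature extraction
    ("classifier", LinearSVC()),                             # classification
])
model.fit(docs, labels)
print(model.predict(["أخبار كرة القدم اليوم"]))
```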
Twitter's popularity has grown considerably in the last few years, influencing the social, political, and business aspects of life. People post tweets on social media about an event and, at the same time, look up other people's experiences to see whether their opinion of that event was positive or negative. Sentiment Analysis can be used to obtain this categorization. Product reviews, events, and other topics from all users, comprising unstructured text comments, are gathered and categorized as positive, negative, or neutral using sentiment analysis. Such problems are called polarity classification. This study aims to use Twitter data about OK cuisine reviews obtained from the Amazon website and compare the effectiveness …
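For illustration only, the snippet below attaches positive/negative/neutral polarity labels to review text with a lexicon-based scorer, assuming NLTK and its vader_lexicon resource and three made-up reviews; the study itself compares trained classifiers rather than this lexicon approach.

```python
# Lexicon-based polarity labeling (positive / negative / neutral),
# assuming NLTK with the vader_lexicon resource; reviews are made up.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

reviews = [
    "The food arrived fresh and tasted great.",       # hypothetical review
    "Terrible packaging, the product was damaged.",   # hypothetical review
    "The order arrived on Tuesday.",                  # hypothetical review
]
for text in reviews:
    score = sia.polarity_scores(text)["compound"]
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:8s} {score:+.3f}  {text}")
```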
One study whose importance has grown significantly in recent years is lip reading, particularly with the widespread use of deep learning techniques. Lip reading is essential for speech recognition in noisy environments or for people with hearing impairments. It refers to recognizing spoken sentences using visual information acquired from lip movements. The lip region, especially in males, also suffers from several problems, such as a mustache or beard covering part of the mouth area. This paper proposes an automatic lip-reading system to recognize and classify short English sentences spoken by speakers using deep learning networks. Frames are extracted from the input video, and each frame is passed to the Viola-Jones …
... Show MoreHM Al-Dabbas, RA Azeez, AE Ali, IRAQI JOURNAL OF COMPUTERS, COMMUNICATIONS, CONTROL AND SYSTEMS ENGINEERING, 2023
It is doubtless that the sexual place has some common indicators owing to the masculine and feminine bodies, which may be natural or deviant (homosexual). The female has an act of voice in the imaginary masculine place, whereas the male has an act of image recognized in the parental mind in both the secular and the sacred place. These places create different limits and perceptions according to auditory and visual readings in search of identity, text, and body in the feminine dramatic text.
The research comprises four chapters. The first, the methodological framework, involves the problem, which is centered on the following enquiry: What is the relationship between the place and the term of …
Disease diagnosis with computer-aided methods has been extensively studied and applied in diagnosing and monitoring several chronic diseases. Early detection and risk assessment of breast diseases based on clinical data help doctors make an early diagnosis and monitor disease progression. The purpose of this study is to exploit Convolutional Neural Networks (CNNs) in discriminating breast MRI scans into pathological and healthy. This study presents a fully automated and efficient deep feature extraction algorithm that exploits the spatial information obtained from both T2W-TSE and STIR MRI sequences to discriminate between pathological and healthy breast MRI scans. The breast MRI scans are preprocessed prior to the feature …
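A hedged sketch of the general idea of CNN-based deep feature extraction followed by a binary (pathological vs. healthy) classifier is given below, assuming PyTorch/torchvision; the ResNet-18 backbone, input size, and channel mapping of the T2W-TSE/STIR sequences are placeholders, not the study's architecture.

```python
# Deep feature extraction with a placeholder CNN backbone, followed by a
# binary (pathological vs. healthy) classifier; PyTorch/torchvision assumed.
import torch
from torchvision import models

backbone = models.resnet18(weights=None)     # placeholder backbone, untrained
backbone.fc = torch.nn.Identity()            # expose 512-d deep features
backbone.eval()

# Hypothetical batch: 8 preprocessed slices as 3-channel 224x224 images,
# e.g. with T2W-TSE and STIR sequences mapped onto input channels.
slices = torch.randn(8, 3, 224, 224)
with torch.no_grad():
    features = backbone(slices)              # shape: (8, 512)

classifier = torch.nn.Linear(512, 2)         # pathological vs. healthy
logits = classifier(features)
print(logits.shape)
```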
Optimization is the task of minimizing or maximizing an objective function f(x) parameterized by x. A series of effective numerical optimization methods have become popular for improving the performance and efficiency of other methods, characterized by high-quality solutions and high convergence speed. In recent years there has been a lot of interest in hybrid metaheuristics, where more than one method is combined into a new method able to solve many problems rapidly and efficiently. The basic concept of the proposed method is to add the acceleration component of the Gravitational Search Algorithm (GSA) model to the Firefly Algorithm (FA) model, creating new individuals. Some standard …
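A minimal sketch of the hybrid idea, adding a GSA-style mass/acceleration term to the Firefly Algorithm position update on a simple sphere test function, is shown below; the constants and update details are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative hybrid: Firefly Algorithm movement with an added GSA-style
# mass/acceleration term, minimizing the sphere function. Constants are
# placeholders, not the paper's tuned parameters.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=-1)        # sphere test function

n, dim, iters = 20, 5, 100
alpha, beta0, gamma, G = 0.2, 1.0, 1.0, 1.0
X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))

for t in range(iters):
    fit = f(X)
    m = fit.max() - fit + 1e-12               # GSA masses: better fitness -> heavier
    M = m / m.sum()
    acc = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(X[j] - X[i]) + 1e-12
                acc[i] += rng.random() * G * M[j] * (X[j] - X[i]) / r
    V = rng.random((n, 1)) * V + acc          # GSA velocity update
    Xn = X.copy()
    for i in range(n):
        for j in range(n):
            if fit[j] < fit[i]:               # move toward brighter fireflies
                beta = beta0 * np.exp(-gamma * np.sum((X[j] - X[i]) ** 2))
                Xn[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
    X = Xn + V                                # FA step plus GSA acceleration part
print("best fitness:", f(X).min())
```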
COVID-19 (Coronavirus disease 2019), commonly called Coronavirus or CoV, is a dangerous disease caused by the SARS-CoV-2 virus. It is one of the most widespread zoonotic diseases around the world and started in one of the wet markets of Wuhan city. Its symptoms are similar to those of the common flu, including cough, fever, muscle pain, shortness of breath, and fatigue. This article suggests implementing machine learning techniques (Random Forest, Logistic Regression, Naïve Bayes, Support Vector Machine) in Python to classify a series of chest X-ray images that include viral pneumonia, COVID-19, and healthy (not infected) cases in humans. The study includes more than 1400 images collected from the Kaggle platform. The expe…
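The snippet below sketches how the four named classifiers could be compared in Python with scikit-learn on flattened grayscale X-ray images; the folder layout, 64x64 resolution, and cross-validation protocol are assumptions, not the article's actual experimental setup.

```python
# Comparing the four classifiers on flattened grayscale X-ray images;
# the folder layout (one subfolder per class) and 64x64 size are assumptions.
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def load_dataset(root="xray_dataset"):        # hypothetical dataset folder
    X, y = [], []
    for label_dir in Path(root).iterdir():
        if not label_dir.is_dir():
            continue
        for img_path in label_dir.glob("*.png"):
            img = Image.open(img_path).convert("L").resize((64, 64))
            X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
            y.append(label_dir.name)          # e.g. covid / viral_pneumonia / normal
    return np.array(X), np.array(y)

X, y = load_dataset()
models = {
    "Random Forest": RandomForestClassifier(n_estimators=200),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:20s} accuracy = {scores.mean():.3f}")
```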
The proliferation of many editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do actions or say words that they never did or said, so developing an algorithm for deepfake detection is very important to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of the input frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue …
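The Gabor filter-bank idea can be sketched as follows with OpenCV, building 16 kernels at evenly spaced orientations and stacking their responses as the classifier input; the kernel parameters and frame path are placeholders, not the paper's settings.

```python
# Building a bank of 16 Gabor filters at evenly spaced orientations and
# stacking their responses as classifier input; parameters are placeholders.
import numpy as np
import cv2

def gabor_bank(n_orientations=16, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    kernels = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                          lambd, gamma, psi=0))
    return kernels

frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
if frame is None:                     # fall back to a dummy frame for the demo
    frame = np.random.randint(0, 256, (128, 128), dtype=np.uint8)

responses = np.stack([cv2.filter2D(frame, cv2.CV_32F, k) for k in gabor_bank()])
print(responses.shape)                # (16, H, W) texture maps for a binary CNN
```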
To date, comprehensive reviews and discussions of the strengths and limitations of Remote Sensing (RS) standalone and combination approaches, and of Deep Learning (DL)-based RS datasets, in archaeology have been limited. The objective of this paper is therefore to review and critically discuss existing studies that have applied these advanced approaches in archaeology, with a specific focus on digital preservation and object detection. RS standalone approaches, including range-based and image-based modelling (e.g., laser scanning and SfM photogrammetry), have several disadvantages in terms of spatial resolution, penetration, textures, colours, and accuracy. These limitations have led some archaeological studies to fuse/integrate multiple …