Sentiment analysis is a major field in natural language processing whose main task is to extract sentiments, opinions, attitudes, and emotions from subjective text. Because of its importance to decision making and to people's trust in reviews on websites, much academic research has addressed sentiment analysis problems. Deep Learning (DL) is a powerful Machine Learning (ML) technique that has emerged with its ability to represent features and discriminate data, leading to state-of-the-art prediction results. In recent years DL has been widely used in sentiment analysis; however, its application to the Arabic language remains scarce, and most previous research addresses other languages such as English. The proposed model tackles Arabic Sentiment Analysis (ASA) using a DL approach. ASA is a challenging field because Arabic has a richer morphological structure than many other languages. In this work, a Long Short-Term Memory (LSTM) deep neural network is trained, combined with word embedding as the first hidden layer for feature extraction. The results show that an accuracy of about 82% is achievable using the DL method.
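A minimal sketch of the architecture described above (word embedding as the first hidden layer feeding an LSTM), assuming a Keras/TensorFlow stack; the vocabulary size, sequence length, layer widths, and the dummy data standing in for tokenized Arabic reviews are illustrative assumptions, not values reported in the abstract.

```python
# Sketch only: embedding + LSTM binary sentiment classifier.
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB_SIZE = 20000   # assumed tokenizer vocabulary size
MAX_LEN = 100        # assumed padded review length
EMBED_DIM = 128      # assumed embedding dimension

model = Sequential([
    Input(shape=(MAX_LEN,)),
    # Word embedding as the first hidden layer (feature extraction)
    Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM),
    # LSTM layer learns sequential dependencies in the review text
    LSTM(64),
    # Binary sentiment output (positive / negative)
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Dummy integer sequences standing in for tokenized, padded Arabic reviews
X = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=1, verbose=0)
```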
The research aims to identify the effect of a deep learning strategy on mathematics achievement and practical intelligence among secondary school students during the 2022/2023 academic year. The experimental method with two groups (experimental and control) and a post-test was adopted. The research population comprised female students of the fifth scientific grade in the first Karkh Education Directorate. Sixty-one (61) female students were intentionally chosen and divided into two groups: an experimental group of 30 students taught according to the proposed strategy, and a control group of 31 students taught according to the usual method. For the purpose of collecting data for the experiment …
This research is a study of the difficulties of learning the Arabic language that face Arabic language learners in the Kurdistan Region, revealing their types and forms, which can be classified into two categories:
The first type: difficulties related to the educational system, whose source is the Arabic language itself, the Arabic teacher, the learner studying the language, the educational curriculum (i.e., the educational materials), or the educational process (i.e., the method used in teaching).
The second type: general difficulties related to the political aspect, whose source is the policy of the Kurdistan Regional Government in marginalizing the Arabic language and replacing the forefront of the …
As Internet of Things (IoT) applications and devices increase, their access capacity is frequently stressed. This can create a significant bottleneck for network performance at different layers of an end-point-to-end-point (P2P) communication route, so appropriate characterization (i.e., classification) of time-varying traffic has been used to address the issue. Nevertheless, it remains largely an open challenge, because most existing solutions depend on machine learning (ML) methods that incur high computational cost and do not take into account the fine-grained flow classification that IoT devices require. Therefore, this paper presents a new model based …
This paper proposes an improved solution for classifying EEG-based brain language signals using machine learning and optimization algorithms. The project aims to improve brain signal classification for language processing tasks by achieving higher accuracy and faster processing. Feature extraction is performed using a modified Discrete Wavelet Transform (DWT), which increases the ability to capture signal characteristics by decomposing EEG signals into significant frequency components. A Grey Wolf Optimization (GWO) algorithm is then applied to improve the results and select the optimal features, achieving more accurate results by retaining impactful features with maximum relevance …
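An illustrative sketch of the wavelet feature-extraction stage described above, assuming the PyWavelets library; the wavelet family, decomposition level, summary statistics, and the synthetic signal standing in for real EEG data are assumptions, and the GWO feature-selection step is only indicated in a comment rather than implemented.

```python
# Sketch only: multi-level DWT features from one EEG segment.
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    """Decompose one EEG channel with a multi-level DWT and summarize each
    sub-band with simple statistics (mean, std, energy)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:  # approximation + detail coefficients per level
        feats.extend([band.mean(), band.std(), np.sum(band ** 2)])
    return np.array(feats)

# Synthetic stand-in for a 1-second EEG segment sampled at 256 Hz
eeg_segment = np.random.randn(256)
features = dwt_features(eeg_segment)
print(features.shape)  # one feature vector per segment

# In the described pipeline, a Grey Wolf Optimization search would then select
# the subset of these features that maximizes classifier accuracy; that
# wrapper search is omitted here.
```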
The COVID-19 pandemic has necessitated new methods for controlling the spread of the virus, and machine learning (ML) holds promise in this regard. Our study aims to explore the latest ML algorithms utilized for COVID-19 prediction, with a focus on their potential to optimize decision-making and resource allocation during peak periods of the pandemic. Our review stands out from others in that it concentrates primarily on ML methods for disease prediction. To conduct this scoping review, we performed a Google Scholar literature search using "COVID-19," "prediction," and "machine learning" as keywords, with a custom range from 2020 to 2022. Of the 99 articles screened for eligibility, 20 were selected for the final review. Our system …
In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MCOCO dataset for object classification and detection; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, enhanced using histogram equalization, and resized to 20 x 20. Principal Component Analysis (PCA) was used for feature extraction, and finally the four classification methods …
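A runnable sketch of the pipeline described above using scikit-learn and scikit-image; random grayscale arrays stand in for the MCOCO images so the snippet is self-contained, and the PCA dimensionality and classifier settings are illustrative assumptions rather than the paper's reported configuration.

```python
# Sketch only: preprocess -> PCA -> four classifiers, 7:3 split.
import numpy as np
from skimage.exposure import equalize_hist
from skimage.transform import resize
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neural_network import MLPClassifier

# Stand-in for grayscale object images and their labels
images = np.random.rand(200, 64, 64)
labels = np.random.randint(0, 5, size=200)

# Preprocess: histogram equalization, resize to 20x20, flatten to a vector
X = np.array([resize(equalize_hist(img), (20, 20)).ravel() for img in images])

# 70/30 random train/test split
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

# PCA for feature extraction before classification (assumed 50 components)
pca = PCA(n_components=50)
X_tr, X_te = pca.fit_transform(X_tr), pca.transform(X_te)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SGD": SGDClassifier(max_iter=1000),
    "LR": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))  # test-set accuracy per method
```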
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have too little or inadequate data to train DL frameworks. Manual labeling is usually needed to provide labeled data, and it typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is usually fed a significant amount of labeled data so that it can learn representations automatically; ultimately, a larger amount of data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for …
The study aimed to review translation theories proposed to address problems in translation studies. To this end, translation theories and their applications were reviewed in different studies, with a focus on issues such as critical discourse analysis, culture-specific items, and collocation translation.