Sentiment analysis is a major field in natural language processing whose main task is to extract sentiments, opinions, attitudes, and emotions from subjective text. Because of its importance in decision making and in people's trust in reviews on websites, many academic studies have addressed sentiment analysis problems. Deep Learning (DL) is a powerful Machine Learning (ML) technique that has emerged with its ability to represent features and discriminate data, leading to state-of-the-art prediction results. In recent years, DL has been widely used in sentiment analysis; however, its application to the Arabic language remains scarce, as most previous studies address other languages such as English. The proposed model tackles Arabic Sentiment Analysis (ASA) using a DL approach. ASA is a challenging field because the Arabic language has a richer morphological structure than many other languages. In this work, Long Short-Term Memory (LSTM), a deep neural network, is used to train the model, combined with word embedding as the first hidden layer for feature extraction. The results show that an accuracy of about 82% is achievable using this DL method.
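The architecture the abstract describes (an embedding layer feeding an LSTM, ending in a sentiment score) can be sketched in plain NumPy. This is a minimal forward-pass illustration only; all sizes, random weights, and token ids are assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not values from the paper).
vocab_size, embed_dim, hidden_dim = 50, 8, 16

# First hidden layer: word-embedding lookup table.
E = rng.normal(size=(vocab_size, embed_dim))

# LSTM parameters for the four gates (input, forget, cell, output),
# stacked row-wise for compactness.
W = rng.normal(size=(4 * hidden_dim, embed_dim)) * 0.1
U = rng.normal(size=(4 * hidden_dim, hidden_dim)) * 0.1
b = np.zeros(4 * hidden_dim)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_sentiment(token_ids, w_out, b_out):
    """Run the LSTM over embedded tokens; return a sentiment score in (0, 1)."""
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    for t in token_ids:
        x = E[t]                        # embedding lookup
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)      # cell-state update
        h = o * np.tanh(c)              # hidden state
    return sigmoid(w_out @ h + b_out)   # final sentiment probability

w_out = rng.normal(size=hidden_dim) * 0.1
p = lstm_sentiment([3, 17, 42], w_out, 0.0)
print(float(p))
```

In practice such a model would be trained with a framework rather than written by hand; the sketch only shows how the embedding layer and LSTM gates compose.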
This research deals with one of the topics of Arabic grammar, namely the plural noun. The importance of grammatical topics in preserving the tongue from error, and their fundamental role in understanding the rhetorical inimitability of the Qur'an, is not hidden from students. I have called this research:
(The Plural Noun in Arabic: A Grammatical Study)
The research aims to measure and evaluate the efficiency of the directorates of Anbar Municipalities using the Data Envelopment Analysis (DEA) method. The municipality sector is an important one, in direct contact with citizens' lives, and it provides essential services to them. The researcher used a case study method, with monthly reports as the main source of data. The research population is represented by the Directorate of Anbar Municipalities, and the research sample consists of seven municipalities that differ in category and size. The most important conclusion reached by the research i
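DEA scores each decision-making unit (here, a municipality) by solving a small linear program. The sketch below implements the standard input-oriented CCR model with SciPy; the toy input/output data are illustrative assumptions, not the study's monthly-report figures.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.
    X: inputs, shape (m_inputs, n_dmus); Y: outputs, shape (s_outputs, n_dmus)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input constraints: sum_j lam_j * X[i, j] - theta * X[i, k] <= 0
    A_in = np.hstack([-X[:, [k]], X])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lam_j * Y[r, j] <= -Y[r, k]
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, k]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n),
                  method="highs")
    return res.x[0]

# Three toy DMUs: same output, increasing input -> efficiencies 1, 0.5, 0.25.
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 2.0, 2.0]])
print([round(dea_efficiency(X, Y, k), 2) for k in range(3)])
```

A score of 1.0 marks a unit on the efficiency frontier; lower scores show how far a unit's inputs could shrink while keeping its outputs.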
The huge amount of documents on the internet has led to a rapid need for text classification (TC), which is used to organize these text documents. In this research paper, a new model based on the Extreme Learning Machine (ELM) is used. The proposed model consists of several phases: preprocessing, feature extraction, Multiple Linear Regression (MLR), and ELM. The basic idea of the proposed model is to calculate feature weights using MLR. These feature weights, together with the extracted features, are introduced as input to the ELM, producing a Weighted Extreme Learning Machine (WELM). The results showed that the proposed WELM is highly competitive compared to the plain ELM.
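The two-stage idea (MLR assigns feature weights, the weighted features feed an ELM) can be sketched compactly. The toy data, the use of absolute regression coefficients as weights, and the hidden-layer size are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy document-feature matrix (e.g. term frequencies) and binary labels.
X = rng.random((40, 6))
y = (X[:, 0] + 2 * X[:, 3] > 1.2).astype(float)

# Step 1: Multiple Linear Regression to estimate per-feature weights.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
Xw = X * np.abs(coef)            # re-weight features by MLR importance

# Step 2: ELM -- random hidden layer, output weights in closed form.
hidden = 20
W = rng.normal(size=(X.shape[1], hidden))
b = rng.normal(size=hidden)
H = np.tanh(Xw @ W + b)          # hidden-layer activations
beta = np.linalg.pinv(H) @ y     # least-squares output weights

pred = (H @ beta > 0.5).astype(float)
acc = float((pred == y).mean())
print(acc >= 0.5)
```

The ELM's appeal is the closed-form output-weight solve: only the final layer is fitted, so training reduces to one pseudoinverse.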
Today, the world is living through a time of epidemic diseases that spread unnaturally, infecting and killing millions of people worldwide. The COVID-19 virus, one of the most widespread current epidemic diseases, had killed more than six million people as of May 2022. The World Health Organization (WHO) declared coronavirus disease 2019 (COVID-19) after an outbreak of SARS-CoV-2 infection. COVID-19 is a severe and potentially fatal respiratory disease caused by the SARS-CoV-2 virus, first noticed at the end of 2019 in the city of Wuhan. Artificial intelligence plays a meaningful role in analyzing medical images and giving accurate results that serve healthcare workers, especially X-ray images, which are co
Feature extraction is a crucial step in image discrimination, as it makes the representation of image content as faithful as possible. A Gaussian blur filter is used to eliminate noise and add purity to images. The Principal Component Analysis (PCA) algorithm is a straightforward and effective method to derive a feature vector and to reduce the dimensionality of a data set. This paper proposes using the Gaussian blur filter to eliminate image noise and an improved PCA for feature extraction. Traditional PCA achieved total average recall and precision of 93% and 97%, while the improved PCA achieved 98% and 100%, showing that the improved PCA is more effective in recall and precision.
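The generic pipeline the abstract describes (Gaussian blur for denoising, then PCA for feature extraction) can be sketched as below. Kernel size, sigma, image sizes, and the number of components are illustrative assumptions; this shows the standard methods, not the paper's specific improvement to PCA.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur; edges handled by reflection padding."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    # Blur rows, then columns (the 2-D Gaussian kernel is separable).
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, tmp)

def pca_features(data, n_components):
    """Project rows of `data` onto the top principal components via SVD."""
    centered = data - data.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T

# Toy "images": ten noisy 8x8 arrays standing in for real image data.
imgs = rng.random((10, 8, 8))
blurred = np.stack([gaussian_blur(im) for im in imgs])
feats = pca_features(blurred.reshape(10, -1), n_components=3)
print(feats.shape)  # (10, 3)
```

Each image is thus reduced from 64 raw pixels to a 3-dimensional feature vector before any classification or retrieval step.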
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method employed to conceal data within various carrier objects such as text, can address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in its linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic work. Arabic text steganography harn
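To make the idea of text as a carrier concrete, here is a minimal sketch of one generic text-steganography technique: encoding secret bits as invisible zero-width Unicode characters appended to a cover text. This illustrates the general principle only; it is not the Arabic-script-specific scheme the paper studies.

```python
# Zero-width space / zero-width non-joiner used as invisible bit symbols
# (an assumption for this sketch, not the paper's encoding).
ZW0, ZW1 = "\u200b", "\u200c"

def embed(cover: str, secret: bytes) -> str:
    """Append the secret as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in secret)
    marks = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return cover + marks

def extract(stego: str) -> bytes:
    """Recover the hidden bytes from the zero-width characters."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in stego if ch in (ZW0, ZW1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

stego = embed("An innocent sentence.", b"hi")
print(extract(stego))  # b'hi'
```

The stego text renders identically to the cover text in most viewers, which is precisely the property steganographic schemes exploit.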
The research aims to identify the impact of using an electronic participatory learning strategy, based on internet programs, on learning some basic basketball skills among first-year intermediate-school students, according to the curricular course; the research sample was selected deliberately from first-year intermediate-school students. As for the research problem, the researchers noted a weakness in students' levels in learning basketball skills, which prompted them to devise appropriate solutions through the participatory learning strategy. The researchers hypothesized statistically significant differences between pre- and post-tests, in favor of the post-tests individually and in favor of
Second language learners may commit many mistakes in the process of second language learning. Through Error Analysis Theory, the present study discusses the problems faced by second language learners whose native language is Kurdish. At the early stages of language learning, second language learners will recognize the errors committed, yet they cannot identify the type, the stage, and the error-type shift in the process of language learning. Given their educational background in English as a basic module, English department students at the university stage make phonological, morphological, syntactic, semantic, and lexical as well as speech errors. The main cause behind such errors goes back to the cultural differences
This paper focuses on reducing the time of text-processing operations by enumerating each string using a multi-hashing methodology. Text analysis is an important subject for any system that deals with strings (sequences of characters from an alphabet) and text processing (e.g., word processors, text editors, and other text-manipulation systems). Many problems arise when dealing with string operations on an unfixed number of characters (e.g., long execution time), due to the overhead of embedded operations (such as symbol matching and conversion). The execution time depends largely on the string's characteristics, especially its length (i.e., the number of characters consisting
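The general idea behind such enumeration can be sketched as follows: hash each string once (with more than one hash function to reduce collisions), so that later equality checks compare small integers instead of re-scanning characters. The hash bases, moduli, and function names are illustrative assumptions, not the paper's scheme.

```python
def poly_hash(s: str, base: int, mod: int) -> int:
    """Classic polynomial rolling hash over the string's characters."""
    h = 0
    for ch in s:
        h = (h * base + ord(ch)) % mod
    return h

def enumerate_strings(strings):
    """Assign each string a pair of hashes (multi-hashing cuts collisions)."""
    table = {}
    for s in strings:
        table[s] = (poly_hash(s, 131, 2**31 - 1),
                    poly_hash(s, 257, 2**61 - 1))
    return table

codes = enumerate_strings(["alpha", "beta", "alpha"])
# Equality testing is now an integer-pair comparison, independent of length:
print(codes["alpha"] == codes["alpha"])  # True
print(codes["alpha"] == codes["beta"])   # False
```

After the one-time enumeration pass, repeated comparisons cost O(1) per pair rather than O(length), which is the speed-up the paper targets.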
This study aims to show the markers of the Arabic noun (the genitive case, nunation, the vocative, the definite article, and predication). These markers distinguish the noun from the other parts of the sentence. It also aims to show why these markers are peculiar to nouns.