Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. Compared with English, only a few studies have addressed categorizing and classifying Arabic text. Arabic text representation is a difficult task for a variety of applications, such as text classification and clustering, because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of work from the last five years based on the dataset, year, algorithms, and the accuracy obtained. Deep Learning (DL) and Machine Learning (ML) models have been used to enhance text classification for the Arabic language. Remarks for future work are also drawn.
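As a concrete illustration of the three phases named above (preprocessing, feature extraction, classification), the following is a minimal, self-contained sketch, not any surveyed system: `preprocess` applies a toy Arabic character normalization, `tf_vector` performs simple term-frequency feature extraction, and `classify` is a 1-nearest-neighbor classifier over cosine similarity. The function names and the normalization table are illustrative assumptions; real systems use richer preprocessing (stemming, stop-word removal) and stronger ML/DL classifiers.

```python
from collections import Counter
import math

def preprocess(text):
    """Toy Arabic normalization: unify alef and teh-marbuta variants, then split on whitespace."""
    table = str.maketrans({"أ": "ا", "إ": "ا", "آ": "ا", "ة": "ه", "ى": "ي"})
    return text.translate(table).split()

def tf_vector(tokens):
    """Term-frequency feature vector stored as a sparse dict."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, labeled_docs):
    """1-NN classification: assign the label of the most similar training document."""
    vec = tf_vector(preprocess(text))
    best = max(labeled_docs, key=lambda d: cosine(vec, tf_vector(preprocess(d[0]))))
    return best[1]
```

In practice the same three-phase structure holds whether the final classifier is a shallow model or a deep network; only the feature extraction and classification components are swapped out.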
To maintain the security and integrity of data with the growth of the Internet and the increasing prevalence of transmission channels, it is necessary to strengthen security and develop new algorithms. The Playfair cipher is a substitution scheme. The traditional Playfair scheme uses a small 5×5 matrix containing only uppercase letters, making it vulnerable to hackers and cryptanalysis. In this study, a new encryption and decryption approach is proposed to enhance the resistance of the Playfair cipher. For this purpose, a symmetric cryptosystem based on a shared secret is developed. The proposed Playfair method uses a 5×5 keyword matrix for English and a 6×6 keyword matrix for Arabic to encrypt the alphabets of …
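For reference, the classical 5×5 English Playfair that the proposed method builds on can be sketched as follows. This is only the textbook baseline (with 'J' merged into 'I' and repeated or odd letters padded with 'X'), not the paper's enhanced 6×6 Arabic variant:

```python
def build_matrix(key):
    """Classic 5x5 Playfair matrix: key letters first, then the remaining alphabet (J merged into I)."""
    seen, cells = set(), []
    for ch in (key + "ABCDEFGHIKLMNOPQRSTUVWXYZ").upper().replace("J", "I"):
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            cells.append(ch)
    return [cells[i:i + 5] for i in range(0, 25, 5)]

def find(matrix, ch):
    """Row and column of a letter in the matrix."""
    for r, row in enumerate(matrix):
        if ch in row:
            return r, row.index(ch)

def encrypt(plaintext, key):
    matrix = build_matrix(key)
    text = "".join(c for c in plaintext.upper().replace("J", "I") if c.isalpha())
    # Split into digraphs, padding repeated letters and odd length with 'X'.
    pairs, i = [], 0
    while i < len(text):
        a = text[i]
        if i + 1 < len(text) and text[i + 1] != a:
            b, i = text[i + 1], i + 2
        else:
            b, i = "X", i + 1
        pairs.append((a, b))
    out = []
    for a, b in pairs:
        ra, ca = find(matrix, a)
        rb, cb = find(matrix, b)
        if ra == rb:            # same row: take the letter to the right
            out.append(matrix[ra][(ca + 1) % 5] + matrix[rb][(cb + 1) % 5])
        elif ca == cb:          # same column: take the letter below
            out.append(matrix[(ra + 1) % 5][ca] + matrix[(rb + 1) % 5][cb])
        else:                   # rectangle rule: swap columns
            out.append(matrix[ra][cb] + matrix[rb][ca])
    return "".join(out)
```

For example, `encrypt("instruments", "monarchy")` yields `"GATLMZCLRQXA"` under this padding convention. The 6×6 variant described in the abstract would extend the same row/column/rectangle rules to a larger keyed matrix.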
This study aimed to assess orthodontic postgraduate students' use of social media during the COVID-19 lockdown. Ninety-four postgraduate students (67 master's students and 27 doctoral students) were enrolled in the study and asked to fill in an online questionnaire about their use of social media during the COVID-19 lockdown. Frequency distributions and percentages were calculated using SPSS software. The results showed that 99% of the students used social media. The most frequently used platform was Facebook (94%), followed by YouTube (78%) and Instagram (65%), while Twitter and LinkedIn were used less and no one used Blogger. About 63% of the students used elements of social media to l…
Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and messy, which makes it difficult to find topics in them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from large documents such as articles and books, and they are often less effective when applied to short texts like tweets. Fortunately, Twitter has many features that represent the interaction between users, and tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtag feature to improve the topics learned …
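One common way to exploit hashtags when topic modeling short texts is to pool tweets that share a hashtag into longer pseudo-documents before running LDA. Whether this matches the paper's exact scheme is not stated in the excerpt, so the sketch below is a generic illustration of the pooling step only:

```python
import re
from collections import defaultdict

def pool_by_hashtag(tweets):
    """Group tweets into hashtag pseudo-documents; tweets without hashtags stay alone."""
    pools = defaultdict(list)
    for i, tweet in enumerate(tweets):
        tags = re.findall(r"#(\w+)", tweet.lower())
        if tags:
            for tag in tags:
                pools[tag].append(tweet)      # a tweet with k hashtags joins k pools
        else:
            pools[f"_tweet_{i}"].append(tweet)
    # Concatenate each pool into one pseudo-document for the topic model.
    return {key: " ".join(texts) for key, texts in pools.items()}
```

The resulting pseudo-documents are long enough for a standard LDA implementation to estimate reliable word co-occurrence statistics, which is exactly what plain per-tweet modeling lacks.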
After the information revolution in the Western world and the developments in all fields, especially education and e-learning, an integrated system based on the effective employment of information and communication technology in teaching and learning, through an environment rich in computer and Internet applications, enabled the community and the learner to access information sources and learning at any time and place, in a way that achieves mutual interaction between the elements of the system and the surrounding environment. The emergence of COVID-19 then led to a major interruption in all educational systems that had never happened before, and the disrupt…
Financial markets are one of the sectors whose data is characterized by continuous movement most of the time and by constant change, so it is difficult to predict their trends. This creates a need for methods, means, and techniques for decision making, and it pushes investors and analysts in the financial markets to use various methods in order to predict the direction of market movement. To support decision making across different investments, the support vector machine algorithm and the CART regression tree algorithm are used to classify the stock data in order to determine …
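The abstract names two algorithms; as a small illustration of the CART side, the sketch below (generic, not the authors' implementation) scores candidate thresholds on a single feature by weighted Gini impurity, the criterion CART uses for classification splits. The feature and labels are hypothetical stand-ins for a stock indicator and an up/down movement class:

```python
def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Threshold on one feature minimizing the weighted Gini impurity of the two sides."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best
```

A full CART tree applies this search recursively over all features; an SVM would instead separate the same classes with a maximum-margin hyperplane.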
In this paper, two main stages for image classification are presented. The training stage consists of collecting images of interest and applying BOVW to them (feature extraction and description using SIFT, and vocabulary generation), while the testing stage classifies a new unlabeled image using nearest-neighbor classification over the feature descriptors. The supervised bag of visual words gives good results, presented clearly in the experimental part, where unlabeled images are classified correctly even though only a small number of images is used in the training process.
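The testing-stage logic described above can be sketched generically as follows. A real system would use 128-dimensional SIFT descriptors and a k-means vocabulary learned from the training images; here the descriptors are toy 2-D vectors and the vocabulary (codebook) is assumed to be given, so the sketch shows only the quantize-to-histogram-then-nearest-neighbor flow:

```python
import math

def nearest_word(desc, vocab):
    """Index of the closest visual word by Euclidean distance."""
    return min(range(len(vocab)), key=lambda i: math.dist(desc, vocab[i]))

def bovw_histogram(descriptors, vocab):
    """Normalized histogram of visual-word assignments for one image."""
    hist = [0.0] * len(vocab)
    for d in descriptors:
        hist[nearest_word(d, vocab)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def classify(descriptors, vocab, training):
    """1-NN over BOVW histograms; `training` is a list of (histogram, label) pairs."""
    query = bovw_histogram(descriptors, vocab)
    return min(training, key=lambda t: math.dist(query, t[0]))[1]
```

Because the image is reduced to a fixed-length histogram, nearest-neighbor comparison works even when the training set is small, which matches the observation in the abstract.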
Vehicular Ad Hoc Networks (VANETs) are integral to Intelligent Transportation Systems (ITS), enabling real-time communication between vehicles and infrastructure to enhance traffic flow, road safety, and passenger experience. However, the open and dynamic nature of VANETs presents significant privacy and security challenges, including data eavesdropping, message manipulation, and unauthorized access. This study addresses these concerns by leveraging advancements in Fog Computing (FC), which offers low-latency, distributed data processing near end devices to enhance the resilience and security of VANET communications. The paper comprehensively analyzes the security frameworks for fog-enabled VANETs, introducing a novel taxonomy that c…
Maximizing the net present value (NPV) of oil field development depends heavily on optimizing well placement. The traditional approach entails using expert intuition to design well configurations and locations, followed by economic analysis and reservoir simulation to determine the most effective plan. However, this approach often proves inadequate due to the complexity and nonlinearity of reservoirs. In recent years, computational techniques have been developed to optimize well placement by defining decision variables (such as well coordinates), objective functions (such as NPV or cumulative oil production), and constraints. This paper presents a study on the use of genetic algorithms for well placement optimization, a ty…
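The decision-variable/objective/constraint framing above maps directly onto a genetic algorithm. In the sketch below, the fitness surface `npv` is a made-up smooth surrogate with a single peak, standing in for the reservoir-simulator NPV evaluation, and the operators (truncation selection, midpoint crossover, Gaussian mutation, clipping to the grid as a bound constraint) are one common choice rather than necessarily the study's:

```python
import random

def npv(x, y):
    """Hypothetical NPV surrogate: one smooth peak at well location (30, 70)."""
    return 100.0 - ((x - 30) ** 2 + (y - 70) ** 2) / 100.0

def genetic_well_placement(generations=60, pop_size=30, grid=100, seed=1):
    rng = random.Random(seed)
    # Decision variables: (x, y) well coordinates, initialized uniformly over the grid.
    pop = [(rng.uniform(0, grid), rng.uniform(0, grid)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: npv(*w), reverse=True)
        elite = pop[: pop_size // 2]                 # selection: keep the fittest half
        children = []
        while len(elite) + len(children) < pop_size:
            (x1, y1), (x2, y2) = rng.sample(elite, 2)
            x = (x1 + x2) / 2 + rng.gauss(0, 2)      # midpoint crossover + mutation
            y = (y1 + y2) / 2 + rng.gauss(0, 2)
            # Constraint handling: clip offspring back onto the field grid.
            children.append((min(max(x, 0), grid), min(max(y, 0), grid)))
        pop = elite + children
    return max(pop, key=lambda w: npv(*w))
```

In a real workflow, each `npv` call would be a full reservoir-simulation run, which is why the population size and generation count become the dominant computational cost.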