Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to the English language, only a few studies have addressed categorizing and classifying the Arabic language. Arabic text representation is a difficult task for a variety of applications, such as text classification and clustering, because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research from the last five years, organized by dataset, year, algorithms, and reported accuracy. Deep Learning (DL) and Machine Learning (ML) models have been used to enhance text classification for the Arabic language. Remarks on future work conclude the paper.
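As an illustration of the three-phase pipeline the survey describes, below is a minimal sketch in scikit-learn; the normalization rules, token pattern, and linear-SVM classifier are illustrative assumptions, not choices taken from any surveyed paper.

```python
# Minimal sketch of the three phases: preprocessing, feature
# extraction (TF-IDF), and classification (linear SVM).
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

def normalize_arabic(text: str) -> str:
    """Light preprocessing: strip diacritics and unify common letter variants."""
    text = re.sub(r"[\u064B-\u0652]", "", text)              # remove tashkeel
    text = re.sub(r"[\u0622\u0623\u0625]", "\u0627", text)   # alef variants -> alef
    return text

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(preprocessor=normalize_arabic,
                              token_pattern=r"[\u0600-\u06FF]+")),
    ("clf", LinearSVC()),
])

# docs: list of Arabic strings; labels: list of category names
# pipeline.fit(docs, labels); pipeline.predict(new_docs)
```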
Channel estimation (CE) is essential for wireless links but becomes progressively onerous as Fifth Generation (5G) Multi-Input Multi-Output (MIMO) systems and extensive fading expand the search space and increase latency. This study reframes CE support as the process of learning to deduce the channel type and signal-to-noise ratio (SNR) directly from per-tone Orthogonal Frequency-Division Multiplexing (OFDM) observations, with blind channel state information (CSI). We trained a dual deep model that combines Convolutional Neural Networks (CNNs) with Bidirectional Recurrent Neural Networks (BRNNs), using a lookup table (LUT) label for the channel type (class indices instead of per-tap values) and ordinal supervision for the SNR (0–20 dB, in 5-dB steps). …
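The paper's exact architecture is not reproduced in this excerpt; the following is a minimal sketch, under assumed input dimensions, of a dual-head CNN + bidirectional RNN of the kind described, with the SNR head using a cumulative-target encoding as one common way to implement ordinal supervision over the five 5-dB bins.

```python
# Hypothetical dual-head sketch: a shared CNN + BRNN trunk, a softmax head
# for the LUT channel-type index, and a sigmoid head for ordinal SNR bins.
import tensorflow as tf
from tensorflow.keras import layers

N_TONES, N_CHANNEL_TYPES, N_SNR_BINS = 64, 4, 5          # assumed dimensions

inputs = tf.keras.Input(shape=(N_TONES, 2))              # per-tone I/Q observations
x = layers.Conv1D(32, 3, padding="same", activation="relu")(inputs)
x = layers.Conv1D(64, 3, padding="same", activation="relu")(x)
x = layers.Bidirectional(layers.GRU(64))(x)              # BRNN over the tone axis

channel_head = layers.Dense(N_CHANNEL_TYPES, activation="softmax",
                            name="channel_type")(x)
snr_head = layers.Dense(N_SNR_BINS - 1, activation="sigmoid",
                        name="snr")(x)                   # K-1 cumulative targets

model = tf.keras.Model(inputs, [channel_head, snr_head])
model.compile(optimizer="adam",
              loss={"channel_type": "sparse_categorical_crossentropy",
                    "snr": "binary_crossentropy"})       # ordinal supervision
```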
With the fast growth of neural machine translation (NMT), there is still a lack of insight into the performance of these models on semantically and culturally rich texts, especially between linguistically distant languages such as Arabic and English. In this paper, we investigate the performance of two state-of-the-art AI translation systems (ChatGPT, DeepSeek) when translating Arabic texts into English in three different genres: journalistic, literary, and technical. The study utilizes a mixed-method evaluation methodology based on a balanced corpus of 60 Arabic source texts from the three genres. Objective measures, including BLEU and TER, and subjective evaluations from human translators were employed to determine the semantic, contextual, and …
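A minimal sketch of the objective side of such an evaluation, using the sacrebleu package (the paper's exact tooling is not stated; the example sentences are placeholders):

```python
import sacrebleu

# One hypothesis stream and one reference stream (placeholder sentences).
hypotheses = ["The government announced new economic reforms today."]
references = [["Today the government announced new economic reforms."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)   # higher is better
ter = sacrebleu.corpus_ter(hypotheses, references)     # lower is better
print(f"BLEU: {bleu.score:.1f}  TER: {ter.score:.1f}")
```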
The field of translation encompasses many types of translation, such as literary, scientific, and medical. The translation of grammatical aspects has always posed difficulties.
Political translation is the focus here. Translators face many general problems when translating political texts from Arabic into Spanish. The aim here is to clarify the definition of functions or terms within the text, and to arrive at the correct form of translation of such texts from Spanish into Arabic. It is worth mentioning that the paper is in two parts: the first exemplifies what is meant by translation and the prerequisites of a translator, along with mentioning the methods followed …
Intelligent buildings offer various incentives for energy saving, yet non-stationary building environments can make that saving highly inefficient. In the presence of such dynamic excitation, with higher levels of nonlinearity and a coupling effect between temperature and humidity, the HVAC system transitions from underdamped to overdamped indoor conditions. This leads to highly inefficient energy use and fluctuating indoor thermal comfort. To address these concerns, this study develops a novel framework based on deep clustering of Lagrangian trajectories for multi-task learning (DCLTML) and adds a pre-cooling coil in the air handling unit (AHU) to alleviate the coupling issue. The proposed DCLTML exhibits great overall control and is …
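The DCLTML details are not reproduced in this excerpt; as a heavily hedged sketch of the general deep-clustering family, the snippet below embeds trajectory windows with an autoencoder and clusters the latent codes into operating regimes on which multi-task heads could condition. All sizes and the synthetic data are placeholders.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.cluster import KMeans

WINDOW, N_FEATURES, LATENT, N_REGIMES = 48, 3, 8, 4     # assumed sizes

# Autoencoder over trajectory windows (e.g. temperature, humidity, load).
inp = tf.keras.Input(shape=(WINDOW, N_FEATURES))
h = layers.Dense(64, activation="relu")(layers.Flatten()(inp))
z = layers.Dense(LATENT, name="latent")(h)
h = layers.Dense(64, activation="relu")(z)
out = layers.Reshape((WINDOW, N_FEATURES))(layers.Dense(WINDOW * N_FEATURES)(h))

autoencoder = tf.keras.Model(inp, out)
encoder = tf.keras.Model(inp, z)
autoencoder.compile(optimizer="adam", loss="mse")

trajectories = np.random.rand(1000, WINDOW, N_FEATURES).astype("float32")  # placeholder
autoencoder.fit(trajectories, trajectories, epochs=5, verbose=0)

# Cluster latent codes into operating regimes for downstream multi-task heads.
regimes = KMeans(n_clusters=N_REGIMES, n_init=10).fit_predict(
    encoder.predict(trajectories))
```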
Over the last two decades, audio compression has become the topic of much research owing to the importance of this field, which bears on both storage capacity and transmission requirements. The rapid development of the computer industry increases the demand for high-quality audio data, so the development of audio compression technologies is of great importance; lossy and lossless are the two categories of compression. This paper aims to review the techniques of lossy audio compression and summarize the importance and uses of each method.
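As a toy illustration of the lossy idea itself (not a technique drawn from the survey): re-quantizing 16-bit samples to 8 bits halves the storage but discards the low-order detail irreversibly.

```python
import numpy as np

samples_16bit = np.array([12000, -8050, 303, 32767], dtype=np.int16)
samples_8bit = (samples_16bit // 256).astype(np.int8)   # lossy step: 256x fewer levels
reconstructed = samples_8bit.astype(np.int16) * 256     # originals cannot be recovered

print(samples_16bit)   # [12000 -8050   303 32767]
print(reconstructed)   # [11776 -8192   256 32512]
```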
Vascular patterns have been identified as a probable distinguishing characteristic for biometric systems. Since then, many studies have investigated and proposed different techniques that exploit this feature for identification and verification purposes. The conventional biometric features, such as the iris, fingerprints, and face recognition, have been thoroughly investigated; however, during the past few years, finger vein patterns have been recognized as a reliable biometric feature. This study discusses the application of the vein biometric system. Though the vein pattern can be a very appealing topic of research, there are many challenges in this field and some improvements need to be carried out. Here, the researchers reviewed …
In this paper we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme, in order to recognize Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method is very effective when compared with traditional word-based text classification methods. We have also found that our method works best if the sizes of the texts in all classes used for training are similar, and that performance improves significantly with increased data.
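A minimal sketch of the compression-based idea, with zlib standing in for the PPM compressor used in the paper (the principle is the same: assign a text to the class whose training corpus compresses it most cheaply); the corpora here are placeholders.

```python
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def classify(text: str, class_corpora: dict) -> str:
    """Pick the class whose corpus gives the smallest extra compressed cost."""
    costs = {}
    for label, corpus in class_corpora.items():
        base = compressed_size(corpus.encode())
        combined = compressed_size((corpus + " " + text).encode())
        costs[label] = combined - base   # approximates cross-entropy of text given class
    return min(costs, key=costs.get)

corpora = {"happiness": "what a wonderful joyful delightful day full of joy",
           "anger": "this is outrageous infuriating and makes me furious"}
print(classify("I am furious, this is outrageous", corpora))   # -> "anger"
```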
Face recognition is a crucial biometric technology used in various security and identification applications. Ensuring accuracy and reliability in facial recognition systems requires robust feature extraction and secure processing methods. This study presents an accurate facial recognition model using a feature extraction approach within a cloud environment. First, the facial images undergo preprocessing, including grayscale conversion, histogram equalization, Viola-Jones face detection, and resizing. Then, features are extracted using a hybrid approach that combines Linear Discriminant Analysis (LDA) and the Gray-Level Co-occurrence Matrix (GLCM). The extracted features are encrypted using the Data Encryption Standard (DES) for security. …
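A minimal sketch of the preprocessing and GLCM stages with OpenCV and scikit-image (the file name, resize target, and GLCM parameters are illustrative; the LDA projection and DES encryption are omitted since they need trained bases and key material):

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = cv2.imread("face.jpg")                                    # placeholder input
gray = cv2.equalizeHist(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))  # grayscale + equalization

detector = cv2.CascadeClassifier(                               # Viola-Jones detector
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
x, y, w, h = detector.detectMultiScale(gray)[0]                 # first detected face
face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))           # crop and resize

glcm = graycomatrix(face, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])
```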