Text categorization is the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison with English, only a few studies have addressed categorizing and classifying Arabic text. Arabic text representation is a difficult task for a variety of applications, such as text classification and clustering, because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of work from the last five years in terms of dataset, year, algorithms, and reported accuracy. Deep Learning (DL) and Machine Learning (ML) models have been used to enhance text classification for Arabic, and remarks for future work are drawn.
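The three-phase pipeline (preprocessing, feature extraction, classification) can be sketched with scikit-learn; the toy English corpus, labels, and model choices below are illustrative placeholders, not the Arabic datasets or models surveyed in the paper.

```python
# Minimal sketch of the three-phase text-categorization pipeline.
# The tiny corpus and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

docs = ["stock market rises", "team wins the match",
        "shares fall sharply", "player scores a goal"]
labels = ["economy", "sports", "economy", "sports"]

# Preprocessing here is just the tokenization/lowercasing built into
# TfidfVectorizer; real Arabic pipelines add normalization and stemming.
model = Pipeline([("tfidf", TfidfVectorizer()),   # feature extraction
                  ("clf", MultinomialNB())])      # classification
model.fit(docs, labels)
print(model.predict(["the match ends with a late goal"])[0])
```

An Arabic pipeline would swap in a language-specific preprocessor before the same two stages.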
Uncompressed digital images require a very large amount of storage capacity and, consequently, large communication bandwidth for transmission over a network. Image compression techniques not only minimize the image storage space but also preserve image quality. This paper presents an image compression technique that uses a distinct image coding scheme based on the wavelet transform, combining effective compression algorithms for further compression. EZW and SPIHT are significant lossy image compression algorithms: EZW coding is a simple and efficient algorithm, while SPIHT is a powerful technique used for image…
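EZW and SPIHT both build on the same idea: wavelet coefficients whose magnitude falls below a threshold are "insignificant" and can be dropped or coded cheaply. The NumPy sketch below (a hand-rolled one-level Haar transform, not either coder) illustrates only that thresholding step; the image data and threshold value are arbitrary placeholders.

```python
import numpy as np

def haar2d(x):
    # One level of the 2D Haar transform: average/difference pairs of
    # columns, then (via transposes) pairs of rows.
    def step(a):
        return np.concatenate([(a[:, ::2] + a[:, 1::2]) / 2,
                               (a[:, ::2] - a[:, 1::2]) / 2], axis=1)
    return step(step(x).T).T

def ihaar2d(c):
    # Exact inverse: a = avg + diff, b = avg - diff.
    def istep(a):
        n = a.shape[1] // 2
        out = np.empty_like(a)
        out[:, ::2] = a[:, :n] + a[:, n:]
        out[:, 1::2] = a[:, :n] - a[:, n:]
        return out
    return istep(istep(c.T).T)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
coeffs = haar2d(img)
# The "significance test" shared by EZW/SPIHT: coefficients below a
# threshold are treated as zero (here simply discarded).
sparse = np.where(np.abs(coeffs) < 0.05, 0.0, coeffs)
approx = ihaar2d(sparse)
```

The actual coders refine this with bit-plane passes and zerotree/set-partitioning structure, which this sketch omits.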
One of the main difficulties facing certified-document archiving systems is checking stamps, since a stamp may contain a complex background and be surrounded by unwanted data. Therefore, the main objective of this paper is to isolate the background and remove the noise that may surround the stamp. Our proposed method comprises four phases. First, we apply the k-means algorithm to cluster the stamp image into a number of clusters and merge them using the ISODATA algorithm. Second, we compute the mean and standard deviation of each remaining cluster to separate the background cluster from the stamp cluster. Third, a region-growing algorithm is applied to segment the image and then choose the connected region…
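A simplified sketch of the first two phases (k-means clustering of pixel intensities, then mean-based background selection), assuming a synthetic grayscale image and scikit-learn's KMeans; the ISODATA merging and region-growing phases are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic grayscale "stamp": bright background with a darker square stamp.
img = np.full((32, 32), 0.9)
img[8:24, 8:24] = 0.2
img += np.random.default_rng(1).normal(0, 0.02, img.shape)

# Phase 1 (simplified): cluster pixel intensities with k-means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1)).reshape(img.shape)

# Phase 2: per-cluster statistics decide which cluster is background
# (here simply the brighter one; the paper also uses the std deviation).
means = [img[labels == k].mean() for k in range(2)]
background = int(np.argmax(means))
mask = labels != background   # True where the stamp is
```

On real stamps the background is rarely uniform, which is why the paper follows this with ISODATA merging and region growing.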
A set of hydrotreating experiments was carried out on vacuum gas oil in a trickle-bed reactor to study hydrodesulfurization and hydrodenitrogenation based on two model nitrogen compounds, carbazole (non-basic) and acridine (basic), which were added at 0–200 ppm to the tested oil; dibenzothiophene was used as a sulfur model compound at 3,000 ppm over a commercial CoMo/Al2O3 catalyst and a prepared PtMo/Al2O3 catalyst. The impregnation method was used to prepare the (0.5% Pt) PtMo/Al2O3. The basic sites were found to be very few, and the two catalysts exhibited good metal-support interaction. In the absence of nitrogen compounds over the tested catalysts in the trickle-bed reactor at temperatures of 523 to 573 K, liquid hourly space velocity…
Many authors have investigated the problem of the early visibility of the new crescent moon after conjunction and proposed many criteria addressing this issue in the literature. This article presents a criterion for early crescent moon sighting based on the performance of a deep-learned pattern-recognition artificial neural network (ANN). Moon-sighting datasets were collected from various sources and used to train the ANN. The new criterion relies on the crescent width and the arc of vision from the edge of the crescent's bright limb. The result of the criterion is a control value indicating the moon's visibility condition, which separates the datasets into four regions: invisible, telescope only, probably visible, and certainly visible…
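To illustrate how a single control value can partition sightings into the four regions, a hypothetical thresholding rule might look like the following; the cut-off values are invented placeholders, not the paper's fitted criterion.

```python
# Hypothetical mapping from a scalar control value q to the four
# visibility regions; the thresholds below are illustrative only.
def visibility_region(q: float) -> str:
    if q < -0.5:
        return "invisible"
    if q < 0.0:
        return "telescope only"
    if q < 0.5:
        return "probably visible"
    return "certainly visible"
```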
The focus of this paper is the presentation of a new type of mapping, called the projection Jungck-Suzuki generalized mapping, and the definition of new algorithms of various types (one-step and two-step): the projection Jungck-normal N algorithm, the projection Jungck-Picard algorithm, the projection Jungck-Krasnoselskii algorithm, and the projection Jungck-Thianwan algorithm. The convergence of these algorithms is studied, and it is shown that they all converge to a fixed point. Furthermore, using the three conditions of the preceding lemma, we demonstrate that the difference between any two of the sequences tends to zero. The stability of these algorithms is demonstrated using the projection Jungck-Suzuki generalized mapping. In contrast, the rate of convergence…
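As a toy illustration of the Jungck-type iterations named above: given two maps S and T, a Jungck-Picard-style scheme generates x_{n+1} from S(x_{n+1}) = T(x_n) and, under suitable conditions, converges to a coincidence point S(z) = T(z). The scalar maps below are illustrative placeholders, not the paper's projection mappings.

```python
# Toy Jungck-Picard-type iteration on the real line.
# S and T are invented examples; S is assumed invertible.
S = lambda x: 2.0 * x          # S^{-1}(y) = y / 2
T = lambda x: x + 1.0

x = 0.0
for _ in range(60):
    x = T(x) / 2.0             # x_{n+1} = S^{-1}(T(x_n))
# Coincidence point: S(z) = T(z)  =>  2z = z + 1  =>  z = 1
```

Here the error halves at every step, so the iterate approaches z = 1 geometrically, the kind of convergence-rate behavior the paper compares across its algorithms.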
Anomaly detection is still a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all data to the graph concept frame (CFG). As is well known, DBSCAN groups data points of the same kind into clusters, while points lying outside the clusters' behavior are treated as noise or anomalies; thus DBSCAN can detect abnormal points that lie beyond a certain set threshold (extremes). However, not all anomalies are of that kind: some data do not occur repeatedly and, although not far from a specific group, are still considered abnormal with respect to the known group. The analysis showed that DBSCAN using the…
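The baseline behavior described above, DBSCAN marking points that belong to no dense cluster as noise, can be sketched with scikit-learn (label -1 denotes noise). The synthetic data and parameters are illustrative, and the paper's graph/CFG conversion is not shown.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense clusters plus one far-away point; DBSCAN assigns label -1
# to points that are not density-reachable from any cluster.
rng = np.random.default_rng(0)
cluster_a = rng.normal(0.0, 0.1, (30, 2))
cluster_b = rng.normal(5.0, 0.1, (30, 2))
outlier = np.array([[20.0, 20.0]])
X = np.vstack([cluster_a, cluster_b, outlier])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
anomalies = X[labels == -1]   # the distance-based anomalies
```

The paper's point is that this catches only distance-based outliers; a point sitting near a cluster but behaving unlike it would not receive the -1 label.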
Deaf-mute people often face difficulties communicating with society. They use sign language to communicate with each other and with hearing people, but hearing people find it difficult to understand the sign language and gestures made by deaf-mute people. Therefore, many techniques have been employed to tackle this problem by converting sign language to text or voice and vice versa. In recent years, research has progressed steadily in the use of computers to recognize and translate sign language. This paper reviews significant projects in the field, beginning with the important steps of sign language translation. These projects can be…
Nowadays, huge numbers of digital images are used and transferred via the Internet, which has been a primary source of information in several domains in recent years. Blur is one of the most common and difficult challenges in image processing; it is caused by object movement or camera shake. De-blurring is the process of restoring the sharp original image, so many techniques have been proposed and a large number of research papers published on removing blur from images. This paper presents a review of recent de-blurring papers (2017-2020), focusing on various strategies for enhancing image de-blurring software.
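As a minimal example of the simplest (non-blind) case, classical Wiener deconvolution restores a signal blurred by a known kernel; the surveyed papers mostly tackle the harder setting of unknown kernels and noise. The 1-D signal, kernel, and regularization value below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5                       # sharp "image" (1-D for brevity)
kernel = np.zeros(64)
kernel[:5] = 1.0 / 5.0                        # motion-blur-like averaging kernel

# Blur = circular convolution, i.e. pointwise product in the Fourier domain.
X, K = np.fft.fft(x), np.fft.fft(kernel)
blurred = np.real(np.fft.ifft(X * K))

eps = 1e-4                                    # regularizer: avoids dividing by ~0
wiener = np.conj(K) / (np.abs(K) ** 2 + eps)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))
```

Without the `eps` term this would be naive inverse filtering, which explodes wherever the kernel's spectrum is near zero; that instability is one reason practical de-blurring needs more sophisticated priors.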
With the recent development of technology and the advances in artificial intelligence and machine learning techniques, it has become possible for robots to acquire and display emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans, so that it is able to interact more naturally with its human counterpart in different environments. In this article, a survey of emotion recognition for HRI systems is presented. The survey aims to achieve two objectives: first, to discuss the main challenges that face researchers when building emotional HRI systems; second, to identify the sensing channels that can be used to detect emotions, and to provide a literature review…