Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to English, only a few studies have addressed categorization and classification of the Arabic language. Arabic text representation is a difficult task for applications such as text classification and clustering because the language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research from the last five years, organized by dataset, year, algorithms, and the accuracy obtained. Deep Learning (DL) and Machine Learning (ML) models were used to enhance text classification for the Arabic language. Remarks for future work are drawn.
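As a point of reference for the three-phase process the survey describes, the following is a minimal sketch of a preprocessing, feature-extraction, and classification pipeline. The Arabic example sentences, their labels, and the choice of TF-IDF with a linear SVM are illustrative assumptions, not the pipeline of any surveyed paper.

    # Minimal sketch of the three-phase text categorization process:
    # preprocessing -> feature extraction -> classification.
    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import Pipeline

    def preprocess(text: str) -> str:
        # Phase 1: light normalization (strip diacritics and punctuation).
        text = re.sub(r"[\u064B-\u0652]", "", text)   # remove Arabic diacritics
        text = re.sub(r"[^\w\s]", " ", text)
        return re.sub(r"\s+", " ", text).strip()

    # Toy training data (assumed for illustration only).
    docs = ["خبر رياضي عن مباراة كرة القدم", "تقرير اقتصادي عن أسعار النفط"]
    labels = ["sports", "economy"]

    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer(preprocessor=preprocess)),  # Phase 2: features
        ("clf", LinearSVC()),                                 # Phase 3: classifier
    ])
    pipeline.fit(docs, labels)
    print(pipeline.predict(["مباراة في الدوري"]))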
This paper presents a hierarchical two-stage outdoor scene classification method using multi-class Support Vector Machines (SVMs). In the proposed method, the gist feature of every image in the database is first extracted to obtain the feature vectors. The database images belong to eight outdoor scene classes: four man-made scenes and four natural scenes. Second, a hierarchical classification is applied: the first stage separates all man-made scene classes from all natural scene classes, while the second stage assigns the output of the first stage to one of the four man-made or four natural scene classes. A binary SVM and multi-class SVMs are employed in the first and second stages, respectively…
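A hedged sketch of the two-stage hierarchy described above: stage one is a binary SVM that separates man-made from natural scenes, and stage two routes the image to one of two four-way SVMs. The gist() function is a placeholder for a real gist descriptor, and the random feature vectors and class layout are assumptions so the sketch runs on its own.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def gist(image):
        # Placeholder for a real gist descriptor (e.g., a Gabor-based vector).
        return rng.normal(size=512)

    # Assumed layout: classes 0-3 are man-made, 4-7 are natural.
    X = np.stack([gist(None) for _ in range(160)])
    y = np.repeat(np.arange(8), 20)
    coarse = (y >= 4).astype(int)                  # 0 = man-made, 1 = natural

    stage1 = SVC(kernel="rbf").fit(X, coarse)                       # binary SVM
    stage2_man = SVC(kernel="rbf").fit(X[coarse == 0], y[coarse == 0])  # 4-way
    stage2_nat = SVC(kernel="rbf").fit(X[coarse == 1], y[coarse == 1])  # 4-way

    def classify(image):
        f = gist(image).reshape(1, -1)
        branch = stage1.predict(f)[0]
        return (stage2_nat if branch else stage2_man).predict(f)[0]

    print(classify(None))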
In this paper, an integrated quantum neural network (QNN), a class of feedforward neural networks (FFNNs), is realized by merging quantum computing (QC) concepts with an artificial neural network (ANN) classifier. It is used as a data classification technique, with the iris flower data serving as the classification signals. For this purpose, independent component analysis (ICA) is used as a feature extraction technique after normalization of these signals. The hidden units of the QNN have inherently built-in fuzziness, allowing the network to develop quantized representations of the sample information provided by the training data set at various graded levels of certainty. Experimental results presented here show that…
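The following is a minimal sketch of the preprocessing and feature-extraction side described above: normalize the iris signals, apply ICA, then train a classifier. A standard feedforward MLP stands in for the paper's quantum neural network, which is not reproduced here; the component count and network size are assumptions.

    from sklearn.datasets import load_iris
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FastICA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = Pipeline([
        ("norm", StandardScaler()),                        # signal normalization
        ("ica", FastICA(n_components=3, random_state=0)),  # ICA feature extraction
        ("clf", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0)),            # stand-in classifier
    ])
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))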
The technological development in the field of information and communication has been accompanied by the emergence of security challenges related to the transmission of information, and encryption is a good solution. Encryption is one of the traditional methods of protecting plain text by converting it into an unintelligible form. Encryption can be implemented using substitution techniques, shifting techniques, or mathematical operations. This paper proposes a method with two branches to encrypt text. The first branch is a new mathematical model to create and exchange keys; the proposed key exchange method is a development of Diffie-Hellman, a new mathematical model for exchanging keys based on prime numbers…
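The abstract does not spell out the modified key-exchange model, so the sketch below shows only the classic Diffie-Hellman exchange it builds on: both parties agree on a prime p and generator g, exchange public values, and derive the same shared secret. The small prime and generator are assumed demo values; real use requires large, safe primes.

    import secrets

    p, g = 0xFFFFFFFB, 5                  # demo prime and generator (assumed)

    a = secrets.randbelow(p - 2) + 2      # Alice's private key
    b = secrets.randbelow(p - 2) + 2      # Bob's private key

    A = pow(g, a, p)                      # public value sent Alice -> Bob
    B = pow(g, b, p)                      # public value sent Bob -> Alice

    shared_alice = pow(B, a, p)           # both sides compute g^(ab) mod p
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob
    print("shared secret:", shared_alice)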
Exchange of information through communication channels can be unsafe. Communication media are not safe for sending sensitive information, so it is necessary to protect information from disclosure to unauthorized persons. This research presents a method in which information is hidden in a cover image using the least significant bit (LSB) technique, after a text file is encrypted using a secret sharing scheme. Hiding positions within the cover image are then generated in a random manner, which makes the hidden information difficult to detect through image analysis or statistical analyses. The method thus provides two levels of information security: encryption of the text file using the secret sharing scheme, and random hiding of the encrypted text within the cover image.
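A hedged sketch of the hiding step described above: embed the ciphertext bits in the least significant bit of pixels chosen at pseudo-random positions derived from a shared seed. The secret-sharing encryption step is out of scope here, and the cover image, payload, and seed are illustrative values.

    import random
    import numpy as np

    def embed_lsb(cover: np.ndarray, payload: bytes, seed: int) -> np.ndarray:
        flat = cover.flatten().astype(np.uint8)
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        rng = random.Random(seed)                    # shared seed -> same positions
        positions = rng.sample(range(flat.size), len(bits))
        for pos, bit in zip(positions, bits):
            flat[pos] = (flat[pos] & 0xFE) | bit     # overwrite the LSB
        return flat.reshape(cover.shape)

    def extract_lsb(stego: np.ndarray, n_bytes: int, seed: int) -> bytes:
        flat = stego.flatten()
        rng = random.Random(seed)
        positions = rng.sample(range(flat.size), n_bytes * 8)
        bits = np.array([flat[p] & 1 for p in positions], dtype=np.uint8)
        return np.packbits(bits).tobytes()

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # toy cover image
    stego = embed_lsb(cover, b"ciphertext", seed=1234)
    print(extract_lsb(stego, len(b"ciphertext"), seed=1234))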
In the field of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are obtained using feature detection/extraction techniques and feature description. In computer vision, features define informative data. The human eye extracts information from a raw image with ease, but a computer cannot recognize image information directly; this is why various feature extraction techniques have been presented and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
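As one concrete example of the detect/describe split mentioned above, the sketch below runs ORB from OpenCV on a synthetic image. The choice of ORB and the synthetic test image are assumptions for illustration; the overview itself covers many feature extraction categories.

    import cv2
    import numpy as np

    # Synthetic grayscale image with some structure so the detector finds keypoints.
    img = np.zeros((200, 200), dtype=np.uint8)
    cv2.rectangle(img, (50, 50), (150, 150), 255, -1)
    cv2.circle(img, (100, 100), 30, 128, -1)

    orb = cv2.ORB_create(nfeatures=100)
    keypoints, descriptors = orb.detectAndCompute(img, None)  # detect + describe

    print(f"detected {len(keypoints)} keypoints")
    print("descriptor shape:", None if descriptors is None else descriptors.shape)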
Image segmentation is a basic image processing technique that is primarily used to find the segments that form the entire image. These segments can then be utilized in discriminative feature extraction, image retrieval, and pattern recognition. Clustering and region-growing techniques are the most commonly used image segmentation methods. K-Means is a heavily used clustering technique due to its simplicity and low computational cost. However, K-Means results depend on the initial centre values, which are selected randomly, leading to inconsistent segmentation results. In addition, the quality of the isolated regions depends on the homogeneity of the resulting segments. In this paper, an improved K-Means…
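The sketch below shows plain K-Means colour segmentation to make the sensitivity to the initial centres concrete: with random initialization the clustering can vary between seeds, while k-means++ initialization with restarts is more stable. The paper's improved K-Means is not reproduced here, and the random image is a stand-in for real data.

    import numpy as np
    from sklearn.cluster import KMeans

    image = np.random.rand(64, 64, 3)                # toy RGB image
    pixels = image.reshape(-1, 3)

    # Random initialization: results can differ from one seed to another.
    for seed in (0, 1):
        km = KMeans(n_clusters=4, init="random", n_init=1, random_state=seed)
        km.fit(pixels)
        print(f"random init, seed {seed}: inertia = {km.inertia_:.4f}")

    # k-means++ with multiple restarts gives more consistent segments.
    km_pp = KMeans(n_clusters=4, init="k-means++", n_init=10, random_state=0)
    segments = km_pp.fit_predict(pixels).reshape(64, 64)  # per-pixel segment labels
    print("k-means++ inertia:", round(km_pp.inertia_, 4))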
Word sense disambiguation (WSD) is a significant field in computational linguistics, as it is indispensable for many language understanding applications. Automatic processing of documents is made difficult by the fact that many of the terms they contain are ambiguous. WSD systems try to resolve these ambiguities and find the correct meaning. Genetic algorithms can be applied to this problem, since they have been effectively used for many optimization problems. In this paper, a genetic algorithm is proposed to solve the word sense disambiguation problem by automatically selecting the intended meaning of a word in context without any additional resources. The proposed algorithm is evaluated on a collection of…
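A hedged sketch of a genetic algorithm applied to WSD: each chromosome assigns one candidate sense to each ambiguous word, and fitness is the gloss overlap between the chosen senses (a Lesk-style score). The toy sense inventory, the fitness function, and the GA parameters are illustrative assumptions; the paper's actual operators and evaluation may differ.

    import random

    senses = {  # word -> list of candidate glosses (toy inventory)
        "bank": ["financial institution money deposit", "river side land slope"],
        "deposit": ["money placed in a bank account", "layer of sand left by a river"],
    }
    words = list(senses)

    def fitness(chromosome):
        glosses = [set(senses[w][g].split()) for w, g in zip(words, chromosome)]
        return len(glosses[0] & glosses[1])          # overlap between chosen glosses

    def evolve(pop_size=20, generations=30, mutation=0.1):
        rng = random.Random(0)
        pop = [[rng.randrange(len(senses[w])) for w in words] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]         # keep the fitter half
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                cut = rng.randrange(1, len(words))
                child = a[:cut] + b[cut:]            # one-point crossover
                if rng.random() < mutation:          # random sense mutation
                    i = rng.randrange(len(words))
                    child[i] = rng.randrange(len(senses[words[i]]))
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()
    print({w: senses[w][g] for w, g in zip(words, best)})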