Feature selection (FS) constitutes a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude for predictive modeling. It is a crucial task that helps machine learning classifiers reduce error rates, computation time, and overfitting while improving classification accuracy. It has demonstrated its efficacy in myriad domains, from text classification (TC) and text mining to image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning FS for TC. Therefore, a comprehensive overview was conducted by systematically exploring the available studies of different metaheuristic algorithms used for FS to improve TC. This paper contributes to the body of existing knowledge by answering four research questions (RQs): 1) What are the different FS approaches that apply metaheuristic algorithms to improve TC? 2) Does applying metaheuristic algorithms for TC lead to better accuracy than typical FS methods? 3) How effective are modified and hybridized metaheuristic algorithms for text FS problems? 4) What are the gaps in the current studies and their future directions? These RQs led to a study of recent works on metaheuristic-based FS methods, their contributions, and their limitations. A final list of thirty-seven (37) related articles was extracted and investigated in line with our RQs to generate new knowledge in the domain of study. Most of the reviewed papers addressed TC with metaheuristic algorithms based on wrapper and hybrid FS approaches. Future research should focus on hybrid FS approaches, as they intuitively handle complex optimization problems and potentially provide new research opportunities in this rapidly developing field.
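To make the wrapper idea concrete, below is a minimal sketch (not taken from any of the reviewed papers) of wrapper-based FS for TC with a binary genetic algorithm: candidate feature subsets are encoded as bit masks and scored by the cross-validated accuracy of a classifier. The dataset, classifier, and GA parameters are illustrative assumptions.

```python
# Wrapper-based feature selection with a binary genetic algorithm (illustrative sketch).
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Small two-class text-classification problem with TF-IDF features (downloads on first use).
data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"],
                          remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(max_features=300, stop_words="english").fit_transform(data.data)
y = data.target
n_features = X.shape[1]

def fitness(mask):
    """Wrapper fitness: cross-validated accuracy on the selected feature subset."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    acc = cross_val_score(MultinomialNB(), X[:, cols], y, cv=3).mean()
    return acc - 0.001 * cols.size / n_features   # mild penalty for larger subsets

# Binary GA: feature masks evolve by tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(20, n_features))
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]                     # elitism: keep the best mask
    while len(new_pop) < len(pop):
        i = rng.choice(len(pop), 4, replace=False)              # two binary tournaments
        p1 = pop[i[0]] if scores[i[0]] >= scores[i[1]] else pop[i[1]]
        p2 = pop[i[2]] if scores[i[2]] >= scores[i[3]] else pop[i[3]]
        child = np.where(rng.random(n_features) < 0.5, p1, p2)  # uniform crossover
        flip = rng.random(n_features) < 0.02                    # bit-flip mutation
        new_pop.append(np.where(flip, 1 - child, child))
    pop = np.array(new_pop)

best = max(pop, key=fitness)
print("selected", int(best.sum()), "of", n_features, "features")
```

Swapping the genetic algorithm for another metaheuristic (particle swarm, grey wolf, etc.) only changes how the candidate masks are generated; the wrapper evaluation loop stays the same.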
The Braille Recognition System is the process of capturing a Braille document image and turning its content into its equivalent natural-language characters. Braille cell recognition and cell transcription are the system's two basic phases, which follow one another. The Braille Recognition System is a technique for locating and recognizing a Braille document stored as an image, such as a JPEG, JPG, TIFF, or GIF file, and converting the text into a machine-readable format, such as a text file. BCR translates an image's pixel representation into its character representation. Workers at schools and institutes for the visually impaired benefit from Braille recognition in a variety of ways. The Braille Recognition S…
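As a concrete illustration of the transcription phase, here is a minimal sketch, assuming an upstream detector has already located the raised dots of each 6-dot cell; the table covers only the letters a–j.

```python
# Cell transcription: map a detected 6-dot Braille pattern to its print character.
# Dot numbering is the standard one (1-3 down the left column, 4-6 down the right).
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",          frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",       frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",       frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g", frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",       frozenset({2, 4, 5}): "j",
    # remaining letters, digits and punctuation would follow the same scheme
}

def transcribe_cell(dots):
    """Translate one recognized cell (a set of raised-dot numbers) to text."""
    return BRAILLE_TO_CHAR.get(frozenset(dots), "?")

# Example: a cell with raised dots 1, 2 and 5 is the letter 'h'.
print(transcribe_cell({1, 2, 5}))  # -> h
```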
Roller compacted concrete (RCC) is a zero-slump material made from the same raw materials as conventional concrete. The roller compacted dam method, the high-paste technique, the Corps of Engineers method, and the maximum density method are all ways of designing RCC. The evolution of RCC has resulted in a substantial change in construction projects, most notably in dams, because conventional placement, consolidation, and compaction are slow. Incorporating RCC into dams accelerated the construction process, resulting in a shorter construction period. Research shows that dams that used RCC were completed one to two years sooner than dams that used regular concrete (Bagheri an…
The research studied and analyzed hybrid parallel-series systems of asymmetric components, applying different simulation experiments to estimate the reliability function of those systems using the maximum likelihood method and the standard Bayes method, under both symmetric and asymmetric loss functions, assuming a Rayleigh distribution and an informative prior distribution. The simulation experiments included different sample sizes and default parameters, which were then compared with one another on the basis of mean squared error. The standard Bayes method with the entropy loss function was then applied and proved successful, throughout the experimental side, in finding the reliability fun…
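For reference, a minimal reminder (not the paper's derivation) of the standard quantities involved: the Rayleigh reliability function, its maximum-likelihood scale estimator, and the composition rule for a parallel-series arrangement in which m branches, each a series of k_j components, are connected in parallel.

```latex
% Rayleigh component reliability and its MLE from observed lifetimes t_1, ..., t_n
R(t \mid \sigma) = \exp\!\left(-\frac{t^{2}}{2\sigma^{2}}\right),
\qquad
\hat{\sigma}^{2} = \frac{1}{2n}\sum_{i=1}^{n} t_{i}^{2}.

% Parallel-series system: m series branches of k_j components, connected in parallel;
% asymmetric components simply have different scale parameters sigma_{ij}.
R_{\mathrm{sys}}(t) \;=\; 1 - \prod_{j=1}^{m}\left[\,1 - \prod_{i=1}^{k_{j}} R_{ij}(t)\right].
```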
Background/Objectives: The purpose of this study was to classify Alzheimer's disease (AD) patients from Normal Control (NC) subjects using Magnetic Resonance Imaging (MRI). Methods/Statistical analysis: The performance evaluation is carried out on 346 MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. A Deep Belief Network (DBN) classifier is used for classification. The network is trained on a sample training set, and the resulting weights are then used to check the system's recognition capability. Findings: This paper presented a novel automated classification system for AD determination. The suggested method offers good performance; the experiments carried out show that the…
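A minimal sketch of a DBN-style pipeline follows, assuming greedy layer-wise pretraining with stacked restricted Boltzmann machines followed by a supervised classifier. Since the ADNI images cannot be redistributed, scikit-learn's digits dataset stands in here, and the architecture and hyperparameters are illustrative assumptions rather than the paper's configuration.

```python
# DBN-style pipeline: stacked RBMs for unsupervised feature learning + a supervised head.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_digits(return_X_y=True)   # stand-in data; the paper uses MR image features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

dbn_like = Pipeline([
    ("scale", MinMaxScaler()),  # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),  # supervised stage; a true DBN would also
])                                               # fine-tune the stacked weights jointly
dbn_like.fit(X_train, y_train)
print("test accuracy:", dbn_like.score(X_test, y_test))
```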
One of the most important features of the Amazon Web Services (AWS) cloud is that a program can be run and accessed from any location. You can access and monitor the program's results from anywhere, save many images, and benefit from faster computation. This work proposes a face detection and classification model based on the AWS cloud that aims to classify faces into two classes, a permission class and a non-permission class, by training on a real data set collected from our cameras. The proposed cloud-based Convolutional Neural Network (CNN) system was used to share computational resources for Artificial Neural Networks (ANN) to reduce redundant computation. The test system uses Internet of Things (IoT) services through our ca…
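To make the classification step concrete, here is a minimal sketch of a binary CNN for the permission / non-permission classes, assuming a local directory of labelled face crops. The directory layout, image size, and architecture are illustrative assumptions; the AWS and IoT parts of the pipeline (cameras, storage, training instances) are not shown.

```python
# Binary CNN classifier for permission vs. non-permission face crops (illustrative sketch).
import tensorflow as tf

IMG_SIZE = (96, 96)

# Assumed layout: faces/permission/*.jpg and faces/non_permission/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "faces", image_size=IMG_SIZE, batch_size=32, label_mode="binary",
    validation_split=0.2, subset="training", seed=1)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "faces", image_size=IMG_SIZE, batch_size=32, label_mode="binary",
    validation_split=0.2, subset="validation", seed=1)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # permission vs. non-permission
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```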
Plane cubic curves may be classified up to isomorphism or projective equivalence. In this paper, the inequivalent elliptic cubic curves, which are non-singular plane cubic curves, have been classified projectively over the finite field of order nineteen, and it is determined whether they are complete or incomplete as arcs of degree three. Also, the maximum size of a complete elliptic curve that can be constructed from each incomplete elliptic curve is given.
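As a small computational companion, the sketch below enumerates the rational points of one non-singular cubic in Weierstrass form over GF(19). The coefficients are illustrative; the paper's full projective classification of all inequivalent curves is a much larger computation.

```python
# Count rational points of y^2 = x^3 + a*x + b over GF(19) (one illustrative curve).
p = 19
a, b = 2, 3   # example coefficients; non-singularity requires 4a^3 + 27b^2 != 0 (mod p)

assert (4 * a**3 + 27 * b**2) % p != 0, "singular curve"

# Affine points, plus the single point at infinity on the projective curve.
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x**3 + a * x + b)) % p == 0]
print("number of rational points:", len(points) + 1)   # +1 for the point at infinity
```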
As Internet of Things (IoT) applications and devices increase, their access capacity is frequently stressed. This can lead to a significant bottleneck in network performance at different layers of an end-point-to-end-point (P2P) communication route. Appropriate characterization (i.e., classification) of time-varying traffic has therefore been used to address this issue; nevertheless, it largely remains an open challenge. Most existing solutions depend on machine learning (ML) methods, which carry a high computational cost and do not take into account the fine-grained, accurate flow classification that IoT devices need. Therefore, this paper presents a new model bas…
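For context, the sketch below shows the kind of ML-based flow classification the abstract refers to: simple per-flow statistics fed to a random forest. The feature names, synthetic data, and class labels are illustrative assumptions, not the paper's model.

```python
# Flow classification from per-flow statistics with a random forest (illustrative sketch).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic per-flow features: packet count, mean packet size, mean inter-arrival time.
flows = pd.DataFrame({
    "pkt_count": rng.integers(1, 500, n),
    "mean_pkt_size": rng.uniform(40, 1500, n),
    "mean_iat_ms": rng.exponential(50, n),
})
# Synthetic labels standing in for traffic/device classes (e.g. sensor, camera, bulk transfer).
labels = rng.integers(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(flows, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # near chance here, since labels are random
```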
The problem of missing data represents a major obstacle for researchers in the process of data analysis, since this problem recurs across fields of study, including social, medical, astronomical, and clinical experiments.
The presence of such a problem within the data under study may negatively influence the analysis and lead to misleading conclusions, since those conclusions can carry a great bias caused by the missing values. In spite of the efficiency of wavelet methods, they too are affected by missing data, in addition to the resulting loss of accuracy in estimation…
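As a tiny illustration of the bias concern raised here, the following sketch, using simulated data and an assumed missing-not-at-random mechanism, shows how a naive complete-case estimate of the mean shifts away from the true mean.

```python
# Bias from missing-not-at-random data: larger values are more likely to be dropped,
# so the complete-case mean underestimates the true mean. Data are simulated.
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(loc=10.0, scale=2.0, size=10_000)

p_missing = 1 / (1 + np.exp(-(x - 11)))        # probability of being missing rises with x
observed = x[rng.random(x.size) > p_missing]   # complete cases only

print("true mean         :", round(x.mean(), 3))
print("complete-case mean:", round(observed.mean(), 3))   # noticeably below the true mean
```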