A substantial portion of today's multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge in meeting users' information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing. Yet accurately categorizing texts becomes difficult due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been published previously. However, these reviews do not adequately cover the recently explored approaches to TC problem-solving that utilize FS, such as optimization techniques. This study comprehensively analyzes different FS approaches based on optimization algorithms for TC. We begin by introducing the primary phases involved in implementing TC. Subsequently, we explore a wide range of FS approaches for categorizing text documents and organize the existing works into four fundamental approaches: filter, wrapper, hybrid, and embedded. Furthermore, we review four families of optimization algorithms used to solve text FS problems: swarm intelligence-based, evolutionary-based, physics-based, and human behavior-related algorithms. We discuss the advantages and disadvantages of state-of-the-art studies that employ optimization algorithms for text FS methods. Additionally, we consider several aspects of each proposed method and thoroughly discuss the challenges associated with datasets, FS approaches, optimization algorithms, machine learning classifiers, and the evaluation criteria used to assess new and existing techniques. Finally, by identifying research gaps and proposing future directions, our review provides valuable guidance to researchers in developing and situating further studies within the current body of literature.
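As a minimal sketch of the wrapper approach named above, the code below pairs a simple genetic algorithm (one of the evolutionary-based optimizers the review covers) with a classifier whose cross-validated accuracy serves as the fitness of a candidate feature subset. It uses synthetic data in place of a vectorized text corpus, and all parameter choices (population size, mutation rate, generation count) are illustrative, not taken from any surveyed paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for a vectorized corpus: 30 features, only a few informative.
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           n_redundant=4, random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Wrapper criterion: cross-validated accuracy of a classifier on the subset."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=500)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5           # random boolean feature masks
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    pop = pop[np.argsort(scores)[::-1]]            # best masks first
    elite = pop[:10]                               # keep the top half
    children = []
    for _ in range(10):                            # one-point crossover + bit-flip mutation
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < 0.05     # mutate ~5% of the bits
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"selected {best.sum()} features, CV accuracy {fitness(best):.3f}")
```

A filter approach would instead score features by a classifier-independent statistic; the wrapper loop above is more accurate but costs one cross-validation per candidate subset, which is exactly the trade-off that motivates the optimization algorithms the review surveys.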
Ketoprofen has recently been shown to offer therapeutic potential in preventing cancers such as colorectal and lung tumors, as well as in treating neurological illnesses. The goal of this review is to survey the methods that have been used for determining ketoprofen in pharmaceutical formulations. Precise quality control is crucial to confirm the composition of drugs in pharmaceutical use. Several analytical techniques, including chromatographic and spectroscopic methods, have been used for determining ketoprofen in different sample forms such as tablets, capsules, ampoules, gels, and human plasma. The limit of detection of ketoprofen was 0.1 ng/ml using liquid chromatography with tandem mass spectrometry, while it was 0.01-
Most dye pollutants in water are effluents from industries, including textiles, wool, and others. There are many ways to remove dyes, such as sorption, oxidation, coagulation, filtration, and biodegradation; chlorination, ozonation, chemical precipitation, adsorption, electrochemical processes, membrane approaches, and biological treatment are among the most widely used technologies for removing colors from wastewater. Dyes are divided into two types: natural dyes and synthetic dyes.
Self-repairing technology based on microcapsules is an efficient solution for repairing cracked cementitious composites. Self-repairing based on microcapsules begins with the occurrence of cracks and proceeds by releasing self-repairing agents into the cracks in the concrete. Building on previous comprehensive studies, this paper provides an overview of various repairing agents and investigative methodologies. There is still no consensus on the most efficient criteria for assessing microcapsule-based self-repairing or on smart solutions for improving capsule survival ratios during mixing. The most commonly utilized self-repairing efficiency assessment indicators are mechanical resistance and durability.
Brainstorming has been a common approach in many industries, but its results are not always accurate, especially when procuring automobile spare parts. This approach was replaced with a scientific, optimized method that is highly reliable; hence the decision to optimize the inventory inflation budget based on spare parts and miscellaneous costs of a typical automobile industry. Some factors required to achieve this goal were investigated. Through this investigation, spare parts (consumables and non-consumables) were found to be the items most used at Innoson Vehicle Manufacturing (IVM), Nigeria, with miscellaneous costs incorporated to augment the cost of spare parts. The inflation rate was considered first due to the market's
This paper presents a new algorithm in an important research field: semantic word similarity estimation. A new feature-based algorithm is proposed for measuring word semantic similarity for the Arabic language, a highly systematic language whose words exhibit elegant and rigorous logic. The score of semantic similarity between two Arabic words is calculated as a function of their common and total taxonomical features. An Arabic knowledge source is employed for extracting the taxonomical features as the set of all concepts that subsume the concepts containing the compared words. The previously developed Arabic word benchmark datasets are used for optimizing and evaluating the proposed algorithm. In this paper,
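The abstract does not give the exact scoring function, so the following sketch is only a plausible instantiation: it treats each word's taxonomical features as the set of concepts subsuming it in a toy taxonomy and scores similarity as the ratio of common to total features (a Jaccard-style measure). The taxonomy and English stand-in words are invented for illustration; the actual method draws on an Arabic knowledge source.

```python
# Toy is-a taxonomy (child -> parent); invented for illustration only.
taxonomy = {
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "sparrow": "bird", "bird": "animal",
}

def taxonomical_features(word: str) -> set[str]:
    """All concepts that subsume the word, plus the word itself."""
    features, node = {word}, word
    while node in taxonomy:
        node = taxonomy[node]
        features.add(node)
    return features

def similarity(w1: str, w2: str) -> float:
    """Jaccard-style score: common features over total features."""
    f1, f2 = taxonomical_features(w1), taxonomical_features(w2)
    return len(f1 & f2) / len(f1 | f2)

print(similarity("dog", "cat"))      # share {mammal, animal} -> 0.5
print(similarity("dog", "sparrow"))  # share only {animal}    -> 0.2
```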
The meniscus has a crucial function in human anatomy, and Magnetic Resonance Imaging (MRI) plays an essential role in meniscus assessment. It is difficult to identify cartilage lesions using typical image processing approaches because MRI data are so diverse. An MRI sequence comprises numerous images, and the attribute region we are searching for may differ from image to image in the series. Feature extraction therefore becomes more complicated, and traditional image processing becomes very complex. In traditional image processing, a human tells the computer what should be there, whereas a deep learning (DL) algorithm automatically extracts the features of what is already there. The surface changes become valuable when
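To make the hand-crafted-versus-learned contrast concrete, the sketch below applies a fixed Sobel edge kernel (a human-specified feature) and a convolutional layer whose weights would instead be learned from data. It is a generic PyTorch illustration on a random stand-in image, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

img = torch.randn(1, 1, 64, 64)  # random stand-in for one MRI slice

# Traditional: a human specifies the feature, here a Sobel edge detector.
sobel = torch.tensor([[-1., 0., 1.],
                      [-2., 0., 2.],
                      [-1., 0., 1.]]).reshape(1, 1, 3, 3)
edges = F.conv2d(img, sobel, padding=1)

# Deep learning: the kernels start random and are optimized from labeled data,
# so the features themselves are learned rather than hand-designed.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
features = conv(img)

print(edges.shape, features.shape)  # (1, 1, 64, 64) and (1, 8, 64, 64)
```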
This work implements an electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) then classifies the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods. The proposed method's best accuracies on the two datasets are 95.6% and 99.5%, respectively. Finally, from the results, it
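The pipeline described above reduces to moments -> reduction -> SVM. The sketch below is a loose approximation under stated assumptions: Chebyshev polynomials stand in for the unspecified orthogonal polynomial basis, univariate feature scoring (SelectKBest) stands in for the sparse filter, and the EEG epochs are synthetic. Only the 80/20 split with 5-fold cross-validation follows the abstract.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for EEG epochs: 200 trials of 256 samples, two classes.
n_trials, length = 200, 256
y = rng.integers(0, 2, n_trials)
X = rng.standard_normal((n_trials, length)) \
    + y[:, None] * np.sin(np.linspace(0, 8 * np.pi, length))

# Moments: least-squares projection of each epoch onto a Chebyshev basis
# (degree 31 -> 32 coefficients per epoch).
t = np.linspace(-1, 1, length)
moments = np.array([np.polynomial.chebyshev.chebfit(t, x, deg=31) for x in X])

# Stand-in for the paper's sparse filter: keep the most discriminative moments.
reduced = SelectKBest(f_classif, k=12).fit_transform(moments, y)

X_tr, X_te, y_tr, y_te = train_test_split(reduced, y, test_size=0.2,
                                          random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("5-fold CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
print("held-out accuracy:", clf.score(X_te, y_te))
```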