A substantial portion of today's multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge to meeting users' information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing. However, accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been published previously. However, these reviews do not adequately cover the recently explored approaches to TC problem-solving utilizing FS, such as optimization techniques. This study comprehensively analyzes different FS approaches based on optimization algorithms for TC. We begin by introducing the primary phases involved in implementing TC. Subsequently, we explore a wide range of FS approaches for categorizing text documents and organize the existing works into four fundamental approaches: filter, wrapper, hybrid, and embedded. Furthermore, we review four families of optimization algorithms utilized in solving text FS problems: swarm intelligence-based, evolutionary-based, physics-based, and human behavior-related algorithms. We discuss the advantages and disadvantages of state-of-the-art studies that employ optimization algorithms for text FS methods. Additionally, we consider several aspects of each proposed method and thoroughly discuss the challenges associated with datasets, FS approaches, optimization algorithms, machine learning classifiers, and evaluation criteria employed to assess new and existing techniques. Finally, by identifying research gaps and proposing future directions, our review provides valuable guidance to researchers in developing and situating further studies within the current body of literature.
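To make the filter approach concrete, the sketch below scores terms with the chi-square statistic, a classic filter-style FS criterion that ranks features independently of any classifier. The toy corpus and the top-k cutoff are purely illustrative assumptions, not data from the review.

```python
# Minimal filter-based feature selection sketch: chi-square term scoring.
# The corpus below is hypothetical toy data for illustration only.
docs = [
    (["goal", "match", "team"], "sport"),
    (["match", "score", "team"], "sport"),
    (["election", "vote", "party"], "politics"),
    (["vote", "party", "team"], "politics"),
]

def chi2_score(term, label, docs):
    """Chi-square statistic for one term/class pair (2x2 contingency table)."""
    n = len(docs)
    a = sum(1 for toks, lab in docs if term in toks and lab == label)      # term present, class
    b = sum(1 for toks, lab in docs if term in toks and lab != label)      # term present, other class
    c = sum(1 for toks, lab in docs if term not in toks and lab == label)  # term absent, class
    d = n - a - b - c                                                      # term absent, other class
    denom = (a + c) * (b + d) * (a + b) * (c + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

vocab = sorted({t for toks, _ in docs for t in toks})
labels = sorted({lab for _, lab in docs})
# Rank each term by its maximum chi-square over classes; keep the top k.
scores = {t: max(chi2_score(t, lab, labels and docs) for lab in labels) for t in vocab}
top_k = sorted(scores, key=scores.get, reverse=True)[:3]
print(top_k)  # class-discriminative terms survive; shared terms are dropped
```

Filter methods like this are fast because each feature is scored once; the wrapper, hybrid, and embedded approaches surveyed in the review trade that speed for classifier-aware feature subsets.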
Today in the digital realm, images constitute a massive share of social media content but suffer from two issues, size and transmission cost, for which compression is the ideal solution. Pixel-based techniques are among the modern spatially optimized modeling techniques, with deterministic and probabilistic bases that employ the mean, index, and residual. This paper introduces adaptive pixel-based coding techniques for the probabilistic part of a lossy scheme by incorporating the MMSA of the C321 base, while the deterministic part is utilized losslessly. The tested results achieved higher size-reduction performance than the traditional pixel-based techniques and standard JPEG by about 40% and 50%, respectively.
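The mean/residual split the abstract refers to can be illustrated with a minimal sketch: a block is represented by its mean (the deterministic part, kept losslessly) plus per-pixel residuals (the probabilistic part, which a lossy coder would quantize). The block values are hypothetical, and the MMSA/C321 details of the paper's actual scheme are not reproduced here.

```python
import numpy as np

# Hypothetical 4x4 pixel block, for illustration only.
block = np.array([[52, 55, 61, 59],
                  [62, 59, 55, 104],
                  [63, 65, 66, 113],
                  [63, 71, 64, 120]], dtype=np.int16)

mean = int(block.mean().round())   # deterministic part, stored losslessly
residual = block - mean            # probabilistic part, target of lossy coding
reconstructed = residual + mean    # exact round-trip when residuals are kept

print(mean, np.array_equal(reconstructed, block))
```

Residuals cluster near zero, which is what makes them cheaper to encode than raw pixel values; a lossy scheme spends its distortion budget there while the mean survives intact.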
Financial systems can be classified into two types. The first is market-oriented, which is applied in the United States and the United Kingdom, while the second is bank-oriented, as in Japan and Germany.
This study tries to explain the reasons why some countries adopt the first type rather than the second, and vice versa. The study consists of three sections. The first deals with the concept of the financial system and its functions. The second presents the indicators used to classify financial systems, while the third is devoted to the factors that determine the type of financial system. These sections are followed by some conclusions.
Regression testing is expensive and therefore calls for optimization. Typically, test-case optimization means selecting a reduced subset of test cases or prioritizing the test cases to detect potential faults at an earlier phase. Many former studies relied on heuristic-dependent mechanisms to attain optimality while reducing or prioritizing test cases. Nevertheless, those studies lacked systematic procedures for managing the issue of tied test cases. Moreover, evolutionary algorithms such as the genetic algorithm often help in reducing test cases, together with a concurrent decrease in computational runtime. However, when the fault detection capacity along with other parameters must be examined, the method falls short
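A common baseline for the prioritization the abstract describes is additional-greedy ordering: repeatedly pick the test that detects the most faults not yet covered. The sketch below also shows one simple way to resolve the tied-test-case issue, breaking ties deterministically by test id; the fault matrix is hypothetical.

```python
# Toy fault-detection matrix: test id -> faults it detects (hypothetical data).
coverage = {
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3"},
    "t3": {"f1", "f2", "f4"},
    "t4": {"f3"},
}

def greedy_prioritize(coverage):
    """Additional-greedy prioritization: at each step pick the test that
    covers the most yet-undetected faults; ties break on test id, so the
    ordering is deterministic and reproducible."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # Iterating in sorted order makes max() keep the lexicographically
        # smallest id among equally good candidates.
        best = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(greedy_prioritize(coverage))
```

The fixed tie-break rule is the point: without it, equally scoring tests are ordered arbitrarily, which is exactly the gap the abstract criticizes in earlier heuristic studies.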
Significant advances in automated glaucoma detection techniques have been made through the employment of Machine Learning (ML) and Deep Learning (DL) methods, an overview of which is provided in this paper. What sets the current literature review apart is its exclusive focus on these techniques for glaucoma detection, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for filtering the selected papers. To achieve this, an advanced search was conducted in the Scopus database, specifically looking for research papers published in 2023 with the keywords "glaucoma detection", "machine learning", and "deep learning". Among the many papers found, the ones focusing
This study aims to employ modern spatial simulation models to predict the future growth of Al-Najaf city for the year 2036 by studying the change in land use over the period 1986-2016, given its importance in shaping future policy for the planning and decision-making processes and in ensuring a sustainable urban future. It uses geographic information system and remote sensing software (GIS, IDRISI Selva), which are appropriate tools for exploring spatio-temporal changes from the local level to the global scale. The application of the Markov chain model, a popular model that calculates the probability of future change based on the past, and the Cellular Automata
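The Markov chain step can be sketched in a few lines: multiply the current land-use shares by a transition matrix to project them one period ahead. The class names, transition probabilities, and 2016 shares below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical one-period transition matrix between land-use classes
# (rows: from, cols: to), of the kind estimated from 1986-2016 change maps.
classes = ["urban", "agriculture", "barren"]
P = np.array([
    [0.95, 0.03, 0.02],   # urban mostly stays urban
    [0.10, 0.85, 0.05],   # some agriculture converts to urban
    [0.20, 0.10, 0.70],   # barren land urbanizes fastest
])

state_2016 = np.array([0.30, 0.50, 0.20])  # current area shares (illustrative)

# Project one period ahead: next = current @ P; iterate for multi-step forecasts.
state_next = state_2016 @ P
print(dict(zip(classes, state_next.round(3))))
```

Because each row of P sums to 1, the projected shares also sum to 1; the Cellular Automata component then distributes these aggregate amounts spatially, using neighborhood rules the pure Markov model lacks.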
In the current worldwide health crisis caused by coronavirus disease (COVID-19), researchers and medical specialists began looking for new ways to tackle the epidemic. According to recent studies, Machine Learning (ML) has been effectively deployed in the health sector. Medical imaging sources (radiography and computed tomography) have aided the development of artificial intelligence (AI) strategies to tackle the coronavirus outbreak. As a result, a classical machine learning approach for coronavirus detection from Computerized Tomography (CT) images was developed. In this study, a convolutional neural network (CNN) model is used for feature extraction and a support vector machine (SVM) for the classification of axial
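The CNN-to-SVM hand-off can be sketched as follows: the CNN reduces each CT slice to a feature vector, and the SVM is trained on those vectors. As a stand-in for real CNN outputs, the sketch uses synthetic 64-dimensional features drawn from two separated distributions; the data, dimensions, and kernel choice are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for CNN features: in the described pipeline a CNN maps each CT
# slice to a feature vector; here we simulate 64-d features for two classes
# (hypothetical data, purely to illustrate the CNN -> SVM hand-off).
covid_feats = rng.normal(loc=1.0, size=(40, 64))
normal_feats = rng.normal(loc=-1.0, size=(40, 64))
X = np.vstack([covid_feats, normal_feats])
y = np.array([1] * 40 + [0] * 40)  # 1 = COVID-19, 0 = normal

# An SVM with an RBF kernel classifies the extracted feature vectors.
clf = SVC(kernel="rbf").fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Splitting the pipeline this way lets the SVM's margin-based decision boundary replace the CNN's final dense layer, which is often attractive when labeled medical images are scarce.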
Interest in belowground plant growth is increasing, especially in relation to arguments that shallow‐rooted cultivars are efficient at exploiting soil phosphorus while deep‐rooted ones will access water at depth. However, methods for assessing roots in large numbers of plants are diverse and direct comparisons of methods are rare. Three methods for measuring root growth traits were evaluated for utility in discriminating rice cultivars: soil‐filled rhizotrons, hydroponics and soil‐filled pots whose bottom was sealed with a non‐woven fabric (a potential method for assessing root penetration ability). A set of 38 rice genotypes including the Oryza
In recent years, research on activated carbon preparation from agro-waste and byproducts has increased owing to its potential for agro-waste elimination. This paper presents a literature review on the synthesis of activated carbon from agro-waste using the microwave irradiation heating method. The applicable approaches are highlighted, as well as the effects of activation conditions, including carbonization temperature, retention period, and impregnation ratio. The review reveals that agricultural wastes heated using a chemical process and microwave energy can produce activated carbon with a surface area significantly higher than that obtained with the conventional heating method.