A substantial portion of today's multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge to meeting users' information needs. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing, yet accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been published previously. However, these reviews do not adequately cover the recently explored approaches to TC problem-solving that utilize FS, such as optimization techniques. This study comprehensively analyzes different FS approaches based on optimization algorithms for TC. We begin by introducing the primary phases involved in implementing TC. Subsequently, we explore a wide range of FS approaches for categorizing text documents and organize the existing works into four fundamental approaches: filter, wrapper, hybrid, and embedded. Furthermore, we review four families of optimization algorithms utilized in solving text FS problems: swarm intelligence-based, evolutionary-based, physics-based, and human behavior-related algorithms. We discuss the advantages and disadvantages of state-of-the-art studies that employ optimization algorithms for text FS. Additionally, we consider several aspects of each proposed method and thoroughly discuss the challenges associated with datasets, FS approaches, optimization algorithms, machine learning classifiers, and the evaluation criteria employed to assess new and existing techniques. Finally, by identifying research gaps and proposing future directions, our review provides valuable guidance to researchers in developing and situating further studies within the current body of literature.
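To make the filter approach named above concrete, the following is a minimal sketch of a filter-style FS step for TC: terms are ranked by a chi-square score computed independently of any classifier, and only the top-ranked terms are kept. The 20 Newsgroups corpus, the choice of k = 1000, and the Naive Bayes classifier are illustrative assumptions, not taken from the review itself.

```python
# Filter-style feature selection for text classification:
# rank terms by chi-square and keep the top k before training.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

clf = make_pipeline(
    TfidfVectorizer(stop_words="english"),  # raw text -> sparse term features
    SelectKBest(chi2, k=1000),              # filter: keep the 1000 most informative terms
    MultinomialNB(),                        # downstream classifier
)
clf.fit(train.data, train.target)
print("accuracy:", clf.score(test.data, test.target))
```

A wrapper approach would instead search over feature subsets using the classifier's own accuracy as the objective, which is where the optimization algorithms surveyed above come in.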
In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MCOCO dataset for object classification and detection; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. For the randomly selected training and testing images, we converted the color images to the gray level, enhanced the gray images using the histogram equalization method, and resized each dataset image to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification methods were applied.
N. A. A. Mustafa, University of Sulaimani, M.Sc. Thesis, 2010
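As a rough sketch of the pipeline this abstract describes (grayscale conversion, histogram equalization, 20 x 20 resizing, PCA feature extraction, and the four classifiers on a 7:3 split), the following assumes a hypothetical load_images() helper for the dataset images and labels, and an illustrative, untuned choice of 50 PCA components:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.exposure import equalize_hist
from skimage.transform import resize
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def preprocess(img):
    gray = rgb2gray(img)                 # color image -> gray level
    eq = equalize_hist(gray)             # histogram equalization
    return resize(eq, (20, 20)).ravel()  # resize to 20 x 20 and flatten

images, labels = load_images()           # hypothetical loader for the dataset
X = np.array([preprocess(im) for im in images])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0)  # 7:3 train/test split

pca = PCA(n_components=50).fit(X_train)  # feature extraction (50 is illustrative)
X_train, X_test = pca.transform(X_train), pca.transform(X_test)

for clf in (KNeighborsClassifier(), SGDClassifier(),
            LogisticRegression(max_iter=1000), MLPClassifier(max_iter=500)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))
```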
Deconstruction theory is a theory that appeared after structuralism and tends, through some key principles, to reach the purposive and main meaning of the text by means of different perspectives. In other words, deconstruction is a critical literary theory and a contemporary philosophical approach that work together to reach the exact concept of the text, and this is achieved through reading and analyzing the text. Therefore, deconstruction has specified some principles so as to reach the exact meaning of the text through these different perspectives.
Introduction:
Deconstruction theory is a theory that emerged after structuralism and seeks to ...
A comprehensive review of 3D network-on-chip (NoC) simulators and plugins is presented, with attention to 2D simulators as the baseline. Discussions include the programming languages, installation and configuration, platforms, and operating systems for the respective simulators. In addition, the simulators' properties and plugins for design-metric evaluation are addressed. This review is intended for early-career researchers starting in 3D NoC, offering selection guidelines on the right tools for the targeted NoC architecture, design, and requirements.
In this paper, a new variable selection method is presented to select essential variables from large datasets. The new model is a modified version of the Elastic Net model, and the modified Elastic Net variable selection procedure is summarized in an algorithm. It is applied to the Leukemia dataset, which has 3051 variables (genes) and 72 samples; in practice, working with a dataset of this size is difficult. The modified model is compared with some standard variable selection methods; applying the modified Elastic Net model achieves perfect classification, the best performance among the compared methods. All the calculations that have been done for this paper are in ...
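Since the modified model itself is not specified in the abstract, the following is a minimal sketch of standard Elastic Net variable selection (not the authors' modified version) on a dataset of this shape: fit a classifier with a combined L1/L2 penalty and keep the variables whose coefficients remain nonzero. load_leukemia() is a hypothetical loader, and l1_ratio and C are illustrative, untuned values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X, y = load_leukemia()  # hypothetical loader: 72 samples x 3051 genes

enet = LogisticRegression(
    penalty="elasticnet", solver="saga",  # saga supports the elastic-net penalty
    l1_ratio=0.5, C=1.0, max_iter=10000,  # illustrative, untuned values
).fit(X, y)

# The L1 part of the penalty drives most coefficients to exactly zero;
# the surviving nonzero coefficients are the selected variables.
selected = np.flatnonzero(enet.coef_[0])
print(f"{selected.size} of {X.shape[1]} genes selected")
```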
Abstract:
Al-Marba'aniyah, a long cold wave, was defined by the ancient Iraqis and represents the coldest days in Iraq. In this research paper, a new scale is proposed to define it: the period between the minimum temperature recorded in December and the minimum temperature recorded in January is considered to be the period of Al-Marba'aniyah. The research concluded that Al-Marba'aniyah is unsteady and changes in the days of its occurrence; the dates of its beginning and end are unsteady, too. Moreover, it was found that the Siberian high, the European high, and the subtropical high are the systems responsible for ...
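As a sketch of the scale described above, the period can be computed directly from a daily minimum-temperature series: take the day of the December minimum as the start and the day of the following January's minimum as the end. Here daily_min is a hypothetical pandas Series of daily minimum temperatures, indexed by date and covering a single winter.

```python
import pandas as pd

def marbaaniyah_period(daily_min: pd.Series) -> tuple[pd.Timestamp, pd.Timestamp]:
    # Split the winter series into its December and January portions.
    dec = daily_min[daily_min.index.month == 12]
    jan = daily_min[daily_min.index.month == 1]
    # Start = day of the December minimum, end = day of the January minimum.
    return dec.idxmin(), jan.idxmin()
```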
In recent years, research on the congestion problem of 4G and 5G networks has grown, especially research based on artificial intelligence (AI). Although 4G with LTE is seen as a mature technology, continuous improvement of the infrastructure has led to the emergence of 5G networks. The large-scale services provided in industry, Internet of Things (IoT) applications, and smart cities, which involve large amounts of exchanged data, large numbers of connected devices per area, and high data rates, have brought their own problems and challenges, especially congestion. In this context, AI models can be considered one of the main techniques for solving network congestion problems.
The research (Anthropology and Representations of Magic in the Arab Theatrical Text: Harut and Marut's Play as a Model) is concerned with studying magic and the forms of its presence in the theatrical text across the different human cultures to which it belongs. The research consists of four chapters.
The first chapter includes the research problem, which revolves around the following question: what is the mechanism of employing the anthropology of magic and its representations in the Arab theatrical text, with Harut and Marut's play as a model? It also presents the research importance, which lies in the necessity of studying (magic) in the Arab theatrical text, as it is considered the inauguration of one of the social phenomena that many researchers in the field of ...