A substantial portion of today’s multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge in meeting users’ information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing. However, accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been published previously. However, these reviews do not adequately cover the recently explored approaches to TC problem-solving that utilize FS, such as optimization techniques. This study comprehensively analyzes different FS approaches based on optimization algorithms for TC. We begin by introducing the primary phases involved in implementing TC. Subsequently, we explore a wide range of FS approaches for categorizing text documents and organize the existing works into four fundamental approaches: filter, wrapper, hybrid, and embedded. Furthermore, we review four families of optimization algorithms utilized in solving text FS problems: swarm intelligence-based, evolutionary-based, physics-based, and human behavior-related algorithms. We discuss the advantages and disadvantages of state-of-the-art studies that employ optimization algorithms for text FS methods. Additionally, we consider several aspects of each proposed method and thoroughly discuss the challenges associated with datasets, FS approaches, optimization algorithms, machine learning classifiers, and evaluation criteria employed to assess new and existing techniques. Finally, by identifying research gaps and proposing future directions, our review provides valuable guidance to researchers in developing and situating further studies within the current body of literature.
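To make the filter approach concrete: a filter method scores each feature independently of any classifier and keeps only the top-scoring ones. The sketch below ranks binary term features by the chi-square statistic over a hypothetical toy corpus (the documents, labels, and helper names are illustrative assumptions, not taken from any reviewed study):

```python
def chi2_score(term_present, labels):
    """Chi-square statistic for a binary term feature vs. binary class labels."""
    n = len(labels)
    # 2x2 contingency table: term presence x class membership
    a = sum(1 for t, y in zip(term_present, labels) if t and y)
    b = sum(1 for t, y in zip(term_present, labels) if t and not y)
    c = sum(1 for t, y in zip(term_present, labels) if not t and y)
    d = sum(1 for t, y in zip(term_present, labels) if not t and not y)
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def filter_select(docs, labels, k=2):
    """Rank vocabulary terms by chi-square and keep the top-k (filter-style FS)."""
    vocab = sorted({w for doc in docs for w in doc.split()})
    scores = {w: chi2_score([w in doc.split() for doc in docs], labels) for w in vocab}
    return sorted(vocab, key=lambda w: -scores[w])[:k]

# toy corpus: two "spam" (label 1) and two "ham" (label 0) documents
docs = ["win cash prize now", "cheap cash offer", "meeting agenda today", "project meeting notes"]
labels = [1, 1, 0, 0]
print(filter_select(docs, labels, k=2))  # ['cash', 'meeting']
```

Wrapper and hybrid approaches differ in that the feature subset is evaluated with a classifier in the loop, typically driven by one of the optimization algorithms surveyed above.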
Autonomous motion planning is an important area of robotics research. This type of planning relieves the human operator of the tedious job of motion planning, which reduces the possibility of human error and increases the efficiency of the whole process.
This research presents a new algorithm to plan a path for an autonomous mobile robot based on image processing techniques, using a wireless camera that provides the desired image of the unknown environment. The proposed algorithm is applied to this image to obtain an optimal path for the robot. It is based on the observation and analysis of the obstacles lying in the straight path between the start and the goal point: detecting these obstacles, analyzing and studying their shapes, positions and
The principal goal guiding any designed encryption algorithm must be security against unauthorized attackers. Within the last decade, there has been a vast increase in the communication of digital computer data in both the private and public sectors. Much of this information has significant value; therefore, it requires protection by a strong algorithm designed to encipher it. Such an algorithm defines the mathematical steps required to transform data into a cryptographic cipher and also to transform the cipher back to its original form. Performance and security level are the main characteristics that differentiate one encryption algorithm from another. This paper suggests a new technique to enhance the performance of the Data E
Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. In this paper, three experiments were performed to find the approach that gives the best accuracy. The first extracted only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracted only the first 12 Mel-frequency cepstral coefficient (MFCC) features; and the last applied feature fusion between the mentioned features. In all experiments, the features were classified using five types of classification techniques, which are the Random Forest (RF),
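The three features of the first experiment can be computed directly from a waveform. The sketch below uses a hypothetical toy signal and plain Python rather than an audio library; real SER pipelines typically compute such features per frame rather than over the whole signal:

```python
import math

def zero_crossing_rate(signal):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(signal) - 1)

def basic_features(signal):
    """ZCR, mean, and standard deviation of a mono waveform (list of floats)."""
    n = len(signal)
    mean = sum(signal) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    return zero_crossing_rate(signal), mean, sd

# toy "waveform": alternating samples cross zero at every step
sig = [0.5, -0.5, 0.5, -0.5]
print(basic_features(sig))  # (1.0, 0.0, 0.5)
```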
The earth cover of the city of Baghdad was studied exclusively within its administrative border during the period 1986-2019 using satellite scenes taken every five years, with Landsat TM5 and OLI8 satellite images. The land was classified into ten subclasses according to the characteristics of the land cover, using the Maximum Likelihood classifier. A study is presented of the changing urban reality of the city of Baghdad during that period and of the change in vegetation due to environmental factors, human influences, and some human phenomena that affected the classification accuracy for some areas east of the city of Baghdad. The year 2019 has been highlighted because of its particular circumstances in the change of the land cover of th
Recently, the problem of desertification and vegetation cover degradation has become a global environmental challenge. This problem can be summarized as land cover change. In this paper, the area of Al-Muthana in the south of Iraq is considered as one of the semi-arid lands. For this purpose, Landsat-8 images with 15 m spatial resolution are used. To accomplish the work, much important ground truth data must be collected, such as rainfall, temperature distribution over the seasons, the DEM of the region, and the soil texture characteristics. The data extracted from this project are tables, 2-D figures, and GIS maps representing the distributions of vegetation areas, evaporation / precipitation, river levels
In the current research work, a method to reduce the color levels of the pixels within digital images is proposed. The strategy is based on the self-organizing map (SOM) neural network method. The efficiency of the method was compared with well-known methods such as Floyd-Steinberg (halftone) dithering and octree (quadtree) methods. Experimental results have shown that adjusting the sampling factor can produce higher-quality images with runtimes that are not much longer, or somewhat better quality with shorter runtimes, than existing methods. This observation refutes the assumption that repeated neural-network training is necessarily slow, while still giving the best results. The generated quantization map can be exploited for color image compression, clas
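A minimal sketch of SOM-based palette reduction, assuming a 1-D map whose nodes become the quantized colors (the toy pixels, deterministic initialization, and training schedule below are illustrative assumptions; the paper's sampling-factor tuning is not modeled):

```python
import math

def train_som_palette(pixels, n_colors=2, epochs=30, lr=0.5):
    """Train a tiny 1-D self-organizing map; its nodes become the color palette."""
    # deterministic initialization: spread the nodes over the input pixels
    nodes = [list(pixels[i * len(pixels) // n_colors]) for i in range(n_colors)]
    for epoch in range(epochs):
        # neighborhood width shrinks as training progresses
        sigma = max(0.5, (n_colors / 2.0) * (1.0 - epoch / epochs))
        for px in pixels:
            # best-matching unit: node closest to the pixel in RGB space
            bmu = min(range(n_colors),
                      key=lambda i: sum((nodes[i][c] - px[c]) ** 2 for c in range(3)))
            for i in range(n_colors):
                influence = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for c in range(3):
                    nodes[i][c] += lr * influence * (px[c] - nodes[i][c])
        lr *= 0.9  # decaying learning rate
    return nodes

def quantize(pixels, palette):
    """Map each pixel to its nearest palette color."""
    return [min(palette, key=lambda p: sum((p[c] - px[c]) ** 2 for c in range(3)))
            for px in pixels]

# toy "image": two reddish and two bluish pixels, reduced to a 2-color palette
pixels = [(250, 10, 10), (240, 20, 15), (10, 10, 250), (20, 5, 240)]
palette = train_som_palette(pixels, n_colors=2)
print(quantize(pixels, palette))
```

After training, each node approximates the mean of one color cluster, so the two reddish pixels map to one palette entry and the two bluish pixels to the other.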
In this study, a water quality index (WQI) was calculated to classify the flowing water in the Tigris River in Baghdad city. GIS was used to develop colored water quality maps indicating the classification of the river for drinking water purposes. Water quality parameters including turbidity, pH, alkalinity, total hardness, calcium, magnesium, iron, chloride, sulfate, nitrite, nitrate, ammonia, orthophosphate, and total dissolved solids were used for the WQI determination. These parameters were recorded at the intakes of the WTPs in Baghdad for the period 2004 to 2011. The results from the annual average WQI analysis classified the Tigris River as very poor to polluted at the north of Baghdad (Alkarkh WTP), while it was very poor to very polluted in t
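The abstract does not give the WQI formula used; one common choice is a weighted-arithmetic index in which each parameter's sub-index is its measured concentration relative to the drinking-water standard. The sketch below assumes that form, with hypothetical parameter values, standards, and weights:

```python
def weighted_arithmetic_wqi(measured, standards, weights):
    """Weighted-arithmetic WQI: sub-index q_i = 100 * C_i / S_i, aggregated by weights."""
    total_w = sum(weights.values())
    return sum(weights[p] * 100.0 * measured[p] / standards[p]
               for p in measured) / total_w

# hypothetical measurements (mg/L unless noted) against drinking-water standards
measured  = {"turbidity_NTU": 8.0, "chloride": 200.0, "nitrate": 30.0}
standards = {"turbidity_NTU": 5.0, "chloride": 250.0, "nitrate": 50.0}
weights   = {"turbidity_NTU": 0.4, "chloride": 0.3, "nitrate": 0.3}
print(round(weighted_arithmetic_wqi(measured, standards, weights), 1))  # 106.0
```

A WQI above 100 indicates that at least one heavily weighted parameter exceeds its standard, which is how index values translate into classes such as "poor" or "polluted".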
Phenomena often suffer from disturbances in their data as well as from difficulty of formulation, especially when the response lacks clarity or when many essential differences plague the experimental units from which these data were taken. Hence the need arose to include an implicit classification of these experimental units, using discrimination methods or by creating blocks for each group of these experimental units, in the hope of controlling their responses and making them more homogeneous. Owing to developments in the field of computers, and adopting the principle of the integration of sciences, it has been found that modern algorithms used in the field of computer science, such as the genetic algorithm or ant colo
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization and Fusion. The Normalization process standardized the pixel intensities, which facilitated the processing of subsequent image enhancement stages. Subsequently, the Histogram Equalization technique increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one
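The normalization step can be sketched as the mean-and-variance adjustment commonly used in fingerprint enhancement, which maps every image to a fixed target mean and variance; the toy image and the target values m0 and v0 below are illustrative assumptions:

```python
def normalize(image, m0=100.0, v0=100.0):
    """Shift pixel intensities so the image has target mean m0 and variance v0
    (a standard pre-processing step before further fingerprint enhancement)."""
    n = sum(len(row) for row in image)
    mean = sum(p for row in image for p in row) / n
    var = sum((p - mean) ** 2 for row in image for p in row) / n
    out = []
    for row in image:
        new_row = []
        for p in row:
            # deviation rescaled to the target variance, applied about m0
            dev = ((v0 * (p - mean) ** 2) / var) ** 0.5 if var else 0.0
            new_row.append(m0 + dev if p > mean else m0 - dev)
        out.append(new_row)
    return out

img = [[50.0, 150.0], [50.0, 150.0]]   # toy 2x2 image: mean 100, variance 2500
print(normalize(img))                  # [[90.0, 110.0], [90.0, 110.0]]
```

After normalization the image has mean 100 and variance 100, so the later stages (equalization, binarization, skeletonization) operate on a consistent intensity range regardless of the sensor.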