Fuzzy C-means (FCM) is a clustering method that groups similar data elements according to a chosen distance measure. Tabu Search is a metaheuristic algorithm. In this paper, a Probabilistic Tabu Search for FCM is implemented to find a global clustering based on the minimum value of the fuzzy objective function. Experiments were designed for different networks and numbers of clusters; a comparison of the objective-function values obtained with standard FCM and with Tabu-FCM, averaged over ten runs, shows that the latter gives the best performance.
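The standard FCM iteration that Tabu-FCM builds on can be sketched as follows. This is a minimal illustrative implementation of plain fuzzy C-means (not the paper's Tabu-augmented variant); the fuzzifier m, iteration count, and synthetic data are assumptions for demonstration.

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy C-means: returns centers, memberships U, objective J."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))              # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    J = ((U ** m) * d ** 2).sum()                # fuzzy objective function value
    return centers, U, J

# Two well-separated synthetic 2-D clusters
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
centers, U, J = fcm(X)
```

Tabu-FCM would wrap such an iteration in a Tabu Search loop that perturbs candidate centers and rejects recently visited solutions, seeking the global minimum of J rather than a local one.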
Many undergraduate learners at English departments who study English as a foreign language are unable to speak and use the language correctly in their post-graduate careers. This problem can be attributed to certain difficulties that they faced throughout their years of education and that hindered their efforts to learn. Therefore, this study aims to identify the main difficulties faced by EFL students in language learning, test the difficulty variable according to gender and college, and then find suitable solutions for enhancing learning. A questionnaire with 15 items on a 5-point scale was used to help identify the difficulties. The questionnaire was distributed to the selected study sample, which consists of 90 (male and female) students…
Oil/water emulsions are one of the major environmental threats today, occurring at many stages in the production and treatment of crude oil. The oil-recovery process adopted depends on how the oil is present in the water stream: as free oil, as an unstable oil/water emulsion, or as a highly stable oil/water emulsion. The current study applied the microbubble air-flotation process to the removal of such oily emulsions, a process that is cost-effective, simple in structure, highly efficient, and free of secondary pollution. The influence of several key parameters on the removal efficiency of the process was examined, namely initial oil concentration, pH value of the…
Shadow detection and removal is an important task when dealing with color outdoor images. Shadows are generated by a local and relative absence of light: first, a local decrease in the amount of light that reaches a surface; second, a local change in the amount of light reflected by a surface toward the observer. Most shadow detection and segmentation methods are based on image analysis; however, some factors affect the detection result owing to the complexity of the scene. In this paper, a segmentation-based method is presented to detect shadows in an image, and a function concept is used to remove the shadow from the image.
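The "local decrease in light" cue above can be illustrated with a very simple intensity-drop heuristic. This is not the paper's segmentation method, only a toy sketch: the threshold factor and the synthetic image are assumptions.

```python
import numpy as np

def shadow_mask(rgb, alpha=0.35):
    """Flag pixels whose gray level falls well below the image mean
    as shadow candidates (a simple intensity-drop heuristic)."""
    gray = rgb.mean(axis=2)                  # collapse RGB to intensity
    return gray < alpha * gray.mean()        # boolean shadow-candidate mask

# Bright synthetic image with one darker patch standing in for a shadow
img = np.full((8, 8, 3), 200.0)
img[2:5, 2:5] = 60.0
mask = shadow_mask(img)
```

A real detector would refine such a mask with the segmentation and chromaticity cues the abstract alludes to, since dark objects are otherwise indistinguishable from shadows.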
Steganography is a useful technique that helps secure data in communication using different data carriers such as audio, video, image, and text. The most popular type is image steganography, which mostly uses the least-significant-bit (LSB) technique to hide data; however, the probability of detecting data hidden with this technique is high. RGB is a color model in which each pixel is represented by three bytes indicating the intensities of red, green, and blue, and LSB hiding can be applied in the three color channels. In this paper, an RGB-image steganography method based on a genetic algorithm (GA) is proposed. The GA is used to generate a random key that represents the best ordering of the secret (image/text) blocks to be…
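The LSB embedding step that the GA-based method builds on can be sketched in a few lines. This shows only plain LSB hiding on a flat byte sequence, not the paper's GA block-ordering; the sample cover bytes and bit string are arbitrary.

```python
def embed_lsb(pixels, bits):
    """Hide a bit string in the least significant bits of pixel bytes."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)      # clear LSB, then set it to the bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n hidden bits."""
    return ''.join(str(p & 1) for p in pixels[:n])

cover = [120, 33, 250, 7, 64, 200, 15, 98]   # hypothetical channel bytes
stego = embed_lsb(cover, '1011')
```

Because each byte changes by at most 1, the cover image is visually unchanged; the GA in the paper additionally permutes the secret blocks so that detection of a meaningful payload becomes harder.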
Water/oil emulsion is considered one of the most refractory mixtures to separate because of the interference of the two immiscible liquids, water and oil. This research presents a study of the dewatering of a water/kerosene emulsion using a hydrocyclone. The effects of feed flow rate (3, 5, 7, 9, and 11 L/min), inlet water concentration of the emulsion (5%, 7.5%, 10%, 12.5%, and 15% by volume), and split ratio (0.1, 0.3, 0.5, 0.7, and 0.9) on the separation efficiency and pressure drop were studied. Dimensional analysis using the Pi theorem was applied for the first time to model the hydrocyclone based on the experimental data. The maximum separation efficiency, at a split ratio of 0.1, was 94.3% at 10% co…
Compressing speech reduces data-storage requirements, which in turn reduces the time needed to transmit digitized speech over long-haul links such as the Internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2-D linear convolution. The fast-computation algorithms introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech-compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression. DWT and MCT performances in terms of compression…
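The wavelet-compression idea can be illustrated with the simplest case, a one-level Haar DWT followed by thresholding of small detail coefficients. This is a generic sketch, not the MCT/GHM construction the paper uses; the test signal and threshold value are assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # low-pass (averages)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # high-pass (differences)
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)           # smooth voiced-like test tone
a, d = haar_dwt(signal)
d[np.abs(d) < 0.05] = 0.0                    # zero small details: the "compression"
rec = haar_idwt(a, d)
```

Compression comes from the zeroed coefficients, which need not be stored; a smooth signal is reconstructed almost exactly from the survivors. Multiwavelet transforms such as GHM/MCT follow the same analysis/threshold/synthesis pattern with vector-valued filters.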
Eye detection is used in many applications such as pattern recognition, biometrics, and surveillance systems. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image, based on two principles, Helmholtz and Gestalt. According to the Helmholtz principle of perception, any observed geometric shape is perceptually "meaningful" if its number of repetitions in an image with random distribution is very small. To achieve this goal, the Gestalt principle states that humans see things either by grouping similar elements or by recognizing patterns. In general, according to the Gestalt principle, humans see things through genera…
Neural cryptography deals with the problem of key exchange between two neural networks using the mutual-learning concept. The two networks exchange their outputs (in bits), and the key shared by the two communicating parties is eventually represented in the final learned weights, at which point the networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
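Mutual learning is usually demonstrated with tree parity machines (TPMs): both parties update their weights with a Hebbian rule only when their output bits agree, until the weight matrices coincide and can serve as the key. The sketch below uses small illustrative parameters (K, N, L and the step cap are assumptions, not values from the paper).

```python
import numpy as np

def tpm_output(W, X):
    """Tree parity machine: K hidden units; output tau is the product of unit signs."""
    sigma = np.sign((W * X).sum(axis=1))
    sigma[sigma == 0] = 1                     # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(W, X, sigma, tau, L):
    """Update only hidden units that agree with the network output; clip to [-L, L]."""
    for k in range(W.shape[0]):
        if sigma[k] == tau:
            W[k] = np.clip(W[k] + tau * X[k], -L, L)

K, N, L = 3, 10, 3
rng = np.random.default_rng(0)
WA = rng.integers(-L, L + 1, (K, N)).astype(float)   # party A's secret weights
WB = rng.integers(-L, L + 1, (K, N)).astype(float)   # party B's secret weights
steps = 0
while not np.array_equal(WA, WB) and steps < 20000:
    X = rng.choice([-1.0, 1.0], (K, N))              # public random input
    sA, tA = tpm_output(WA, X)
    sB, tB = tpm_output(WB, X)
    if tA == tB:                                      # learn only on agreement
        hebbian_update(WA, X, sA, tA, L)
        hebbian_update(WB, X, sB, tB, L)
    steps += 1
key = WA.ravel()                                      # synchronized weights = shared key
```

An eavesdropper sees only X and the output bits, never the weights; the attack the abstract mentions consists of running a third TPM on the same traffic and hoping it synchronizes too.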
Diabetes is one of the increasingly common chronic diseases, affecting millions of people around the world. Diabetes diagnosis, prediction, proper treatment, and management are essential. Machine-learning-based prediction techniques for diabetes data analysis can help in the early detection and prediction of the disease and of its consequences, such as hypo-/hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, KNN, and Random Forest. We conducted two experiments: the first used all 12 features of the dataset, where Random Forest outperformed the others with 98.8% accuracy. The second experiment used only five attributes…
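One of the three classifiers, KNN, is simple enough to sketch directly. This is a generic nearest-neighbour implementation, not the paper's experiment; the two features and the tiny toy records (loosely glucose-like and BMI-like values with 0/1 labels) are invented for illustration and have nothing to do with the actual Iraqi dataset.

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Classify each new point by majority vote among its k nearest
    training points (Euclidean distance)."""
    preds = []
    for x in X_new:
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]          # labels of k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])            # majority label
    return np.array(preds)

# Hypothetical two-feature records: [glucose-like, BMI-like], label 1 = diabetic
X_train = np.array([[85, 22], [90, 24], [88, 23],
                    [160, 33], [155, 35], [170, 31]], float)
y_train = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X_train, y_train, np.array([[87.0, 23.0], [165.0, 34.0]]))
```

In practice features should be scaled before distance computation, since a raw glucose range would otherwise dominate BMI; ensemble methods such as the Random Forest that won the paper's comparison avoid this sensitivity.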
Hyperglycemia (high blood sugar) is a complication of diabetes. This condition causes biochemical alterations in the cells of the body, which may lead to structural and functional problems throughout the body, including the eye. Diabetic retinopathy (DR) is a type of retinal degeneration induced by long-term diabetes that may lead to blindness. We propose a deep learning method for the early detection of retinopathy using an EfficientNet-B1 model and the APTOS 2019 dataset. We used the Gaussian filter, one of the most significant image-processing algorithms, to suppress superfluous noise in the dataset before edge structures are analysed. We enlarge the retina images to 224×224 (the EfficientNet-B1 standard input size) and utilize data augmentation…
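The Gaussian-filter preprocessing step can be sketched as a separable convolution. This is an illustrative stdlib/numpy version rather than the library call a real pipeline would use; the kernel size, sigma, and the synthetic single-spike image are assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 1-D Gaussian weights."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, size=5, sigma=1.0):
    """Separable Gaussian smoothing: filter rows, then columns."""
    k = gaussian_kernel(size, sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

noisy = np.zeros((32, 32))
noisy[16, 16] = 1.0                     # a single noise spike
smooth = gaussian_blur(noisy)           # spike spread over neighbours, total mass kept
```

Smoothing like this removes the salt-and-pepper speckle common in fundus photographs before the images are resized to 224×224 and fed to the network; in production one would typically reach for an optimized library routine instead.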