Interest in developing accurate automatic facial emotion recognition methods is growing rapidly, and the topic remains an active research area in computer vision, artificial intelligence, and automation. However, building an automated system that matches the human ability to recognize facial emotions remains challenging because of the lack of an effective facial feature descriptor and the difficulty of choosing a proper classification method. In this paper, a geometry-based feature vector is proposed. For classification, three different types of methods are tested: statistical, artificial neural network (NN), and Support Vector Machine (SVM). A modified K-means clustering algorithm is developed to group similar features into K templates per emotion, simulating the differences in the ways that humans express each emotion. To evaluate the proposed system, a subset of the Cohn-Kanade (CK) dataset is used; it consists of 870 facial image samples covering the seven basic emotions (angry, disgust, fear, happy, normal, sad, and surprise). The test results indicate that the SVM classifier achieves higher performance than the other tested methods, owing to its desirable characteristics such as large-margin separation and good generalization.
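The following minimal sketch (not the authors' code) illustrates the templating idea: geometric feature vectors are assumed to be precomputed (e.g., distances between facial landmarks), standard K-means from scikit-learn stands in for the modified K-means described in the paper, and an SVM is trained on the raw feature vectors.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def emotion_templates(features, labels, k=3):
    """Cluster each emotion's feature vectors into k templates (centroids)."""
    template_X, template_y = [], []
    for emotion in np.unique(labels):
        X_e = features[labels == emotion]
        centroids = KMeans(n_clusters=k, n_init=10).fit(X_e).cluster_centers_
        template_X.append(centroids)
        template_y.extend([emotion] * k)
    return np.vstack(template_X), np.array(template_y)

# Hypothetical usage: X is (n_samples, n_geometric_features), y holds emotion labels.
# X_templates, y_templates = emotion_templates(X_train, y_train, k=3)  # per-emotion templates
# clf = SVC(kernel="rbf").fit(X_train, y_train)                        # SVM on the raw features
# predictions = clf.predict(X_test)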
In this study, we have created a new Arabic dataset annotated according to Ekman's basic emotions (Anger, Disgust, Fear, Happiness, Sadness, and Surprise). This dataset is composed of Facebook posts written in the Iraqi dialect. We evaluated the quality of this dataset using four external judges, which resulted in an average inter-annotator agreement of 0.751. We then explored six different supervised machine learning methods on the new dataset: the standard Weka classifiers ZeroR, J48, Naïve Bayes, Multinomial Naïve Bayes for Text, and SMO, plus an additional compression-based classifier, PPM (Prediction by Partial Matching), which is not included in Weka. Our study reveals that the PPM classifier significantly outperforms other classifiers such as SVM and N...
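As an illustration only (the abstract does not state which agreement statistic was used), the average agreement could be computed as pairwise Cohen's kappa over the four judges:

from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def average_agreement(annotations):
    """annotations: one label sequence per judge, aligned over the same posts."""
    pairs = list(combinations(annotations, 2))
    return sum(cohen_kappa_score(a, b) for a, b in pairs) / len(pairs)

# Hypothetical usage with four judges labelling the same Facebook posts:
# judges = [labels_judge1, labels_judge2, labels_judge3, labels_judge4]
# print(average_agreement(judges))  # the paper reports an average of 0.751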
In this paper, we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme, used to recognize Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method is very effective when compared with traditional word-based text classification methods. We have also found that our method works best if the sizes of the texts in all classes used for training are similar, and that performance improves significantly with increased data.
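The core idea can be sketched as follows, assuming the standard "minimum added compressed size" criterion used in compression-based classification; zlib stands in here for a true PPM coder, which is what the paper actually uses.

import zlib

def compressed_size(text):
    return len(zlib.compress(text.encode("utf-8")))

def classify(text, class_corpora):
    """class_corpora maps each emotion to its concatenated training text."""
    def added_cost(corpus):
        return compressed_size(corpus + text) - compressed_size(corpus)
    return min(class_corpora, key=added_cost)

# Hypothetical usage:
# corpora = {"Anger": anger_train_text, "Happiness": happiness_train_text, ...}
# print(classify("I can't believe this happened!", corpora))

The abstract's observation that performance improves with similar and larger class sizes is consistent with this scheme: the compression model for each class is only as good as the training text behind it.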
With the recent developments of technology and the advances in artificial intelligence and machine learning techniques, it has become possible for a robot to acquire and express emotions as a part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans, so that it can interact more naturally with its human counterpart in different environments. In this article, a survey on emotion recognition for HRI systems is presented. The survey aims to achieve two objectives. Firstly, it discusses the main challenges that researchers face when building emotional HRI systems. Secondly, it identifies the sensing channels that can be used to detect emotions and provides a literature review...
Mobile-based human emotion recognition is a very challenging subject. Most of the approaches suggested and built in this field utilize various contexts that can be derived from external sensors and the smartphone, but these approaches suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. The system is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone is used to record the user's speech and send it to a server. The smartwatch, worn on the user's wrist, is used to measure the user's heart rate while the user is speaking and send it, via Bluetooth, ...
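A minimal sketch of the feature-level fusion idea, assuming the speech features have already been extracted on the server and the smartwatch supplies one average heart-rate reading (bpm) per utterance; the classifier choice is illustrative.

import numpy as np
from sklearn.svm import SVC

def fuse(speech_features, heart_rate_bpm):
    """Append the heart-rate measurement to the speech feature vector."""
    return np.concatenate([speech_features, [heart_rate_bpm]])

# Hypothetical usage for the four target emotions:
# X = np.array([fuse(f, hr) for f, hr in zip(speech_feature_list, heart_rates)])
# clf = SVC().fit(X, labels)          # labels in {"angry", "happy", "sad", "normal"}
# clf.predict([fuse(new_features, 92)])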
Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. In this paper, three experiments are performed to find the approach that gives the best accuracy. The first extracts only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracts only the first 12 Mel-frequency cepstral coefficient (MFCC) features; and the last applies feature fusion between the two feature sets. In all experiments, the features are classified using five types of classification techniques, which are the Random Forest (RF), ...
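The three feature sets can be sketched as below using librosa; the file paths, default frame settings, and Random Forest setup are illustrative assumptions rather than the paper's exact configuration.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def basic_features(path):
    y, sr = librosa.load(path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    return np.array([zcr, y.mean(), y.std()])            # ZCR, mean, SD

def mfcc_features(path, n_mfcc=12):
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def fused_features(path):                                 # feature fusion
    return np.concatenate([basic_features(path), mfcc_features(path)])

# Hypothetical usage (Random Forest is one of the five classifiers tested):
# X = np.array([fused_features(p) for p in wav_paths])
# clf = RandomForestClassifier(n_estimators=100).fit(X, emotion_labels)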
This study focuses on the effect of applying the ICA transform on the classification accuracy of satellite images using the maximum likelihood classifier. The study area is an agricultural area north of the capital Baghdad, Iraq, captured by the Landsat 8 satellite on 12 January 2021; the bands of the OLI sensor were used. A field visit was made to a variety of classes representing the land cover of the study area, and the geographic locations of these classes were recorded. Gaussian, Kurtosis, and LogCosh kernels were used to perform the ICA transform of the OLI Landsat 8 image. Separate training sets were made for the ICA and Landsat 8 images for use in the classification phase, and used to calcula...
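An illustrative sketch of the processing chain: scikit-learn's FastICA stands in for the ICA transform (its 'logcosh', 'exp', and 'cube' contrast functions roughly correspond to the LogCosh, Gaussian, and Kurtosis kernels), and QuadraticDiscriminantAnalysis serves as a Gaussian maximum likelihood classifier; the array shapes and band stack are assumptions.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def ica_bands(image, fun="logcosh"):
    """image: (rows, cols, n_bands) stack of OLI bands -> independent components."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    ica = FastICA(n_components=bands, fun=fun, random_state=0)
    return ica.fit_transform(pixels).reshape(rows, cols, bands)

# Hypothetical usage with training pixels collected during the field visit:
# ic_image = ica_bands(oli_stack, fun="cube")      # kurtosis-like contrast function
# clf = QuadraticDiscriminantAnalysis().fit(train_pixels, train_classes)
# class_map = clf.predict(ic_image.reshape(-1, ic_image.shape[-1]))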
This paper suggests two recognition methods that depend on extracting principal component analysis features in the wavelet (multi-wavelet) domain. In the first method, the recognition space is increased by calculating the eigenstructure of the diagonal sub-image details at five depths of the wavelet transform; the effective eigen range selected here forms the basis for image recognition. In the second method, an invariant wavelet space at all projections is obtained; a new recursive form that represents an invariant space for any image resolution obtained from the wavelet transform is adopted. In this way, all the major problems that affect the image and ...
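A rough sketch of the first method, assuming a standard discrete wavelet from pywt in place of the multi-wavelet used in the paper: the diagonal detail sub-images at five decomposition depths are flattened and their eigenstructure is obtained via PCA.

import numpy as np
import pywt
from sklearn.decomposition import PCA

def diagonal_detail_features(image, wavelet="db2", depth=5):
    coeffs = pywt.wavedec2(image, wavelet, level=depth)
    # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    diagonals = [detail[2].ravel() for detail in coeffs[1:]]   # cD at each depth
    return np.concatenate(diagonals)

# Hypothetical usage over a gallery of equally sized images:
# X = np.array([diagonal_detail_features(img) for img in images])
# eigenspace = PCA(n_components=20).fit(X)   # the "effective eigen range" is a chosen cut-off
# projected = eigenspace.transform(X)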
Facial expressions are a group of movements of the facial muscles that are related to a person's emotions. Human-computer interaction (HCI) is considered one of the most attractive and fastest-growing fields, and adding facial expression recognition to anticipate users' feelings and emotional state can drastically improve HCI. This paper addresses the three most important facial expressions (happiness, sadness, and surprise). The work contains three stages: first, a preprocessing stage is performed to enhance the facial images; second, a feature extraction stage based on the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) methods; third, the recognition stage w...
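A minimal sketch of the feature extraction stage described above: a single-level 2-D DWT followed by a 2-D DCT of the approximation band, keeping a small block of low-frequency coefficients; the wavelet choice and block size are assumptions.

import numpy as np
import pywt
from scipy.fftpack import dct

def dwt_dct_features(face_image, wavelet="haar", keep=16):
    approx, _details = pywt.dwt2(face_image, wavelet)               # LL sub-band
    spectrum = dct(dct(approx, axis=0, norm="ortho"), axis=1, norm="ortho")
    return spectrum[:keep, :keep].ravel()                           # low-frequency block

# Hypothetical usage before the recognition stage:
# X = np.array([dwt_dct_features(img) for img in preprocessed_faces])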