Emotion Recognition Based on Mining Sub-Graphs of Facial Components

Facial emotion recognition has many real-world applications in daily life, such as human-robot interaction, eLearning, healthcare, and customer services. The task of facial emotion recognition is not easy because of the difficulty of determining an effective feature set that can accurately recognize the emotion conveyed by a facial expression. This paper exploits graph mining techniques to solve the facial emotion recognition problem. After the positions of facial landmarks in the face region are determined, twelve different graphs are constructed from four facial components to serve as the source for a sub-graph mining stage using the gSpan algorithm. In each group, the discriminative set of sub-graphs is selected and fed to a Deep Belief Network (DBN) for classification. The results obtained from the different groups are then fused using a Naïve Bayes classifier to make the final decision regarding the emotion class. Different tests were performed on the Surrey Audio-Visual Expressed Emotion (SAVEE) database, and the achieved results show that the system reaches the desired accuracy (100%) when the decisions of the facial groups are fused. The achieved result outperforms state-of-the-art results on the same database.
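Below is a minimal, illustrative sketch of how a per-component facial graph might be built from landmark positions before sub-graph mining. The landmark indices, the fully connected topology, and the distance-based edge labels are assumptions for illustration, not the paper's exact construction; networkx plus an external gSpan implementation are assumed as tooling.

# Minimal sketch: build a labelled graph for one facial component from landmark
# positions, so that a set of such graphs can later be mined with gSpan.
from itertools import combinations
import math
import networkx as nx

def component_graph(landmarks, indices, n_bins=5, max_dist=100.0):
    """Build a graph for one facial component (e.g. the mouth).

    landmarks : dict mapping landmark index -> (x, y) position
    indices   : landmark indices belonging to this component (assumed layout)
    """
    g = nx.Graph()
    for i in indices:
        g.add_node(i, label=i)                      # node label = landmark id
    for i, j in combinations(indices, 2):
        (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
        d = math.hypot(x2 - x1, y2 - y1)
        # Discretise the distance so that sub-graphs from different faces can match.
        g.add_edge(i, j, label=min(int(d / max_dist * n_bins), n_bins - 1))
    return g

# Toy "mouth" component with four landmarks (hypothetical coordinates).
toy_landmarks = {48: (10, 60), 51: (30, 55), 54: (50, 60), 57: (30, 70)}
mouth = component_graph(toy_landmarks, [48, 51, 54, 57])
print(mouth.number_of_nodes(), mouth.number_of_edges())    # 4 nodes, 6 edges

A collection of such graphs per facial group would then be passed to a gSpan miner to extract the frequent sub-graphs that serve as classification features.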

Publication Date: Fri Jun 11 2021
Journal Name: Journal Of Computing And Information Technology
A Survey on Emotion Recognition for Human Robot Interaction

With the recent developments in technology and the advances in artificial intelligence and machine learning techniques, it has become possible for a robot to acquire and show emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans, so that it is able to interact more naturally with its human counterpart in different environments. In this article, a survey on emotion recognition for HRI systems is presented. The survey aims to achieve two objectives. Firstly, it discusses the main challenges that researchers face when building emotional HRI systems. Secondly, it identifies the sensing channels that can be used to detect emotions and provides a literature review …
Publication Date: Tue Nov 21 2017
Journal Name: Lecture Notes In Computer Science
Emotion Recognition in Text Using PPM

In this paper we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme, aimed at recognizing Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results on three datasets show that the new method is very effective when compared with traditional word-based text classification methods. We have also found that our method works best when the sizes of the training texts in all classes are similar, and that performance improves significantly with increased data.
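As a rough illustration of the compression-based classification idea described above, the sketch below assigns to a document the emotion class whose training text compresses it best. PPM itself is not available in the Python standard library, so bz2 is used here as a stand-in compressor; the class names and texts are toy placeholders, not the paper's data.

# Minimal sketch of compression-based text classification: pick the class whose
# training corpus needs the fewest extra compressed bytes to encode the document.
import bz2

def extra_bytes(corpus: str, document: str) -> int:
    """Approximate cross-entropy: extra compressed bytes needed for the document."""
    base = len(bz2.compress(corpus.encode("utf-8")))
    combined = len(bz2.compress((corpus + " " + document).encode("utf-8")))
    return combined - base

def classify(document: str, class_corpora: dict) -> str:
    """class_corpora maps an emotion label to its concatenated training texts."""
    return min(class_corpora, key=lambda c: extra_bytes(class_corpora[c], document))

# Toy example with two of Ekman's six emotions.
corpora = {
    "Happiness": "what a wonderful day I am so glad and delighted",
    "Anger": "this is outrageous I am furious and annoyed at everything",
}
print(classify("I am absolutely delighted today", corpora))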

Publication Date: Thu Jan 13 2022
Journal Name: Medical & Biological Engineering & Computing
An integrated entropy-spatial framework for automatic gender recognition enhancement of emotion-based EEGs

Publication Date: Thu Nov 01 2018
Journal Name: 2018 1st Annual International Conference On Information And Sciences (AICIS)
Speech Emotion Recognition Using Minimum Extracted Features

Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. In this paper, three experiments were performed to find the approach that gives the best accuracy. The first extracts only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracts only the first 12 Mel frequency cepstral coefficient (MFCC) features; and the last applies feature fusion between the two feature sets. In all experiments, the features are classified using five classification techniques, which are the Random Forest (RF), …
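A minimal sketch of the three feature sets described in the abstract is given below, using librosa and scikit-learn; these library choices and the parameter values are assumptions, since the abstract does not name the tools used.

# Sketch: three-value features (ZCR, mean, SD), 12 MFCCs, and their fusion,
# classified with a Random Forest.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def basic_features(path):
    """Zero crossing rate, mean, and standard deviation of the raw waveform."""
    y, _sr = librosa.load(path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    return np.array([zcr, y.mean(), y.std()])

def mfcc_features(path, n_mfcc=12):
    """First 12 MFCCs, averaged over frames."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def fused_features(path):
    """Feature fusion: concatenation of both sets (15 values per sample)."""
    return np.concatenate([basic_features(path), mfcc_features(path)])

# paths and labels are placeholders for an emotional-speech corpus.
# X = np.stack([fused_features(p) for p in paths])
# clf = RandomForestClassifier(n_estimators=100).fit(X, labels)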
Publication Date: Wed Jul 17 2019
Journal Name: Advances In Intelligent Systems And Computing
A New Arabic Dataset for Emotion Recognition

In this study, we have created a new Arabic dataset annotated according to Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness and Surprise). The dataset is composed of Facebook posts written in the Iraqi dialect. We evaluated the quality of this dataset using four external judges, which resulted in an average inter-annotation agreement of 0.751. We then explored six different supervised machine learning methods to test the new dataset. We used the standard Weka classifiers ZeroR, J48, Naïve Bayes, Multinomial Naïve Bayes for Text, and SMO. We also used a further compression-based classifier called PPM that is not included in Weka. Our study reveals that the PPM classifier significantly outperforms other classifiers such as SVM and N…
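The abstract does not state which agreement measure produced the 0.751 figure; the sketch below shows one common choice, the mean pairwise Cohen's kappa over all pairs of judges, purely for illustration.

# Sketch: average pairwise inter-annotator agreement over four judges.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def average_pairwise_kappa(annotations):
    """annotations: list of per-judge label lists, all over the same posts."""
    pairs = list(combinations(range(len(annotations)), 2))
    scores = [cohen_kappa_score(annotations[i], annotations[j]) for i, j in pairs]
    return sum(scores) / len(scores)

# Toy example: four judges labelling five posts with Ekman's emotions.
judges = [
    ["Anger", "Fear", "Happiness", "Sadness", "Surprise"],
    ["Anger", "Fear", "Happiness", "Sadness", "Disgust"],
    ["Anger", "Fear", "Happiness", "Happiness", "Surprise"],
    ["Anger", "Disgust", "Happiness", "Sadness", "Surprise"],
]
print(round(average_pairwise_kappa(judges), 3))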
Publication Date: Sat Oct 31 2020
Journal Name: International Journal Of Intelligent Engineering And Systems
Speech Emotion Recognition Using MELBP Variants of Spectrogram Image

Publication Date: Sun Feb 25 2024
Journal Name: Baghdad Science Journal
Research on Emotion Classification Based on Multi-modal Fusion

Nowadays, people's expression on the Internet is no longer limited to text; especially with the rise of short video, large amounts of multi-modal data such as text, pictures, audio, and video have emerged. Compared to single-modal data, multi-modal data always contains far more information. Mining multi-modal information can help computers better understand human emotional characteristics. However, because multi-modal data exhibit clear dynamic time-series features, the fusion process must solve the dynamic correlation problem within a single mode and between different modes in the same application scene. To solve this problem, in this paper, a feature extraction framework of …
Publication Date: Sun Jan 10 2016
Journal Name: British Journal Of Applied Science & Technology
Illumination-Invariant Facial Components Extraction Using Adaptive Contrast Enhancement Methods

Accurately localizing the basic components of the human face (i.e., eyebrows, eyes, nose, mouth, etc.) in images is an important step in face processing techniques such as face tracking, facial expression recognition, and face recognition. However, it is a challenging task due to variations in scale, orientation, pose, facial expression, partial occlusion, and lighting conditions. In the current paper, a scheme comprising three hierarchical stages for facial component extraction is presented; it works regardless of illumination variance. Adaptive linear contrast enhancement methods such as gamma correction and contrast stretching are used to simulate the variance in lighting conditions among images. As testing material …
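For illustration, the sketch below applies the two contrast-enhancement operations named above, gamma correction and contrast stretching, to a greyscale image stored as a NumPy array in [0, 255]; the parameter values are illustrative and not taken from the paper.

# Sketch: gamma correction and percentile-based contrast stretching.
import numpy as np

def gamma_correction(img: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Brighten (gamma < 1) or darken (gamma > 1) an 8-bit greyscale image."""
    normalised = img.astype(np.float64) / 255.0
    return np.clip((normalised ** gamma) * 255.0, 0, 255).astype(np.uint8)

def contrast_stretching(img: np.ndarray, low_pct: float = 2, high_pct: float = 98) -> np.ndarray:
    """Linearly stretch intensities between two percentiles to the full range."""
    lo, hi = np.percentile(img, (low_pct, high_pct))
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Example: simulate a poorly lit image and enhance it.
dark = (np.random.rand(64, 64) * 80).astype(np.uint8)   # values only in [0, 80)
print(gamma_correction(dark).max(), contrast_stretching(dark).max())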
Publication Date: Thu Jul 01 2021
Journal Name: Computers & Electrical Engineering
A new proposed statistical feature extraction method in speech emotion recognition

Publication Date: Wed Jan 12 2022
Journal Name: Iraqi Journal Of Science
Palm Vein Recognition Based on Centerline

Palm vein recognition is one of the biometric systems used for identification and verification, since each person has unique vein characteristics. In this paper, improvements to a palm vein recognition system are made. The system is based on the centerline extraction of veins and employs the concept of the Difference-of-Gaussians (DoG) function to construct the feature vector. Tests on our database showed an identification rate of 100% with a minimum error rate of 0.333.
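As a rough sketch of the Difference-of-Gaussians operation mentioned above, the code below band-passes an image by subtracting two Gaussian-blurred copies; the sigma values and the flattened feature-vector layout are assumptions, and the preceding centerline-extraction step is omitted.

# Sketch: Difference-of-Gaussians response for a toy vein-like image.
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(img: np.ndarray, sigma_small: float = 1.0,
                            sigma_large: float = 3.0) -> np.ndarray:
    """Band-pass the image by subtracting two Gaussian-blurred copies."""
    img = img.astype(np.float64)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

# Toy palm image: a dark horizontal "vein" on a brighter background.
palm = np.full((64, 64), 200.0)
palm[30:34, :] -= 120.0
dog = difference_of_gaussians(palm)
feature_vector = dog.flatten()               # one possible feature-vector layout
print(feature_vector.shape)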
