Speech Emotion Recognition Using Minimum Extracted Features

Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. In this paper, three experiments were performed to find the approach that gives the best accuracy. The first extracts only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracts only the first 12 Mel frequency cepstral coefficient (MFCC) features; and the last applies feature fusion between the two sets. In all experiments, the features are classified using five classification techniques: Random Forest (RF), k-Nearest Neighbor (k-NN), Sequential Minimal Optimization (SMO), Naïve Bayes (NB), and Decision Tree (DT). The performance of the system was validated on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset for seven emotions. The results showed that fusing a small number of features and classifying them with RF gives good accuracy compared with previous studies.
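A minimal sketch of the kind of pipeline the abstract describes, not the paper's exact setup: extract the three simple features (ZCR, mean, SD) and the first 12 MFCCs, fuse them, and classify with a Random Forest. The file paths, labels, and librosa/scikit-learn defaults below are illustrative assumptions.

```python
# Minimal sketch, assuming librosa + scikit-learn; paths and labels are placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def minimal_features(path, n_mfcc=12):
    y, sr = librosa.load(path, sr=None)
    zcr = float(librosa.feature.zero_crossing_rate(y).mean())   # zero crossing rate
    mean, sd = float(np.mean(y)), float(np.std(y))              # signal mean and SD
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)      # first 12 MFCCs
    return np.hstack([[zcr, mean, sd], mfcc.mean(axis=1)])      # fused 15-D vector

# Hypothetical (path, emotion) pairs standing in for the SAVEE recordings.
samples = [("savee/DC_a01.wav", "anger"), ("savee/DC_h01.wav", "happiness")]
X = np.array([minimal_features(p) for p, _ in samples])
y = np.array([label for _, label in samples])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))   # sanity check on the training samples
```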

Publication Date: Sat Oct 31 2020
Journal Name: International Journal Of Intelligent Engineering And Systems
Speech Emotion Recognition Using MELBP Variants of Spectrogram Image

Publication Date: Tue Oct 29 2019
Journal Name: Journal Of Engineering
Mobile-based Human Emotion Recognition based on Speech and Heart rate

Mobile-based human emotion recognition is a very challenging subject. Most of the approaches suggested and built in this field utilize various contexts that can be derived from external sensors and the smartphone, but these approaches suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. The proposed system is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone is used to record the user's speech and send it to a server. The smartwatch, fixed on the user's wrist, is used to measure the user's heart rate while the user is speaking and send it, via Bluetooth, …

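A minimal sketch of feature-level fusion of speech features with a heart-rate reading, which is one plausible reading of the system described above; the truncated abstract does not state the actual fusion scheme or classifier, so the k-NN model and the toy numbers below are assumptions.

```python
# Minimal sketch, assuming scikit-learn; feature dimensions and values are toy data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fuse(speech_features, heart_rate_bpm):
    """Append the heart-rate value (bpm) to the speech feature vector."""
    return np.hstack([speech_features, [heart_rate_bpm]])

# Toy data: two utterances, 13 speech features each, plus a heart-rate value.
X = np.vstack([fuse(np.random.rand(13), 72.0), fuse(np.random.rand(13), 115.0)])
y = np.array(["normal", "angry"])

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([fuse(np.random.rand(13), 110.0)]))
```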
Publication Date: Tue Nov 21 2017
Journal Name: Lecture Notes In Computer Science
Emotion Recognition in Text Using PPM

In this paper we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme in order to recognize Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results on three datasets show that the new method is very effective compared with traditional word-based text classification methods. We have also found that our method works best if the sizes of the texts in all classes used for training are similar, and that performance improves significantly with increased data.
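A minimal sketch of compression-based text classification in the spirit of the PPM approach described above; zlib stands in for a true PPM coder purely for illustration, and the tiny training texts are invented placeholders.

```python
# Minimal sketch, assuming zlib as a stand-in compressor for PPM.
import zlib

def compressed_size(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8"), 9))

def classify(test_text: str, training_texts: dict) -> str:
    # Pick the emotion whose training text "explains" the test text best,
    # i.e. adds the fewest extra compressed bytes when coded jointly.
    def extra_bytes(label: str) -> int:
        joint = compressed_size(training_texts[label] + " " + test_text)
        return joint - compressed_size(training_texts[label])
    return min(training_texts, key=extra_bytes)

training = {
    "anger":     "I am furious, this is outrageous and completely unacceptable.",
    "happiness": "What a wonderful day, I am delighted and thrilled about it.",
}
print(classify("I am so delighted with these results", training))
```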

Publication Date: Thu Jul 01 2021
Journal Name: Computers & Electrical Engineering
A new proposed statistical feature extraction method in speech emotion recognition

Publication Date: Wed Jan 13 2021
Journal Name: Iraqi Journal Of Science
YouTube Keyword Search Engine Using Speech Recognition

Visual media is a better way to deliver information than the old way of reading. For that reason, and with the wide spread of multimedia websites, large video library archives have become a main resource for people. This research focuses on existing developments in applying classical phrase-search methods to a linked vocal transcript and then retrieving the video, which offers an easier way to search any visual media. The system has been implemented using JSP and Java for searching the speech in the videos.
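A minimal sketch of keyword search over a timestamped speech transcript, illustrating the retrieval idea above; the original system was implemented in JSP/Java, so this Python version and the invented transcript entries are purely illustrative.

```python
# Minimal sketch, assuming a transcript of (start time, phrase) pairs.
from typing import List, Tuple

Transcript = List[Tuple[float, str]]   # (start time in seconds, spoken phrase)

def search(transcript: Transcript, keyword: str) -> List[float]:
    """Return the start times of segments whose text contains the keyword."""
    kw = keyword.lower()
    return [start for start, phrase in transcript if kw in phrase.lower()]

demo: Transcript = [(0.0, "welcome to the lecture"),
                    (12.5, "today we discuss speech recognition"),
                    (47.0, "keyword search over video transcripts")]
print(search(demo, "speech"))   # -> [12.5]
```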

Publication Date: Thu Nov 01 2018
Journal Name: International Journal Of Advanced Research In Computer Engineering & Technology
Facial Emotion Recognition: A Survey

Emotion can be expressed through unimodal, bimodal, or multimodal social behaviours. This survey describes the background of facial emotion recognition and surveys emotion recognition using the visual modality. Some publicly available datasets are covered for performance evaluation. A summary of research efforts to classify emotion using the visual modality over the five years from 2013 to 2018 is given in tabular form.

Publication Date: Tue Jan 01 2019
Journal Name: International Journal Of Machine Learning And Computing
Facial Emotion Recognition from Videos Using Deep Convolutional Neural Networks

It is well known that understanding human facial expressions, a long-standing research issue, is a key component in understanding emotions and has broad applications in the field of human-computer interaction (HCI). In this paper, we shed light on the utilisation of a deep convolutional neural network (DCNN) for facial emotion recognition from videos using the TensorFlow machine-learning library from Google. This work was applied to ten emotions from the Amsterdam Dynamic Facial Expression Set-Bath Intensity Variations (ADFES-BIV) dataset and tested using two datasets.
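A minimal sketch of a small DCNN for frame-level facial emotion classification with TensorFlow/Keras; only the ten-class output follows the ADFES-BIV setup mentioned above, while the 48x48 grayscale input, layer sizes, and training settings are illustrative assumptions rather than the network described in the paper.

```python
# Minimal sketch, assuming TensorFlow/Keras; architecture details are assumptions.
import tensorflow as tf

def build_dcnn(num_classes: int = 10, input_shape=(48, 48, 1)) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_dcnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```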

Publication Date: Sat Jun 01 2013
Journal Name: مجلة كلية بغداد للعلوم الاقتصادية الجامعة
Proposed family speech recognition

Speech recognition is a very important field that can be used in many applications, such as access control for protected areas, banking, transactions over telephone-network database access services, voice email, investigations, house control and management, etc. Speech recognition systems can be used in two modes: to identify a particular person or to verify a person's claimed identity. Family speaker recognition is a modern field within speaker recognition; many family speakers have similar characteristics, and it is hard to distinguish between them. Today, the scope of speech recognition is limited to speech collected from cooperative users in real-world office environments without adverse microphone or channel impairments.
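A minimal sketch contrasting the two modes mentioned above: speaker identification (pick the closest enrolled speaker) versus verification (accept or reject a claimed identity against a threshold). The per-speaker templates, Euclidean scoring, and threshold are illustrative assumptions, not the paper's method.

```python
# Minimal sketch; toy templates and distance scoring for illustration only.
import numpy as np

enrolled = {                      # toy voice templates for three family members
    "father": np.array([1.0, 0.2, 0.7]),
    "mother": np.array([0.4, 0.9, 0.1]),
    "child":  np.array([0.6, 0.5, 0.4]),
}

def identify(features: np.ndarray) -> str:
    return min(enrolled, key=lambda s: np.linalg.norm(features - enrolled[s]))

def verify(features: np.ndarray, claimed: str, threshold: float = 0.5) -> bool:
    return np.linalg.norm(features - enrolled[claimed]) < threshold

probe = np.array([0.95, 0.25, 0.75])
print(identify(probe))            # closest enrolled family member
print(verify(probe, "father"))    # does the probe match the claimed identity?
```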

Publication Date: Wed Nov 25 2015
Journal Name: Research Journal Of Applied Sciences, Engineering And Technology
Subject Independent Facial Emotion Classification Using Geometric Based Features

Accurate emotion categorization is an important and challenging task in the computer vision and image processing fields. A facial emotion recognition system comprises three important stages: pre-processing and face area allocation, feature extraction, and classification. In this study, a new system is proposed based on a set of geometric features (distances and angles) derived from the basic facial components, such as the eyes, eyebrows, and mouth, using analytical geometry calculations. For the classification stage, a feed-forward neural network classifier is used. For evaluation purposes, the standard database "JAFFE" has been used as test material; it holds face samples for seven basic emotions. The results of the conducted tests indicate that the use of the suggested distances and angles …

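A minimal sketch of geometric feature computation (distances and angles) from a few facial landmark points, illustrating the feature-extraction idea above; the landmark coordinates and the choice of point pairs and triples are invented, and the neural-network classification stage is omitted.

```python
# Minimal sketch; landmark coordinates are invented placeholders.
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Toy landmarks (x, y) in pixels: eye corners and mouth corners.
left_eye, right_eye = (110, 120), (190, 118)
mouth_left, mouth_right = (125, 210), (175, 212)

features = [
    distance(left_eye, right_eye),              # inter-ocular distance
    distance(mouth_left, mouth_right),          # mouth width
    angle(left_eye, mouth_left, mouth_right),   # angle at the left mouth corner
]
print(features)
```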
Publication Date: Mon Jun 05 2023
Journal Name: Journal Of Engineering
Isolated Word Speech Recognition Using Mixed Transform

Methods of speech recognition have been the subject of several studies over the past decade. Speech recognition has been one of the most exciting areas of signal processing. The mixed transform is a useful tool for speech signal processing; it was developed for its ability to improve feature extraction. Speech recognition includes three important stages: preprocessing, feature extraction, and classification. Recognition accuracy is strongly affected by the feature extraction stage; therefore, different mixed-transform models for feature extraction were proposed. The recorded isolated word is a 1-D signal, and each 1-D word is converted into a 2-D form. The second step of the word recognizer requires the …

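A minimal sketch of reshaping a 1-D isolated-word signal into a 2-D form for feature extraction, as the abstract describes; a plain 2-D FFT stands in here for the paper's mixed transform, and the frame length and toy signal are arbitrary assumptions.

```python
# Minimal sketch; a 2-D FFT is used as a stand-in for the mixed transform.
import numpy as np

def to_2d(signal: np.ndarray, frame_len: int = 256) -> np.ndarray:
    """Zero-pad the 1-D signal and stack fixed-length frames into a matrix."""
    n_frames = int(np.ceil(len(signal) / frame_len))
    padded = np.zeros(n_frames * frame_len)
    padded[:len(signal)] = signal
    return padded.reshape(n_frames, frame_len)

word = np.random.randn(8000)              # toy 1-D recording of an isolated word
matrix = to_2d(word)                      # 2-D form (frames x samples)
features = np.abs(np.fft.fft2(matrix))    # stand-in 2-D transform
print(matrix.shape, features.shape)       # (32, 256) (32, 256)
```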