Speech Emotion Recognition Using MELBP Variants of Spectrogram Image

Publication Date
Thu Nov 01 2018
Journal Name
2018 1st Annual International Conference on Information and Sciences (AICIS)
Speech Emotion Recognition Using Minimum Extracted Features

Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. Three experiments were performed to find the approach that gives the best accuracy. The first extracts only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracts only the first 12 Mel frequency cepstral coefficient (MFCC) features; and the last applies feature fusion between the two feature sets. In all experiments, the features are classified using five classification techniques, which are the Random Forest (RF), …
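A minimal sketch of the three feature sets described above, assuming a mono speech signal loaded with librosa; the fusion step is a simple concatenation, and the Random Forest stands in for one of the five classifiers mentioned in the abstract.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def low_level_features(y):
    """Experiment 1: zero crossing rate (ZCR), mean, and standard deviation."""
    zcr = float(np.mean(librosa.feature.zero_crossing_rate(y)))
    return np.array([zcr, np.mean(y), np.std(y)])

def mfcc_features(y, sr):
    """Experiment 2: the first 12 MFCC coefficients, averaged over frames."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12).mean(axis=1)

def fused_features(y, sr):
    """Experiment 3: feature fusion by concatenating the two vectors."""
    return np.concatenate([low_level_features(y), mfcc_features(y, sr)])

def train_rf(paths, labels):
    # `paths` and `labels` describe a hypothetical emotional speech corpus.
    X = [fused_features(*librosa.load(p, sr=None)) for p in paths]
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```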

Publication Date
Tue Oct 29 2019
Journal Name
Journal Of Engineering
Mobile-based Human Emotion Recognition based on Speech and Heart rate

Mobile-based human emotion recognition is a very challenging subject. Most of the approaches suggested and built in this field use various contexts derived from external sensors and the smartphone, but they suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. It is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone is used to record the user's speech and send it to a server. A smartwatch, worn on the user's wrist, measures the user's heart rate while the user is speaking and sends it, via Bluetooth, …
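A hedged sketch of the kind of decision-level fusion such a mobile system might perform once the speech probabilities and the heart-rate reading reach the server. The class list comes from the abstract, but the weighting scheme, the resting heart rate, and the fuse() helper are illustrative assumptions rather than the paper's method.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "sad", "normal"]

def fuse(speech_probs, heart_rate_bpm, resting_bpm=70.0, weight=0.3):
    """Combine speech-based class probabilities with a heart-rate cue.

    An elevated heart rate (high arousal) nudges the decision toward
    angry/happy; a low heart rate nudges it toward sad/normal.
    """
    arousal = np.clip((heart_rate_bpm - resting_bpm) / resting_bpm, -1.0, 1.0)
    hr_bias = np.array([arousal, arousal, -arousal, -arousal])
    scores = (1 - weight) * np.asarray(speech_probs) + weight * (hr_bias + 1) / 2
    return EMOTIONS[int(np.argmax(scores))]

# Example: the speech model leans toward "sad", and a low heart rate
# reinforces that decision.
print(fuse([0.1, 0.1, 0.6, 0.2], heart_rate_bpm=62))
```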

Publication Date
Thu Jul 01 2021
Journal Name
Computers & Electrical Engineering
Publication Date
Tue Nov 21 2017
Journal Name
Lecture Notes In Computer Science
Emotion Recognition in Text Using PPM

In this paper we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme, in order to recognize Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method is very effective compared with traditional word-based text classification methods. We also found that our method works best when the sizes of the training texts in all classes are similar, and that performance improves significantly with more data.
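The PPM model itself is not reproduced here; the sketch below uses bz2 as a stand-in compressor to illustrate the compression-based classification idea (assign a text to the class whose training corpus compresses it best). The classify() helper and the dictionary of per-emotion training texts are hypothetical.

```python
import bz2

def compressed_size(text: str) -> int:
    return len(bz2.compress(text.encode("utf-8")))

def classify(text: str, training_texts: dict) -> str:
    """training_texts maps each emotion label to its concatenated training text."""
    def extra_bytes(label: str) -> int:
        corpus = training_texts[label]
        # How much the compressed size grows when `text` is appended to the
        # class corpus: the best-fitting class grows the least.
        return compressed_size(corpus + text) - compressed_size(corpus)
    return min(training_texts, key=extra_bytes)

# Hypothetical usage with Ekman's six basic emotions as the class labels:
# classify("I can't believe you did this!", {"Anger": anger_corpus, ..., "Surprise": surprise_corpus})
```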

Publication Date
Thu Nov 01 2018
Journal Name
International Journal Of Advanced Research In Computer Engineering & Technology
Facial Emotion Recognition: A Survey

Emotion can be expressed through unimodal, bimodal, or multimodal social behaviours. This survey describes the background of facial emotion recognition and reviews emotion recognition using the visual modality. Some publicly available datasets are covered for performance evaluation. A summary of research efforts to classify emotion using the visual modality over the five years from 2013 to 2018 is given in tabular form.

Publication Date
Wed Jan 13 2021
Journal Name
Iraqi Journal Of Science
YouTube Keyword Search Engine Using Speech Recognition

Visual media delivers information more effectively than plain reading. For that reason, with the wide spread of multimedia websites, large video archives have become a main resource for users. This research focuses on applying classical phrase-search methods to a linked vocal transcript and then retrieving the corresponding video, which makes visual media easier to search. The system has been implemented using JSP and the Java language for searching speech in videos.
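The original system was built with JSP and Java; the short Python sketch below only illustrates the search step, assuming each video's transcript is already available as timestamped segments (the Segment class and search() function are hypothetical names, not part of the described system).

```python
from dataclasses import dataclass

@dataclass
class Segment:
    video_id: str
    start_seconds: float
    text: str

def search(phrase: str, transcript: list) -> list:
    """Return the transcript segments whose text contains the queried phrase."""
    phrase = phrase.lower()
    return [seg for seg in transcript if phrase in seg.text.lower()]

# Each hit carries a video id and a timestamp, which is enough to build a
# link that jumps to the moment the phrase was spoken in the retrieved video.
```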

Publication Date
Tue Jan 01 2019
Journal Name
International Journal Of Machine Learning And Computing
Facial Emotion Recognition from Videos Using Deep Convolutional Neural Networks

It is well known that understanding human facial expressions is a key component in understanding emotions, has broad applications in the field of human-computer interaction (HCI), and has been a long-standing research problem. In this paper, we shed light on the use of a deep convolutional neural network (DCNN) for facial emotion recognition from videos, using the TensorFlow machine-learning library from Google. The work was applied to the ten emotions of the Amsterdam Dynamic Facial Expression Set-Bath Intensity Variations (ADFES-BIV) dataset and tested using two datasets.
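A hedged sketch of a small DCNN in TensorFlow/Keras for ten emotion classes (the class count comes from ADFES-BIV); the 48x48 grayscale input, layer sizes, and optimizer settings are illustrative assumptions, not the architecture reported in the paper.

```python
import tensorflow as tf

def build_model(input_shape=(48, 48, 1), num_classes=10):
    # A compact convolutional stack ending in a softmax over the emotion classes.
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Face frames extracted from the videos would then be passed to model.fit(...).
```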

Publication Date
Mon Jun 05 2023
Journal Name
Journal Of Engineering
Isolated Word Speech Recognition Using Mixed Transform

Methods of speech recognition have been the subject of several studies over the past decade. Speech recognition has been one of the most exciting areas of signal processing. The mixed transform is a useful tool for speech signal processing, developed for its ability to improve feature extraction. Speech recognition includes three important stages: preprocessing, feature extraction, and classification. Recognition accuracy is strongly affected by the feature extraction stage; therefore, different models of the mixed transform were proposed for feature extraction. The recorded isolated word is a 1-D signal, so each 1-D word is first converted into a 2-D form. The second step of the word recognizer requires the …
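A minimal sketch of the 1-D to 2-D conversion step described above, with a 2-D DCT standing in for the paper's mixed transform; the frame width and the number of retained coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn

def word_to_matrix(signal: np.ndarray, width: int = 64) -> np.ndarray:
    """Zero-pad a 1-D word signal and reshape it row-wise into a 2-D form."""
    rows = int(np.ceil(len(signal) / width))
    padded = np.zeros(rows * width)
    padded[:len(signal)] = signal
    return padded.reshape(rows, width)

def features(signal: np.ndarray, keep: int = 16) -> np.ndarray:
    """Keep the low-frequency 2-D transform coefficients as a compact feature vector."""
    coeffs = dctn(word_to_matrix(signal), norm="ortho")
    return coeffs[:keep, :keep].ravel()
```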

Publication Date
Fri Apr 15 2016
Journal Name
International Journal Of Computer Applications
Hybrid Techniques based Speech Recognition

Speech recognition is an important application of information processing. In this paper, two hybrid techniques are presented. The first is a 3-level hybrid of the Stationary Wavelet Transform (S) and the Discrete Wavelet Transform (W), and the second is a 3-level hybrid of the Discrete Wavelet Transform (W) and Multi-wavelet Transforms (M). To choose the best 3-level hybrid in each technique, a comparison according to five factors was carried out, and the best results were WWS, WWW, and MWM. Speech recognition is performed on WWS, WWW, and MWM using the Euclidean distance (Ecl) and Dynamic Time Warping (DTW). The matching performance is 98% using DTW with MWM, while WWS and WWW reach 74% and 78%, respectively, but when using …
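A hedged sketch of the matching step: a plain 3-level DWT provides the features and a textbook dynamic time warping distance does the matching. The stationary- and multi-wavelet hybrids (WWS, MWM) reported to perform best are not reproduced here; a pure DWT cascade stands in for illustration.

```python
import numpy as np
import pywt

def dwt3_features(signal: np.ndarray, wavelet: str = "db4") -> np.ndarray:
    """Approximation coefficients after a 3-level discrete wavelet transform."""
    return pywt.wavedec(signal, wavelet, level=3)[0]

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return float(d[n, m])

def recognize(signal, templates):
    # `templates` maps each vocabulary word to the features of a reference recording;
    # the template with the smallest DTW distance to the query wins.
    query = dwt3_features(np.asarray(signal, dtype=float))
    return min(templates, key=lambda name: dtw_distance(query, templates[name]))
```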

Publication Date
Sat Jun 01 2013
Journal Name
مجلة كلية بغداد للعلوم الاقتصادية الجامعة (Journal of Baghdad College of Economic Sciences University)
Proposed Family Speech Recognition

Speech recognition is a very important field that can be used in many applications, such as controlling access to protected areas, banking over the telephone network, database access services, voice email, investigations, house control and management, etc. Speech recognition systems can be used in two modes: to identify a particular person or to verify a person's claimed identity. Family speaker recognition is a modern subfield of speaker recognition; many family members have similar voice characteristics, and it is hard to distinguish between them. Today, the scope of speech recognition is limited to speech collected from cooperative users in real-world office environments, without adverse microphone or channel impairments.
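A small sketch distinguishing the two modes named in the abstract, assuming each speaker is represented by a fixed-length voice embedding (how that embedding is obtained is outside this sketch); the cosine score and the 0.75 threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(embedding, enrolled: dict) -> str:
    """Identification: which enrolled family member is most likely speaking?"""
    return max(enrolled, key=lambda name: cosine(embedding, enrolled[name]))

def verify(embedding, claimed_embedding, threshold: float = 0.75) -> bool:
    """Verification: is the speaker who they claim to be?"""
    return cosine(embedding, claimed_embedding) >= threshold
```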