A new proposed statistical feature extraction method in speech emotion recognition
Speech Gender Recognition Using a Multilayer Feature Extraction Method
Publication Date: Jan 01 2022
Journal Name: Proceedings of International Conference on Computing and Communication Networks
Scopus (1), Crossref (1)
Speech Emotion Recognition Using Minimum Extracted Features
Publication Date: Nov 01 2018
Journal Name: 2018 1st Annual International Conference on Information and Sciences (AICIS)

Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. Three experiments were performed to find the approach that gives the best accuracy. The first extracts only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracts only the first 12 Mel-frequency cepstral coefficient (MFCC) features; and the last applies feature fusion between the two feature sets. In all experiments, the features are classified using five classification techniques: Random Forest (RF), …
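The first experiment's three statistical features can be sketched in a few lines. This is only a minimal illustration of ZCR, mean, and SD on a toy waveform; the framing, sample rate, and test tone are invented here and are not the paper's implementation:

```python
import numpy as np

def extract_basic_features(signal):
    """The three statistical features named in the abstract:
    zero crossing rate (ZCR), mean, and standard deviation (SD)."""
    signal = np.asarray(signal, dtype=float)
    # ZCR: fraction of adjacent sample pairs whose signs differ
    signs = np.signbit(signal).astype(int)
    zcr = np.mean(np.abs(np.diff(signs)))
    return zcr, signal.mean(), signal.std()

# Toy waveform: a 5 Hz sine sampled at 8 kHz for one second
t = np.linspace(0, 1, 8000, endpoint=False)
wave = np.sin(2 * np.pi * 5 * t)
features = extract_basic_features(wave)
```

On real emotional speech these three numbers would be computed per utterance (or per frame) and fed to the classifiers listed above.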

Scopus (2), Crossref (1)
Proposed family speech recognition
Publication Date: Jun 01 2013
Journal Name: مجلة كلية بغداد للعلوم الاقتصادية الجامعة (Journal of Baghdad College of Economic Sciences)

Speech recognition is an important field with many applications, such as access control for protected areas, banking, transactions over telephone networks, database access services, voice email, investigations, and house control and management. Speaker recognition systems can be used in two modes: to identify a particular person or to verify a person's claimed identity. Family speaker recognition is a modern branch of speaker recognition: many family members have similar voice characteristics, making it hard to distinguish between them. Today, the scope of speech recognition is limited to speech collected from cooperative users in real-world office environments, without adverse microphone or channel impairments.
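The two modes mentioned above, identification and verification, can be illustrated with a small sketch. The feature vectors, speaker names, and threshold below are invented for illustration and are not from the paper:

```python
import numpy as np

# Enrolled templates, one feature vector per family member
# (speaker names and values are illustrative, not from the paper)
templates = {
    "speaker_a": np.array([1.0, 0.0, 0.5]),
    "speaker_b": np.array([0.9, 0.1, 0.6]),
}

def identify(features):
    """Identification mode: return the enrolled speaker whose
    template is closest (Euclidean distance) to the sample."""
    return min(templates, key=lambda s: np.linalg.norm(templates[s] - features))

def verify(features, claimed, threshold=0.1):
    """Verification mode: accept the claimed identity only if the
    distance to that speaker's template is below a preset threshold."""
    return float(np.linalg.norm(templates[claimed] - features)) < threshold

sample = np.array([0.98, 0.02, 0.52])
who = identify(sample)                  # closest template wins
accepted = verify(sample, "speaker_a")  # distance ~0.035, below threshold
```

Identification always returns some enrolled speaker; verification can reject an impostor, which is why it needs the threshold.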

A New Arabic Dataset for Emotion Recognition
Publication Date: Jul 17 2019
Journal Name: Advances in Intelligent Systems and Computing

In this study, we have created a new Arabic dataset annotated according to Ekman's six basic emotions (anger, disgust, fear, happiness, sadness, and surprise). The dataset is composed of Facebook posts written in the Iraqi dialect. We evaluated the quality of the dataset using four external judges, which resulted in an average inter-annotator agreement of 0.751. We then explored six supervised machine learning methods on the new dataset: the standard Weka classifiers ZeroR, J48, Naïve Bayes, Multinomial Naïve Bayes for Text, and SMO, plus a compression-based classifier called PPM that is not included in Weka. Our study reveals that the PPM classifier significantly outperforms the other classifiers, such as SVM and N…
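The PPM classifier mentioned here works by compression: a document is assigned to the class whose model compresses it best. As a rough stand-in sketch (PPM itself is not in the Python standard library, so zlib's deflate is used purely to illustrate the idea, and the two tiny corpora are invented, not the paper's Iraqi-dialect data):

```python
import zlib

def compressed_size(text):
    return len(zlib.compress(text.encode("utf-8"), 9))

def classify(document, corpora):
    """Assign the document to the class whose training text it
    compresses best with: the smallest increase in compressed size
    when the document is appended to that class's corpus."""
    def extra_bytes(label):
        corpus = corpora[label]
        return compressed_size(corpus + " " + document) - compressed_size(corpus)
    return min(corpora, key=extra_bytes)

# Tiny invented corpora (NOT the paper's Iraqi-dialect dataset)
corpora = {
    "happy": "joy joy delight smile laugh wonderful happy great joy",
    "angry": "rage fury shout angry hate furious rage mad anger",
}
label = classify("what a wonderful happy smile", corpora)
```

The appeal of this family of classifiers is that it needs no feature engineering or tokenization, which is convenient for dialectal text.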

Scopus (15), Crossref (7)
Speech Emotion Recognition Using MELBP Variants of Spectrogram Image
Publication Date: Oct 31 2020
Journal Name: International Journal of Intelligent Engineering and Systems

Scopus (5), Crossref (1)
Mobile-based Human Emotion Recognition Based on Speech and Heart Rate
Publication Date: Oct 29 2019
Journal Name: Journal of Engineering

Mobile-based human emotion recognition is a very challenging subject. Most of the approaches suggested and built in this field use various contexts derived from external sensors and the smartphone, but they suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. It is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone records the user's speech and sends it to a server. A smartwatch worn on the user's wrist measures the user's heart rate while the user is speaking and sends it, via Bluetooth, …
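A late-fusion decision of the kind described, combining the speech classifier's output with the heart-rate cue, could be sketched as follows. The 0.7 weight, the 90 bpm arousal threshold, and all probabilities are invented for illustration; this is not the paper's actual fusion rule:

```python
EMOTIONS = ["angry", "happy", "sad", "normal"]

def fuse(speech_probs, heart_rate_bpm, weight=0.7):
    """Blend per-emotion speech scores with a crude heart-rate prior."""
    high_arousal = heart_rate_bpm > 90
    # Fast pulse favours high-arousal emotions (angry/happy),
    # slow pulse favours low-arousal ones (sad/normal)
    hr_probs = {
        "angry": 0.35 if high_arousal else 0.15,
        "happy": 0.35 if high_arousal else 0.15,
        "sad": 0.15 if high_arousal else 0.35,
        "normal": 0.15 if high_arousal else 0.35,
    }
    fused = {e: weight * speech_probs[e] + (1 - weight) * hr_probs[e]
             for e in EMOTIONS}
    return max(fused, key=fused.get)

# Speech model is torn between happy and sad; a fast pulse tips it to happy
speech = {"angry": 0.1, "happy": 0.4, "sad": 0.4, "normal": 0.1}
decision = fuse(speech, heart_rate_bpm=110)
```

The point of fusing modalities is exactly this tie-breaking: heart rate separates emotions that sound alike but differ in physiological arousal.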

A Proposed Method for the Sound Recognition Process
Publication Date: Aug 01 2018
Journal Name: Engineering and Technology Journal
Facial Emotion Recognition: A Survey
Publication Date: Nov 01 2018
Journal Name: International Journal of Advanced Research in Computer Engineering & Technology

Emotion can be expressed through unimodal, bimodal, or multimodal social behaviours. This survey describes the background of facial emotion recognition and reviews emotion recognition using the visual modality. Some publicly available datasets are covered for performance evaluation. A summary of research efforts to classify emotion using the visual modality over the five years from 2013 to 2018 is given in tabular form.

A Framework of Temporal-Spatial Descriptors-Based Feature Extraction for Improved Myoelectric Pattern Recognition
Publication Date: Oct 01 2017
Journal Name: IEEE Transactions on Neural Systems and Rehabilitation Engineering

Scopus (107), Crossref (108)
Proposed Speech Analyses Method Using the Multiwavelet Transform
Publication Date: Apr 23 2017
Journal Name: Ibn Al-Haitham Journal for Pure and Applied Sciences

Speech is the first means of communication that humans used, ages before the invention of writing. In this paper, a method is proposed for speech analysis that extracts features using the multiwavelet transform (repeated-row preprocessing). The proposed system depends on the Euclidean differences of the multiwavelet transform coefficients to determine the best features for speech recognition. Each sample value in the reference file is computed by taking the average of four samples of the same data (four speakers for the same phoneme). The input data is compared against every frame value in the reference file using the Euclidean distance, and the frame with the minimum distance is taken to be the "best match". Simulatio…
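The matching step described in the abstract, averaging four speakers per phoneme and picking the minimum-Euclidean-distance frame as the "best match", can be sketched as follows. The two-dimensional coefficient vectors are made-up toy values, not real multiwavelet coefficients:

```python
import numpy as np

def build_reference(samples):
    """Each reference value is the average of four samples of the
    same data (four speakers uttering the same phoneme)."""
    return np.mean(np.asarray(samples, dtype=float), axis=0)

def best_match(frame, reference_frames):
    """Return the index of the reference frame at minimum Euclidean
    distance from the input frame (the 'best match')."""
    distances = [np.linalg.norm(frame - ref) for ref in reference_frames]
    return int(np.argmin(distances))

# Made-up 2-D coefficient vectors from four speakers for two phonemes
phoneme_a = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [1.0, 2.0]]
phoneme_b = [[5.0, 0.0], [4.8, 0.2], [5.2, -0.2], [5.0, 0.0]]
reference = [build_reference(phoneme_a), build_reference(phoneme_b)]

idx = best_match(np.array([1.1, 1.9]), reference)  # nearest to phoneme A
```

Averaging across speakers smooths speaker-specific variation, so the nearest-reference search behaves like a tiny template-matching recognizer.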
