Mobile-based Human Emotion Recognition based on Speech and Heart rate

Mobile-based human emotion recognition is a very challenging subject. Most of the approaches suggested and built in this field utilize various contexts derived from external sensors and the smartphone, but they suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. It is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone records the user's speech and sends it to a server. A smartwatch fixed on the user's wrist measures the user's heart rate while the user is speaking and sends it, via Bluetooth, to the smartphone, which in turn forwards it to the server. At the server side, speech features are extracted from the speech signal and classified by a neural network. To minimize misclassification, the heart-rate measurement is used to direct the extracted speech features to either the excited (angry and happy) neural network or the calm (sad and normal) neural network. In spite of the challenges associated with the system, it achieved an accuracy of 96.49% for known speakers and 79.05% for unknown speakers.
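A minimal sketch of the server-side routing step is given below. It assumes MFCC-style speech features, a 100 bpm heart-rate threshold, and two small scikit-learn MLPs as placeholders for the paper's excited and calm networks; none of these specifics are given in the abstract.

```python
# Minimal sketch of the routing step: the heart-rate reading decides whether the
# speech features go to the "excited" (angry/happy) or the "calm" (sad/normal)
# classifier. The 100 bpm threshold and the MLP settings are illustrative
# assumptions, not values taken from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

HR_THRESHOLD_BPM = 100  # assumed cut-off between excited and calm states

# Two small neural networks, one per arousal group (placeholder training data).
rng = np.random.default_rng(0)
X_dummy = rng.normal(size=(40, 13))          # e.g. 13 MFCC-like speech features
excited_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(
    X_dummy, rng.choice(["angry", "happy"], size=40))
calm_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(
    X_dummy, rng.choice(["sad", "normal"], size=40))

def classify_emotion(speech_features: np.ndarray, heart_rate_bpm: float) -> str:
    """Route the feature vector to the classifier matching the arousal level."""
    net = excited_net if heart_rate_bpm >= HR_THRESHOLD_BPM else calm_net
    return net.predict(speech_features.reshape(1, -1))[0]

print(classify_emotion(rng.normal(size=13), heart_rate_bpm=112))  # excited branch
```

Routing by arousal level means each network only has to separate two emotions, which is how the system aims to reduce misclassification.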

Publication Date: Fri May 04 2018
Journal Name: Wireless Personal Communications
IFRS: An Indexed Face Recognition System Based on Face Recognition and RFID Technologies

Publication Date: Thu Nov 01 2018
Journal Name: International Journal of Advanced Research in Computer Engineering & Technology
Facial Emotion Recognition: A Survey

Emotion can be expressed through unimodal social behaviours, bimodal cues, or multimodal combinations. This survey describes the background of facial emotion recognition and reviews emotion recognition using the visual modality. Some publicly available datasets used for performance evaluation are covered. A summary of research efforts to classify emotion using the visual modality over the five years from 2013 to 2018 is given in tabular form.

Publication Date: Sun Jun 20 2021
Journal Name: Baghdad Science Journal
Arabic Speech Classification Method Based on Padding and Deep Learning Neural Network

Deep learning convolutional neural networks have been widely used to recognize or classify voice. Various techniques have been used together with convolutional neural networks to prepare voice data before the training process when developing a classification model. However, not every model can produce good classification accuracy, as there are many types of voice or speech. Classification of Arabic alphabet pronunciation is one such type, and accurate pronunciation is required when learning to read the Qur’an. Thus, processing the pronunciation and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed.
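A sketch of the padding idea follows, under the assumption that the voice data are MFCC matrices of varying length; the frame count (128) and feature size (20) are illustrative, not the paper's settings.

```python
# Sketch of the padding step: variable-length MFCC matrices are zero-padded
# (or truncated) to a fixed number of frames so a convolutional network can
# train on uniformly shaped inputs. Dimensions below are assumptions.
import numpy as np

TARGET_FRAMES = 128   # assumed fixed time dimension
N_MFCC = 20           # assumed number of MFCC coefficients per frame

def pad_features(mfcc: np.ndarray) -> np.ndarray:
    """Pad or truncate an (n_frames, N_MFCC) matrix to (TARGET_FRAMES, N_MFCC)."""
    n_frames = mfcc.shape[0]
    if n_frames >= TARGET_FRAMES:
        return mfcc[:TARGET_FRAMES]
    pad = np.zeros((TARGET_FRAMES - n_frames, N_MFCC), dtype=mfcc.dtype)
    return np.vstack([mfcc, pad])

# Two recordings of different lengths end up with identical shapes for the CNN.
short_clip = np.random.rand(70, N_MFCC)
long_clip = np.random.rand(200, N_MFCC)
batch = np.stack([pad_features(short_clip), pad_features(long_clip)])
print(batch.shape)  # (2, 128, 20) -> ready for a Conv2D input after adding a channel axis
```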

Publication Date: Sun Jan 10 2016
Journal Name: British Journal of Applied Science & Technology
The Effect of Classification Methods on Facial Emotion Recognition Accuracy

Interest in developing accurate automatic facial emotion recognition methodologies is growing rapidly, and it remains an active research field in computer vision, artificial intelligence, and automation. However, building an automated system that equals the human ability to recognize facial emotion is challenging because of the lack of an effective facial feature descriptor and the difficulty of choosing a proper classification method. In this paper, a geometry-based feature vector is proposed. For classification, three different types of methods are tested: statistical, artificial neural network (NN), and Support Vector Machine (SVM). A modified K-Means clustering algorithm …
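The sketch below illustrates the classifier-comparison stage only: the same feature vectors are given to an NN and an SVM and their accuracies are compared. The data here is synthetic and merely stands in for the geometric descriptors the paper extracts.

```python
# Comparing two of the tested classifier families on identical feature vectors.
# The 24 "geometric features" and 7 emotion labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 24))        # assumed geometric feature vectors
y = rng.integers(0, 7, size=300)      # 7 basic emotion labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, clf in [("NN", MLPClassifier(hidden_layer_sizes=(64,), max_iter=800)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```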

Publication Date: Tue Jan 01 2019
Journal Name: IEEE Access
Speech Enhancement Algorithm Based on Super-Gaussian Modeling and Orthogonal Polynomials

Publication Date: Tue Nov 21 2017
Journal Name: Lecture Notes in Computer Science
Emotion Recognition in Text Using PPM

In this paper we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme, in order to recognize Ekman’s six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method is very effective when compared with traditional word-based text classification methods. We also found that the method works best if the sizes of the texts in all classes used for training are similar, and that performance improves significantly with increased data.
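A compact illustration of compression-based classification in the spirit of this approach is sketched below: a document is assigned to the class whose training text compresses it best. Note that zlib is used as a stand-in for a true PPM coder, and the tiny class corpora are invented for illustration.

```python
# Assign a document to the emotion class whose training text yields the
# smallest *extra* compressed size when the document is appended to it.
# zlib replaces PPM here purely for illustration.
import zlib

training_text = {
    "Happiness": "what a wonderful joyful day I love this so much",
    "Anger": "this is outrageous I am furious and fed up with everything",
    "Sadness": "I feel empty and heartbroken everything seems hopeless",
}

def extra_bytes(corpus: str, doc: str) -> int:
    """Extra compressed bytes needed to encode doc after the class corpus."""
    base = len(zlib.compress(corpus.encode()))
    joined = len(zlib.compress((corpus + " " + doc).encode()))
    return joined - base

def classify(doc: str) -> str:
    return min(training_text, key=lambda label: extra_bytes(training_text[label], doc))

print(classify("I am so furious right now"))  # expected to lean toward Anger
```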

Publication Date: Sun Jan 01 2017
Journal Name: IEEE Access
Low-Distortion MMSE Speech Enhancement Estimator Based on Laplacian Prior

Publication Date: Wed Nov 25 2015
Journal Name: Research Journal of Applied Sciences, Engineering and Technology
Subject Independent Facial Emotion Classification Using Geometric Based Features

Accurate emotion categorization is an important and challenging task in the computer vision and image processing fields. A facial emotion recognition system comprises three important stages: pre-processing and face area allocation, feature extraction, and classification. In this study, a new system is proposed based on a set of geometric features (distances and angles) derived from the basic facial components, such as the eyes, eyebrows, and mouth, using analytical geometry calculations. For the classification stage, a feed-forward neural network classifier is used. For evaluation, the standard "JAFFE" database has been used as test material; it holds face samples for seven basic emotions. The results of the conducted tests indicate that the use of the suggested distances and angles …
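A sketch of the geometric feature construction follows; the landmark coordinates and the particular distance/angle choices are made-up placeholders, not the paper's feature set.

```python
# Build a small feature vector of distances and angles between facial landmark
# points (eyes, mouth corners). Coordinates are illustrative placeholders.
import numpy as np

landmarks = {
    "left_eye": np.array([30.0, 40.0]),
    "right_eye": np.array([70.0, 40.0]),
    "mouth_left": np.array([40.0, 80.0]),
    "mouth_right": np.array([60.0, 80.0]),
}

def distance(p, q):
    return float(np.linalg.norm(p - q))

def angle(p, vertex, q):
    """Angle at `vertex` (in degrees) formed by points p and q."""
    v1, v2 = p - vertex, q - vertex
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

feature_vector = np.array([
    distance(landmarks["left_eye"], landmarks["right_eye"]),
    distance(landmarks["mouth_left"], landmarks["mouth_right"]),
    angle(landmarks["left_eye"], landmarks["mouth_left"], landmarks["mouth_right"]),
])
print(feature_vector)  # such a vector feeds the feed-forward neural network classifier
```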

Publication Date: Tue Feb 01 2022
Journal Name: Int. J. Nonlinear Anal. Appl.
Finger Vein Recognition Based on PCA and Fusion Convolutional Neural Network

Finger vein recognition and user identification is a relatively recent biometric recognition technology with a broad variety of applications, and biometric authentication is extensively employed in the information age. As one of the most essential authentication technologies available today, finger vein recognition attracts attention owing to its high level of security, dependability, and track record of performance. Embedded convolutional neural networks are based on early or intermediate fusion of the input; in early fusion, pictures are categorized according to their location in the input space. In this study, we employ a highly optimized network and late fusion rather than early fusion to create a fusion convolutional neural network …
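A minimal PyTorch sketch of the late-fusion idea is given below: two inputs pass through separate CNN branches and their feature vectors are concatenated only before the classification layer. Layer sizes, input shapes, and the two-branch setup are assumptions; the PCA step mentioned above is omitted.

```python
# Late fusion: each input gets its own convolutional branch; features are
# concatenated at the end, just before classification. All sizes are assumed.
import torch
import torch.nn as nn

class Branch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)   # (batch, 32)

class LateFusionNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.branch_a, self.branch_b = Branch(), Branch()
        self.classifier = nn.Linear(64, n_classes)   # 32 + 32 fused features

    def forward(self, img_a, img_b):
        fused = torch.cat([self.branch_a(img_a), self.branch_b(img_b)], dim=1)
        return self.classifier(fused)

model = LateFusionNet()
a = torch.randn(4, 1, 64, 64)   # dummy grayscale finger-vein crops
b = torch.randn(4, 1, 64, 64)
print(model(a, b).shape)        # torch.Size([4, 10])
```

Keeping the branches separate until the final layer is what distinguishes late fusion from the early-fusion embedding described in the abstract.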

Publication Date: Fri Jan 20 2023
Journal Name: Ibn Al-Haitham Journal for Pure and Applied Sciences
Automatic Detection and Recognition of Car Plates Based on Cascade Classifier

The study consists of video clips of all cars parked in the selected area. The studied camera height is 1.5 m, and there are 18 video clips. Images are extracted from the video clips to be used as training data for the cascade method. After the training step, cascade classification is used to detect license plates. The Viola-Jones algorithm was applied to the output of the cascade data for the 1.5 m camera height. The accuracy was calculated for all data, with different weather conditions and local recording times, in two ways. The first used detection of the car plate based on the video clip, and the accuracy was 100%. The second used the clipped images stored in the positive file, based on the training file (XML file), where the accuracy …
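A sketch of the detection step with a trained cascade is shown below; the XML file name, the image path, and the detection parameters are placeholders, not values from the study.

```python
# Load a trained cascade (XML training file) and apply it to one frame to find
# candidate plate regions. Paths and parameters are hypothetical placeholders.
import cv2

cascade = cv2.CascadeClassifier("cascade_plates.xml")   # hypothetical trained model
frame = cv2.imread("parking_frame.jpg")                 # hypothetical test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

plates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                  minSize=(60, 20))     # plates are wide and short
for (x, y, w, h) in plates:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected_plates.jpg", frame)
print(f"{len(plates)} candidate plate(s) found")
```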
