Audio-visual detection and recognition systems are considered among the most promising methods for many applications, including surveillance, speech recognition, eavesdropping devices, and intelligence operations. In the field of human recognition, most current research focuses either on the re-identification of body images taken by several cameras or on audio-only recognition. However, these traditional methods are not always useful on their own, for example in indoor surveillance systems whose cameras are installed close to the ceiling and capture images from above in a downward direction, where people do not look straight at the cameras, or in areas where cameras cannot be installed at all, such as toilets or bedrooms. In such settings it is often difficult to detect movement or an intrusion from a single modality; conversely, when a suspect entering a building or a gathering must be tracked, it may be necessary to determine their location and/or isolate their speech from other voices and background noise. Hence, a hybrid combination technique is very effective. In this work, we propose a multimodal human recognition approach that uses both the face and audio and is based on deep convolutional neural networks (CNNs). To address the case where part of the body is not captured, the outputs of two separate CNNs, VGG-Face16 and ResNet50, are combined at the score level using the weighted-sum rule to improve recognition performance. The results show that the proposed system succeeds in recognising each person from their voice and/or captured face. In addition, the system can separate a person's voice from a noisy environment and determine whether the desired person is present.
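As an illustration of the score-level combination named above, the following minimal Python sketch applies the weighted-sum rule to per-identity scores from the two CNN branches; the weight values, normalisation step and example scores are assumptions for illustration and are not taken from the paper.

import numpy as np

def _minmax(s):
    """Min-max normalise a score vector to [0, 1] so the two modalities are comparable."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def weighted_sum_fusion(face_scores, voice_scores, w_face=0.6, w_voice=0.4):
    """Fuse per-identity scores from the face CNN and the voice CNN with the
    weighted-sum rule; the weights here are assumed, not taken from the paper."""
    fused = w_face * _minmax(face_scores) + w_voice * _minmax(voice_scores)
    return int(np.argmax(fused)), fused

# Hypothetical scores for three enrolled identities.
face = [0.82, 0.40, 0.15]    # e.g. softmax outputs of the VGG-Face16 branch
voice = [0.55, 0.70, 0.20]   # e.g. softmax outputs of the ResNet50 voice branch
best_id, fused = weighted_sum_fusion(face, voice)
print("recognised identity:", best_id, "fused scores:", fused)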
Mobile-based human emotion recognition is a very challenging subject. Most approaches proposed in this field exploit various contexts derived from external sensors and the smartphone, but they suffer from different obstacles and challenges. The proposed system integrates the human speech signal and the heart rate in one system to improve the accuracy of human emotion recognition. It is designed to recognize four human emotions: angry, happy, sad and normal. In this system, the smartphone is used to record the user's speech and send it to a server. A smartwatch, fixed on the user's wrist, is used to measure the heart rate while the user is speaking and send it, via Bluetooth, …
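The abstract above is cut off before the server-side fusion is described, so the sketch below only illustrates, under stated assumptions, one possible feature-level combination of simple speech statistics with the smartwatch heart-rate reading, followed by an off-the-shelf classifier; the feature set, classifier choice and placeholder data are not from the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["angry", "happy", "sad", "normal"]  # the four classes named in the abstract

def make_feature_vector(speech_samples, sample_rate, heart_rate_bpm):
    """Illustrative feature-level fusion: a few crude speech statistics
    concatenated with the heart-rate reading (assumed, not the paper's feature set)."""
    x = np.asarray(speech_samples, dtype=float)
    energy = float(np.mean(x ** 2))
    zero_crossings = float(np.mean(np.abs(np.diff(np.sign(x)))) / 2)
    return np.array([energy, zero_crossings, heart_rate_bpm / 200.0])

# Train on hypothetical labelled recordings (random placeholders for real data).
rng = np.random.default_rng(0)
X_train = rng.random((40, 3))
y_train = rng.integers(0, len(EMOTIONS), 40)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

features = make_feature_vector(rng.standard_normal(16000), 16000, heart_rate_bpm=95)
print("predicted emotion:", EMOTIONS[clf.predict([features])[0]])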
Face recognition and emotion recognition are important foundations of human-machine interaction. Different algorithms have been developed and tested to recognize a person's emotion and face. In this paper, an enhanced face and emotion recognition algorithm is implemented based on deep-learning neural networks. A universal database and personal images were used to test the proposed algorithm, which was implemented in the Python programming language.
The control of prostheses and its complexity is one of the greatest challenges limiting wide use of upper-limb prostheses by amputees. The main challenges include the difficulty of extracting signals to control the prosthesis, the limited number of degrees of freedom (DoF), and the prohibitive cost of complex control systems. In this study, a real-time hybrid control system based on electromyography (EMG) and voice commands (VC) is designed to render the prosthesis more dexterous, with the ability to accomplish the amputee's daily activities proficiently. The voice and EMG systems were combined in three proposed hybrid strategies, each with a different number of movements depending on the combination protocol between voice …
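As a rough, hypothetical illustration of a decision-level combination of the two inputs (the truncated abstract does not specify the actual protocols), the sketch below lets a recognised voice command select the grip while the EMG amplitude acts as the trigger; the command set, grip map and threshold are invented for the example.

import numpy as np

# Hypothetical mapping from recognised voice commands to prosthesis grips.
VOICE_TO_GRIP = {"open": "hand_open", "close": "power_grip",
                 "pinch": "pinch_grip", "point": "index_point"}

def emg_envelope(emg_window):
    """Mean absolute value of one EMG window - a common amplitude estimate."""
    return float(np.mean(np.abs(emg_window)))

def hybrid_command(emg_window, voice_command, emg_threshold=0.15):
    """Illustrative decision-level combination: the voice command chooses the
    grip, and the EMG envelope acts as the trigger that actually executes it."""
    grip = VOICE_TO_GRIP.get(voice_command)
    if grip is None:
        return "no_action"              # unrecognised command
    if emg_envelope(emg_window) < emg_threshold:
        return "armed:" + grip          # grip selected, waiting for muscle activation
    return "execute:" + grip            # muscle contraction confirms the movement

rng = np.random.default_rng(1)
print(hybrid_command(0.3 * rng.standard_normal(200), "pinch"))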
In this paper we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme in order to recognize Ekman's six basic emotions (anger, disgust, fear, happiness, sadness, surprise). Experimental results on three datasets show that the new method is very effective compared with traditional word-based text classification methods. We also found that our method works best if the sizes of the training texts in all classes are similar, and that performance improves significantly with more data.
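A minimal sketch of the compression-based classification idea follows, using zlib as a stand-in compressor because a PPM implementation is not in the Python standard library (the paper uses PPM itself): a test text is assigned to the emotion whose training text yields the smallest increase in compressed size.

import zlib

def compressed_size(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8"), 9))  # maximum compression level

def classify_by_compression(test_text: str, class_texts: dict) -> str:
    """Assign test_text to the class whose training text 'explains' it best, i.e.
    adds the fewest compressed bytes (a cross-entropy approximation; zlib stands
    in here for the PPM model used in the paper)."""
    def extra_bytes(corpus: str) -> int:
        return compressed_size(corpus + " " + test_text) - compressed_size(corpus)
    return min(class_texts, key=lambda label: extra_bytes(class_texts[label]))

# Tiny illustrative training texts per emotion (placeholders for real corpora).
corpora = {
    "Happiness": "what a wonderful day I am so happy and delighted about everything",
    "Anger":     "this is infuriating I am furious and angry about all of it",
    "Sadness":   "I feel so sad and miserable and everything seems gloomy",
}
print(classify_by_compression("I am delighted and so happy today", corpora))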
Speech recognition is an important application of information processing. In this paper, two hybrid techniques are presented. The first is a 3-level hybrid of the Stationary Wavelet Transform (S) and the Discrete Wavelet Transform (W); the second is a 3-level hybrid of the Discrete Wavelet Transform (W) and Multi-wavelet Transforms (M). To choose the best 3-level hybrid in each technique, a comparison according to five factors was carried out, and the best results were WWS, WWW and MWM. Speech recognition is then performed on WWS, WWW and MWM using the Euclidean distance (Ecl) and Dynamic Time Warping (DTW). The matching performance is 98% using DTW on MWM, while on WWS and WWW it is 74% and 78% respectively, but when using …
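The wavelet-hybrid feature extraction itself is not reproduced here; the sketch below only illustrates the DTW template-matching step on toy one-dimensional feature sequences that stand in for the transform coefficients.

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D feature sequences,
    using the absolute difference as the local cost (illustrative implementation)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognise(test_features, templates):
    """Pick the reference template (e.g. one per word) with the smallest DTW
    distance to the test utterance's feature sequence."""
    return min(templates, key=lambda label: dtw_distance(test_features, templates[label]))

templates = {"yes": [0.1, 0.8, 0.9, 0.3], "no": [0.9, 0.2, 0.1, 0.7]}  # toy sequences
print(recognise([0.2, 0.7, 0.95, 0.25], templates))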
Emotion recognition has important applications in human-computer interaction. Various sources, such as facial expressions and speech, have been considered for interpreting human emotions. The aim of this paper is to develop an emotion recognition system from facial expressions and speech using a hybrid of machine-learning algorithms, in order to enhance the overall performance of human-computer communication. For facial emotion recognition, a deep convolutional neural network is used for feature extraction and classification, whereas for speech emotion recognition the zero-crossing rate, mean, standard deviation and mel-frequency cepstral coefficient (MFCC) features are extracted. The extracted features are then fed to a random forest classifier. In …
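A minimal sketch of the speech branch described above follows, computing the zero-crossing rate, mean, standard deviation and MFCC features and feeding them to a random forest; librosa is an assumed library choice, and the training data here are random placeholders.

import numpy as np
import librosa  # assumed toolchain; the paper does not name its libraries
from sklearn.ensemble import RandomForestClassifier

def speech_features(y, sr):
    """Build the feature vector named in the abstract: zero-crossing rate,
    mean, standard deviation and MFCCs of the speech signal."""
    zcr = float(np.mean(librosa.feature.zero_crossing_rate(y)))
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13), axis=1)
    return np.concatenate([[zcr, float(np.mean(y)), float(np.std(y))], mfcc])

# Hypothetical training data: each row is the feature vector of one labelled clip.
rng = np.random.default_rng(0)
X_train = rng.random((60, 16))
y_train = rng.integers(0, 4, 60)              # e.g. four emotion classes
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

test_clip = 0.1 * rng.standard_normal(16000)  # 1 s of placeholder audio at 16 kHz
print("predicted class:", clf.predict([speech_features(test_clip, 16000)])[0])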
The area of character recognition has received considerable attention from researchers all over the world during the last three decades. This research explores the best sets of feature-extraction techniques and studies the accuracy of well-known classifiers for Arabic numerals using statistical approaches, in two methods, and presents a comparative study between them. The first method, a linear discriminant function, yields results with an accuracy as high as 90% of the original grouped cases correctly classified. In the second method, a new algorithm is proposed; the results show its efficiency, achieving recognition accuracies of 92.9% and 91.4%, which outperforms the first method.
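As a small illustration of the first (linear discriminant) method, the sketch below trains scikit-learn's linear discriminant classifier on a built-in digits dataset, which merely stands in for the paper's own Arabic-numeral data and feature-extraction step.

from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# load_digits is only a stand-in corpus of handwritten digits; the paper uses its
# own Arabic numeral set and extracted features rather than raw pixels.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis()          # linear discriminant classifier
lda.fit(X_train, y_train)
print("accuracy on the stand-in digits set:", round(lda.score(X_test, y_test), 3))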
Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. Optical character recognition (OCR) is one of the image processing techniques used to identify text automatically. Existing image processing techniques need to manage many parameters in order to recognize the text in such pictures clearly, and segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images were first filtered using the Wiener filter, then the active contour algorithm could be …
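A minimal sketch of the preprocessing pipeline named above follows, applying a Wiener filter and then an active-contour segmentation; the sample image, contour initialisation and parameter values are illustrative assumptions rather than the paper's settings.

import numpy as np
from scipy.signal import wiener
from skimage import data
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Stand-in input: a built-in sample photograph; the paper works on camera-captured
# compound document images instead.
image = data.camera().astype(float) / 255.0

# Step 1 (per the abstract): Wiener filtering to suppress noise.
denoised = wiener(image, mysize=5)

# Step 2: active-contour segmentation starting from an initial circle (illustrative
# initialisation; real text regions would need a task-specific initial contour).
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([256 + 220 * np.sin(theta), 256 + 220 * np.cos(theta)])
snake = active_contour(gaussian(denoised, sigma=3), init, alpha=0.015, beta=10)

print("contour points:", snake.shape)  # segmented boundary to pass on to OCR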