Mobile-based human emotion recognition is a very challenging subject. Most of the approaches proposed and built in this field use various contexts derived from external sensors and the smartphone, but these approaches suffer from different obstacles and challenges. The proposed system integrates the human speech signal and the heart rate in one system to improve the accuracy of human emotion recognition. It is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone records the user's speech and sends it to a server. A smartwatch fixed on the user's wrist measures the heart rate while the user is speaking and sends it, via Bluetooth, to the smartphone, which in turn forwards it to the server. At the server side, speech features are extracted from the speech signal and classified by a neural network. To minimize misclassification by the neural network, the heart rate measurement is used to direct the extracted speech features to either the excited (angry and happy) neural network or the calm (sad and normal) neural network. Despite the challenges associated with the system, it achieved 96.49% accuracy for known speakers and 79.05% for unknown speakers.
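The two-stage routing described above can be sketched as follows. This is an illustrative sketch only: the heart-rate threshold (90 bpm) and the stub classifiers are assumptions standing in for the paper's trained neural networks.

```python
# Hedged sketch: route speech features to an "excited" or "calm" classifier
# based on heart rate, mirroring the two-stage design described above.
# The threshold and the stub classifiers below are illustrative assumptions.

def classify_emotion(speech_features, heart_rate_bpm,
                     excited_clf, calm_clf, threshold=90):
    """Direct features to the excited (angry/happy) or calm (sad/normal) model."""
    if heart_rate_bpm >= threshold:
        return excited_clf(speech_features)   # distinguishes angry vs happy
    return calm_clf(speech_features)          # distinguishes sad vs normal

# Stub classifiers standing in for the trained neural networks (assumptions):
excited = lambda f: "angry" if f["energy"] > 0.5 else "happy"
calm = lambda f: "sad" if f["energy"] < 0.2 else "normal"

print(classify_emotion({"energy": 0.7}, 110, excited, calm))  # -> angry
print(classify_emotion({"energy": 0.3}, 70, excited, calm))   # -> normal
```

The design choice here is that the heart rate never classifies an emotion by itself; it only halves the label space before the speech classifier runs.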
The study consists of video clips of all cars parked in the selected area. The camera height studied is 1.5 m, and there are 18 video clips. Images are extracted from the video clips to be used as training data for the cascade method. After the training step, cascade classification is used to detect license plates. The Viola-Jones algorithm was applied to the output of the cascade data for the 1.5 m camera height. Accuracy was calculated for all data, across different weather conditions and local recording times, in two ways. The first detected the car plate from the video clip, where the accuracy was 100%. The second used the clipped images stored in the positive file, based on the training file (XML file), where the ac
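The key idea behind the cascade classification used above can be sketched without the trained XML model. The stages below are toy assumptions, not trained Haar features; they only show the early-rejection structure that makes Viola-Jones detection fast.

```python
# Illustrative sketch (not the study's trained XML cascade): a cascade
# classifier evaluates cheap stages first and rejects a window as soon as
# any stage fails, which is what makes Viola-Jones detection fast.

def cascade_detect(window, stages):
    """Return True only if the window passes every stage in order."""
    for stage in stages:
        if not stage(window):
            return False  # early rejection: most windows exit here
    return True

# Toy stages standing in for trained Haar-feature stages (assumptions):
stages = [
    lambda w: w["contrast"] > 0.3,      # cheap edge/contrast test
    lambda w: 2.0 < w["aspect"] < 5.0,  # licence plates are wide rectangles
    lambda w: w["char_like"] >= 4,      # enough character-like regions
]

print(cascade_detect({"contrast": 0.6, "aspect": 3.5, "char_like": 6}, stages))  # -> True
```

In the real detector the stages are learned by boosting over Haar features, but the control flow, cheap tests first with early exit, is the same.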
In this paper, an accurate Indian handwritten digit recognition system is proposed. The system uses three proposed methods for extracting the most effective features to represent the characteristics of each digit. The Discrete Wavelet Transform (DWT) at level one and the Fast Cosine Transform (FCT) are used for feature extraction from the thinned image. In addition, the system uses a standard database, the ADBase database, for evaluation. The extracted features were classified with a K-Nearest Neighbor (KNN) classifier based on the city-block distance function, and the experimental results show that the proposed system achieved a 98.2% recognition rate.
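The pipeline described above can be sketched end to end. This is a hedged illustration: a level-1 Haar DWT and a naive DCT-II stand in for the paper's transforms, and the toy "digit" signals are assumptions, not ADBase data.

```python
import numpy as np

# Hedged sketch of the pipeline described above: level-1 Haar DWT plus a
# cosine transform for feature extraction, then a 1-NN classifier using the
# city-block (L1) distance. Data and sizes are illustrative, not ADBase.

def haar_dwt_level1(x):
    """Level-1 Haar wavelet transform of a 1-D signal (even length)."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return np.concatenate([approx, detail])

def dct_ii(x):
    """Naive DCT-II, standing in for the Fast Cosine Transform (FCT)."""
    n = len(x)
    k = np.arange(n)
    return np.array([np.sum(x * np.cos(np.pi * (k + 0.5) * u / n))
                     for u in range(n)])

def extract_features(img_row):
    return np.concatenate([haar_dwt_level1(img_row), dct_ii(img_row)])

def knn_cityblock(train_feats, train_labels, query):
    """1-NN with city-block (Manhattan) distance."""
    dists = np.abs(train_feats - query).sum(axis=1)
    return train_labels[int(np.argmin(dists))]

# Toy "digit" signals (assumptions) to show the call pattern:
train = np.array([[0, 0, 1, 1, 1, 1, 0, 0], [1, 0, 1, 0, 1, 0, 1, 0]], float)
labels = np.array([0, 1])
feats = np.array([extract_features(r) for r in train])
query = extract_features(np.array([0, 0, 1, 1, 1, 0, 0, 0], float))
print(knn_cityblock(feats, labels, query))  # -> 0 (nearest training digit)
```

The city-block distance sums absolute coordinate differences, which is cheaper than the Euclidean distance and, as the abstract reports, works well for these transform-domain features.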
Audio-visual detection and recognition systems are considered among the most promising methods for many applications, including surveillance, speech recognition, eavesdropping devices, and intelligence operations. In the field of human recognition, the majority of research currently being performed focuses either on the re-identification of body images taken by several cameras or on audio-only recognition. However, in some cases these traditional methods are not useful when used alone, such as in indoor surveillance systems that are installed close to the ceiling and capture images from above in a downwards direction, where people do not look straight at the cameras, or it cannot be added in some
The Arabic language is the native tongue of more than 400 million people around the world; it is also a language that carries important religious and international weight. The Arabic language has taken its share of the huge technological explosion that has swept the world, and it therefore needs to be addressed with natural language processing applications and tasks.
This paper aims to survey and gather the most recent research related to Arabic Part of Speech (APoS) tagging, pointing to the tagger methods used for the Arabic language, which aim at constructing corpora for the Arabic tongue. Many AI investigators and researchers have performed POS tagging using various machine-learning methods, such as Hidden-Markov models.
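The Hidden-Markov taggers the survey covers decode the most probable tag sequence with the Viterbi algorithm, which can be sketched in a few lines. Everything below is a toy assumption: the tags, words, and probabilities are illustrative, not drawn from any Arabic corpus.

```python
# Illustrative sketch of the Viterbi decoding used by HMM-based POS taggers.
# Tags, words, and probabilities are toy assumptions, not corpus estimates.

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Return the most probable tag sequence for `words` under an HMM."""
    # v maps tag -> (best probability, best path) for the prefix seen so far
    v = {t: (start_p[t] * emit_p[t].get(words[0], 1e-6), [t]) for t in tags}
    for w in words[1:]:
        v = {t: max(((p * trans_p[prev][t] * emit_p[t].get(w, 1e-6), path + [t])
                     for prev, (p, path) in v.items()), key=lambda x: x[0])
             for t in tags}
    return max(v.values(), key=lambda x: x[0])[1]

tags = ["NOUN", "VERB"]
start_p = {"NOUN": 0.6, "VERB": 0.4}
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7}, "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"book": 0.5, "flight": 0.5}, "VERB": {"book": 0.4, "flights": 0.6}}

print(viterbi(["book", "flight"], tags, start_p, trans_p, emit_p))  # -> ['VERB', 'NOUN']
```

Note how the transition probabilities override the word-level preference: "book" alone is more likely a noun here, but the strong VERB-to-NOUN transition makes "book flight" decode as verb then noun.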
HM Al-Dabbas, RA Azeez, AE Ali, Iraqi Journal of Computers, Communications, Control and Systems Engineering, 2023
Face recognition is required in various applications, and major progress has been witnessed in this area. Many face recognition algorithms have been proposed thus far; however, achieving high recognition accuracy and low execution time remains a challenge. In this work, a new scheme for face recognition is presented using hybrid orthogonal polynomials to extract features. The embedded image kernel technique is used to decrease the complexity of feature extraction, and a support vector machine is then adopted to classify these features. Moreover, a fast overlapping-block processing algorithm for feature extraction is used to reduce the computation time. Extensive evaluation of the proposed method was carried out on two different face ima
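The overlapping-block processing pattern mentioned above can be sketched as follows. This is a hedged illustration only: the paper's orthogonal-polynomial kernels and SVM are replaced here by simple block means, purely to show how overlapping blocks are enumerated.

```python
import numpy as np

# Hedged sketch of overlapping-block feature extraction. The block mean
# below stands in for the paper's polynomial-moment computation; the image
# and block sizes are illustrative assumptions.

def overlapping_blocks(img, size, step):
    """Yield (top-left corner, block) for overlapping size x size blocks."""
    h, w = img.shape
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            yield (r, c), img[r:r + size, c:c + size]

def block_features(img, size=4, step=2):
    """One feature per block (a mean stands in for a polynomial moment)."""
    return np.array([b.mean() for _, b in overlapping_blocks(img, size, step)])

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "face" image
feats = block_features(img)
print(feats.shape)  # -> (9,): nine overlapping 4x4 blocks at step 2
```

Because step is smaller than the block size, adjacent blocks share pixels; the overlap is what preserves local detail at block boundaries, at the cost of more blocks to process, which is why a fast blocking scheme matters.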
This study proposes a biometric-based digital signature scheme for facial recognition. The scheme is designed and built to verify a person's identity during a registration process and retrieve their public and private keys stored in the database. The RSA algorithm is used as an asymmetric encryption method to encrypt hashes generated for digital documents, and the hash function SHA-256 is used to generate digital signatures. In this study, local binary patterns histograms (LBPH) were used for facial recognition. The facial recognition method was evaluated on ORL faces retrieved from the database of Cambridge University. From the analysis, the LBPH algorithm achieved 97.5% accuracy; real-time testing was done on thirty subj
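The sign-then-verify flow described above (SHA-256 digest signed with the RSA private key) can be sketched with textbook RSA. This is an educational sketch only, with tiny toy primes; a real deployment uses a vetted cryptography library with proper padding, never raw RSA.

```python
import hashlib

# Educational sketch only: textbook RSA with tiny primes to show the
# sign-then-verify flow (SHA-256 digest signed with the private key).
# Real systems use a vetted library and padding, not raw RSA on a hash.

p, q = 61, 53                  # toy primes (assumption; far too small)
n, phi = p * q, (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse)

def sign(document: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(digest, d, n)   # apply the private key to the hash

def verify(document: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"contract.pdf contents")
print(verify(b"contract.pdf contents", sig))   # -> True
print(verify(b"tampered contents", sig))       # changed document fails
```

The point of the construction is that anyone holding the public key (e, n) can check the signature, but only the private-key holder could have produced it, which is what binds the signed document to the registered identity.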
Visual media is a better way to deliver information than the old way of reading. For that reason, with the wide propagation of multimedia websites, there are large video library archives, which have become a main resource for humans. This research focuses on existing developments in applying classical phrase search methods to a linked vocal transcript and then retrieving the video, which is an easier way to search any visual media. The system has been implemented using JSP and the Java language for searching speech in videos.
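The core idea, searching a time-aligned transcript so a phrase query leads back into the video, can be sketched as below. The transcript data and helper names are illustrative assumptions (and the sketch is in Python, whereas the paper's system uses JSP and Java).

```python
# Hedged sketch of phrase search over a time-aligned transcript: a phrase
# query returns the timestamp at which it is spoken in the video.
# The transcript data and helper names are illustrative assumptions.

def build_index(transcript):
    """transcript: list of (timestamp_seconds, word). Maps word -> positions."""
    index = {}
    for pos, (ts, word) in enumerate(transcript):
        index.setdefault(word.lower(), []).append(pos)
    return index

def find_phrase(transcript, index, phrase):
    """Return the timestamp where the whole phrase starts, or None."""
    words = phrase.lower().split()
    for start in index.get(words[0], []):
        window = [w.lower() for _, w in transcript[start:start + len(words)]]
        if window == words:
            return transcript[start][0]
    return None

transcript = [(0.0, "welcome"), (1.2, "to"), (1.6, "the"), (2.0, "video"),
              (2.8, "search"), (3.4, "demo")]
index = build_index(transcript)
print(find_phrase(transcript, index, "video search"))  # -> 2.0
```

The inverted index keeps lookup cheap: only positions of the phrase's first word are checked against the full phrase, rather than scanning every word of every transcript.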
It is well known that understanding human facial expressions is a key component in understanding emotions; it finds broad applications in the field of human-computer interaction (HCI) and has been a long-standing issue. In this paper, we shed light on the utilisation of a deep convolutional neural network (DCNN) for facial emotion recognition from videos using the TensorFlow machine-learning library from Google. This work was applied to ten emotions from the Amsterdam Dynamic Facial Expression Set - Bath Intensity Variations (ADFES-BIV) dataset and tested using two datasets.
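The operation at the core of such a DCNN, convolution followed by a nonlinearity, can be sketched in plain numpy. The paper's actual network is built with TensorFlow; this version, with a toy 4x4 patch and an assumed edge-detecting kernel, only illustrates the computation.

```python
import numpy as np

# Minimal sketch of the convolution + activation step at the heart of a
# DCNN. The image patch and kernel are toy assumptions, not learned weights.

def conv2d_valid(img, kernel):
    """2-D valid convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * kernel)
    return out

relu = lambda x: np.maximum(x, 0)  # the usual DCNN nonlinearity

img = np.array([[1., 0, 0, 1], [0, 1, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]])
edge = np.array([[1., -1], [-1, 1]])  # toy edge-like kernel (assumption)
fmap = relu(conv2d_valid(img, edge))
print(fmap.shape)  # -> (3, 3) feature map
```

In a trained network, many such kernels are learned per layer and stacked, so each feature map responds to a different local pattern in the face.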