Emotion recognition has important applications in human-computer interaction. Various sources such as facial expressions and speech have been considered for interpreting human emotions. The aim of this paper is to develop an emotion recognition system from facial expressions and speech using a hybrid of machine-learning algorithms in order to enhance the overall performance of human-computer communication. For facial emotion recognition, a deep convolutional neural network is used for feature extraction and classification, whereas for speech emotion recognition, the zero-crossing rate, mean, standard deviation and mel-frequency cepstral coefficient (MFCC) features are extracted. The extracted features are then fed to a random forest classifier. In addition, a bi-modal system for recognising emotions from facial expressions and speech signals is presented. This is important since one modality may not provide sufficient information or may not be available for reasons beyond the operator's control. To achieve this, decision-level fusion is performed using a novel weighting scheme based on the proportions of the facial and speech impressions. The results show an average accuracy of 93.22%.
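As a concrete illustration of the speech branch and the decision-level fusion described above, the following minimal Python sketch extracts zero-crossing rate, mean, standard deviation and MFCC features with librosa, trains a random forest on them, and fuses per-class probabilities from the two modalities with modality weights. The library choices, feature statistics and example weights are assumptions for illustration, not the exact configuration used in the paper.

```python
# Minimal sketch of the speech branch and decision-level fusion
# (illustrative parameters; not the authors' exact configuration).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def speech_features(wav_path, n_mfcc=13):
    """ZCR, signal mean/std and MFCC statistics for one utterance."""
    y, sr = librosa.load(wav_path, sr=None)
    zcr = librosa.feature.zero_crossing_rate(y)              # frame-wise ZCR
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # frame-wise MFCCs
    # Summarise frame-wise features with simple statistics.
    return np.hstack([zcr.mean(), zcr.std(),
                      y.mean(), y.std(),
                      mfcc.mean(axis=1), mfcc.std(axis=1)])

def fuse(face_probs, speech_probs, w_face=0.6, w_speech=0.4):
    """Decision-level fusion: weighted sum of per-class probabilities.
    The fixed weights stand in for the paper's proportion-based scheme."""
    return w_face * face_probs + w_speech * speech_probs

# Usage (wav paths, labels and a trained face model assumed available):
# X = np.vstack([speech_features(p) for p in wav_paths])
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
# fused = fuse(face_probabilities, clf.predict_proba(X_test))
```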
This paper aims to build a modern vision for Islamic banks to ensure sustainability and growth, as well as to highlight the positive Iraqi steps in the Islamic banking sector. In order to build this vision, several scientific research approaches were adopted (quantitative, descriptive analytical, descriptive). The research community comprised all Iraqi private commercial banks, including Islamic banks. The research samples varied according to the methods used and the availability of data. A questionnaire was constructed and administered, and its internal and external validity was measured. Fifty questionnaires were distributed to Iraqi academics specializing in Islamic banking. All distributed forms were subjected to a thorough analysis.
This paper presents a study of wavelet self-organizing maps (WSOM) for face recognition. The WSOM is a feed-forward network that estimates an optimized wavelet basis for the discrete wavelet transform (DWT) on the basis of the distribution of the input data, where wavelet basis functions are used as the activation function.
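A minimal sketch of the general WSOM idea follows: DWT coefficients of a face image serve as the input representation of a self-organizing map, which can then index faces by their best-matching units. The wavelet family, map size and training settings are illustrative assumptions, and a standard SOM library (MiniSom) stands in for the paper's wavelet-activated network.

```python
# Hedged sketch: DWT features of face images fed to a self-organizing map.
# Wavelet family, map size and iteration count are illustrative assumptions.
import numpy as np
import pywt
from minisom import MiniSom

def dwt_features(face, wavelet="haar"):
    """2-D DWT; keep the low-frequency approximation sub-band as descriptor."""
    cA, (cH, cV, cD) = pywt.dwt2(face, wavelet)
    return cA.ravel()

# faces: array of grayscale face images, e.g. shape (n_samples, 64, 64)
# X = np.array([dwt_features(f) for f in faces])
# som = MiniSom(10, 10, X.shape[1], sigma=1.0, learning_rate=0.5)
# som.train_random(X, 1000)
# Each face can then be indexed by its best-matching unit: som.winner(x)
```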
A hand gesture recognition system provides a robust and innovative solution to nonverbal communication through human-computer interaction. Deep learning models have excellent potential for use in recognition applications. To overcome related issues, most previous studies have proposed new model architectures or have fine-tuned pre-trained models. Furthermore, these studies relied on a single standard dataset for both training and testing; thus, the accuracy reported by these studies is reasonable. Unlike these works, the current study investigates two deep learning models with intermediate layers to recognize static hand gesture images. Both models were tested on different datasets, adjusted to suit each dataset, and then trained under different m
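A hedged sketch of the kind of approach described above follows: a pre-trained backbone with added intermediate layers, adjusted per dataset by changing the number of output classes. The choice of VGG16, the layer sizes and the optimizer are illustrative assumptions, not the two models investigated in the paper.

```python
# Illustrative fine-tuning setup for static hand gesture images
# (backbone, layer sizes and optimizer are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gesture_model(num_classes, input_shape=(224, 224, 3)):
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=input_shape)
    base.trainable = False  # freeze the pre-trained feature extractor
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # added intermediate layer
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_gesture_model(num_classes=10)   # e.g. 10 static gestures
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```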
The present research aims to design an electronic system based on cloud computing to develop electronic tasks for students of the University of Mosul. Achieving this goal required designing an electronic system that includes all the theoretical information, applied procedures, instructions and commands for the computer programs, and identifying its effectiveness in developing electronic tasks for students of the University of Mosul. Accordingly, the researchers formulated three hypotheses related to the cognitive and performance aspects of the electronic tasks. To verify the research hypotheses, a sample of (91) students was purposively chosen from the research community, represented by the students of the college of education for humanities and col