Mobile-based human emotion recognition is a very challenging subject. Most of the approaches suggested and built in this field utilize various contexts derived from external sensors and the smartphone, but these approaches suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. The system is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone is used to record the user's speech and send it to a server. A smartwatch fixed on the user's wrist measures the user's heart rate while the user is speaking and sends it, via Bluetooth, to the smartphone, which in turn forwards it to the server. At the server side, speech features are extracted from the speech signal to be classified by a neural network. To minimize misclassification by the neural network, the heart rate measurement is used to direct the extracted speech features to either the excited (angry and happy) neural network or the calm (sad and normal) neural network. In spite of the challenges associated with the system, it achieved 96.49% accuracy for known speakers and 79.05% for unknown speakers.
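A minimal sketch of the heart-rate gating described above, assuming a single heart-rate threshold and two pre-trained sub-networks; the threshold value, classifier objects, and function names are illustrative placeholders, not details taken from the paper.

```python
# Illustrative sketch of heart-rate-gated routing between two emotion classifiers.
# HR_THRESHOLD_BPM and the classifier objects are assumptions, not values from the paper.
import numpy as np

HR_THRESHOLD_BPM = 90  # hypothetical boundary between calm and excited states

def route_and_classify(speech_features: np.ndarray,
                       heart_rate_bpm: float,
                       excited_net,   # classifier trained on angry/happy
                       calm_net):     # classifier trained on sad/normal
    """Send the extracted speech features to the sub-network selected by heart rate."""
    if heart_rate_bpm >= HR_THRESHOLD_BPM:
        return excited_net.predict(speech_features)   # 'angry' or 'happy'
    return calm_net.predict(speech_features)          # 'sad' or 'normal'
```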
Electromyogram (EMG)-based Pattern Recognition (PR) systems for upper-limb prosthesis control provide promising ways to enable intuitive control of prostheses with multiple degrees of freedom and fast reaction times. However, the lack of robustness of PR systems may limit their usability. In this paper, a novel adaptive time-windowing framework is proposed to enhance the performance of PR systems by focusing on their windowing and classification steps. The proposed framework estimates the output probabilities of each class and outputs a movement only if a decision with a probability above a certain threshold is achieved. Otherwise (i.e., if all probability values are below the threshold), the window size of the EMG signal …
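A hedged sketch of the confidence-gated decision rule described above: classify the current window, emit a movement only if the top class probability clears a threshold, and otherwise enlarge the window. The threshold, window lengths, growth step, and classifier interface are assumptions for illustration, not the values used in the framework.

```python
# Sketch of adaptive time windowing with a probability threshold.
# All numeric values and the classifier interface are placeholders.
import numpy as np

def adaptive_window_decision(emg_stream: np.ndarray,
                             classifier,            # exposes predict_proba -> 1-D class probabilities
                             start_len: int = 150,
                             max_len: int = 300,
                             step: int = 50,
                             threshold: float = 0.8):
    """Grow the analysis window until one class probability exceeds the threshold."""
    length = start_len
    while length <= max_len:
        window = emg_stream[-length:]              # most recent samples
        probs = classifier.predict_proba(window)   # one probability per movement class
        best = int(np.argmax(probs))
        if probs[best] >= threshold:
            return best                            # confident decision: output this movement
        length += step                             # otherwise enlarge the window and retry
    return None                                    # no confident decision reached
```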
The most notable thing a person does is speak. With words he loves and hates, affirms relationships, and moves from disbelief into faith. He marries with a word and separates with a word. With a kind word he may reach the heights of the heavens and gain the pleasure of God, and many a word a servant speaks by which God records His pleasure for him, or by which he is thrown on his face into the Fire. Emotions are inflamed by a word, the United Nations is stirred by a word, and relations between states, and war itself, are carried on by a word.
What comes out of a person's mouth is an interpreter that expresses the store of his conscience and reveals where he stands, for it is evidence of …
Cloth simulation and animation have been topics of research in computer graphics since the mid-1980s. Enforcing incompressibility is very important in real-time simulation. Although there have been great achievements in this regard, it still suffers from unnecessary time consumption in certain steps that are common in real-time applications. This research develops a real-time cloth simulator for a virtual human character (VHC) with wearable clothing. It achieves cloth simulation on the VHC by enhancing the position-based dynamics (PBD) framework, computing a series of positional constraints which enforce constant densities. Also, self-collision and collision with …
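A minimal sketch of a PBD time step, to show the predict/project/update structure the abstract refers to. The paper's constant-density constraints are not reproduced here; a plain distance (stretch) constraint stands in, and all constants are placeholders.

```python
# Minimal position-based dynamics step (illustrative, not the paper's solver).
import numpy as np

def pbd_step(x, v, edges, rest_len, dt=1.0/60.0, iters=10,
             gravity=np.array([0.0, -9.81, 0.0])):
    """x: (N,3) positions, v: (N,3) velocities,
    edges: list of (i, j) index pairs, rest_len: matching rest lengths."""
    p = x + dt * (v + dt * gravity)                 # predict positions from external forces
    for _ in range(iters):                          # Gauss-Seidel constraint projection
        for (i, j), l0 in zip(edges, rest_len):
            d = p[i] - p[j]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = 0.5 * (dist - l0) * d / dist     # equal-mass stretch correction
            p[i] -= corr
            p[j] += corr
    v = (p - x) / dt                                # recover velocities from position change
    return p, v
```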
Several recent approaches have focused on developing traditional systems that measure costs to meet new environmental requirements, including Attribute-Based Costing (ABCII). It is an accounting method based on measuring costs according to the attributes on which the product is designed and according to the achievement levels of all of the product's attributes. This research provides the knowledge foundations of this approach and its role in market orientation compared to Activity-Based Costing, as shown in the steps to be followed to apply this approach. The research problem lies in the attempt to reach the most accurate approach for measuring the cost of products from the …
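One hedged, illustrative reading of costing by product attributes: a product's cost is split across its designed attributes in proportion to each attribute's achievement level. The numbers and the proportional rule below are examples only, not the procedure prescribed by ABCII.

```python
# Illustrative attribute-level cost allocation (example weighting rule, not ABCII itself).
def cost_per_attribute(total_cost: float, achievement_levels: dict) -> dict:
    """Split a product's cost across attributes in proportion to achievement levels."""
    total_level = sum(achievement_levels.values())
    return {attr: total_cost * level / total_level
            for attr, level in achievement_levels.items()}

# Example: a product costing 1,000 with three designed attributes.
print(cost_per_attribute(1000.0, {"durability": 0.5, "comfort": 0.3, "style": 0.2}))
# {'durability': 500.0, 'comfort': 300.0, 'style': 200.0}
```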
Facial expressions are a term that refers to a group of movements of the facial muscles related to a person's emotions. Human–computer interaction (HCI) has been considered one of the most attractive and fastest-growing fields. Adding emotional expression recognition to predict users' feelings and emotional state can drastically improve HCI. This paper aims to demonstrate the three most important facial expressions (happiness, sadness, and surprise). It contains three stages: first, a preprocessing stage was performed to enhance the facial images; second, a feature extraction stage depended on the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) methods; third, the recognition stage was …
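A hedged sketch of the DWT + DCT feature-extraction idea: take the wavelet approximation sub-band of a face image, apply a 2-D DCT, and keep the low-frequency coefficients as the feature vector. The wavelet family, sub-band choice, and feature-vector length are assumptions, not the paper's settings.

```python
# Illustrative DWT + DCT feature extraction for a grayscale face image.
import numpy as np
import pywt
from scipy.fft import dctn

def dwt_dct_features(face_image: np.ndarray, keep: int = 8) -> np.ndarray:
    """face_image: 2-D grayscale array; returns a (keep*keep,) feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(face_image.astype(float), 'haar')  # one DWT level
    coeffs = dctn(cA, norm='ortho')                                  # 2-D DCT of the approximation band
    return coeffs[:keep, :keep].ravel()                              # low-frequency block as features
```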
In this work, the performance of the receiver in a quantum cryptography system based on the BB84 protocol is assessed by calculating the Quantum Bit Error Rate (QBER) of the receiver. To apply this performance test, an optical setup was arranged and a circuit was designed and implemented to calculate the QBER. This electronic circuit counts the number of detections per second generated by the avalanche photodiodes in the receiver. The measured counts per second are then used to calculate the QBER of the receiver, which gives an indication of its performance. A minimum QBER of 6% was obtained with an avalanche photodiode excess voltage of 2 V and a laser diode power of 3.16 nW at an avalanche photodiode temperature of −10 …
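A minimal sketch of the QBER computation: the ratio of erroneous detections to total sifted detections over the same interval. The count values below are made-up examples to show how a figure such as 6% could arise, not measurements from this setup.

```python
# QBER = N_error / N_total over the sifted detections (illustrative counts only).
def qber(error_counts_per_s: float, total_counts_per_s: float) -> float:
    """Quantum bit error rate from per-second count rates."""
    return error_counts_per_s / total_counts_per_s

print(f"{qber(60, 1000):.1%}")   # e.g. 60 erroneous counts out of 1000 -> 6.0%
```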
Researchers are increasingly using multimodal biometrics to strengthen the security of biometric applications. In this study, a strong multimodal human identification model was developed to address the growing problem of spoofing attacks in biometric security systems. Using metaheuristic optimization methods such as the Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO) for feature selection, this model incorporates three biometric modalities: face, iris, and fingerprint. Image preprocessing, feature extraction, critical image feature selection, and multibiometric recognition are the four main steps in the system's workflow. To determine its performance, the model was …
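A hedged sketch of one of the feature-selection ideas mentioned above: a basic genetic algorithm that evolves a boolean mask over feature columns. The fitness callable (which could wrap a classifier's cross-validated accuracy on the masked columns), population size, and rates are placeholders; the ACO and PSO variants are not shown.

```python
# Basic GA over boolean feature masks (illustrative; parameters are placeholders).
import numpy as np

def ga_feature_selection(n_features, fitness, pop_size=20, generations=30,
                         crossover_p=0.7, mutation_p=0.02, rng=None):
    """fitness: callable taking a boolean mask of shape (n_features,) and returning a score."""
    rng = rng or np.random.default_rng(0)
    pop = rng.random((pop_size, n_features)) < 0.5            # random initial masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]                   # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]]) if rng.random() < crossover_p else a.copy()
            flip = rng.random(n_features) < mutation_p         # bit-flip mutation
            children.append(np.logical_xor(child, flip))
        pop = np.vstack([parents] + children)
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return best                                                # boolean mask of selected features
```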