Upper limb amputation severely limits an amputee's movement: patients who have lost the use of one or more upper extremities have difficulty performing activities of daily living. To improve the control of upper limb prostheses through pattern recognition, this paper proposes non-invasive approaches (EEG and EMG signals) integrated with machine learning techniques to recognize the upper-limb motions of subjects. EMG and EEG signals are combined, and five features are used to classify seven hand and wrist movements: wrist flexion (WF), wrist extension (WE), hand open (HO), hand close (HC), pronation (PRO), supination (SUP), and rest (RST). Experiments demonstrate that the proposed algorithm, using mean absolute value (MAV), waveform length (WL), Willison amplitude (WAMP), slope sign changes (SSC), and cardinality features, achieves a classification accuracy of 89.6% across the seven movement classes.
Index Terms— Human Robot Interaction, Bio-signals Analysis, LDA classifier.
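The five time-domain features named above are standard in the myoelectric-control literature. Below is a minimal NumPy sketch of how each could be computed over one analysis window; the threshold values and the rounding used for the cardinality feature are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def mav(x):
    """Mean absolute value of a signal window."""
    return np.mean(np.abs(x))

def wl(x):
    """Waveform length: cumulative absolute first difference."""
    return np.sum(np.abs(np.diff(x)))

def wamp(x, thresh=0.02):
    """Willison amplitude: successive-sample differences above a threshold."""
    return np.sum(np.abs(np.diff(x)) > thresh)

def ssc(x, thresh=0.02):
    """Slope sign changes: slope reversals whose product exceeds a threshold."""
    dx_prev = x[1:-1] - x[:-2]
    dx_next = x[1:-1] - x[2:]
    return np.sum((dx_prev * dx_next) > thresh)

def cardinality(x, decimals=3):
    """Cardinality: number of distinct (rounded) amplitudes in the window."""
    return np.unique(np.round(x, decimals)).size

def feature_vector(window):
    """Stack the five features for one EMG/EEG analysis window."""
    return np.array([mav(window), wl(window), wamp(window),
                     ssc(window), cardinality(window)])
```

In practice the thresholds for WAMP and SSC would be tuned to the noise floor of the recording hardware.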
Amputation of the upper limb significantly hinders patients' ability to perform activities of daily living. To address this challenge, this paper introduces a novel approach that combines non-invasive methods, specifically electroencephalography (EEG) and electromyography (EMG) signals, with machine learning techniques to recognize upper limb movements. The objective is to improve the control and functionality of upper limb prostheses through effective pattern recognition. The proposed methodology fuses EMG and EEG signals, which are processed using time-frequency domain feature extraction techniques, enabling the classification of seven distinct hand and wrist movements. The experiments conducted demonstrate a classification accuracy of 89.6% across the seven movement classes.
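As a hedged illustration of the classification stage (the companion entry above lists an LDA classifier among its index terms), the sketch below fuses two modality-level feature matrices by concatenation and cross-validates a linear discriminant classifier with scikit-learn. The array shapes, random features, and labels are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrices: one row per analysis window.
# emg_features / eeg_features would come from a feature-extraction stage.
rng = np.random.default_rng(0)
emg_features = rng.normal(size=(700, 5))   # e.g. 5 features per window
eeg_features = rng.normal(size=(700, 5))
labels = rng.integers(0, 7, size=700)      # WF, WE, HO, HC, PRO, SUP, RST

# Simple feature-level fusion: concatenate the two modalities.
fused = np.hstack([emg_features, eeg_features])

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Feature-level concatenation is only one fusion strategy; decision-level fusion of separate EMG and EEG classifiers would slot into the same pipeline.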
With recent developments in technology and advances in artificial intelligence and machine learning techniques, it has become possible for robots to acquire and display emotions as part of Human-Robot Interaction (HRI). An emotional robot can recognize the emotional states of humans, so it can interact more naturally with its human counterpart in different environments. In this article, a survey of emotion recognition for HRI systems is presented. The survey has two objectives. First, it discusses the main challenges that researchers face when building emotional HRI systems. Second, it identifies sensing channels that can be used to detect emotions and provides a literature review.
The aim of a human lower limb rehabilitation robot is to restore the ability of motion and to strengthen weak muscles. This paper proposes the design of a force-position control scheme for a four-degree-of-freedom (4-DOF) lower limb wearable rehabilitation robot. The robot consists of hip, knee, and ankle joints that enable the patient to move and turn in both directions. The joints are actuated by Pneumatic Muscle Actuators (PMAs), which have great potential in medical applications because of their similarity to biological muscles. The force-position control incorporates Takagi-Sugeno-Kang three-Proportional-Derivative-like fuzzy logic (TSK-3-PD) controllers for position control and three Proportional (3-P) controllers for force control.
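A zero-order TSK fuzzy controller with PD-style consequents can be sketched compactly: each rule fires according to a membership function on the position error and contributes a local PD law, the rule outputs are blended by normalized firing strengths, and an outer proportional force correction is added per joint. All gains, memberships, and signal values below are illustrative assumptions, not the paper's tuned controller.

```python
import numpy as np

def tsk_pd_control(error, d_error, rules):
    """PD-like TSK fuzzy controller (zero-order sketch).

    Each rule is (center, width, kp, kd): a Gaussian membership on the
    position error selects a local PD law; rule outputs are blended by
    normalized firing strengths.
    """
    weights = np.array([np.exp(-((error - c) / w) ** 2)
                        for c, w, _, _ in rules])
    outputs = np.array([kp * error + kd * d_error
                        for _, _, kp, kd in rules])
    return np.sum(weights * outputs) / (np.sum(weights) + 1e-9)

# Illustrative rule base: gentler gains near zero error, stiffer far away.
rules = [(-1.0, 0.5, 8.0, 0.6), (0.0, 0.5, 4.0, 0.3), (1.0, 0.5, 8.0, 0.6)]

# One control step for a single joint (all numbers are placeholders).
theta_ref, theta, theta_dot = 0.8, 0.5, 0.0
u_pos = tsk_pd_control(theta_ref - theta, -theta_dot, rules)

# Proportional force correction toward a desired PMA force.
kp_force, f_des, f_meas = 0.2, 30.0, 26.0
u = u_pos + kp_force * (f_des - f_meas)
print(f"joint command: {u:.3f}")
```

In the 4-DOF robot this loop would run once per actuated joint, with rule bases and gains tuned to each PMA.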
The use of bio-signals analysis in human-robot interaction is rapidly increasing, with urgent demand in applications including health care, rehabilitation, research, technology, and manufacturing. Despite several state-of-the-art bio-signals analyses in human-robot interaction (HRI) research, it is unclear which one is best. This paper discusses, first, why robotic systems should be given priority in the rehabilitation and aid of amputees and disabled people, and second, the feature extraction approaches now in use, which fall into three main domains (time, frequency, and time-frequency). The various domains are then discussed.
In this paper, we implement and examine a Simulink model with electroencephalography (EEG) to control multiple actuators based on brain waves. Such a system will be in great demand because it can help individuals who are unable to access control units that require direct physical contact. Initially, ten volunteers across a wide age range (20-66) participated in this study, and statistical measurements were first calculated for all eight channels. The number of channels was then reduced by half according to the activation of brain regions within the utilized protocol, which also decreased the processing time. Consequently, four of the participants (three males and one female) were chosen to examine the Simulink model.
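A minimal sketch of the per-channel statistics and channel-reduction step described above follows. Variance is used here only as a stand-in ranking criterion, since the paper selects channels from brain-region activation under its protocol, and the synthetic 8-channel array is a placeholder for real headset data.

```python
import numpy as np

def channel_stats(eeg):
    """Basic per-channel statistics for an (n_channels, n_samples) EEG array."""
    return {
        "mean": eeg.mean(axis=1),
        "std": eeg.std(axis=1),
        "min": eeg.min(axis=1),
        "max": eeg.max(axis=1),
    }

# Hypothetical 8-channel recording; in practice this comes from the headset.
rng = np.random.default_rng(1)
eeg = rng.normal(size=(8, 2560))  # 8 channels, e.g. 10 s at 256 Hz

stats = channel_stats(eeg)

# Keep the 4 most active channels (std as a proxy for the
# protocol-driven, region-based selection described in the abstract).
keep = np.argsort(stats["std"])[-4:]
reduced = eeg[keep]
print("channels kept:", sorted(keep.tolist()), "shape:", reduced.shape)
```

Halving the channel count this way halves the downstream feature computation, which is consistent with the reduced processing time the abstract reports.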
Emotion recognition has important applications in human-computer interaction. Various sources, such as facial expressions and speech, have been considered for interpreting human emotions. The aim of this paper is to develop an emotion recognition system from facial expressions and speech using a hybrid of machine-learning algorithms in order to enhance the overall performance of human-computer communication. For facial emotion recognition, a deep convolutional neural network is used for feature extraction and classification, whereas for speech emotion recognition, the zero-crossing rate, mean, standard deviation, and mel-frequency cepstral coefficient (MFCC) features are extracted and fed to a random forest classifier.
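For the speech branch, a hedged sketch of the described pipeline follows: zero-crossing rate, mean, standard deviation, and MFCC features are extracted per utterance and fed to a random forest. librosa is assumed here only as a convenient feature extractor, and the synthetic utterances and labels are placeholders for a labelled speech-emotion dataset.

```python
import numpy as np
import librosa  # assumed here for MFCC/ZCR extraction
from sklearn.ensemble import RandomForestClassifier

def speech_features(y, sr):
    """ZCR, mean, std, and 13 MFCC means for one utterance, per the abstract."""
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    return np.concatenate([[zcr, y.mean(), y.std()], mfcc])

# Hypothetical dataset: synthetic utterances stand in for labelled speech.
rng = np.random.default_rng(2)
X = np.stack([speech_features(rng.normal(size=16000), 16000)
              for _ in range(40)])
y = rng.integers(0, 4, size=40)  # placeholder emotion labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```

The facial branch (deep CNN feature extraction and classification) would run in parallel, with the two modality decisions combined at the fusion stage.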