The complexity of prosthesis control is one of the greatest challenges limiting the widespread use of upper limb prostheses by amputees. The main obstacles include the difficulty of extracting reliable control signals, the limited number of degrees of freedom (DoF), and the prohibitive cost of complex control systems. In this study, a real-time hybrid control system based on electromyography (EMG) and voice commands (VC) is designed to make the prosthesis more dexterous, enabling amputees to accomplish daily activities proficiently. The voice and EMG systems were combined in three proposed hybrid strategies; each strategy offered a different number of movements depending on the protocol for combining the voice and EMG control systems. Furthermore, the designed control system could serve a large number of amputees with different amputation levels, since it is reasonably priced and easy to use. The performance of the proposed control system under each hybrid strategy was tested by intact-limbed and amputee participants controlling the HANDi hand. The results showed that the proposed hybrid control system was robust and feasible, with accuracies of 94%, 98%, and 99% for Strategies 1, 2, and 3, respectively. It was also possible to specify the grip force applied by the prosthetic hand at one of three gripping force levels. The amputees who participated in this study preferred combination Strategy 3, in which voice and EMG operate concurrently, with an accuracy of 99%.
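A decision layer of the kind the abstract describes for Strategy 3 (voice and EMG operating concurrently) might be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`decode_emg`, `hybrid_command`), the grip table, and the threshold and force values are all hypothetical assumptions.

```python
# Hypothetical sketch of a hybrid EMG + voice command (VC) decision layer.
# Voice selects the grip pattern; EMG drives actuation concurrently.
# Grip names, force scales, and the EMG threshold are illustrative only.

GRIPS = {"power": 1.0, "tripod": 0.6, "pinch": 0.3}  # grip -> relative force level

def decode_emg(rms_amplitude, threshold=0.2):
    """Map a rectified/smoothed EMG amplitude to an open/close intent."""
    return "close" if rms_amplitude > threshold else "open"

def hybrid_command(voice_word, emg_rms):
    """Combine the voice-selected grip with the EMG-decoded action."""
    grip = voice_word if voice_word in GRIPS else "power"  # fallback grip
    action = decode_emg(emg_rms)
    force = GRIPS[grip] if action == "close" else 0.0
    return {"grip": grip, "action": action, "force": force}
```

For example, `hybrid_command("tripod", 0.5)` would select the tripod grip and close the hand at its mid-level force, while a sub-threshold EMG amplitude opens the hand regardless of the spoken grip.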
A hand gesture recognition system provides a robust and innovative solution for nonverbal communication through human–computer interaction. Deep learning models have excellent potential for use in recognition applications. To overcome related issues, most previous studies have proposed new model architectures or fine-tuned pre-trained models. Furthermore, these studies relied on a single standard dataset for both training and testing; thus, their reported accuracy is reasonable. Unlike those works, the current study investigates two deep learning models with intermediate layers for recognizing static hand gesture images. Both models were tested on different datasets, adjusted to suit each dataset, and then trained under different m
Surface electromyography (sEMG) and accelerometer (Acc) signals play crucial roles in controlling prosthetic and upper limb orthotic devices, as well as in assessing electrical muscle activity for various biomedical engineering and rehabilitation applications. In this study, an advanced discrimination system is proposed for identifying seven distinct shoulder girdle motions, aimed at improving prosthesis control. Feature extraction based on Time-Dependent Power Spectrum Descriptors (TDPSD) is employed to enhance motion recognition. Subsequently, the Spectral Regression (SR) method is used to reduce the dimensionality of the extracted features. A comparative analysis is conducted between the Linear Discriminant Analysis (LDA) class
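The TDPSD family of features exploits Parseval's theorem: the spectral moments of a window can be computed entirely in the time domain from the signal and its successive differences. The sketch below shows a simplified subset of such features for a single sEMG channel; it is an illustrative assumption of the approach, not the paper's exact feature set, and the downstream SR dimensionality reduction and LDA classification stages are omitted.

```python
import math

def tdpsd_features(window):
    """Simplified TDPSD-style features for one sEMG channel.
    Spectral moments m0, m2, m4 are obtained in the time domain
    from the signal and its first and second differences
    (Parseval's theorem). Feature choice here is illustrative."""
    dx = [b - a for a, b in zip(window, window[1:])]
    ddx = [b - a for a, b in zip(dx, dx[1:])]
    eps = 1e-12  # guard against log(0) / division by zero
    m0 = math.sqrt(sum(v * v for v in window)) + eps  # zero-order moment
    m2 = math.sqrt(sum(v * v for v in dx)) + eps      # second-order moment
    m4 = math.sqrt(sum(v * v for v in ddx)) + eps     # fourth-order moment
    return [
        math.log(m0),             # overall signal power
        math.log(m2 / m0),        # normalized mobility
        m2 / math.sqrt(m0 * m4),  # irregularity factor
    ]
```

In a full pipeline such per-channel vectors would be concatenated across channels and windows, projected to a lower-dimensional space (e.g., by SR), and classified (e.g., by LDA).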
The Leap Motion Controller (LMC) is a gesture sensor consisting of three infrared light emitters and two infrared stereo cameras as tracking sensors. The LMC translates hand movements into graphical data used in a variety of applications, such as virtual/augmented reality and object-movement control. In this work, we intend to control the movements of a prosthetic hand via the LMC, in which fingers are flexed or extended in response to hand movements. This is carried out by passing the data from the Leap Motion to a processing unit that processes the raw data with an open-source package (Processing i3) in order to control five servo motors via a micro-controller board. In addition, a haptic setup is proposed using force sensors (F
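The core of such a pipeline is a mapping from per-finger tracking data to servo commands. The sketch below assumes a normalized finger-extension value (0.0 fully flexed, 1.0 fully extended) and a simple comma-separated serial protocol to the micro-controller; both conventions are hypothetical, not taken from the paper.

```python
# Hypothetical mapping from Leap-Motion-style finger tracking to servo
# commands. The 0..1 extension convention and the serial format are
# illustrative assumptions.

def finger_to_servo(extension, servo_min=0, servo_max=180):
    """Map normalized finger extension (0.0 = flexed, 1.0 = extended)
    to a servo angle, clamped to the servo's travel."""
    extension = max(0.0, min(1.0, extension))
    return round(servo_min + extension * (servo_max - servo_min))

def frame_to_command(extensions):
    """Pack five per-finger servo angles into one serial command line,
    e.g. "0,180,90,180,0\n", to be written to the micro-controller."""
    angles = [finger_to_servo(e) for e in extensions]
    return ",".join(str(a) for a in angles) + "\n"
```

On the micro-controller side, the firmware would parse each line and write the five angles to its servo outputs; the host would call `frame_to_command` once per tracking frame.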
Mobile-based human emotion recognition is a very challenging subject. Most approaches proposed and built in this field utilize various contexts derived from external sensors and the smartphone, but these approaches suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate into one system to improve the accuracy of human emotion recognition. The proposed system is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone is used to record the user's speech and send it to a server. The smartwatch, worn on the user's wrist, is used to measure the user's heart rate while the user is speaking and to send it, via Bluetooth,
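One common way to integrate a physiological cue with a speech classifier is late fusion: the heart rate nudges the speech-based emotion probabilities toward high-arousal or low-arousal classes. The sketch below is a hedged illustration of that idea; the resting heart rate, the arousal scaling, the fusion weight, and the mapping of angry/happy to high arousal are all assumptions, not the paper's method.

```python
# Illustrative late-fusion of speech-emotion probabilities with heart rate.
# All constants (resting HR, scaling, fusion weight) are assumptions.

def fuse_emotion(speech_probs, heart_rate, hr_rest=70, weight=0.3):
    """Bias speech-based emotion probabilities with a heart-rate cue:
    an elevated heart rate favors high-arousal classes (angry, happy),
    a low one favors low-arousal classes (sad, normal)."""
    # Normalized arousal in [-1, 1] relative to the resting heart rate.
    arousal = max(-1.0, min(1.0, (heart_rate - hr_rest) / 50.0))
    high_arousal = {"angry", "happy"}
    fused = {}
    for emotion, p in speech_probs.items():
        bias = arousal if emotion in high_arousal else -arousal
        fused[emotion] = max(0.0, p * (1.0 + weight * bias))
    total = sum(fused.values()) or 1.0
    return {e: v / total for e, v in fused.items()}  # renormalize
```

With uniform speech probabilities and a heart rate of 120 bpm, the fused distribution shifts toward "angry" and "happy", illustrating how the heart-rate channel can break ties the speech model leaves unresolved.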