Upper limb amputation severely limits the amputee's movement. Patients who have lost the use of one or more upper extremities have difficulty performing activities of daily living. To improve pattern-recognition control of upper limb prostheses, this paper proposes a non-invasive approach that combines EEG and EMG signals with machine learning techniques to recognize subjects' upper-limb motions. EMG and EEG signals are combined, and five features are used to classify seven hand and wrist movements: wrist flexion (WF), wrist extension (WE), hand open (HO), hand close (HC), pronation (PRO), supination (SUP), and rest (RST). Experiments demonstrate that the proposed algorithm, using the mean absolute value (MAV), waveform length (WL), Willison amplitude (WAMP), slope sign changes (SSC), and cardinality features, achieves a classification accuracy of 89.6% when classifying the seven movement types. Index Terms— Human Robot Interaction, Bio-signals Analysis, LDA classifier.
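Four of the time-domain features named in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 0.5 threshold used for WAMP and SSC is an assumed placeholder, and windowing is omitted.

```python
# Sketch of four common EMG time-domain features (MAV, WL, WAMP, SSC).
# Thresholds are illustrative assumptions, not values from the paper.

def mav(x):
    """Mean absolute value of a signal window."""
    return sum(abs(v) for v in x) / len(x)

def wl(x):
    """Waveform length: cumulative absolute first difference."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def wamp(x, thr=0.5):
    """Willison amplitude: count of consecutive differences exceeding a threshold."""
    return sum(1 for i in range(len(x) - 1) if abs(x[i + 1] - x[i]) >= thr)

def ssc(x, thr=0.5):
    """Slope sign changes: count of slope reversals exceeding a threshold."""
    return sum(1 for i in range(1, len(x) - 1)
               if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) >= thr)

window = [0.0, 1.0, -1.0, 2.0, 0.5]       # hypothetical signal window
features = [mav(window), wl(window), wamp(window), ssc(window)]
```

Feature vectors of this kind would then feed a classifier such as the LDA mentioned in the index terms.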
This work presents a Surface Plasmon Resonance (SPR)-based plastic optical fiber sensor for estimating the concentration and refractive index of sugar in human blood serum. To fabricate the sensor, a small section (10 mm) at the middle of the optical fiber is embedded in a resin block, the embedded section is polished, and a gold layer about 40 nm thick is then deposited on it. The blood serum is placed on the gold-coated core of an optical-grade plastic optical fiber with a 980 µm core diameter.
A hand gesture recognition system provides a robust and innovative solution for nonverbal communication through human–computer interaction. Deep learning models have excellent potential for use in recognition applications. To overcome related issues, most previous studies have proposed new model architectures or fine-tuned pre-trained models. Furthermore, these studies relied on a single standard dataset for both training and testing, so their reported accuracy is only reasonable. Unlike these works, the current study investigates two deep learning models with intermediate layers for recognizing static hand gesture images. Both models were tested on different datasets, adjusted to suit each dataset, and then trained under different m…
This paper presents a comparison between the electroencephalogram (EEG) channels during scoliosis correction surgeries. Surgeons use many hand tools and electronic devices that directly affect the EEG channels, and these noise sources do not affect all channels uniformly. This research provides a complete system for finding the channel least affected by the noise. The presented system consists of five stages: filtering; wavelet decomposition (Level 4); processing the signal bands using four criteria (mean, energy, entropy, and standard deviation); selecting the useful channel according to the criteria values; and, finally, generating a combinational signal from Channels 1 and 2. Experimentally, two channels of EEG data were recorded fro…
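The four per-band criteria listed above can be sketched as follows, applied to a generic coefficient band. This is a minimal illustration only: the Level-4 wavelet decomposition itself would be done with a wavelet library, the band values here are hypothetical, and the entropy definition (Shannon entropy of the normalized squared coefficients) is an assumption, since the abstract does not specify one.

```python
import math

# Four criteria applied to one wavelet sub-band (hypothetical values).

def band_mean(b):
    """Arithmetic mean of the band coefficients."""
    return sum(b) / len(b)

def band_energy(b):
    """Energy: sum of squared coefficients."""
    return sum(v * v for v in b)

def band_std(b):
    """Population standard deviation of the band coefficients."""
    m = band_mean(b)
    return math.sqrt(sum((v - m) ** 2 for v in b) / len(b))

def band_entropy(b):
    """Shannon entropy of the normalized squared coefficients (assumed form)."""
    e = band_energy(b)
    probs = [v * v / e for v in b if v != 0]
    return -sum(p * math.log(p) for p in probs)

band = [1.0, -1.0, 2.0]   # hypothetical sub-band
criteria = (band_mean(band), band_energy(band),
            band_entropy(band), band_std(band))
```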
A bolted–welded hybrid demountable shear connector for use in deconstructable steel–concrete composite buildings and bridges was proposed. The hybrid connector consisted of a partially threaded stud welded on the flange of a steel section and a machined steel tube with compatible geometry bolted onto the stud. Four standard pushout tests according to Eurocode 4 were carried out to assess the shear performance of the hybrid connector. The experimental results show that the initial stiffness, shear resistance, and slip capacity of the proposed connector were higher than those of traditional welded studs. The hybrid connector qualifies as a ductile connector according to Eurocode 4, with a slip capacity higher than 6 mm. A nonli…
Emotion can be expressed through unimodal social behaviours, bimodally, or multimodally. This survey describes the background of facial emotion recognition and reviews emotion recognition using the visual modality. Some publicly available datasets for performance evaluation are covered. A tabular summary is given of research efforts to classify emotion using the visual modality over the five years from 2013 to 2018.
The Braille Recognition System is the process of capturing a Braille document image and turning its content into its equivalent natural-language characters. Braille cell recognition and cell transcription are its two basic phases, which follow one another. The system locates and recognizes a Braille document stored as an image (e.g., jpeg, jpg, tiff, or gif) and converts the text into a machine-readable format such as a text file; Braille character recognition (BCR) thus translates an image's pixel representation into its character representation. Workers at schools and institutes for the visually impaired benefit from Braille recognition in a variety of ways. The Braille Recognition S…
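As a toy illustration of the cell-transcription phase, a recognized 6-dot cell can be mapped to its character with a lookup table. The dot ordering and the tiny alphabet subset below are illustrative assumptions, not the encoding of the system described.

```python
# Toy Braille cell transcription. A cell is a 6-tuple of dot states ordered
# (dot1, dot2, dot3, dot4, dot5, dot6): left column top-to-bottom, then right
# column. Only the first five letters are shown for illustration.

BRAILLE_TO_CHAR = {
    (1, 0, 0, 0, 0, 0): "a",  # dot 1
    (1, 1, 0, 0, 0, 0): "b",  # dots 1, 2
    (1, 0, 0, 1, 0, 0): "c",  # dots 1, 4
    (1, 0, 0, 1, 1, 0): "d",  # dots 1, 4, 5
    (1, 0, 0, 0, 1, 0): "e",  # dots 1, 5
}

def transcribe(cells):
    """Map a sequence of recognized dot patterns to text ('?' if unknown)."""
    return "".join(BRAILLE_TO_CHAR.get(tuple(c), "?") for c in cells)
```

In a full system the dot patterns would come from the cell-recognition phase operating on the document image.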
RG Majeed, AS Ahmed, Journal of Al-Muthanna for Agricultural Sciences, 2023