Amputation of the upper limb significantly hinders the ability of patients to perform activities of daily living. To address this challenge, this paper introduces a novel approach that combines non-invasive methods, specifically Electroencephalography (EEG) and Electromyography (EMG) signals, with advanced machine learning techniques to recognize upper limb movements. The objective is to improve the control and functionality of prosthetic upper limbs through effective pattern recognition. The proposed methodology involves the fusion of EMG and EEG signals, which are processed using time-frequency domain feature extraction techniques. This enables the classification of seven distinct hand and wrist movements. The experiments conducted in this study utilized the Binary Grey Wolf Optimization (BGWO) algorithm to select optimal features for the proposed classification model. The results demonstrate promising outcomes, with an average classification accuracy of 93.6% for three amputees and five individuals with intact limbs. The accuracy achieved in classifying the seven types of hand and wrist movements further validates the effectiveness of the proposed approach. By offering a non-invasive and reliable means of recognizing upper limb movements, this research represents a significant step forward in biotechnical engineering for upper limb amputees. The findings hold considerable potential for enhancing the control and usability of prosthetic devices, ultimately contributing to the overall quality of life for individuals with upper limb amputations.
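The feature-selection step described above can be illustrated with a minimal Binary Grey Wolf Optimization sketch. This is a generic BGWO variant (sigmoid transfer function, majority vote over the alpha/beta/delta leaders, greedy acceptance) written for illustration only; the function names, parameters, and toy fitness below are our assumptions, not the paper's implementation:

```python
import numpy as np

def bgwo_select(fitness, n_features, n_wolves=8, n_iter=30, seed=0):
    """Minimal Binary Grey Wolf Optimizer: minimizes `fitness` over bit masks.
    A sketch of one common binarization (sigmoid transfer + majority vote)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_wolves, n_features)) < 0.5          # random bit masks
    scores = np.array([fitness(p) for p in pos], dtype=float)
    for t in range(n_iter):
        leaders = pos[np.argsort(scores)[:3]]               # alpha, beta, delta
        a = 2.0 * (1 - t / n_iter)                          # decreases 2 -> 0
        for i in range(n_wolves):
            votes = np.zeros(n_features)
            for lead in leaders:
                A = a * (2 * rng.random(n_features) - 1)
                C = 2 * rng.random(n_features)
                D = np.abs(C * lead - pos[i])               # distance to leader
                # Sigmoid transfer maps the continuous step to a flip chance.
                cstep = 1 / (1 + np.exp(-10 * (np.abs(A) * D - 0.5)))
                bstep = cstep > rng.random(n_features)      # binary step
                votes += np.logical_xor(lead, bstep)        # leader's candidate
            cand = votes >= 2                               # majority vote
            s = fitness(cand)
            if s < scores[i]:                               # greedy acceptance
                pos[i], scores[i] = cand, s
    best = int(np.argmin(scores))
    return pos[best], scores[best]

# Toy fitness: Hamming distance to a hidden "relevant feature" mask.
relevant = np.zeros(10, dtype=bool); relevant[[1, 4, 7]] = True
mask, score = bgwo_select(lambda m: int(np.sum(m ^ relevant)), 10)
```

In a real pipeline the fitness would wrap the classifier's cross-validation error on the selected EMG/EEG time-frequency features.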
Digital audio requires transmitting large volumes of audio information through the most common communication systems, which in turn poses challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It depends on a combined transform coding scheme consisting of: i) a bi-orthogonal (9/7-tap) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) the produced sub-bands are then passed through the DCT to de-correlate the signal; iii) the output of the combined transform stage is passed through progressive hierarchical quantization and then traditional run-length encoding (RLE); iv) finally, LZW coding is applied to generate the output bitstream.
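Stages iii) and iv) of the pipeline above, run-length encoding followed by LZW, can be sketched as follows. These are generic textbook coders for illustration, not the paper's implementation; the symbol list stands in for the quantized transform coefficients:

```python
def rle_encode(symbols):
    # Traditional run-length encoding: list of (value, run_length) pairs.
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1] = (s, out[-1][1] + 1)
        else:
            out.append((s, 1))
    return out

def lzw_encode(seq):
    # Classic LZW over any sequence of hashable symbols: the dictionary is
    # seeded with single symbols and grows with every new phrase seen.
    dictionary = {(s,): i for i, s in enumerate(sorted(set(seq)))}
    w, codes = (), []
    for s in seq:
        wc = w + (s,)
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = (s,)
    if w:
        codes.append(dictionary[w])
    return codes, dictionary

# Example: quantized coefficients -> runs -> flattened stream -> LZW codes.
quantized = [0, 0, 0, 3, 3, -1, 0, 0]
runs = rle_encode(quantized)
flat = [x for pair in runs for x in pair]
codes, _ = lzw_encode(flat)
```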
In this study, a fast block matching search algorithm based on block descriptors and multilevel block filtering is introduced. The descriptors used are the mean and a set of centralized low-order moments. Hierarchical filtering and the MAE similarity measure were adopted to nominate the best similar blocks lying within the pool of neighboring blocks. As a next step after block nomination, the similarity of the mean and moments is used to classify the nominated blocks into one of three sub-pools, each representing a certain nomination priority level (i.e., most, less, and least). The main reason for introducing the nomination and classification steps is the significant reduction in the number of matching instances of the pixels belonging to the c
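The multilevel filtering idea can be sketched in Python: a cheap mean test, then a moment test, and the full MAE match only on the surviving (nominated) blocks. The tolerances, function names, and two-level structure below are illustrative assumptions, not the paper's exact thresholds:

```python
import numpy as np

def mae(a, b):
    # Mean absolute error between two equally sized blocks.
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def central_moments(block, orders=(2, 3)):
    # Centralized low-order moments of the block's pixel values.
    d = block.astype(float) - block.mean()
    return np.array([np.mean(d ** k) for k in orders])

def nominate_and_match(target, candidates, mean_tol=4.0, moment_tol=8.0):
    # Level-1 filter: mean descriptor; level-2 filter: moment descriptors;
    # full MAE is computed only for blocks that pass both filters.
    t_mean, t_mom = target.mean(), central_moments(target)
    best, best_err = None, np.inf
    for idx, blk in enumerate(candidates):
        if abs(blk.mean() - t_mean) > mean_tol:                  # level 1
            continue
        if np.max(np.abs(central_moments(blk) - t_mom)) > moment_tol:
            continue                                             # level 2
        err = mae(target, blk)                                   # full match
        if err < best_err:
            best, best_err = idx, err
    return best, best_err
```

Because most candidate blocks are rejected by the scalar descriptor tests, the expensive pixel-wise MAE runs only a few times per search window.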
The aim of this research is to compare traditional and modern methods of obtaining the optimal solution, using dynamic programming and intelligent algorithms to solve project management problems.
It shows the possible ways in which these problems can be addressed, drawing on a schedule of interrelated and sequential activities. It clarifies the relationships between the activities in order to determine the start and end of each activity, the duration and cost of the total project, and the time consumed by each activity, and to identify the objectives the project pursues through planning, implementation, and monitoring so as to keep within the assessed budget.
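The scheduling computation described above (activity start/end times, total project duration, and the activities that cannot slip) can be sketched with a standard critical-path forward/backward pass. This is a generic CPM illustration under the assumption of a topologically ordered activity list, not the paper's algorithm:

```python
def cpm_schedule(activities):
    # activities: {name: (duration, [predecessors])}; assumes a DAG listed
    # so that every predecessor appears before its successors.
    early = {}
    for name, (dur, preds) in activities.items():      # forward pass
        start = max((early[p][1] for p in preds), default=0)
        early[name] = (start, start + dur)
    project_end = max(f for _, f in early.values())
    late, critical = {}, []
    for name, (dur, _) in reversed(list(activities.items())):  # backward pass
        succs = [n for n, (_, ps) in activities.items() if name in ps]
        finish = min((late[s][0] for s in succs), default=project_end)
        late[name] = (finish - dur, finish)
        if late[name][0] == early[name][0]:            # zero slack: critical
            critical.append(name)
    return early, project_end, list(reversed(critical))

# Example: four activities with durations and predecessor lists.
acts = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}
early, end, crit = cpm_schedule(acts)
```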
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Compression is based on selecting a small number of approximation coefficients produced by the wavelet decomposition (Haar and db4) at a suitably chosen level while the detail coefficients are discarded; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, the reflection coefficients, and the prediction error. The compressed files contain the LP coefficients and the previous sample; these files are very small compared with the original signals. The compression ratio is calculated from the size of th
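The Levinson-Durbin recursion mentioned above can be sketched as follows. This is the standard autocorrelation-method recursion (assuming a nonzero zero-lag autocorrelation); the function name and demo signal are chosen for illustration only:

```python
import numpy as np

def levinson_durbin(signal, order):
    # Solve the normal equations for LP coefficients a[1..order],
    # collecting the reflection coefficients and final error power.
    n = len(signal)
    r = np.array([np.dot(signal[:n - i], signal[i:])   # autocorrelation lags
                  for i in range(order + 1)], dtype=float)
    a = np.zeros(order + 1); a[0] = 1.0
    err, ks = r[0], []
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err                    # reflection coefficient of order m
        ks.append(k)
        prev = a.copy()
        for i in range(1, m):
            a[i] = prev[i] + k * prev[m - i]
        a[m] = k
        err *= (1 - k * k)                # prediction error power shrinks
    return a, ks, err

# Demo: a geometric (AR(1)-like) signal; the order-1 predictor should
# recover a coefficient close to -0.5.
x = 0.5 ** np.arange(8)
a, ks, err = levinson_durbin(x, 1)
```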
The thermal method was used to produce silicoaluminophosphate (SAPO-11) with different amounts of carbon nanotubes (CNT). XRD, nitrogen adsorption-desorption, SEM, AFM, and FTIR were used to characterize the prepared catalyst. It was found that adding CNT increased the crystallinity of the synthesized SAPO-11 at all the temperatures studied, while the maximum surface area of 179.54 m2/g was obtained at 190°C with 7.5 percent CNT, together with a pore volume of 0.317 cm3/g and nanoparticles with an average particle diameter of 24.8 nm; the final molar composition of the prepared SAPO-11 was (Al2O3 : 0.93 P2O5 : 0.414 SiO2).
To maintain the security and integrity of data, given the growth of the Internet and the increasing prevalence of transmission channels, it is necessary to strengthen security and to develop new algorithms. The Playfair cipher is a substitution scheme. The traditional Playfair scheme uses a small 5*5 matrix containing only uppercase letters, which makes it vulnerable to hackers and cryptanalysis. In this study, a new encryption and decryption approach is proposed to enhance the resistance of the Playfair cipher. For this purpose, a symmetric cryptosystem based on shared secrets is developed. The proposed Playfair method uses a 5*5 keyword matrix for English and a 6*6 keyword matrix for Arabic to encrypt the alphabets of
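For reference, the traditional 5*5 English Playfair scheme that the proposed method extends can be sketched as follows. This is a textbook implementation (I and J merged, doubled letters and odd-length texts padded with X), not the paper's modified cipher:

```python
def playfair_square(key):
    # Build the classic 5x5 square (I and J share a cell) from a keyword.
    seen = []
    for ch in key.upper().replace("J", "I") + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return seen  # 25 letters, row-major

def playfair_encrypt(plain, key):
    sq = playfair_square(key)
    pos = {c: divmod(i, 5) for i, c in enumerate(sq)}
    text = [c for c in plain.upper().replace("J", "I") if c.isalpha()]
    pairs, i = [], 0
    while i < len(text):                      # split into digraphs
        a = text[i]
        if i + 1 < len(text) and text[i + 1] != a:
            pairs.append((a, text[i + 1])); i += 2
        else:
            pairs.append((a, "X")); i += 1    # pad doubles / odd length
    out = []
    for a, b in pairs:
        ra, ca = pos[a]; rb, cb = pos[b]
        if ra == rb:                          # same row: shift right
            out += [sq[ra * 5 + (ca + 1) % 5], sq[rb * 5 + (cb + 1) % 5]]
        elif ca == cb:                        # same column: shift down
            out += [sq[((ra + 1) % 5) * 5 + ca], sq[((rb + 1) % 5) * 5 + cb]]
        else:                                 # rectangle: swap columns
            out += [sq[ra * 5 + cb], sq[rb * 5 + ca]]
    return "".join(out)
```

For example, `playfair_encrypt("INSTRUMENTS", "MONARCHY")` yields the classic textbook result `"GATLMZCLRQXA"`.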
The biosorption of Pb(II), Cd(II), and Hg(II) from simulated aqueous solutions using baker's yeast biomass was investigated. Batch experiments were carried out to obtain the equilibrium isotherm data for each component (single, binary, and ternary) and the adsorption rate constants. Pseudo-first-order and pseudo-second-order kinetic models were applied to the adsorption data to estimate the rate constant for each solute; the results showed that the Cd(II), Pb(II), and Hg(II) uptake processes followed the pseudo-second-order rate model with R2 values of 0.963, 0.979, and 0.960, respectively. The equilibrium isotherm data were fitted with five theoretical models; the Langmuir model provided the best fit to the experimental results with R2 values of 0.992, 0
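The pseudo-second-order fit reported above can be illustrated with a minimal linearized-regression sketch (plotting t/q against t gives a straight line whose slope and intercept yield the equilibrium uptake qe and the rate constant k2). The function name and synthetic data are ours, not the study's measurements:

```python
import numpy as np

def pseudo_second_order_fit(t, q):
    # Linearized pseudo-second-order model: t/q = 1/(k2*qe^2) + t/qe.
    # A least-squares line through (t, t/q) recovers qe and k2.
    t, q = np.asarray(t, float), np.asarray(q, float)
    slope, intercept = np.polyfit(t, t / q, 1)
    qe = 1.0 / slope                       # equilibrium uptake
    k2 = slope ** 2 / intercept            # since intercept = 1/(k2*qe^2)
    pred = t / (intercept + slope * t)     # model prediction q(t)
    ss_res = np.sum((q - pred) ** 2)
    ss_tot = np.sum((q - q.mean()) ** 2)
    return qe, k2, 1 - ss_res / ss_tot     # qe, k2, R^2

# Synthetic uptake curve generated from qe = 2, k2 = 0.5 for illustration.
t = np.array([1.0, 2.0, 4.0, 8.0])
q = 2.0 * t / (1.0 + t)
qe, k2, r2 = pseudo_second_order_fit(t, q)
```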