Abstract—Upper limb amputation exerts a significant burden on the amputee, limiting their ability to perform everyday activities and degrading their quality of life. Amputees' quality of life can be improved if they have natural control over their prosthetic hands. Among the biological signals most commonly used to predict upper limb motor intentions, surface electromyography (sEMG) and axial acceleration sensor signals are essential components of shoulder-level upper limb prosthetic hand control systems. In this work, a pattern recognition system is proposed to classify seven different types of shoulder girdle motions for high-level upper limb prostheses. Thus, combining seven feature groups, w…
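The excerpt is cut off before the seven feature groups are named; purely as an illustration of the kind of windowed sEMG feature extraction such a pattern recognition pipeline typically relies on, the sketch below computes a few standard time-domain features (mean absolute value, zero crossings, waveform length, RMS). The sampling rate, window length, and feature choice are assumptions, not the paper's actual configuration.

```python
import numpy as np

def semg_window_features(signal, fs=1000, win_ms=200, step_ms=50):
    """Standard sEMG time-domain features per sliding window (illustrative only)."""
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        mav = np.mean(np.abs(w))               # mean absolute value
        zc = np.sum(np.diff(np.sign(w)) != 0)  # zero crossings
        wl = np.sum(np.abs(np.diff(w)))        # waveform length
        rms = np.sqrt(np.mean(w ** 2))         # root mean square
        feats.append([mav, zc, wl, rms])
    return np.array(feats)

# Example on a synthetic single-channel recording
emg = np.random.randn(5000)
print(semg_window_features(emg).shape)
```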
Image recognition is one of the most important applications of information processing. In this paper, a comparison of 3-level transform-based image recognition techniques is carried out using the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT): stationary-stationary-stationary (sss), stationary-stationary-wavelet (ssw), stationary-wavelet-stationary (sws), stationary-wavelet-wavelet (sww), wavelet-stationary-stationary (wss), wavelet-stationary-wavelet (wsw), wavelet-wavelet-stationary (wws), and wavelet-wavelet-wavelet (www). The techniques are compared according to the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), compression ratio (CR), and the coding noise e(n) of each third…
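The abstract names its comparison metrics but not their implementations; as a hedged aside, the snippet below shows the standard definitions of PSNR and RMSE for 8-bit images and a simple compression ratio. The function names and the CR convention (original size divided by compressed size) are this sketch's assumptions.

```python
import numpy as np

def rmse(original, reconstructed):
    """Root mean square error between two images of the same shape."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit pixel values."""
    err = rmse(original, reconstructed)
    return float("inf") if err == 0 else 20.0 * np.log10(peak / err)

def compression_ratio(original_bytes, compressed_bytes):
    """CR = size of the original data / size of the compressed representation."""
    return original_bytes / compressed_bytes
```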
Emotion recognition has important applications in human-computer interaction. Various sources, such as facial expressions and speech, have been considered for interpreting human emotions. The aim of this paper is to develop an emotion recognition system from facial expressions and speech using a hybrid of machine-learning algorithms in order to enhance the overall performance of human-computer communication. For facial emotion recognition, a deep convolutional neural network is used for feature extraction and classification, whereas for speech emotion recognition, the zero-crossing rate, mean, standard deviation, and mel-frequency cepstral coefficient (MFCC) features are extracted. The extracted features are then fed to a random forest classifier. In…
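A minimal sketch of the speech branch follows, assuming librosa for feature extraction and scikit-learn's random forest as the classifier; the MFCC count and classifier settings are illustrative choices, not the paper's.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def speech_features(path, n_mfcc=13):
    """Zero-crossing rate, mean, std, and averaged MFCCs for one utterance."""
    y, sr = librosa.load(path, sr=None)
    zcr = np.mean(librosa.feature.zero_crossing_rate(y))
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc), axis=1)
    return np.concatenate([[zcr, np.mean(y), np.std(y)], mfcc])

# X: one feature row per utterance, y: emotion labels
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```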
In order to understand the effect of the number of piles (N) on the dynamic pile load time-history response in a piled raft system and on the deflection time history of the piled raft under repeated impact loads applied at the center of a piled raft resting on loose sand, laboratory tests were conducted on small-scale models. The experimental results show that the dynamic load increases as the height of drop increases, and that the measured repeated dynamic load time history at the center of the piled raft closely approximates three and a half sine waves with a short duration of about 0.015 s. The maximum peaks of the pile load and deflection time histories occur after the time of the peak repeated impact loads; the dynamic pile load…
Mobile-based human emotion recognition is a very challenging subject; most of the approaches suggested and built in this field utilize various contexts that can be derived from external sensors and the smartphone, but these approaches suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. The proposed system is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone is used to record the user's speech and send it to a server. The smartwatch, fixed on the user's wrist, is used to measure the user's heart rate while the user is speaking and send it, via Bluetooth,…
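How the two modalities are combined is not stated in this excerpt; the sketch below assumes a simple feature-level fusion on the server, appending heart-rate statistics gathered while the user speaks to the speech feature vector before classification into the four target emotions. The fusion rule and the helper names are hypothetical.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "sad", "normal"]

def fuse_features(speech_feats, heart_rate_samples):
    """Append heart-rate statistics (bpm) to the speech feature vector (hypothetical fusion)."""
    hr = np.asarray(heart_rate_samples, dtype=float)
    return np.concatenate([speech_feats, [hr.mean(), hr.std(), hr.max() - hr.min()]])

# Server side (sketch):
# x = fuse_features(speech_features("utterance.wav"), watch_bpm_readings)
# emotion = EMOTIONS[int(clf.predict([x])[0])]
```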
Activity recognition (AR) is a new, interesting, and challenging research area with many applications (e.g., healthcare, security, and event detection). Basically, activity recognition (e.g., identifying a user's physical activity) is most naturally treated as a classification problem. In this paper, a combination of 7 classification methods is employed and experimented on accelerometer data collected via smartphones, and the methods are compared for best performance. The dataset was collected from 59 individuals who performed 6 different activities (i.e., walking, jogging, sitting, standing, going upstairs, and going downstairs). The total number of dataset instances is 5418, with 46 labeled features. The results show that the proposed method of ensemble boost-based classif…
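The excerpt does not list the seven classifiers, so the sketch below only illustrates the general idea of an ensemble combining boosted and non-boosted learners on the 46-feature accelerometer dataset using scikit-learn; the chosen estimators and their parameters are assumptions.

```python
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Majority-vote ensemble over a boosted learner and a few baseline classifiers
ensemble = VotingClassifier(estimators=[
    ("boost", AdaBoostClassifier(n_estimators=100)),
    ("forest", RandomForestClassifier(n_estimators=100)),
    ("tree", DecisionTreeClassifier()),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
], voting="hard")

# X: 5418 instances x 46 labeled features, y: 6 activity labels
# scores = cross_val_score(ensemble, X, y, cv=10)
# print(scores.mean())
```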
Abstract
The objective of image fusion is to merge multiple source images in such a way that the final representation contains a higher amount of useful information than any single input. In this paper, a weighted average fusion method is proposed. It depends on weights that are extracted from the source images using the contourlet transform. The extraction is done by setting the approximation coefficients of the transform to zero and then taking the inverse contourlet transform to obtain the details of the images to be fused. The performance of the proposed algorithm has been verified on several grey-scale and color test images and compared with some existing methods.
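A minimal sketch of the weighted-average idea is given below. Since no contourlet implementation is assumed to be available, PyWavelets stands in for the contourlet transform: the approximation band is zeroed, the inverse transform recovers the details, and the detail magnitudes are normalized into per-pixel weights. The weight normalization rule and the wavelet settings are this sketch's assumptions, not the paper's exact procedure.

```python
import numpy as np
import pywt

def detail_image(img, wavelet="db2", level=2):
    """Zero the approximation coefficients and invert to keep only the details."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])         # drop the approximation band
    detail = pywt.waverec2(coeffs, wavelet)
    return detail[:img.shape[0], :img.shape[1]]  # trim any reconstruction padding

def fuse_images(img_a, img_b, eps=1e-8):
    """Weighted-average fusion with per-pixel weights from detail magnitudes."""
    wa = np.abs(detail_image(img_a))
    wb = np.abs(detail_image(img_b))
    return (wa * img_a + wb * img_b) / (wa + wb + eps)
```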
Face recognition is required in various applications, and major progress has been witnessed in this area. Many face recognition algorithms have been proposed thus far; however, achieving high recognition accuracy and low execution time remains a challenge. In this work, a new scheme for face recognition is presented that uses hybrid orthogonal polynomials to extract features. The embedded image kernel technique is used to decrease the complexity of feature extraction, and a support vector machine is then adopted to classify these features. Moreover, a fast overlapping block processing algorithm for feature extraction is used to reduce the computation time. Extensive evaluation of the proposed method was carried out on two different face ima…
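As a hedged illustration of the overlapping-block processing idea, the sketch below extracts low-order transform coefficients from overlapping blocks and feeds them to a support vector machine; a 2-D DCT stands in for the paper's hybrid orthogonal-polynomial moments, and the block size, stride, and number of retained coefficients are assumptions.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def block_features(img, block=16, stride=8, keep=6):
    """Low-order 2-D DCT coefficients from overlapping blocks (stand-in features)."""
    feats = []
    for r in range(0, img.shape[0] - block + 1, stride):
        for c in range(0, img.shape[1] - block + 1, stride):
            b = img[r:r + block, c:c + block].astype(float)
            d = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
            feats.append(d[:keep, :keep].ravel())
    return np.concatenate(feats)

# X = np.stack([block_features(face) for face in faces]); y = identity labels
# clf = SVC(kernel="rbf").fit(X, y)
```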
Facial emotion recognition finds many real applications in daily life, such as human-robot interaction, e-learning, healthcare, and customer services. The task of facial emotion recognition is not easy due to the difficulty of determining an effective feature set that can accurately recognize the emotion conveyed within the facial expression. Graph mining techniques are exploited in this paper to solve the facial emotion recognition problem. After determining the positions of facial landmarks in the face region, twelve different graphs are constructed using four facial components to serve as a source for the sub-graph mining stage using the gSpan algorithm. In each group, the discriminative set of sub-graphs is selected and fed to a Deep Belief Network (DBN) f…
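The twelve-graph construction is not detailed in this excerpt; purely as an illustration of turning landmark positions into a graph that a sub-graph miner such as gSpan could consume, the sketch below connects landmarks of one facial component whose normalized distance falls under a threshold. networkx and the distance-threshold edge rule are assumptions, not the paper's scheme.

```python
import numpy as np
import networkx as nx

def component_graph(landmarks, threshold=0.15):
    """Graph over one facial component's landmarks: nodes are landmark indices,
    edges join landmarks closer than a normalized distance threshold (illustrative)."""
    pts = np.asarray(landmarks, dtype=float)
    pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-8)  # scale to [0, 1]
    g = nx.Graph()
    g.add_nodes_from(range(len(pts)))
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) < threshold:
                g.add_edge(i, j)
    return g
```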