Traumatic spinal cord injury is a serious neurological disorder. Patients experience a plethora of symptoms that can be attributed to the nerve fiber tracts that are compromised. These include limb weakness, sensory impairment, and truncal instability, as well as a variety of autonomic abnormalities. This article discusses how machine learning classification can be used to characterize the initial impairment and subsequent recovery of electromyography signals in a non-human primate model of traumatic spinal cord injury. The ultimate objective is to identify potential treatments for traumatic spinal cord injury. This work focuses specifically on finding a suitable classifier that differentiates between two distinct experimental stages (pre- and post-lesion) using electromyography signals. Eight time-domain features were extracted from the collected electromyography data. To overcome the imbalanced-dataset issue, the synthetic minority oversampling technique (SMOTE) was applied. Different machine learning classification techniques were applied, including multilayer perceptron, support vector machine, K-nearest neighbors, and radial basis function network, and their performances were compared. A confusion matrix and five statistical metrics (sensitivity, specificity, precision, accuracy, and F-measure) were used to evaluate the performance of the generated classifiers. The results showed that the best classifier for both the left- and right-side data is the multilayer perceptron, with a total F-measure of 79.5% for the left side and 86.0% for the right side. This work will help to build a reliable classifier that can differentiate between these two phases using the extracted time-domain electromyography features.
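A minimal sketch of the pipeline described above, assuming the eight time-domain EMG features have already been extracted into a feature matrix X with pre-/post-lesion labels y. The data here are random placeholders, not the study's dataset, and the radial basis function network is omitted because scikit-learn has no built-in equivalent (the SVM below uses an RBF kernel instead).

```python
# Sketch: SMOTE balancing followed by a comparison of classifiers from the abstract.
# Requires scikit-learn and imbalanced-learn.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, classification_report

# Placeholder data: rows = EMG windows, columns = 8 time-domain features,
# labels: 0 = pre-lesion, 1 = post-lesion (deliberately imbalanced).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = np.array([0] * 240 + [1] * 60)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Oversample only the training set so the test set stays untouched.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

classifiers = {
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    clf.fit(X_bal, y_bal)
    y_pred = clf.predict(X_test)
    print(name)
    print(confusion_matrix(y_test, y_pred))
    # precision, recall (sensitivity) and F-measure per class
    print(classification_report(y_test, y_pred, zero_division=0))
```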
The aim of this research is to estimate a hidden population: the number of drug users in Baghdad was calculated for the male age group of 15-60 years, based on Bayesian models. These models are used to treat some of the bias in the Killworth method, which is adopted in many countries of the world. Four models were used: random degree, barrier effects, and transmission bias. The first model, the random degree model, is an extension of the Killworth model that adds random effects to capture variance and uncertainty in personal network size; when it is expanded by adding the fact that respondents have different tendencies, the mixture of non-random and random variables produces …
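For context, a minimal sketch of the classical (non-Bayesian) Killworth network scale-up estimator that the Bayesian models above extend. All survey numbers and the total population size below are invented placeholders for illustration only.

```python
# Sketch of the basic Killworth scale-up estimator.
# All numbers are illustrative placeholders, not survey data.
import numpy as np

N_total = 4_000_000          # assumed size of the male 15-60 population (placeholder)

# m[i, k]: how many members of known subpopulation k respondent i reports knowing
known_sizes = np.array([50_000, 120_000, 30_000])   # sizes of the known subpopulations
m = np.array([[2, 5, 1],
              [0, 3, 0],
              [1, 8, 2]])

# y[i]: how many drug users respondent i reports knowing
y = np.array([1, 0, 2])

# Step 1: estimate each respondent's personal network size (degree)
degrees = N_total * m.sum(axis=1) / known_sizes.sum()

# Step 2: scale up the reports about the hidden population
N_hidden = N_total * y.sum() / degrees.sum()
print(f"Estimated hidden population: {N_hidden:,.0f}")
```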
Advertisement is one of the media's most efficient persuasive communicative activities, designed to market different ideas and products with the aim of influencing consumers' perception of goods and services. The present study sheds light on the most prominent rhetorical devices that constitute the persuasive structure of the Hebrew advertisements published in various media outlets. The study is conducted by analyzing the linguistic structure of the advertising texts, according to the analytic and descriptive approach, in order to identify the characteristics and functions of the oratorical devices used in the advertising industry. The research elucidates that most of the advertisements are written in slang language, and this is due to the …
Background: Errors of horizontal condylar inclinations and Bennett angles largely affect the articulation of teeth and the pathways of cusps. The aim of this study was to estimate and compare the horizontal condylar (protrusive) angles and Bennett angles of full mouth rehabilitation patients using two different articulator systems. Materials and Methods: Protrusive angles and Bennett angles of 50 adult male and female Iraqi TMD-free full mouth rehabilitation patients were estimated using two different articulator systems. Arbitrary hinge axis location followed by protrusive angle and Bennett angle estimation was done with a semi-adjustable articulator system. A fully adjustable articulator system was utilized to locate the …
In this research, we present a multi-assignment model with a fuzzy goal function. An integer programming model was built after removing the fuzziness from the objective-function data and converting it to real (crisp) data using the Pascal triangular graded mean, which applies Pascal's triangle weights to the center of the triangular fuzzy number. The data were processed to remove the fuzziness using Excel 2007, and the multi-assignment model was solved with the LINDO program to reach the optimal solution, which represents the minimum possible time to accomplish a number of tasks by a number of employees on a specific amount of the Internet. The research also included some of the …
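A brief sketch of the two steps described above, assuming the fuzzy completion times are triangular fuzzy numbers (a, b, c): defuzzify each one with the Pascal triangular graded mean (a + 2b + c)/4, then solve the resulting crisp assignment problem. SciPy stands in for LINDO here, and the fuzzy times are made up for illustration.

```python
# Sketch: Pascal triangular graded mean defuzzification followed by a crisp
# assignment problem. The fuzzy times below are illustrative placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

def pascal_graded_mean(a, b, c):
    """Defuzzify a triangular fuzzy number (a, b, c) with Pascal weights 1, 2, 1."""
    return (a + 2 * b + c) / 4

# Fuzzy completion times (a, b, c) for 3 employees x 3 tasks
fuzzy_times = [
    [(2, 4, 6), (3, 5, 9), (1, 2, 3)],
    [(4, 6, 8), (2, 3, 4), (5, 7, 9)],
    [(1, 3, 5), (6, 8, 10), (2, 4, 8)],
]

# Convert to a crisp cost matrix
cost = np.array([[pascal_graded_mean(*t) for t in row] for row in fuzzy_times])

# Solve the assignment problem (minimise total time)
rows, cols = linear_sum_assignment(cost)
print("Assignment (employee -> task):", list(zip(rows, cols)))
print("Minimum total time:", cost[rows, cols].sum())
```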
Data encryption translates data into another shape or symbol so that only people with access to the secret key or a password can read it. Data which are encrypted are generally referred to as cipher text, while data which are unencrypted are known as plain text. Entropy can be used as a measure that gives the number of bits needed to code the data of an image. As the pixel values within an image are distributed across more gray levels, the entropy increases. The aim of this research is to compare the CAST-128 with proposed adaptive key and RSA encryption methods for video frames, to determine the more accurate method with the highest entropy. The first method is achieved by applying the "CAST-128" and …
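A small sketch of the entropy measure used in the comparison: the Shannon entropy of an 8-bit gray-level histogram, computed here on placeholder arrays rather than actual encrypted video frames.

```python
# Sketch: Shannon entropy (bits per pixel) of an 8-bit gray-level image.
import numpy as np

def image_entropy(img):
    """Entropy of the gray-level distribution; maximum is 8 bits for 256 levels."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))

# Placeholder frames: a flat low-entropy frame vs. a noise-like "encrypted" frame
flat_frame = np.full((64, 64), 128, dtype=np.uint8)
noisy_frame = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)

print("flat frame entropy:      ", image_entropy(flat_frame))    # 0 bits
print("noise-like frame entropy:", image_entropy(noisy_frame))   # close to 8 bits
```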
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages. The first stage is for preprocessing the data and the second stage is for feature extraction, which is based on the Discrete Wavelet Transform (DWT). The third stage is for classification and is based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved a high classification accuracy rate, with 99.1% for the MADBase database and 99.9% for the MNIST database.
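A minimal sketch of the feature-extraction stage, assuming PyWavelets for the 2-D DWT. The digit image is a random placeholder, and the spiking-neural-network classifier itself is outside the scope of this snippet.

```python
# Sketch: 2-D discrete wavelet transform as a feature extractor for a digit image.
# Requires PyWavelets (pip install PyWavelets).
import numpy as np
import pywt

# Placeholder 28x28 grayscale digit image (e.g. one MNIST/MADBase sample)
img = np.random.default_rng(0).random((28, 28))

# Single-level 2-D DWT: approximation plus horizontal/vertical/diagonal details
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

# Use the approximation coefficients as a compact feature vector
features = cA.ravel()
print("image shape:", img.shape, "-> feature vector length:", features.size)
```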
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method’s performance is tested and compared with two other methods using two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods. The proposed method’s best accuracy is 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it …
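A sketch of the evaluation protocol described above (80/20 split with 5-fold cross-validation on the training portion), using an SVM on placeholder features that stand in for the reduced moments; the orthogonal-polynomial and sparse-filter stages are specific to the paper and are not reproduced here.

```python
# Sketch: 80/20 train/test split, 5-fold cross-validation, and an SVM classifier,
# applied to placeholder feature vectors standing in for the reduced moments.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))          # placeholder "reduced moment" features
y = rng.integers(0, 2, size=200)        # two EEG classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = SVC(kernel="rbf")
cv_scores = cross_val_score(clf, X_train, y_train, cv=5)
print("5-fold CV accuracy:", cv_scores.mean())

clf.fit(X_train, y_train)
print("held-out test accuracy:", clf.score(X_test, y_test))
```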
Deep learning convolutional neural networks have been widely used to recognize or classify voice. Various techniques have been used together with convolutional neural networks to prepare voice data before the training process when developing a classification model. However, not every model can produce good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such type, and accurate pronunciation is required in learning Qur’an reading. Thus, processing the pronunciation and training the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to …
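A small sketch of the padding idea mentioned above: variable-length recordings (or their feature matrices) are zero-padded to a common length so they can be batched into a convolutional network. The signal lengths and shapes here are arbitrary placeholders, not the paper's data.

```python
# Sketch: zero-padding variable-length 1-D audio signals to a fixed length
# so they can be stacked into one batch for a convolutional neural network.
import numpy as np

rng = np.random.default_rng(0)
recordings = [rng.normal(size=n) for n in (4500, 6200, 5100)]   # placeholder signals

max_len = max(len(r) for r in recordings)
padded = np.stack([
    np.pad(r, (0, max_len - len(r)), mode="constant") for r in recordings
])
print("batch shape:", padded.shape)      # (3, 6200), ready for a Conv1D input
```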
Text categorization refers to the process of grouping text or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to the English language, only a few studies have been done to categorize and classify the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because the Arabic language is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research in the last five years based on the dataset, year, algorithms, and the accuracy …
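As a generic illustration of the three-phase pipeline (preprocessing, feature extraction, classification) followed by many of the surveyed works, a minimal TF-IDF plus linear-classifier sketch on a tiny made-up Arabic corpus; real studies would use full datasets and Arabic-specific preprocessing such as normalization and stemming.

```python
# Sketch: TF-IDF feature extraction and a linear classifier, the generic
# pipeline behind many Arabic text categorization studies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny placeholder corpus: two categories, e.g. sports vs. economy
docs = ["فاز الفريق في المباراة", "ارتفعت أسعار النفط اليوم",
        "سجل اللاعب هدفين", "انخفض سعر صرف الدولار"]
labels = ["sports", "economy", "sports", "economy"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(docs, labels)
print(model.predict(["تعادل الفريقان في المباراة"]))
```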