Digital audio systems must transmit large volumes of audio information through the most common communication channels, which in turn raises challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It depends on a combined transform coding scheme consisting of: i) a biorthogonal (tap 9/7) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) a DCT applied to the produced sub-bands to de-correlate the signal; iii) progressive hierarchical quantization of the combined-transform output, followed by traditional run-length encoding (RLE); and iv) LZW coding to generate the output bitstream. Peak signal-to-noise ratio (PSNR) and compression ratio (CR) were used to conduct a comparative analysis of the whole system's performance. Many audio test samples, varying in size and features, were utilized to test the performance behavior. The simulation results demonstrate the efficiency of these combined transforms when LZW is used within the data compression stage. The compression results are encouraging and show a remarkable reduction in audio file size with good fidelity.
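The two entropy-coding stages named in the abstract (RLE, then LZW) can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation; the function names and the pair-based RLE format are my own choices, and a real codec would operate on the quantized transform coefficients.

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    runs = []
    for x in data:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([x, 1])       # start a new run
    return [(v, c) for v, c in runs]

def lzw_encode(data):
    """Classic LZW over a byte string; returns a list of integer codes."""
    table = {bytes([i]): i for i in range(256)}  # initial single-byte dictionary
    next_code = 256
    w = b""
    codes = []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc                    # keep growing the current phrase
        else:
            codes.append(table[w])    # emit code for the longest known phrase
            table[wc] = next_code     # learn the new phrase
            next_code += 1
            w = bytes([b])
    if w:
        codes.append(table[w])
    return codes
```

Quantized sub-band coefficients tend to contain long zero runs, which is why RLE before a dictionary coder such as LZW is a common pairing.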
Recording an electromyogram (EMG) signal is essential for diagnostic procedures such as muscle health assessment and motor neuron control. EMG signals have been used as a control source for powered prosthetics, helping people accomplish their activities of daily living (ADLs). This work studies different types of hand grips and their relationship with EMG activity. Five subjects carried out four functional movements (fine pinch, tripod grip, grip with the middle finger and thumb, and power grip). A hand dynamometer was used to record the EMG activity from three muscles, namely the Flexor Carpi Radialis (FCR), Flexor Digitorum Superficialis (FDS), and Abductor Pollicis Brevis (ABP), with different
This paper proposes a new approach to model and analyze erect posture, based on a spherical inverted pendulum used to mimic body posture. The pendulum oscillates in two directions, [Formula: see text] and [Formula: see text], from which the mathematical model was derived and two torque components in the oscillation directions were introduced. They are estimated using stabilometric data acquired by a foot pressure mapping system. The model was quantitatively investigated using data from 19 participants, who were first classified into three groups according to the foot arch index. Stabilometric data were then collected and fed into the model to estimate the torque components. The components were statistically proce
After harvesting, the apricot plant was washed, dried, and ground into a fine powder that was used in water treatment. An alcoholic extract of the apricot plant was prepared using ethanol and then analysed using GC-MS, Fourier transform infrared spectroscopy, and ultraviolet-visible spectroscopy to identify the active components. Zinc nanoparticles were synthesized using the alcoholic extract and characterized by FTIR, UV-Vis, SEM, EDX, and TEM. Using a continuous processing procedure, the zinc nanoparticles, together with the apricot extract and powder, were employed to clean polluted water. Firstly, 2 g of zinc nanoparticles were used with 20 ml of polluted water, and the results were 44% for Tetra and 32% for Levo; after
This article aims to estimate the stock return rate of the private banking sector, represented by two banks, by adopting a partial linear model based on Arbitrage Pricing Theory (APT), using wavelet and kernel smoothers. The results proved that the wavelet method is the best. The results also showed an adverse effect of the market portfolio and the inflation rate on the rate of return, and a direct impact of the money supply.
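One of the two smoothers mentioned, the kernel smoother, can be illustrated with a minimal Nadaraya-Watson estimator. This is a generic sketch under the assumption of a Gaussian kernel, not the article's actual estimator; the function name and bandwidth choice are mine.

```python
import math

def kernel_smooth(xs, ys, x0, bandwidth):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel.

    Each observation (x_i, y_i) is weighted by how close x_i is to x0;
    the estimate is the weighted average of the y_i.
    """
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, ys)) / total
```

In a partial linear model, a smoother like this estimates the nonparametric component while the remaining regressors (here, the APT factors) enter linearly.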
Due to the large population of motorway users in Iraq, various approaches have been adopted to manage queues, such as the implementation of traffic lights and the prevention of illegal parking, among others. However, defaulters are recorded daily, hence the need to develop a means of identifying these defaulters and bringing them to book. This article discusses the development of an approach for recognizing Iraqi license plates so that defaulters of queue management systems can be identified. Multiple agencies worldwide have quickly and widely adopted vehicle license plate recognition technology to expand their investigative and security capabilities. License plate recognition helps detect a vehicle's information automatically ra
Data mining plays a major role in healthcare for discovering hidden relationships in big datasets, especially in breast cancer diagnostics; breast cancer is a leading cause of death worldwide. In this paper, two algorithms, decision tree and K-Nearest Neighbour, are applied to diagnose breast cancer grade in order to reduce its risk to patients. In the decision tree with feature selection, the Gini index gives an accuracy of 87.83%, while entropy-based feature selection gives an accuracy of 86.77%. In both cases, Age appeared as the most effective parameter, particularly when Age < 49.5, whereas Ki67 appeared as the second most effective parameter. Furthermore, K-Nearest Neighbour is based on the minimu
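The Gini criterion used by the decision tree can be sketched as follows. This is a generic illustration of Gini impurity and a binary split of the kind reported in the abstract (e.g. Age < 49.5), not the paper's code; the data, threshold, and function names are hypothetical.

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum_i(p_i^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_gini(feature, labels, threshold):
    """Weighted Gini impurity after a binary split on feature < threshold."""
    left = [y for x, y in zip(feature, labels) if x < threshold]
    right = [y for x, y in zip(feature, labels) if x >= threshold]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)
```

A tree learner picks the threshold minimizing the weighted impurity; a split that separates the classes perfectly scores 0.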
In digital images, protecting sensitive visual information against unauthorized access is a critical issue, and robust encryption methods are the best solution for preserving such information. This paper introduces a model designed to enhance the performance of the Tiny Encryption Algorithm (TEA) in encrypting images. Two approaches are suggested for the image cipher process as a preprocessing step before applying TEA. This step aims to de-correlate adjacent pixel values and weaken the relationships between them in preparation for the encryption process. The first approach suggests an Affine transformation for image encryption at two layers, utilizing a different key set for each layer. Th
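The affine preprocessing idea can be sketched for a single layer over 8-bit pixel values. This is a minimal illustration, not the paper's two-layer scheme; the key values (a, b) are hypothetical, and the only requirement is that a be coprime with 256 so the transform is invertible.

```python
def affine_encrypt(pixels, a, b, m=256):
    """Map each pixel p to (a*p + b) mod m; requires gcd(a, m) == 1."""
    return [(a * p + b) % m for p in pixels]

def affine_decrypt(pixels, a, b, m=256):
    """Invert the affine map using the modular inverse of a (Python 3.8+)."""
    a_inv = pow(a, -1, m)
    return [(a_inv * (p - b)) % m for p in pixels]
```

Because a single affine map preserves some structure, such a step is only a pre-whitening layer; the real cryptographic strength still comes from TEA applied afterwards.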
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images, since it is challenging to abstract the visual features of images using only the supporting textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shapes. In this paper, an effective CBIC technique is presented, which uses texture and statistical features of the images. The statistical features, or color moments (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected in a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering.
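The five color moments listed above can be computed per channel as follows. This is a generic sketch, assuming one flat list of intensities per color channel; the function name and output layout are mine, not the paper's.

```python
import math

def color_moments(channel):
    """Mean, variance, std, skewness, and kurtosis of one color channel."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((x - mean) ** 2 for x in channel) / n
    std = math.sqrt(var)
    # Standardized third and fourth moments; 0 for a constant channel.
    skew = (sum((x - mean) ** 3 for x in channel) / n) / std ** 3 if std else 0.0
    kurt = (sum((x - mean) ** 4 for x in channel) / n) / std ** 4 if std else 0.0
    return {"mean": mean, "variance": var, "std": std,
            "skewness": skew, "kurtosis": kurt}
```

Concatenating these moments for each channel yields the one-dimensional feature array the GA then clusters.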
With the escalation of cybercriminal activities, the demand for forensic investigations into these crimes has grown significantly. However, the concept of systematic pre-preparation for potential forensic examinations during the software design phase, known as forensic readiness, has only recently gained attention. Against the backdrop of surging urban crime rates, this study aims to conduct a rigorous and precise analysis and forecast of crime rates in Los Angeles, employing advanced Artificial Intelligence (AI) technologies. This research amalgamates diverse datasets encompassing crime history, various socio-economic indicators, and geographical locations to attain a comprehensive understanding of how crimes manifest within the city. Lev
Numeral recognition is considered an essential preliminary step for optical character recognition, document understanding, and other tasks. Although several handwritten numeral recognition algorithms have been proposed so far, achieving adequate recognition accuracy and execution time remains challenging. In particular, recognition accuracy depends on the feature extraction mechanism. As such, a fast and robust numeral recognition method is essential, one that meets the desired accuracy by extracting features efficiently while maintaining fast implementation time. Furthermore, most existing studies to date have evaluated their methods only in clean environments, thus limiting understanding of their potential a