Electrocardiogram (ECG) is an important physiological signal for cardiac disease diagnosis. Modern ECG monitoring devices generate vast amounts of data that require huge storage capacity. To reduce storage costs and to make ECG signals suitable for transmission over common communication channels, the ECG data volume must be reduced, so an effective data compression method is required. This paper presents an efficient technique for the compression of ECG signals in which different transforms are combined. First, the 1-D ECG data were segmented and aligned into a 2-D data array; a 2-D mixed transform was then applied to compress the ECG data in this 2-D form. The compression algorithms were implemented and tested using multiwavelet, wavelet, and slantlet transforms to form the proposed method based on mixed transforms. A vector quantization technique was then employed to encode the mixed-transform coefficients. Selected records from the MIT/BIH arrhythmia database were tested, and the performance of the proposed methods was analyzed and evaluated using MATLAB. Simulation results showed that the proposed methods gave a high compression ratio (CR) for ECG signals compared with other available methods; for example, compressing one record (record 100) yielded a CR of 24.4 with a percent root mean square difference (PRD) of 2.56%.
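As a rough illustration of the pipeline described above, the sketch below reshapes a 1-D ECG signal into a 2-D array, applies a 2-D wavelet transform, discards small coefficients, and reports CR and PRD. It is a minimal sketch only: the wavelet choice, segment length, threshold, and synthetic signal are illustrative assumptions, and simple coefficient thresholding stands in for the paper's mixed multiwavelet/wavelet/slantlet transform and vector quantization stage.

```python
# Hedged sketch: 1-D -> 2-D segmentation, 2-D wavelet compression, CR/PRD metrics.
# The wavelet ('db4'), segment length, and keep ratio are illustrative assumptions;
# the paper's mixed transforms and vector quantizer are not reproduced here.
import numpy as np
import pywt

def compress_ecg(x, seg_len=256, wavelet="db4", keep_ratio=0.05):
    # Segment and align the 1-D signal into a 2-D array (one segment per row).
    n_seg = len(x) // seg_len
    ecg_2d = x[: n_seg * seg_len].reshape(n_seg, seg_len)

    # 2-D wavelet decomposition of the segment matrix.
    coeffs = pywt.wavedec2(ecg_2d, wavelet, level=2)
    arr, slices = pywt.coeffs_to_array(coeffs)

    # Keep only the largest-magnitude coefficients (a stand-in for quantization/VQ).
    thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)
    arr_c = np.where(np.abs(arr) >= thresh, arr, 0.0)

    # Reconstruct and compute compression/quality figures.
    rec = pywt.waverec2(pywt.array_to_coeffs(arr_c, slices, output_format="wavedec2"), wavelet)
    rec = rec[:n_seg, :seg_len].reshape(-1)
    orig = ecg_2d.reshape(-1)

    prd = 100.0 * np.sqrt(np.sum((orig - rec) ** 2) / np.sum(orig ** 2))
    cr = arr.size / max(np.count_nonzero(arr_c), 1)  # crude CR: total vs. retained coefficients
    return rec, cr, prd

# Example with a synthetic signal (stands in for an MIT/BIH record such as record 100).
x = np.sin(2 * np.pi * 1.2 * np.arange(0, 30, 1 / 360)) + 0.05 * np.random.randn(10800)
_, cr, prd = compress_ecg(x)
print(f"CR ~ {cr:.1f}, PRD ~ {prd:.2f}%")
```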
The problems of modeling the signal and dispersion properties of a second-order recursive section in the integer parameter space are considered. The formulation and solution of the section synthesis problem under selective and dispersive criteria, using methods of integer nonlinear mathematical programming, are given. It is shown that both positive and negative frequency dispersion of a signal can be obtained in a recursive section, and that dispersion distortions in the system can be minimized.
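For context, a second-order recursive (biquad) section with integer coefficients can be examined for its selective (magnitude) and dispersive (group-delay) characteristics. The sketch below uses arbitrarily chosen integer coefficients rather than values obtained by the integer-programming synthesis described above.

```python
# Hedged sketch: magnitude response and group delay (frequency dispersion) of a
# second-order recursive section. The integer coefficients are arbitrary
# illustrative values, not the result of the integer-programming synthesis.
import numpy as np
from scipy.signal import freqz, group_delay

# H(z) = (b0 + b1 z^-1 + b2 z^-2) / (a0 + a1 z^-1 + a2 z^-2), integer parameter space
b = [4, -6, 4]
a = [8, -5, 2]   # poles have magnitude 0.5, so the section is stable

w, h = freqz(b, a, worN=512)        # selective (magnitude) characteristic
w, gd = group_delay((b, a), w=512)  # dispersive (group-delay) characteristic

print("peak gain (dB):", 20 * np.log10(np.max(np.abs(h))))
print("group-delay spread (samples):", gd.max() - gd.min())
```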
In the 1960s, attention to structure gave way to attention to the sign: if structure had been the pampered lady of research and study, the sign came to occupy that privileged position as well. Yet the relationship between structure and sign was not one of rupture but of integration; the themes of structural analysis are intellectual themes that contemporary research cannot bypass, especially since semiotics itself emerged from linguistics.
We have also tried to distinguish between text and discourse, a daunting task: whenever the difference between them seems clear, we return to wondering whether the text is the same as the discourse.
This work implements an electroencephalogram (EEG) signal classifier. The method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments, a Sparse Filter (SF) reduces the number of moments to increase classification accuracy, and a Support Vector Machine (SVM) classifies the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets, each divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods, with best accuracies of 95.6% and 99.5% on the two datasets, respectively.
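A minimal sketch of a comparable two-class pipeline is given below, assuming synthetic data in place of the real EEG datasets and using a generic polynomial feature map plus univariate selection as stand-ins for the paper's OP moments and sparse filter; only the SVM stage, the 80/20 split, and the 5-fold cross-validation mirror the description above.

```python
# Hedged sketch: feature transform -> feature reduction -> SVM, with an 80/20 split
# and 5-fold cross-validation. Synthetic data and generic transforms stand in for
# the paper's orthogonal-polynomial moments and sparse filter.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 16))            # 400 pseudo-EEG segments, 16 samples each
y = (X[:, :4].sum(axis=1) > 0).astype(int)    # synthetic two-class labels

pipe = make_pipeline(
    PolynomialFeatures(degree=2),   # stand-in for orthogonal-polynomial moments
    SelectKBest(f_classif, k=20),   # stand-in for the sparse-filter reduction
    StandardScaler(),
    SVC(kernel="rbf"),
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
cv_acc = cross_val_score(pipe, X_tr, y_tr, cv=5)    # 5-fold cross-validation
test_acc = pipe.fit(X_tr, y_tr).score(X_te, y_te)   # held-out 20% accuracy
print(f"CV accuracy: {cv_acc.mean():.3f}, test accuracy: {test_acc:.3f}")
```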
During the last two decades, audio compression has been the topic of much research owing to its impact on storage capacity and transmission requirements. The rapid development of the computer industry has increased the demand for high-quality audio data and, accordingly, for better audio compression technologies; lossy and lossless are the two categories of compression. This paper reviews lossy audio compression methods and summarizes the importance and uses of each method.
We explore the transform coefficients of fractal coding and propose a new method to improve the compression capabilities of these schemes. In most standard encoder/decoder systems, quantization and de-quantization are handled as a separate step; here we introduce a way to manage them simultaneously with the transform. Additional compression is achieved by this method while preserving high image quality, as shown later.
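As an illustration of folding quantization into the encoding step rather than treating it as a separate pass, the sketch below fits the affine scale and offset of a fractal range-to-domain block match directly over a quantized set of values; the block sizes, quantizer grids, and least-squares fit are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch: choosing the fractal affine parameters (scale s, offset o) for a
# range/domain block match with quantization applied inside the search, instead of
# quantizing the best continuous parameters afterwards. Block data is synthetic and
# the quantizer grids are illustrative assumptions.
import numpy as np

S_LEVELS = np.linspace(-1.0, 1.0, 32)   # quantized scale codebook
O_STEP = 4.0                            # offset quantization step

def encode_block_joint(range_blk, domain_blk):
    """Pick (s, o) minimizing ||range - (s*domain + o)|| over the quantized grid."""
    d = domain_blk.ravel().astype(float)
    r = range_blk.ravel().astype(float)
    best = None
    for s in S_LEVELS:                               # quantization inside the search
        o = np.round((r - s * d).mean() / O_STEP) * O_STEP
        err = np.sum((r - (s * d + o)) ** 2)
        if best is None or err < best[0]:
            best = (err, s, o)
    return best[1], best[2], best[0]

# Example: one 8x8 range block approximated from a domain block.
rng = np.random.default_rng(1)
domain = rng.integers(0, 256, (8, 8))
rang = 0.6 * domain + 17 + rng.normal(0, 2, (8, 8))
s, o, err = encode_block_joint(rang, domain)
print(f"quantized scale={s:.3f}, offset={o:.1f}, squared error={err:.1f}")
```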
Iris research focuses on developing techniques for identifying and locating the relevant biometric features with accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method consists of a number of steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and parameterizing the iris location, resulting in an accurate segmentation of the iris from the original eye image.
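A minimal sketch of the bit-plane step is given below, assuming an 8-bit grayscale image and a simple choice of how many significant planes to keep; the thresholds and the subsequent iris-location parameterization are not reproduced.

```python
# Hedged sketch: decompose an 8-bit grayscale eye image into bit planes and keep
# the most significant ones, i.e. the first steps of the segmentation described above.
# The synthetic image and the number of planes kept are illustrative assumptions.
import numpy as np

def bit_planes(img_u8):
    """Return an array of shape (8, H, W) where plane k holds bit k of each pixel."""
    return np.stack([(img_u8 >> k) & 1 for k in range(8)])

def significant_planes(img_u8, n_planes=3):
    """Keep the n most significant bit planes and rebuild a coarse image from them."""
    planes = bit_planes(img_u8)
    kept = planes[8 - n_planes:]                     # planes 7, 6, ... are most significant
    weights = 2 ** np.arange(8 - n_planes, 8)[:, None, None]
    return kept, (kept * weights).sum(axis=0).astype(np.uint8)

# Example on a synthetic "eye" image (dark circular pupil/iris on a brighter background).
yy, xx = np.mgrid[0:128, 0:128]
eye = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2, 60, 200).astype(np.uint8)
kept, coarse = significant_planes(eye, n_planes=3)
print(kept.shape, coarse.dtype, int(coarse.min()), int(coarse.max()))
```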
This study was designed to evaluate the role of single-session autologous facial fat grafting in correcting facial asymmetries after mixing the graft with platelet-rich fibrin (PRF) and injecting it into the richly vascular facial muscular plane.
Fifteen patients (12 females and 3 males), aged 18 to 40 years, were included in this study and followed up for 6 months; all patients were treated at the Al-Shaheed Ghazi Al-Hariri Hospital for Specialized Surgeries (Medical City, Baghdad, Iraq).