The electrocardiogram (ECG) is an important physiological signal for cardiac disease diagnosis. Modern ECG monitoring devices generate vast amounts of data that require substantial storage capacity. To reduce storage costs and make ECG signals suitable for transmission over common communication channels, the data volume must be reduced, so an effective compression method is required. This paper presents an efficient technique for ECG signal compression in which different transforms are combined. First, the 1-D ECG data were segmented and aligned into a 2-D data array; a 2-D mixed transform was then applied to compress the data in this form. The compression algorithms were implemented and tested using the multiwavelet, wavelet, and slantlet transforms, which together form the proposed mixed-transform method. Vector quantization was then employed to compress the mixed-transform coefficients. Selected records from the MIT/BIH arrhythmia database were tested, and the performance of the proposed methods was analyzed and evaluated using the MATLAB package. Simulation results showed that the proposed methods give a high compression ratio (CR) for ECG signals compared with other available methods; for example, compressing one record (record 100) yielded a CR of 24.4 with a percent root-mean-square difference (PRD) of 2.56%.
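The two fidelity metrics quoted above have standard definitions, and a minimal sketch of both is shown below; the signal here is synthetic, standing in for a real MIT/BIH record, and the bit counts in the example call are purely illustrative:

```python
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """CR = size of the original data / size of the compressed data."""
    return original_bits / compressed_bits

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percent root-mean-square difference between the original
    and the reconstructed signal."""
    return 100.0 * np.sqrt(
        np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)
    )

# Toy example: a synthetic "ECG-like" signal and a slightly distorted copy.
x = np.sin(np.linspace(0, 8 * np.pi, 1000))
x_hat = x + 0.01 * np.random.default_rng(0).standard_normal(x.shape)
print("CR :", compression_ratio(1000 * 11, 4510))  # assumed 11-bit samples
print("PRD:", prd(x, x_hat))
```

A lower PRD means a more faithful reconstruction, so compression methods are usually judged by the CR they reach at a given PRD.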
Neural cryptography deals with the problem of key exchange between two neural networks using the mutual-learning concept. The two networks exchange their outputs (in bits), and the key shared between the two communicating parties is eventually represented in the final learned weights, at which point the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process.
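Mutual learning of this kind is commonly realized with tree parity machines. The sketch below is a minimal illustration, with small, hypothetical parameters (K hidden units, N inputs per unit, weight bound L) not taken from this text: both networks see the same random input, exchange output bits, and update only the hidden units that agree with the common output, until their weight matrices coincide.

```python
import numpy as np

K, N, L = 3, 4, 3  # hypothetical tree-parity-machine geometry
rng = np.random.default_rng(42)

def tpm_output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1            # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    for i in range(K):
        if sigma[i] == tau:           # only agreeing hidden units learn
            w[i] = np.clip(w[i] + tau * x[i], -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))

steps, max_steps = 0, 200_000
while steps < max_steps and not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))   # public random input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                  # outputs exchanged; update on agreement
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    steps += 1

print("synchronized after", steps, "exchanged outputs")
```

Once synchronized, the identical weight matrices serve as the shared key; the attack mentioned above corresponds to a third network trying to reach the same synchronized state by eavesdropping on the exchanged bits.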
The penalized least squares method is a popular method for dealing with high-dimensional data, where the number of explanatory variables is larger than the sample size. Its advantages are high prediction accuracy and the ability to perform estimation and variable selection at once. The penalized least squares method gives a sparse model, that is, a model with few variables, which can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain a robust penalized least squares method and a robust penalized estimator.
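As an illustration of this idea, the sketch below contrasts the ordinary L1-penalized (lasso) estimator with a robust variant that swaps the squared loss for the Huber loss; the proximal-gradient (ISTA) solver, the synthetic data, and all parameter values are hypothetical and are not the estimators studied here.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def penalized_ls(X, y, lam, loss="squared", delta=1.0, iters=2000):
    """Proximal-gradient (ISTA) solver for L1-penalized regression.

    loss="squared" -> ordinary lasso (sensitive to outliers)
    loss="huber"   -> robust variant: Huber loss + L1 penalty
    """
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
    w = np.zeros(p)
    for _ in range(iters):
        r = X @ w - y
        psi = r if loss == "squared" else np.clip(r, -delta, delta)
        w = soft_threshold(w - step * (X.T @ psi) / n, step * lam)
    return w

rng = np.random.default_rng(0)
n, p = 60, 100                      # p > n: more variables than observations
X = rng.standard_normal((n, p))
w_true = np.zeros(p); w_true[:3] = [3.0, -2.0, 1.5]
y = X @ w_true + 0.1 * rng.standard_normal(n)
y[0] += 25.0                        # one gross outlier

w_lasso = penalized_ls(X, y, lam=0.2)
w_huber = penalized_ls(X, y, lam=0.2, loss="huber")
print("lasso error:", np.linalg.norm(w_lasso - w_true))
print("huber error:", np.linalg.norm(w_huber - w_true))
```

The Huber loss caps the influence of the outlying residual, while the L1 penalty still drives most coefficients to exactly zero, giving a sparse, interpretable model.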
As we live in the era of the fourth technological revolution, it has become necessary to use artificial intelligence to generate electric power from sustainable solar energy, especially in Iraq, given the crises it has gone through and the severe shortage of electric power it suffers from because of the wars and calamities it has endured. The impact of that period is still evident in all aspects of the daily life of Iraqis: the remnants of wars, siege, and terrorism, the wrong policies of earlier and later ruling governments, and regional interventions and their consequences, such as the destruction of electric power stations, together with population growth, which must be matched by an increase in electric power stations.
Feature selection (FS) constitutes a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude from predictive modeling. It is a crucial task that helps machine learning classifiers reduce error rates, computation time, and overfitting, and improve classification accuracy. It has demonstrated its efficacy in a myriad of domains, ranging from text classification (TC) and text mining to image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning TC; therefore, a comprehensive overview was systematically conducted.
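A minimal sketch of metaheuristic FS, assuming a plain genetic algorithm over binary feature masks with a simple correlation-based fitness; both the algorithm and the toy data are illustrative choices, not the methods surveyed in the review:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 20 features, only the first 3 actually predict the label.
n, p = 200, 20
X = rng.standard_normal((n, p))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask):
    """Filter-style score: mean |correlation| of the selected features
    with y, lightly penalized by subset size (fewer features preferred)."""
    if mask.sum() == 0:
        return -1.0
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in np.where(mask)[0]])
    return corr.mean() - 0.01 * mask.sum()

# Plain genetic algorithm over binary feature masks.
pop = rng.random((30, p)) < 0.5
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]    # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, p)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(p) < 0.05                 # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.where(best)[0])
```

Wrapper-style variants replace the correlation score with the cross-validated accuracy of an actual classifier, which is more expensive but usually more accurate.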
In this paper, an adaptive integral sliding mode control (SMC) is employed to control the speed of a three-phase induction motor. The strategy used is field-oriented control of the AC drive system. The SMC is used to estimate the frequency required to generate the three-phase voltage of the space vector pulse width modulation (SVPWM) inverter. When the SMC is used with a current controller, the quadrature component of the stator current is estimated by the controller. Instead of using a current controller, this paper proposes estimating the frequency of the stator voltage, since the slip speed is a function of the quadrature current. The simulation results showed that with the SMC a good dynamic response can be obtained under load.
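As a toy illustration of the sliding-mode idea only (not the paper's adaptive integral SMC or the SVPWM drive model), the sketch below regulates a hypothetical first-order speed model under a bounded, unknown load disturbance, using a boundary-layer saturation in place of the discontinuous sign function to reduce chattering:

```python
import numpy as np

# Hypothetical first-order speed model: dw/dt = -a*w + b*u + d(t)
a, b = 2.0, 5.0
dt, T = 1e-3, 2.0
omega_ref = 100.0                     # desired speed (rad/s)

def smc_step(omega, k=50.0, phi=0.5):
    """Sliding mode control with a boundary layer of width phi:
    sat(s/phi) replaces sign(s) to reduce chattering."""
    s = omega - omega_ref             # sliding surface: tracking error
    sat = np.clip(s / phi, -1.0, 1.0)
    return (a * omega_ref - k * sat) / b  # equivalent + switching control

omega = 0.0
for t in np.arange(0.0, T, dt):
    d = 10.0 * np.sin(5.0 * t)        # bounded, unknown load disturbance
    u = smc_step(omega)
    omega += dt * (-a * omega + b * u + d)

print("final speed:", omega)
```

Because the switching gain k exceeds the disturbance bound, the error is driven into the boundary layer and stays there despite the unknown load, which is the robustness property that motivates SMC for drives.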
The insurance business has become one of the vital foundations on which the international economy depends, as its presence has helped develop economic resources, of which the human resource is the most important. Insurance companies play the biggest role in protecting this resource and minimizing the impact of the dangers that threaten it. Humans have worked hard to get rid of dangers and their harm, and to devise many ways to prevent them. Risk management is one such human creation, intended to build a society with fewer negative risk impacts.
On this basis, th
Theoretical and experimental investigations of free convection through a cubic cavity saturated in a porous medium, with a sinusoidal heat flux at the bottom wall, the top wall exposed to the outside ambient, and the other walls adiabatic, have been carried out in the present work. The theoretical part involved a numerical solution, while the experimental part included a set of tests carried out to study free convection heat transfer in a porous medium (glass beads) under a sinusoidal heat flux boundary condition. The investigation covered Rayleigh numbers of 5845.6, 8801, 9456, 15034, 19188, and 22148 and angles of inclination of 0, 15, 30, 45, and 60 degrees. The numerical an
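For reference, the Darcy-modified Rayleigh number commonly used for free convection in a fluid-saturated porous layer is Ra_D = g*beta*dT*K*L / (nu*alpha_m). The property values below are assumed purely for illustration and are not the experimental values of this study:

```python
# Darcy-modified Rayleigh number for a fluid-saturated porous layer.
g = 9.81          # gravitational acceleration, m/s^2
beta = 3.4e-3     # thermal expansion coefficient, 1/K (assumed: air)
dT = 30.0         # wall-to-ambient temperature difference, K (assumed)
K = 1e-8          # permeability of the glass-bead bed, m^2 (assumed)
L = 0.1           # cavity height, m (assumed)
nu = 1.6e-5       # kinematic viscosity, m^2/s (assumed)
alpha_m = 2.2e-5  # effective thermal diffusivity of the medium, m^2/s (assumed)

Ra_D = g * beta * dT * K * L / (nu * alpha_m)
print(f"Ra_D = {Ra_D:.3f}")
```

In porous-media convection the permeability K multiplies the buoyancy term, which is why Ra_D rather than the clear-fluid Rayleigh number governs the onset and strength of the flow.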
Permeability estimation is a vital step in reservoir engineering because of its effect on reservoir characterization, planning of perforations, and the economic efficiency of the reservoir. Core and well-logging data are the main sources for measuring and calculating permeability, respectively. There are multiple methods to predict permeability, such as classical, empirical, and geostatistical methods. In this research, two statistical approaches were applied and compared for permeability prediction: multiple linear regression and random forest, for the (M) reservoir interval in the (BH) oil field in the northern part of Iraq. The dataset was separated into two subsets, training and testing, in order to cross-validate the accuracy.
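A sketch of the regression half of this comparison, using synthetic stand-ins for the well-log predictors and for log-permeability (the random-forest model would typically come from a library such as scikit-learn and is omitted here):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for well-log predictors (e.g. porosity, gamma ray,
# density) and log-permeability as the target; the real study uses
# core and well-log data.
n = 120
logs = rng.standard_normal((n, 3))
log_perm = (1.5 * logs[:, 0] - 0.8 * logs[:, 1] + 0.3 * logs[:, 2]
            + 0.2 * rng.standard_normal(n))

# Train/test split to cross-validate accuracy, as in the paper.
idx = rng.permutation(n)
train, test = idx[:90], idx[90:]

# Multiple linear regression via least squares (with intercept column).
A = np.column_stack([np.ones(len(train)), logs[train]])
coef, *_ = np.linalg.lstsq(A, log_perm[train], rcond=None)

A_test = np.column_stack([np.ones(len(test)), logs[test]])
pred = A_test @ coef
ss_res = np.sum((log_perm[test] - pred) ** 2)
ss_tot = np.sum((log_perm[test] - log_perm[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("test R^2:", r2)
```

Evaluating both models on the same held-out subset is what allows a fair accuracy comparison between the linear and the tree-based approach.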