A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). The compression scheme is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while discarding the detail coefficients; the approximation coefficients are then windowed with a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain only the LP coefficients and the previous samples, and are very small compared with the original signals. The compression ratio is calculated as the size of the compressed signal relative to the size of the uncompressed signal. The proposed algorithms were implemented using the MATLAB package.
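The Levinson-Durbin recursion named above can be sketched as follows. This is a minimal NumPy illustration of the standard recursion (the study used MATLAB, so this is a stand-in, not the authors' implementation); it returns the LP coefficients, reflection coefficients, and final prediction error from an autocorrelation sequence.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the LP normal equations via the Levinson-Durbin recursion.

    r     : autocorrelation sequence r[0..order]
    order : prediction order p
    Returns (LP coefficients a[0..p] with a[0]=1,
             reflection coefficients k[1..p],
             final prediction error).
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        # correlation of the current predictor with the next lag
        acc = r[i] + np.dot(a[1:i], r[i-1:0:-1])
        k_i = -acc / err
        k[i - 1] = k_i
        # order update: a_new[j] = a[j] + k_i * a[i-j], j = 1..i
        a[1:i + 1] = a[1:i + 1] + k_i * a[i - 1::-1]
        # the error shrinks by the factor (1 - k_i^2) at every order
        err *= (1.0 - k_i ** 2)
    return a, k, err
```

For example, the autocorrelation sequence [1.0, 0.5, 0.25] (an ideal first-order process) yields the predictor a = [1, -0.5, 0], i.e. each sample is predicted as half the previous one.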
The aim of this study is to test the applicability of the Ramamoorthy and Murphy method for identifying the predominant pore-fluid type in a Middle Eastern carbonate reservoir by analyzing the dynamic elastic properties derived from the sonic log, and to incorporate the results of Souder, who tested the same method in a chalk reservoir in the North Sea region. The Mishrif Formation in the Garraf oilfield in southern Iraq was examined in this study using data from a slightly deviated well; these data include open-hole full-set logs, in which the sonic log comprises shear and compressional modes, and a geologic description used to check the results. The Geolog software was used to make the conventional interpretation of porosity, lithology, and saturation. Also,
Linear programming currently occupies a prominent position in various fields and has wide applications; its importance lies in being a means of studying the behavior of a large number of systems. It is also the simplest and easiest type of model that can be built to address industrial, commercial, military, and other problems, and through which an optimal quantitative value can be obtained. In this research, we dealt with the post-optimality solution, also known as sensitivity analysis, using the principle of shadow prices. The scientific solution to a problem is not complete once the optimal solution is reached: any change in the values of the model constants, known as the model inputs, that will chan
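The shadow-price idea can be illustrated numerically: the shadow price of a constraint is the marginal change in the optimal objective per unit increase of its right-hand side. The tiny two-variable LP below is an illustrative example (not from the study), solved by brute-force vertex enumeration so the sketch is self-contained; re-solving with a perturbed right-hand side recovers each constraint's shadow price.

```python
import itertools
import numpy as np

# Illustrative LP (hypothetical data): maximize 3x + 2y
# subject to  x +  y <= 4   (resource 1)
#             x + 3y <= 6   (resource 2)
#             x >= 0, y >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])

def solve(b):
    """Enumerate vertices (intersections of constraint pairs) and return
    the best feasible objective value -- fine for tiny 2-variable LPs."""
    best = -np.inf
    for i, j in itertools.combinations(range(len(A)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                              # parallel constraints
        x = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ x <= b + 1e-9):             # feasible vertex
            best = max(best, c @ x)
    return best

b = np.array([4.0, 6.0, 0.0, 0.0])
base = solve(b)                                   # optimal objective

# Shadow price of resource i: marginal gain in the optimum per unit of b_i,
# estimated by perturbing the right-hand side and re-solving.
delta = 1e-3
shadow = []
for i in range(2):
    b2 = b.copy()
    b2[i] += delta
    shadow.append((solve(b2) - base) / delta)
```

Here only the first constraint is binding at the optimum, so it carries a positive shadow price while the slack constraint's shadow price is zero; a production solver would report the same values as dual variables.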
Simulation experiments are a means of solving problems in many fields; simulation is the process of designing a model of a real system in order to follow it and identify its behavior through certain models and formulas, written in a repetitive software style with a number of iterations. The aim of this study is to build a model that deals with behavior suffering from heteroskedasticity by studying the (APGARCH and NAGARCH) models using Gaussian and non-Gaussian distributions for different sample sizes (500, 1000, 1500, 2000) through the stages of time-series analysis (identification, estimation, diagnostic checking, and prediction). The data were generated using the estimates of the parameters resulting f
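The data-generation step for a heteroskedastic model can be sketched as below. This is a minimal NumPy simulation of an APGARCH(1,1) process with Gaussian innovations; the parameter values are illustrative placeholders, not the estimates used in the study.

```python
import numpy as np

def simulate_apgarch(n, omega=0.05, alpha=0.1, gamma=0.3,
                     beta=0.8, d=1.5, seed=0):
    """Generate n observations of an APGARCH(1,1) process:
    sigma_t^d = omega + alpha*(|y_{t-1}| - gamma*y_{t-1})^d + beta*sigma_{t-1}^d
    with Gaussian innovations z_t (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)             # Gaussian innovations
    sigma_d = np.empty(n)                  # holds sigma_t ** d
    y = np.empty(n)
    sigma_d[0] = omega / (1.0 - beta)      # crude start-up value
    y[0] = sigma_d[0] ** (1.0 / d) * z[0]
    for t in range(1, n):
        shock = abs(y[t - 1]) - gamma * y[t - 1]   # asymmetric response
        sigma_d[t] = omega + alpha * shock ** d + beta * sigma_d[t - 1]
        y[t] = sigma_d[t] ** (1.0 / d) * z[t]
    return y

series = simulate_apgarch(2000)            # one of the study's sample sizes
```

Repeating such a generator over many iterations, for each sample size and distribution, yields the Monte Carlo design the abstract describes; swapping `standard_normal` for a heavier-tailed sampler gives the non-Gaussian case.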
In this study, the quality assurance of the linear accelerator available at the Baghdad Center for Radiation Therapy and Nuclear Medicine was verified using Star Track and Perspex. The study was carried out from August to December 2018 and showed an acceptable variation in the dose output of the linear accelerator. This variation was ±2%, within the permissible range according to the recommendations of the accelerator's manufacturer (Elekta).
In this paper, a compression system with a highly synthetic architecture is introduced; it is based on the wavelet transform, polynomial representation, and quadtree coding. The biorthogonal (tap 9/7) wavelet transform is used to decompose the image signal, and a 2D polynomial representation is utilized to prune the high-scale variation of the image signal. Quantization with quadtree coding, followed by shift coding, is applied to compress the detail bands and the residual part of the approximation subband. The test results indicate that the introduced system is simple and fast, and that it yields better compression gain than first-order polynomial approximation.
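The quadtree-coding step exploits the sparsity of the detail bands: a block that is entirely (near) zero is emitted as a single symbol, and mixed blocks are split into four quadrants recursively. The sketch below is a hedged illustration of that idea, not the paper's exact codec; the leaf size and symbol alphabet are assumptions.

```python
import numpy as np

def quadtree_encode(block, min_size=2, out=None):
    """Recursively encode a square block: all-zero blocks become one
    'zero' symbol, small mixed blocks are stored raw, larger mixed
    blocks are split into four quadrants (illustrative sketch)."""
    if out is None:
        out = []
    if np.all(block == 0):
        out.append(("zero", block.shape))        # one symbol covers the block
    elif block.shape[0] <= min_size:
        out.append(("raw", block.copy()))        # small block: store values
    else:
        h, w = block.shape[0] // 2, block.shape[1] // 2
        out.append(("split", None))
        for q in (block[:h, :w], block[:h, w:], block[h:, :w], block[h:, w:]):
            quadtree_encode(q, min_size, out)
    return out

band = np.zeros((8, 8), dtype=int)
band[0, 0] = 5                                   # one significant coefficient
symbols = quadtree_encode(band)
```

For this 8x8 band with a single significant coefficient, the 64 values collapse to 9 symbols, which is where the compression gain on sparse detail bands comes from.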
Thin films of CuPc of various thicknesses (150, 300, and 450) nm were deposited using the pulsed laser deposition technique at room temperature. The study showed that the optical absorption spectra of the CuPc thin films contain two absorption bands: one in the visible region at about 635 nm, referred to as the Q-band, and the second in the ultraviolet region, where the B-band is located at 330 nm. The CuPc thin films were found to have direct band gaps with values around (1.81 and 3.14) eV, respectively. Vibrational studies were carried out using Fourier transform infrared (FT-IR) spectroscopy. Finally, from open- and closed-aperture Z-scan data, the non-linear absorption coefficient and non-linear refractive index have been calculated res
The electrocardiogram (ECG) is the recording of the electrical potential of the heart versus time. The analysis of ECG signals has been widely used in cardiac pathology to detect heart disease. ECGs are non-stationary signals that are often contaminated by different types of noise from different sources. In this study, simulated noise models were proposed for power-line interference (PLI), electromyogram (EMG) noise, baseline wander (BW), white Gaussian noise (WGN), and composite noise. Various processing techniques have recently been proposed for suppressing noise and extracting the efficient morphology of an ECG signal. In this paper, the wavelet transform (WT) is applied to noisy ECG signals. The graphical user interface (GUI)
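The WT denoising step named above can be sketched as: decompose the noisy signal, soft-threshold the detail coefficients, reconstruct. The NumPy example below uses a hand-rolled one-level Haar transform as a stand-in for the study's wavelet processing; the synthetic sine "ECG", the WGN level, and the MAD-based universal threshold are all illustrative assumptions.

```python
import numpy as np

def haar_forward(x):
    """One-level Haar DWT (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    """Inverse of haar_forward (perfect reconstruction)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)                  # stand-in ECG waveform
noisy = clean + 0.2 * rng.standard_normal(t.size)  # add white Gaussian noise

a, d = haar_forward(noisy)
# MAD noise estimate and universal threshold, computed from the details
thr = np.median(np.abs(d)) / 0.6745 * np.sqrt(2 * np.log(noisy.size))
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
denoised = haar_inverse(a, d)
```

A multi-level decomposition with a smoother wavelet (e.g. db4) follows the same pattern, thresholding each detail level before reconstruction.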
Abstract The wavelet shrinkage estimator is an attractive technique for estimating nonparametric regression functions, but it is very sensitive to correlation in the errors. In this research, a polynomial model of low degree was used to address the boundary problem in wavelet shrinkage, in addition to using level-dependent threshold values for the case of correlated errors, since these treat the coefficients at each level separately, unlike global threshold values that deal with all levels simultaneously, such as the VisuShrink, False Discovery Rate, Improvement Thresholding, and SureShrink methods. The study was conducted on real monthly data representing the rates of theft crimes f
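The contrast between a global threshold and the level-dependent thresholds favoured here can be sketched numerically. Under correlated errors, the noise variance differs across resolution levels, so a single VisuShrink-style threshold (built from one noise estimate) misfits the coarser levels, while a per-level threshold uses each level's own scale. The simulated detail coefficients and noise scales below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
# Correlated errors inflate coarse-level coefficients; mimic that by
# giving each detail level a different noise scale (0.2, 0.4, 0.8).
detail_levels = [rng.normal(0, s, n // 2 ** (j + 1))
                 for j, s in enumerate([0.2, 0.4, 0.8])]

def mad_sigma(d):
    """Robust noise estimate from the median absolute coefficient."""
    return np.median(np.abs(d)) / 0.6745

# Global (VisuShrink-style) threshold: one sigma, estimated at the
# finest level, applied to every level simultaneously.
global_thr = mad_sigma(detail_levels[0]) * np.sqrt(2 * np.log(n))

# Level-dependent thresholds: one per level, each built from that
# level's own coefficients, as appropriate under correlated errors.
level_thrs = [mad_sigma(d) * np.sqrt(2 * np.log(d.size))
              for d in detail_levels]
```

With these scales, the level-dependent thresholds grow at the coarser (noisier) levels, while the global threshold stays near the finest level's scale and would under-shrink them.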