Audio Compression Using Transform Coding with LZW and Double Shift Coding
Zainab J. Ahmed & Loay E. George
Conference paper, New Trends in Information and Communications Technology Applications. First online: 11 January 2022. Part of the Communications in Computer and Information Science book series (CCIS, volume 1511).

Abstract. The need for audio compression remains a vital issue because of its significance in reducing the size of one of the most common digital media exchanged between distant parties. In this paper, the efficiencies of two audio compression modules are investigated: the first is based on the discrete cosine transform (DCT) and the second on the discrete wavelet transform (DWT). The proposed audio compression system consists of the following steps: (1) load the digital audio data; (2) transformation (i.e., using the bi-orthogonal wavelet or the discrete cosine transform) to decompose the audio signal; (3) quantization (depending on the transform used); (4) run-length decomposition, in which the quantized data are separated into two sequence vectors, runs and non-zero values, so that run-length encoding reduces the long zero-run sequences. Each resulting vector is then passed to an entropy encoder to complete the compression process. Two entropy encoders are used: the first is the lossless compression method LZW, and the second is an advanced version of the traditional shift coding method called double shift coding (DSC). The system's performance is analyzed using distinct audio samples of different sizes and characteristics, with various audio signal parameters, and is evaluated using the peak signal-to-noise ratio (PSNR) and the compression ratio (CR). The results for the audio samples show that the system is simple and fast and achieves good compression gain, and that the DSC encoding time is less than the LZW encoding time.
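As a rough illustration of the pipeline the abstract describes, the following minimal Python sketch runs one audio frame through the transform, quantization, and run-length decomposition stages. The function name, the quantization step q_step, and the test frame are illustrative assumptions rather than the authors' implementation; the two output vectors are what would then be handed to the LZW or DSC entropy encoder.

```python
# Minimal sketch of the transform -> quantize -> run-length decomposition
# stages (names and parameters are illustrative, not the paper's code).
import numpy as np
from scipy.fftpack import dct

def compress_frame(frame, q_step=0.02):
    """Transform and quantize one audio frame, then split the result
    into a runs vector and a non-zero-values vector."""
    coeffs = dct(frame, norm='ortho')                  # step 2: decomposition
    quantized = np.round(coeffs / q_step).astype(int)  # step 3: quantization

    runs, values = [], []                              # step 4: run-length decomposition
    zero_run = 0
    for c in quantized:
        if c == 0:
            zero_run += 1
        else:
            runs.append(zero_run)                      # zeros preceding this value
            values.append(int(c))
            zero_run = 0
    runs.append(zero_run)                              # trailing zeros
    return runs, values                                # each vector goes to LZW or DSC

frame = np.sin(2 * np.pi * 440 * np.arange(1024) / 44100)  # 440 Hz test tone
runs, values = compress_frame(frame)
print(len(values), "non-zero coefficients,", sum(runs), "zero coefficients")
```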
The proliferation of editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do actions or say words that he or she never did or said, so developing an algorithm for deepfake detection is very important in order to discriminate real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of the input data frames using 16 Gabor filters in different directions and then feed them to a binary CNN classifier instead of using the red-green-blue ...
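As a hedged sketch of the preprocessing this abstract describes, the snippet below builds a bank of 16 Gabor kernels at evenly spaced orientations with OpenCV and stacks the filter responses into a 16-channel tensor that could replace the raw RGB input to a binary CNN. The kernel size, sigma, and wavelength values are assumptions chosen for illustration, not the authors' settings.

```python
# Illustrative 16-orientation Gabor filter bank; parameter values are
# assumptions, not the values used in the paper.
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=16):
    """Build Gabor kernels at n_orient evenly spaced orientations."""
    return [cv2.getGaborKernel((ksize, ksize), sigma,
                               theta=i * np.pi / n_orient,
                               lambd=lambd, gamma=gamma, psi=0)
            for i in range(n_orient)]

def gabor_features(gray_frame, bank):
    """Filter a grayscale frame with every kernel and stack the responses,
    giving one texture channel per orientation."""
    return np.stack([cv2.filter2D(gray_frame, cv2.CV_32F, k) for k in bank],
                    axis=-1)

frame = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in frame
features = gabor_features(frame, gabor_bank())
print(features.shape)  # (128, 128, 16)
```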
BACKGROUND: Preeclampsia (PE) is a possible etiology of obstetrical and neonatal complications, which are increased in resource-limited settings and developing countries. AIM: We aimed to find the prevalence of PE in Iraqi women and specific outcomes, including gestational weight gain (GWG), cesarean section (CS), preterm delivery (PD), and low birth weight (LBW). METHODS: All singleton pregnant women visiting our tertiary center for delivery were enrolled over 3 years. Women with PE were compared with women without PE. A complete history and examination were performed during pregnancy and after delivery by the attending obstetrician and neonatologist, with full documentation in the medical records. RESULTS: PE prevalence was 4.79 ...
Two experiments were carried out. The first, at the College of Agriculture, University of Baghdad, during the spring season of 2017, used the cultivar Everest, class (Elite), to study the effect of foliar application of calcium and magnesium, and of humic acid added to the soil, on potato growth and yield. The experiment was laid out as a factorial within an RCBD design with three replicates. Calcium and magnesium were sprayed at concentrations of 0, 500, and 1000 mg L-1, while humic acid was added to the soil at 0 and 0.75 g m-2. The second experiment covered storage of the tubers produced in the spring season, to study the effect of the field treatments on improving the storability of the tubers. The results showed that the treatment of calci...
This research aims to distinguish the reef environment from the non-reef environment. The Oligocene–Miocene succession in western Iraq was selected as a case study, represented by the reefal limestone facies of the Anah Formation (Late Oligocene), deposited in reef/back-reef environments; the dolomitic limestone of the Euphrates Formation (Early Miocene), deposited in open-sea environments; and the gypsiferous marly limestone of the Fatha Formation (Middle Miocene), deposited in a lagoonal environment. The content of the rare earth elements (REEs) (La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Er, Ho, Tm, Yb, Lu, and Y) in the reef facies appears to be much lower than that in the non-reef facies. The open-sea facies have a low content of REEs due to bein...
The goal of this research is to develop a numerical model that can be used to simulate the sedimentation process under two scenarios: first, with the flocculation unit on duty and, second, with the flocculation unit out of commission. The general equations of flow and sediment transport were solved using the finite difference method and then coded using Matlab software. The removal efficiencies of the coded model and the operational model were very close for each particle-size dataset, with a difference of +3.01%, indicating that the model can be used to predict the removal efficiency of a rectangular sedimentation basin. The study also revealed ...
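To make the finite-difference approach concrete, here is a minimal sketch of an explicit upwind scheme for a one-dimensional advection-settling equation for suspended-sediment concentration in a rectangular basin. The governing equation, grid sizes, flow velocity, and settling velocity are all illustrative assumptions, not the study's model or data.

```python
# Explicit upwind finite-difference sketch of dC/dt + u*dC/dx = -(ws/H)*C,
# a toy stand-in for the study's flow and sediment-transport equations.
import numpy as np

def simulate(nx=100, nt=500, L=30.0, u=0.01, ws=1e-4, H=3.0):
    """March the concentration field to (near) steady state."""
    dx = L / nx
    dt = 0.5 * dx / u                            # CFL-stable time step
    C = np.ones(nx)                              # normalized inlet concentration
    for _ in range(nt):
        C[1:] -= u * dt / dx * (C[1:] - C[:-1])  # upwind advection
        C -= dt * ws / H * C                     # first-order settling sink
        C[0] = 1.0                               # constant inlet boundary
    return C

C = simulate()
removal = 1.0 - C[-1]                            # fraction removed at the outlet
print(f"predicted removal efficiency: {removal:.1%}")
```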
The biosorption of Pb(II), Cd(II), and Hg(II) from simulated aqueous solutions using baker's yeast biomass was investigated. Batch-type experiments were carried out to find the equilibrium isotherm data for each component (single, binary, and ternary) and the adsorption rate constants. Pseudo-first-order and pseudo-second-order kinetic models were applied to the adsorption data to estimate the rate constant for each solute; the results showed that the Cd(II), Pb(II), and Hg(II) uptake processes followed the pseudo-second-order rate model, with R2 values of 0.963, 0.979, and 0.960, respectively. The equilibrium isotherm data were fitted with five theoretical models. The Langmuir model provided the best fit to the experimental results, with R2 values of 0.992, 0...
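The kind of kinetic fit reported above can be reproduced in outline with a few lines of SciPy. The sketch below fits the pseudo-second-order model qt = k*qe^2*t / (1 + k*qe*t) and computes R2; the time and uptake arrays are placeholders, not the study's measurements.

```python
# Hedged sketch of a pseudo-second-order kinetic fit; the data arrays are
# placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    """Uptake qt at time t for equilibrium capacity qe and rate constant k."""
    return (k * qe**2 * t) / (1.0 + k * qe * t)

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)  # min (placeholder)
qt = np.array([2.1, 3.4, 4.6, 5.5, 5.9, 6.1, 6.2])       # mg/g (placeholder)

(qe, k), _ = curve_fit(pseudo_second_order, t, qt, p0=[qt.max(), 0.01])
residuals = qt - pseudo_second_order(t, qe, k)
r2 = 1.0 - np.sum(residuals**2) / np.sum((qt - qt.mean())**2)
print(f"qe = {qe:.2f} mg/g, k = {k:.4f} g/(mg*min), R^2 = {r2:.3f}")
```

The same pattern applies to the Langmuir isotherm fit by swapping in qe = qm*KL*Ce / (1 + KL*Ce) as the model function.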
The development of analytical techniques is required for the accurate and comprehensive detection and measurement of antibiotic contamination in the environment. Metronidazole is a common antibacterial, antiprotozoal antibiotic drug. Thiamine is a vital biological and medicinal ingredient involved in the energy-producing metabolism of proteins, fats, and carbohydrates. The study aims to identify the drugs in a mixture without separation, providing more information to confirm whether a drug is present in a combination. Metronidazole and thiamine are two examples of drugs in pharmaceutical and environmental samples that can be identified using spectrophotometric techniques because of their low cost and simplicity of use. The operati...
The aim of this research is to compare traditional and modern methods of obtaining the optimal solution to project management problems using dynamic programming and intelligent algorithms.
It shows the possible ways in which these problems can be addressed by drawing up a schedule of interrelated and sequential activities, clarifying the relationships between the activities so as to determine the beginning and end of each activity, the duration and cost of the total project, and the time used by each activity, and identifying the objectives sought by the project through planning, implementation, and monitoring in order to stay within the assessed budget.
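The scheduling computation this abstract sketches is essentially the critical path method: a forward pass gives each activity's earliest start and finish, a backward pass gives the latest, and activities with zero float form the critical path that fixes the total project duration. Below is a minimal sketch on a made-up four-activity network, not the paper's data.

```python
# Critical path method on a toy activity network (illustrative data).
durations = {"A": 3, "B": 2, "C": 4, "D": 2}           # activity -> duration
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]                           # topological order

# Forward pass: earliest start (es) and finish (ef) of each activity.
es, ef = {}, {}
for a in order:
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]
project_duration = max(ef.values())

# Backward pass: latest start (ls) and finish (lf) without delaying the project.
ls, lf = {}, {}
for a in reversed(order):
    succs = [s for s in order if a in preds[s]]
    lf[a] = min((ls[s] for s in succs), default=project_duration)
    ls[a] = lf[a] - durations[a]

critical = [a for a in order if es[a] == ls[a]]        # zero total float
print("project duration:", project_duration, "| critical path:", critical)
```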
In this paper, we present a method to analyze five wavelet types, covering fifteen wavelet families, for eighteen different EMG signals. A comparison study is also given to show the performance of the various families after refining the results with a back-propagation neural network. This will help researchers with the first step of EMG analysis. Large sets of results (more than 100 sets) are presented and then classified to be discussed and to reach the final conclusions.
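As a hedged illustration of comparing wavelet families, the sketch below decomposes an EMG-like signal with several PyWavelets families and collects normalized sub-band energies, the kind of feature vector one might pass to a back-propagation neural network. The family list and the feature choice are assumptions for illustration, not the paper's exact selection.

```python
# Compare DWT sub-band energies across a few wavelet families
# (illustrative families; the paper evaluates fifteen).
import numpy as np
import pywt

def subband_energies(signal, wavelet, level=4):
    """Return the energy of each DWT sub-band as a feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs])

rng = np.random.default_rng(0)
emg = rng.standard_normal(2048)        # stand-in for a recorded EMG trace

for family in ["db4", "sym5", "coif3", "bior3.5", "haar"]:
    feats = subband_energies(emg, family)
    print(family, np.round(feats / feats.sum(), 3))  # normalized energies
```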