Audio Compression Using Transform Coding with LZW and Double Shift Coding
Zainab J. Ahmed & Loay E. George
Conference paper, New Trends in Information and Communications Technology Applications; first online: 11 January 2022. Part of the Communications in Computer and Information Science book series (CCIS, volume 1511).
Abstract: The need for audio compression remains a vital issue because of its significance in reducing the data size of one of the most common digital media exchanged between distant parties. In this paper, the efficiencies of two audio compression modules were investigated: the first module is based on the discrete cosine transform and the second on the discrete wavelet transform. The proposed audio compression system consists of the following steps: (1) load the digital audio data; (2) apply a transform (i.e., a bi-orthogonal wavelet or the discrete cosine transform) to decompose the audio signal; (3) quantize the coefficients (the scheme depends on the transform used); (4) apply run-length encoding to the quantized data, which is separated into two sequence vectors, zero runs and non-zero values, to reduce the long runs of zeros. Each resulting vector is passed to an entropy encoder to complete the compression process. Two entropy encoders are used: the first is the lossless compression method LZW, and the second is an advanced version of the traditional shift coding method called double shift coding (DSC). The proposed system's performance is analyzed using distinct audio samples of different sizes and characteristics with various audio signal parameters, and is evaluated using Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR). The outcomes on the audio samples show that the system is simple and fast and achieves good compression gain, and that the DSC encoding time is less than the LZW encoding time.
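As an illustration of steps (2)-(4) above, the following minimal sketch applies a naive DCT, uniform scalar quantization, and the runs/non-zeros decomposition to a short block of samples. The step size (2.0), the block length, and all helper names are illustrative assumptions, not the paper's actual parameters; the DWT path and the LZW/DSC entropy stage are not reproduced here.

```python
import math

def dct(block):
    """Naive 1-D DCT-II, O(n^2); real codecs use an FFT-based transform."""
    n = len(block)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(block))
            for k in range(n)]

def quantize(coeffs, step):
    """Uniform scalar quantization; small coefficients collapse to zero."""
    return [round(c / step) for c in coeffs]

def split_runs(quantized):
    """Step (4): separate the quantized stream into a vector of zero-run
    lengths and a vector of the non-zero values themselves."""
    runs, values, zeros = [], [], 0
    for c in quantized:
        if c == 0:
            zeros += 1
        else:
            runs.append(zeros)
            values.append(c)
            zeros = 0
    runs.append(zeros)  # trailing run of zeros
    return runs, values

# A 440 Hz tone sampled at 8 kHz: the transform concentrates the energy in
# few coefficients, so quantization leaves long zero runs for the RLE stage.
samples = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(64)]
runs, values = split_runs(quantize(dct(samples), step=2.0))
```

The two output vectors (`runs` and `values`) would each then be fed to the entropy coder (LZW or DSC) in the paper's pipeline.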
Scientific development has occupied a prominent place in the field of diagnosis, far from traditional procedures. Scientific progress and the growth of cities have brought diseases that have spread with this development, perhaps the most prominent of which is diabetes. This study aims at accurate diagnosis without examining blood samples, using image analysis to compare two images of the affected person taken over a period of no less than ten years. Artificial intelligence programs were used to analyze and validate this study, collecting samples of infected people and healthy people using one of the Python libraries, (Open-CV), which is specialized in measuring changes to the human face, through which we can infer the …
Over the past few years, ear biometrics has attracted a lot of attention. It is a trusted biometric for the identification and recognition of humans due to its consistent shape and rich texture variation. The ear presents an attractive solution since it is visible, ear images are easily captured, and the ear structure remains relatively stable over time. In this paper, a comprehensive review of prior research was conducted to establish the efficacy of using ear features for individual identification with both manually-crafted features and deep-learning approaches. The objective of this model is to present the accuracy rate of person identification systems based on either manually-crafted features such as D…
The area of character recognition has received considerable attention from researchers all over the world during the last three decades. This research explores the best sets of feature extraction techniques and studies the accuracy of well-known classifiers for Arabic numerals using statistical methods in two ways, making a comparative study between them. The first method, a linear discriminant function, yields results with accuracy as high as 90% of the original grouped cases correctly classified. In the second method, we propose an algorithm; the results show the efficiency of the proposed algorithm, which achieves recognition accuracies of 92.9% and 91.4%, providing higher efficiency than the first method.
Sound forecasts are essential elements of planning, especially for dealing with seasonality, sudden changes in demand levels, strikes, large fluctuations in the economy, and price-cutting manoeuvres by competitors. Forecasting can help decision makers manage these problems by identifying which technologies are appropriate for their needs. The proposed forecasting model extracts the trend and cyclical components individually by developing the Hodrick–Prescott filter technique. Then, fitted models of these two real components are estimated to predict the future behaviour of electricity peak load. Accordingly, the optimal model fitting the periodic component is estimated using spectrum analysis and Fourier mod…
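The trend-extraction step can be sketched by solving the Hodrick–Prescott minimisation directly: the trend tau satisfies (I + lambda * K'K) tau = y, where K is the second-difference operator. The sketch below builds that system and solves it with naive dense Gaussian elimination, which is fine only for short illustrative series (real implementations exploit the banded structure); the series values and the smoothing parameter are illustrative assumptions, not the paper's data.

```python
def hp_trend(y, lam):
    """Hodrick-Prescott trend: solve (I + lam * K'K) tau = y, where K is the
    (n-2) x n second-difference operator. Naive O(n^3) dense solve."""
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for r in range(n - 2):               # accumulate lam * K'K (rows [1,-2,1])
        k = {r: 1.0, r + 1: -2.0, r + 2: 1.0}
        for i, ki in k.items():
            for j, kj in k.items():
                A[i][j] += lam * ki * kj
    b = [float(v) for v in y]
    for c in range(n):                   # Gaussian elimination, partial pivoting
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):       # back-substitution
        tau[r] = (b[r] - sum(A[r][j] * tau[j]
                             for j in range(r + 1, n))) / A[r][r]
    return tau

peak_load = [3.0, 4.0, 6.0, 5.0, 7.0, 8.0, 10.0, 9.0]  # hypothetical series
trend = hp_trend(peak_load, lam=100.0)
cycle = [y - t for y, t in zip(peak_load, trend)]      # cyclical component
```

The cyclical residual `cycle` is the component the abstract goes on to fit with spectrum analysis and a Fourier model.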
Gypseous soils are widely distributed, especially in Iraq, where arid areas with a hot climate are present. These soils are considered problematic; therefore, this work aims to improve the geotechnical properties of such soil and reduce the danger of collapse due to the wetting process. In this research, an undisturbed soil sample with 30% gypsum content from Karbala city is used. The single-oedometer collapse test is used to investigate the collapse characteristics of the natural soil and after treatment with 3%, 6%, 9%, 12% and 15% cutback asphalt. Moreover, two selected additive percentages (9% and 12%) are used to evaluate the suitability of using cutback asphalt to improve the bearing capacity o…
Smart water flooding (low-salinity water flooding) has mainly been investigated in sandstone reservoirs. The main reasons for using low-salinity water flooding are to improve oil recovery and to support the reservoir pressure.
In this study, two sandstone core plugs with different permeabilities, from the south of Iraq, were used to explain the effect of water injection with different ion concentrations on oil recovery. The water types used are formation water, seawater, modified low-salinity water, and deionized water.
The effects of water salinity, the injection flow rate, and the permeability of the core plugs have been studied in order to summarize the best conditions for low-salinity …
Iraq has a huge network of pipelines transporting crude oil and final hydrocarbon products as well as potable water. These networks are exposed to extensive damage by underground corrosion processes unless suitable protection techniques are used. In this paper, we collected cathodic protection information for pipelines in practical fields (the Oil Group in Al Doura) to build a database for understanding and optimizing the design, which is produced by simulating the environmental factors and cathodic protection variables. Soil resistivity at the survey sites was measured using the Wenner four-terminal method; soil pH investigations recorded for these selected fields were within 7-8, and the anode voltages and their related currents for …
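For the soil-resistivity survey mentioned, the Wenner four-terminal method converts a measured resistance R = V/I at equal electrode spacing a into an apparent resistivity via rho = 2 * pi * a * R. A minimal helper illustrating the conversion (the spacing and resistance values are hypothetical, not field data from the paper):

```python
import math

def wenner_resistivity(spacing_m, resistance_ohm):
    """Apparent soil resistivity (ohm-metres) from a Wenner four-electrode
    measurement: rho = 2 * pi * a * R, where a is the equal electrode
    spacing in metres and R = V / I is the measured resistance in ohms."""
    return 2.0 * math.pi * spacing_m * resistance_ohm

# Example: 1 m spacing and a 10-ohm reading give about 62.8 ohm-m.
rho = wenner_resistivity(1.0, 10.0)
```

Lower resistivity generally indicates more corrosive soil, which is why the survey feeds directly into the cathodic-protection design.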
The frequent and widespread use of medicines and personal care products, particularly in the residential environment, tends to raise concerns about environmental and human health impacts. On the other hand, carbon dioxide accumulation in the atmosphere is a problem with numerous environmental consequences. Microalgae are being used to bioremediate toxins and capture CO2. The current study aimed to confirm the possibility of removing a pharmaceutical contaminant (ranitidine) at different concentrations by using the Chlorella sorokiniana MH923013 microalgae strain during the growth period. As part of the experiment, carbon dioxide was added to the culture medium three times per week. The results revealed that the gas doses directly affect …
In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MCOCO dataset for classification and detection of objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, then enhanced using the histogram equalization method and resized to (20 x 20). Principal component analysis (PCA) was used for feature extraction, and finally the four classification metho…
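Two of the steps described, histogram equalization for enhancement and a KNN classifier, can be sketched in plain Python as below. The data here are toy values; the paper's actual pipeline also includes the MCOCO images, 20 x 20 resizing, PCA feature extraction, and the SGD, LR, and MLP classifiers, none of which are reproduced in this sketch.

```python
import math
from collections import Counter

def equalize(gray, levels=256):
    """Histogram equalization: remap each gray level through the cumulative
    distribution so the output histogram is approximately flat."""
    hist = Counter(gray)
    total = len(gray)
    cdf, acc = {}, 0
    for level in range(levels):
        acc += hist.get(level, 0)
        cdf[level] = acc
    cdf_min = min(c for c in cdf.values() if c > 0)
    scale = (levels - 1) / max(total - cdf_min, 1)
    return [round((cdf[g] - cdf_min) * scale) for g in gray]

def knn_predict(train, labels, x, k=3):
    """k-nearest-neighbours by Euclidean distance with majority vote."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    return Counter(labels[i] for i in nearest[:k]).most_common(1)[0][0]

# Toy usage: equalize a tiny "image" and classify a 2-D feature vector.
enhanced = equalize([0, 0, 128, 255])
label = knn_predict([[0, 0], [0, 1], [10, 10], [10, 11]],
                    ['a', 'a', 'b', 'b'], [9, 9])
```

In the full pipeline, the equalized 20 x 20 images would be flattened, reduced with PCA, and only then passed to the four classifiers.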