Audio Compression Using Transform Coding with LZW and Double Shift Coding
Zainab J. Ahmed & Loay E. George
Conference paper, New Trends in Information and Communications Technology Applications; first online 11 January 2022. Part of the Communications in Computer and Information Science book series (CCIS, volume 1511).

Abstract: The need for audio compression remains a vital issue because of its significance in reducing the data size of one of the most commonly exchanged digital media. In this paper, the efficiencies of two audio compression modules were investigated: the first is based on the discrete cosine transform and the second on the discrete wavelet transform. The proposed audio compression system consists of the following steps: (1) loading the digital audio data; (2) transformation (i.e., using the bi-orthogonal wavelet or the discrete cosine transform) to decompose the audio signal; (3) quantization (depending on the transform used); (4) run-length decomposition, in which the quantized data is separated into two sequence vectors, runs and non-zero values, so that long zero-run sequences are reduced. Each resulting vector is passed to an entropy encoder to complete the compression process. Two entropy encoders are used: the first is the lossless compression method LZW, and the second is an advanced version of the traditional shift coding method called double shift coding (DSC). The system's performance is analyzed using distinct audio samples of different sizes and characteristics with various audio signal parameters, and is evaluated using the peak signal-to-noise ratio (PSNR) and the compression ratio (CR). The results show that the system is simple and fast and achieves good compression gain, and that the DSC encoding time is less than the LZW encoding time.
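To make the pipeline concrete, here is a minimal Python sketch of the DCT branch through step (4). The frame length, the uniform quantizer step, and the exact run/non-zero split are illustrative assumptions rather than the paper's exact choices, and the entropy-coding stage (LZW or DSC) is only indicated.

```python
import numpy as np
from scipy.fft import dct

def compress_frame(frame, q_step=8.0):
    """Step (2)-(3): DCT-transform one audio frame, then uniformly quantize.
    Frame length and q_step are illustrative choices, not the paper's."""
    coeffs = dct(frame, norm='ortho')
    return np.round(coeffs / q_step).astype(np.int32)

def run_nonzero_split(q):
    """Step (4): split quantized coefficients into two vectors,
    zero-run lengths and the non-zero values that end each run."""
    runs, values = [], []
    run = 0
    for c in q:
        if c == 0:
            run += 1
        else:
            runs.append(run)
            values.append(int(c))
            run = 0
    runs.append(run)                      # trailing zero run
    return np.array(runs), np.array(values)

# Stand-in for step (1); a real system would load PCM samples from a file.
signal = np.random.randn(1024).astype(np.float32)
runs, values = run_nonzero_split(compress_frame(signal))
# Each vector would then be passed to LZW or to the double shift coder.
```

For orientation on the final stage: in classical shift coding, values inside a small range receive short fixed-length codewords and an escape code shifts to a wider codeword; the "double" variant presumably applies a second shift level. This low-overhead structure is consistent with the reported result that DSC encodes faster than dictionary-based LZW.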
Offline Arabic handwriting recognition remains a major challenge because writing styles vary from one individual to another, and different characters can have similar appearances, which makes them hard to distinguish. This paper proposes a method for offline Arabic handwriting recognition that recognizes handwritten Arabic words without segmentation into sub-letters, based on scale-invariant feature transform (SIFT) feature extraction and support vector machines (SVMs), to enhance recognition accuracy. The proposed method was evaluated on the AHDB database, and the experimental results show a 99.08% recognition rate.
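The abstract does not specify how the variable-length SIFT descriptor sets are turned into fixed-length SVM inputs; a common pairing is a bag-of-visual-words histogram, sketched below. The OpenCV/scikit-learn calls, the codebook size of 256, and the RBF kernel are all assumptions, and train_imgs/train_labels stand for AHDB word images and their labels.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def sift_descriptors(gray_img):
    """Extract SIFT descriptors from one grayscale word image."""
    _, desc = sift.detectAndCompute(gray_img, None)
    return desc if desc is not None else np.zeros((0, 128), np.float32)

def bovw_histogram(desc, codebook):
    """Quantize descriptors against a visual codebook to obtain a
    fixed-length, normalized feature vector suitable for an SVM."""
    hist = np.zeros(codebook.n_clusters, np.float32)
    if len(desc):
        for word in codebook.predict(desc):
            hist[word] += 1
        hist /= hist.sum()
    return hist

# train_imgs / train_labels are assumed to hold AHDB images and labels.
all_desc = np.vstack([sift_descriptors(im) for im in train_imgs])
codebook = KMeans(n_clusters=256, n_init=10).fit(all_desc)
X = np.array([bovw_histogram(sift_descriptors(im), codebook) for im in train_imgs])
clf = SVC(kernel='rbf').fit(X, train_labels)
```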
This paper proposes a feature extraction algorithm for handwritten Arabic words. The method applies a 4-level discrete wavelet transform (DWT) to the binary image, slides a window over the wavelet space, and computes the standard deviation for each window. The extracted features are classified with multiple support vector machine (SVM) classifiers. The method was evaluated on a dataset collected from different writers, and the experimental results show a 94.44% recognition rate.
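A minimal sketch of the described feature pipeline, assuming size-normalized input images, a Haar wavelet, and non-overlapping 8x8 windows (the abstract does not fix these choices); scikit-learn's SVC, which trains one-vs-one SVMs internally, stands in for the paper's multiple SVM classifiers.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_window_features(binary_img, wavelet='haar', level=4, win=8):
    """4-level 2-D DWT of a (size-normalized) binary word image, then the
    standard deviation of each window over every sub-band."""
    coeffs = pywt.wavedec2(binary_img.astype(np.float32), wavelet, level=level)
    # Flatten [cA4, (cH4, cV4, cD4), ..., (cH1, cV1, cD1)] into one band list.
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for band in bands:
        h, w = band.shape
        for y in range(0, h - win + 1, win):
            for x in range(0, w - win + 1, win):
                feats.append(band[y:y + win, x:x + win].std())
    return np.array(feats, np.float32)

# images / labels are assumed to be size-normalized word images and classes.
X = np.array([dwt_window_features(im) for im in images])
clf = SVC().fit(X, labels)
```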
A field experiment was conducted at the field of the Department of Field Crop Science, College of Agriculture, University of Baghdad. The objective was to determine the values of the relative constant of three-way and double crosses of maize. Ten inbreds were used and crossed during the spring and fall seasons of 2009 to produce three-way and double crosses, and ten hybrids were taken from each group. The ten hybrids were grown and selfed during spring 2010 to produce F2 seed. The three-way and double crosses were sown with their parents and the F2 seed during fall 2010 in an RCBD with four replicates. Leaf area, total dry matter, rows/ear, grains/ear, grain weight, and grain weight/plant of the hybrids, parents, and F2 plants were recorded. Results showed that …
The Elzaki transform Adomian decomposition method (ETADM), an elegant combination of the two techniques, has been employed in this work to solve nonlinear Riccati matrix differential equations. Solutions are presented to demonstrate the relevance of the approach, and the results of the proposed strategy are displayed and evaluated with the aid of figures. It is demonstrated that the suggested approach is effective, dependable, and simple to apply to a range of related scientific and technical problems.
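For reference, the two standard ingredients that ETADM combines take the forms below; the Riccati-specific operators from the paper are not reproduced, so this is only the generic machinery.

```latex
% Elzaki transform of f(t) (standard definition):
E\{f(t)\}(v) \;=\; T(v) \;=\; v \int_{0}^{\infty} f(t)\, e^{-t/v}\, dt .

% Adomian decomposition: the solution and the nonlinear term N(u)
% are expanded as series, with A_n the Adomian polynomials:
u = \sum_{n=0}^{\infty} u_n , \qquad
N(u) = \sum_{n=0}^{\infty} A_n , \qquad
A_n = \frac{1}{n!} \frac{d^{n}}{d\lambda^{n}}
      \Bigl[ N\Bigl( \sum_{k=0}^{\infty} \lambda^{k} u_k \Bigr) \Bigr]_{\lambda=0} .
```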
The main purpose of this work is to apply a new method, so-called LTAM, which couples the Tamimi and Ansari iterative method (TAM) with the Laplace transform (LT). The method is used to solve a problem of non-fatal disease spread in a society that is assumed to have a fixed size during the epidemic period. We apply the method to obtain an approximate analytic solution to the nonlinear system of the intended model. Moreover, the absolute error between the numerical solutions and the ten LTAM iteration approximations of the epidemic model, along with the maximum error remainder, was calculated using the MATHEMATICA® 11.3 program to illustrate the effectiveness of the method.
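As a sketch of the coupling (generic form only; the epidemic model's specific right-hand sides are not given in the abstract): writing the problem as $L(u) + N(u) + g = 0$ with $L$ linear and $N$ nonlinear, TAM solves a sequence of linear sub-problems, each of which the Laplace transform converts into an algebraic equation.

```latex
% TAM iteration: each step is a linear problem in the next iterate.
L\bigl(u_0(t)\bigr) + g(t) = 0 ,
\qquad
L\bigl(u_{n+1}(t)\bigr) + N\bigl(u_n(t)\bigr) + g(t) = 0 ,
\quad n = 0, 1, 2, \dots
% In LTAM, the Laplace transform is applied to each linear sub-problem,
% solved algebraically in the transform domain, and then inverted.
```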
Currently, with the huge increase in modern communication and network applications, the speed of transmission and the storage of data in compact forms are pressing issues. An enormous number of images are stored and shared every moment, especially in the social media realm; unfortunately, even with these marvelous applications, the limited size of transmitted data is still the main restriction. Essentially all these applications utilize the well-known Joint Photographic Experts Group (JPEG) standard techniques; likewise, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different …
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image blocks according to their nature, along with using polynomial approximation to decompose the image signal, followed by applying run-length coding to the residual part of the image, which represents the error caused by the polynomial approximation. Then, Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can achieve promising performance.
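A sketch of the polynomial-approximation/residual idea, assuming a first-order (plane) polynomial per block and 16x16 blocks; the paper's nature-based block splitting and the final Huffman stage are omitted here.

```python
import numpy as np

def plane_fit_residual(block):
    """Fit a first-order polynomial a + b*x + c*y to one image block
    (an assumed polynomial order) and return the coefficients plus the
    integer residual that would go on to the entropy-coding stages."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.stack([np.ones(h * w), x.ravel(), y.ravel()], axis=1)
    coef, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    pred = (A @ coef).round().astype(np.int32).reshape(h, w)
    return coef, block.astype(np.int32) - pred

def run_length(seq):
    """Run-length encode the residual as (value, count) pairs; the pairs
    and the coefficients would then be Huffman coded, as in the paper."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

block = np.random.randint(0, 256, (16, 16))   # stand-in medical-image block
coef, resid = plane_fit_residual(block)
pairs = run_length(resid.ravel())
```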
In this paper, a simple medical image compression technique is proposed, based on utilizing the residual of an autoregressive (AR) model along with bit-plane slicing (BPS) to exploit spatial redundancy efficiently. The results showed that the compression performance of the proposed technique is improved about twofold on average compared to the traditional autoregressive approach, while preserving image quality by considering only the significant layers that contribute most to the image.
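An illustrative sketch of the two ingredients, under strong assumptions: a first-order, row-wise AR predictor and magnitude-only bit planes. The paper's actual AR model order, sign handling, and layer-selection rule may differ.

```python
import numpy as np

def ar1_residual(img):
    """Predict each pixel from its left neighbor with one AR coefficient
    (least-squares estimate) and return the integer prediction residual."""
    prev = img[:, :-1].ravel().astype(np.float64)
    curr = img[:, 1:].ravel().astype(np.float64)
    a = (prev @ curr) / (prev @ prev)          # AR(1) coefficient
    resid = np.zeros(img.shape, dtype=np.int32)
    resid[:, 0] = img[:, 0]                    # first column stored raw
    resid[:, 1:] = img[:, 1:].astype(np.int32) \
        - np.round(a * img[:, :-1]).astype(np.int32)
    return a, resid

def bit_planes(arr8):
    """Slice an 8-bit array into eight bit planes, MSB first; only the
    significant planes would be retained, as the abstract describes."""
    return [(arr8 >> b) & 1 for b in range(7, -1, -1)]

img = np.random.randint(1, 256, (64, 64), dtype=np.uint8)  # stand-in image
a, resid = ar1_residual(img)
mag = np.clip(np.abs(resid), 0, 255).astype(np.uint8)      # sign map kept separately
planes = bit_planes(mag)
```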