Audio Compression Using Transform Coding with LZW and Double Shift Coding
Zainab J. Ahmed & Loay E. George
Conference paper, New Trends in Information and Communications Technology Applications; part of the Communications in Computer and Information Science book series (CCIS, volume 1511). First online: 11 January 2022.

Abstract. The need for audio compression remains a vital issue because of its significance in reducing the data size of one of the most common digital media exchanged between distant parties. In this paper, the efficiencies of two audio compression modules were investigated; the first module is based on the discrete cosine transform and the second on the discrete wavelet transform. The proposed audio compression system consists of the following steps: (1) load the digital audio data; (2) transformation (i.e., bi-orthogonal wavelet or discrete cosine transform) to decompose the audio signal; (3) quantization (depending on the transform used); (4) run-length coding of the quantized data, which is separated into two sequence vectors (runs and non-zero values) to reduce the long zero-run sequences. Each resulting vector is passed to an entropy encoder to complete the compression process. Two entropy encoders are used: the first is the lossless compression method LZW, and the second is an advanced version of the traditional shift coding method called double shift coding (DSC). The proposed system's performance is analyzed using distinct audio samples of different sizes and characteristics with various audio signal parameters, and is evaluated using the peak signal-to-noise ratio (PSNR) and the compression ratio (CR). The results show that the system is simple and fast and achieves good compression gain, and that the DSC encoding time is less than the LZW encoding time.
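As a concrete illustration of steps (2)-(4), the following is a minimal sketch of the DCT branch of the pipeline: transform a frame, quantize the coefficients, and decompose the result into a runs vector and a non-zero-values vector. The frame length, quantization step, and vector layout are illustrative assumptions rather than the paper's settings, and the final entropy-coding stage (LZW or double shift coding) is only indicated, not implemented.

```python
# Minimal sketch of the DCT branch: transform -> quantize -> run/non-zero split.
# Frame length and q_step are illustrative assumptions, not the paper's values.
import numpy as np
from scipy.fft import dct, idct

def compress_frame(frame, q_step=0.02):
    """Return (runs, values): zero-run lengths and surviving coefficients."""
    coeffs = dct(frame, norm='ortho')                 # step (2): decomposition
    quant = np.round(coeffs / q_step).astype(int)     # step (3): quantization

    # Step (4): run/non-zero decomposition. Each vector would then be fed
    # to an entropy coder (LZW or double shift coding) in the full system.
    runs, values, zero_run = [], [], 0
    for c in quant:
        if c == 0:
            zero_run += 1
        else:
            runs.append(zero_run)
            values.append(int(c))
            zero_run = 0
    runs.append(zero_run)                             # trailing zeros
    return runs, values

def decompress_frame(runs, values, n, q_step=0.02):
    """Rebuild the quantized vector and invert quantization and transform."""
    quant = []
    for r, v in zip(runs, values):                    # runs[i] zeros before values[i]
        quant.extend([0] * r + [v])
    quant.extend([0] * runs[-1])                      # trailing zeros
    return idct(np.array(quant[:n], dtype=float) * q_step, norm='ortho')

# Round trip on a synthetic 1 kHz tone sampled at 16 kHz.
t = np.arange(1024) / 16000.0
frame = np.sin(2 * np.pi * 1000.0 * t)
runs, values = compress_frame(frame)
restored = decompress_frame(runs, values, len(frame))
psnr = 10 * np.log10(frame.max() ** 2 / np.mean((frame - restored) ** 2))
print(len(values), "non-zero coefficients, PSNR =", round(psnr, 1), "dB")
```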
E-Learning packages are content and instructional methods delivered on a computer (whether over the Internet or an intranet) and designed to build knowledge and skills related to individual or organizational goals. This definition addresses the what (training delivered in digital form), the how (content and instructional methods that help learn the content), and the why (improving organizational performance by building job-relevant knowledge and skills in workers). This paper presents the design and implementation of a learning package for the Prolog programming language, built with the Visual Basic .NET 2010 programming language in conjunction with Microsoft Office Access 2007. The package also introduces several fac
Big data analysis has important applications in many areas such as sensor networks and connected healthcare. The high volume and velocity of big data bring many challenges to data analysis. One possible solution is to summarize the data and provide a manageable data structure that holds a scalable summarization for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such a
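As an illustration of the entropy discretization mentioned above, the sketch below finds a single binary cut point on one numeric attribute by minimizing the weighted class entropy; it operates on raw values for clarity and does not model the multi-resolution summarization structure the paper builds on.

```python
# Entropy-based discretization of one numeric attribute: choose the cut point
# that minimizes the weighted class entropy of the two resulting intervals.
# This toy version works on raw values, not on the summarization structure.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(x, y):
    """Return (cut_point, weighted_entropy) for the best binary split."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_cut_point, best_score = None, np.inf
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        cut = (x[i] + x[i - 1]) / 2.0
        score = (i * entropy(y[:i]) + (len(y) - i) * entropy(y[i:])) / len(y)
        if score < best_score:
            best_cut_point, best_score = cut, score
    return best_cut_point, best_score

# Example: one attribute, two classes; the best cut cleanly separates them.
x = np.array([1.0, 1.2, 1.3, 4.8, 5.1, 5.3])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_cut(x, y))   # -> (3.05, 0.0)
```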
Goodness-of-fit tests are intended to verify the null hypothesis that the observations of a sample under study conform to a specific probability distribution. Such cases arise very frequently in practical applications, in all fields, and particularly in genetics, medical, and biological research. Shapiro and Wilk proposed their intuitive goodness-of-fit test with scale parameters in 1965 (
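For context, the sketch below shows how the Shapiro-Wilk goodness-of-fit test discussed above is typically applied, using SciPy's implementation on a synthetic sample; it is a usage illustration only, not the authors' procedure or their proposed extension.

```python
# Applying the Shapiro-Wilk test (scipy.stats.shapiro) to a synthetic sample;
# purely a usage illustration of the test discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=50)   # data drawn from a normal

w_stat, p_value = stats.shapiro(sample)
print(f"W = {w_stat:.4f}, p = {p_value:.4f}")
# A large p-value means the null hypothesis of normality is not rejected.
```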
In this research a computational simulation has been carried out on the design and properties of the electrostatic mirror, and a mathematical expression has been suggested to represent the axial potential of an electrostatic mirror. The electron beam path along the mirror trajectory has been investigated using the Bimurzaev technique with the aid of the Runge-Kutta method. The spherical and chromatic aberration coefficients of the mirror have been computed and normalized in terms of the focal length. The choice of the mirror depends on the operational requirements. The electrode shape of the two-electrode mirror has been determined using the SIMION computer package. Computations have shown that the suggested potentials giv
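The sketch below shows the fourth-order Runge-Kutta scheme named above in its generic form; since the paper's axial potential expression and ray equation are not reproduced here, a simple harmonic oscillator stands in as the integrated system.

```python
# Generic fourth-order Runge-Kutta step; a harmonic oscillator stands in for
# the mirror's ray equation, which is not reproduced here.
import numpy as np

def rk4_step(f, t, y, h):
    """Advance y' = f(t, y) by one step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Stand-in ODE: y'' = -y written as a first-order system [y, y'].
f = lambda t, y: np.array([y[1], -y[0]])
y, t, h = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(int(np.pi / h)):        # integrate over half a period
    y = rk4_step(f, t, y, h)
    t += h
print(y)                                # close to [-1, 0]
```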
Data Driven Requirement Engineering (DDRE) represents a vision for a shift from the static traditional methods of doing requirements engineering to dynamic, data-driven, user-centered methods. Given the data available and the increasingly complex requirements of software systems whose functions must adapt to changing needs to gain the trust of their users, such an approach is needed within a continuous software engineering process. This need drives the emergence of new challenges in the discipline of requirements engineering to meet the required changes. The problem addressed in this study was data discrepancies that hampered the needs elicitation process, so that the resulting software ultimately contained discrepancies and could not meet the need
The "Nudge" Theory is considered one of the most recent theories; its influence is clear in the economic, health, and educational sectors owing to the intensity of studies on it and its applications, but it has not yet been included in crime prevention studies. The use of Nudge theory appears to enrich theory in the field of crime prevention and to provide modern, effective, and implementable mechanisms.
The study deals with the "integrative review" approach, which is a distinctive form of research that generates new knowledge on a topic through reviewing, criticizing, and synthesizing representative literature on the topic in an integrated manner so that new frameworks and perspectives are created around it.
The study is bas
This research aims to investigate the color distribution of a huge sample of 613654 galaxies from the Sloan Digital Sky Survey (SDSS). Those galaxies are at redshifts of 0.001 - 0.5 and have magnitudes of g = 17 - 20. Five subsamples of galaxies at redshifts of (0.001 - 0.1), (0.1 - 0.2), (0.2 - 0.3), (0.3 - 0.4) and (0.4 - 0.5) have been extracted from the main sample. The color distributions (u-g), (g-r) and (u-r) have been produced and analysed using a Matlab code for the main sample as well as all five subsamples. A bimodal Gaussian fit to the color distributions was then carried out using minimum chi-square in Microsoft Office Excel. The results showed that the color distributions of the main sample and
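The sketch below illustrates a bimodal Gaussian fit by chi-square minimization of a binned color distribution, the procedure described above; the synthetic colors, the binning, and the Poisson bin errors are assumptions for illustration and are unrelated to the SDSS sample.

```python
# Chi-square fit of a bimodal Gaussian to a binned color distribution.
# The synthetic "blue" and "red" colors and Poisson bin errors are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def bimodal(x, a1, mu1, s1, a2, mu2, s2):
    gauss = lambda a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return gauss(a1, mu1, s1) + gauss(a2, mu2, s2)

rng = np.random.default_rng(1)
colors = np.concatenate([rng.normal(1.6, 0.30, 4000),    # blue cloud
                         rng.normal(2.6, 0.20, 2500)])   # red sequence

counts, edges = np.histogram(colors, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])

# With per-bin errors in `sigma`, least squares is the minimum chi-square fit.
sigma = np.sqrt(np.maximum(counts, 1))
p0 = [counts.max(), 1.5, 0.3, 0.5 * counts.max(), 2.5, 0.2]
params, _ = curve_fit(bimodal, centers, counts, p0=p0, sigma=sigma)
print(np.round(params, 3))   # amplitudes, means, and widths of the two modes
```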
... Show MoreIn this work, copper substituted cobalt ferrite nanoparticles with
chemical formula Co1-xCuxFe2O4 (x=0, 0.3, and 0.7), has been
synthesized via hydrothermal preparation method. The structure of
the prepared materials was characterized by X-ray diffraction (XRD).
The (XRD) patterns showed single phase spinel ferrite structure.
Average crystallite size (D), lattice constant (a), and crystal density
(dx) have been calculated from the most intense peak (311).
Comparative standardization also performed using smaller average
particle size (D) on the XRD patterns of as-prepared ferrite samples
in order to select most convenient hydrothermal synthesis conditions
to get ferrite materials with smallest average particl
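The quantities D, a, and dx quoted above are conventionally obtained from the (311) reflection via Scherrer's formula, Bragg's law, and the spinel X-ray density relation; the sketch below applies these standard relations to placeholder inputs (Cu K-alpha wavelength, an assumed peak position and width, and an approximate molar mass), not the paper's measurements.

```python
# Standard XRD relations for D, a, and dx from the (311) spinel reflection.
# Wavelength, peak position/width, and molar mass below are placeholders.
import numpy as np

WAVELENGTH = 1.5406e-10   # Cu K-alpha in metres (assumed radiation)
K = 0.9                   # Scherrer shape factor
N_A = 6.022e23            # Avogadro's number

def crystallite_size(beta_deg, two_theta_deg):
    """Scherrer: D = K * lambda / (beta * cos(theta)), beta = FWHM in degrees."""
    beta = np.radians(beta_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return K * WAVELENGTH / (beta * np.cos(theta))

def lattice_constant_311(two_theta_deg):
    """Bragg's law d = lambda / (2 sin theta), then a = d * sqrt(h^2+k^2+l^2)."""
    theta = np.radians(two_theta_deg / 2.0)
    d = WAVELENGTH / (2.0 * np.sin(theta))
    return d * np.sqrt(3**2 + 1**2 + 1**2)

def xray_density(a, molar_mass_g):
    """dx = 8 M / (N_A a^3) for a spinel cell with 8 formula units, in g/cm^3."""
    a_cm = a * 100.0
    return 8.0 * molar_mass_g / (N_A * a_cm**3)

a = lattice_constant_311(35.5)                       # placeholder 2-theta
print(crystallite_size(0.25, 35.5) * 1e9, "nm")      # placeholder FWHM
print(a * 1e10, "Angstrom")
print(xray_density(a, 234.6), "g/cm^3")              # ~CoFe2O4 molar mass
```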
This study focused on spectral clustering (SC) and three-constraint affinity matrix spectral clustering (3CAM-SC) to determine the number of clusters and the membership of the clusters of the COST 2100 channel model (C2CM) multipath dataset simultaneously. Various multipath clustering approaches solve only the number of clusters without taking into consideration the membership of clusters. The problem of giving only the number of clusters is that there is no assurance that the membership of the multipath clusters is accurate even though the number of clusters is correct. SC and 3CAM-SC aimed to solve this problem by determining the membership of the clusters. The cluster and the cluster count were then computed through the cluster-wise J
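For reference, the sketch below implements plain spectral clustering on a Gaussian affinity matrix (affinity, normalized Laplacian, eigenvector embedding, k-means); the three-constraint affinity matrix of 3CAM-SC and the COST 2100 multipath parameters are not modeled, and toy 2-D points stand in for multipath components.

```python
# Plain spectral clustering: Gaussian affinity -> normalized Laplacian ->
# eigenvector embedding -> k-means. Toy 2-D points stand in for multipaths;
# the 3CAM-SC constraints are not modeled.
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering(points, n_clusters, sigma=1.0):
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    affinity = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(affinity, 0.0)

    # Normalized Laplacian L = I - D^(-1/2) A D^(-1/2).
    d_inv_sqrt = 1.0 / np.sqrt(affinity.sum(axis=1))
    laplacian = np.eye(len(points)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]

    # Rows of the smallest-eigenvalue eigenvectors embed the points.
    _, vecs = np.linalg.eigh(laplacian)
    embedding = vecs[:, :n_clusters]
    _, labels = kmeans2(embedding, n_clusters, minit='++', seed=0)
    return labels

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])
print(spectral_clustering(pts, 2))   # two well-separated groups of 30 labels
```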
The automatic estimation of speaker characteristics, such as height, age, and gender, has various applications in forensics, surveillance, customer service, and many human-robot interaction applications. These applications are often required to produce a response promptly. This work proposes a novel approach to speaker profiling by combining filter bank initializations, such as continuous wavelets and gammatone filter banks, with one-dimensional (1D) convolutional neural networks (CNN) and residual blocks. The proposed end-to-end model goes from the raw waveform to an estimated height, age, and gender of the speaker by learning speaker representation directly from the audio signal without relying on handcrafted and pre-computed acou
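The sketch below outlines a raw-waveform 1D CNN with a residual block and separate height, age, and gender heads, in the spirit of the model described above; the layer sizes, the filter-bank initialization, and the head design are simplified assumptions, not the authors' architecture.

```python
# Simplified raw-waveform 1D CNN with a residual block and three output heads.
# Layer sizes and the filter-bank initialization are illustrative assumptions.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class SpeakerProfiler(nn.Module):
    def __init__(self):
        super().__init__()
        # The first conv acts as a learnable filter bank over the raw waveform;
        # its kernels could be initialized from gammatone or wavelet filters.
        self.frontend = nn.Conv1d(1, 64, kernel_size=401, stride=160, padding=200)
        self.blocks = nn.Sequential(ResBlock1d(64), ResBlock1d(64))
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.height = nn.Linear(64, 1)    # regression head
        self.age = nn.Linear(64, 1)       # regression head
        self.gender = nn.Linear(64, 2)    # classification head
    def forward(self, wave):              # wave: (batch, 1, samples)
        h = self.pool(self.blocks(torch.relu(self.frontend(wave)))).squeeze(-1)
        return self.height(h), self.age(h), self.gender(h)

model = SpeakerProfiler()
out = model(torch.randn(2, 1, 16000))     # two 1-second clips at 16 kHz
print([o.shape for o in out])
```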