Over the last two decades, audio compression has become the topic of much research due to the importance of this field, which bears directly on storage capacity and transmission requirements. The rapid development of the computer industry has increased the demand for high-quality audio data, and accordingly the development of audio compression technologies is of great importance; lossy and lossless are the two categories of compression. This paper aims to review lossy audio compression methods and to summarize the importance and uses of each method.
The bandwidth requirements of telecommunication network users have increased rapidly over the last decades, and optical access technologies must meet the bandwidth demand of each user. Passive optical networks (PONs) support a maximum data rate of 100 Gbps by using the Orthogonal Frequency Division Multiplexing (OFDM) technique in the optical access network. In this paper, optical broadband access networks with several techniques, from Time Division Multiplexing Passive Optical Networks (TDM PON) to Orthogonal Frequency Division Multiplexing Passive Optical Networks (OFDM PON), are presented. The architectures, advantages, disadvantages, and main parameters of these optical access networks are discussed and reported, which have many ad
Technological development in recent years has led to increased access speeds in internet networks, allowing a huge number of users to watch videos online.
Video streaming is an important type of real-time video session and one of the most popular applications in networking systems. Quality of Service (QoS) techniques give us an indication of the effect of multimedia traffic on network performance, but these techniques do not reflect user perception. Using QoS and Quality of Experience (QoE) together can guarantee the distribution of video content according to video content characteristics and the user experience.
To measure the users’ perceptio
In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB color bands are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: most and least significant. Given the importance of the most significant value (MSV), which is affected by even simple modifications, an adaptive lossless image compression system is proposed using bit-plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning, followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV); it is based on
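The magnitude decomposition described above can be illustrated with a minimal sketch: each 8-bit pixel is split into a most-significant value (the top bit planes, routed to the lossless branch) and a least-significant value (the bottom bit planes, routed to the lossy branch). The 4/4 split point below is an illustrative assumption, not the paper's stated configuration.

```python
import numpy as np

# Hedged sketch of the magnitude decomposition step: split each 8-bit
# pixel into MSV (top bit planes) and LSV (bottom bit planes). The
# msb_bits = 4 split is an assumption for illustration only.
def decompose(band: np.ndarray, msb_bits: int = 4):
    low = 8 - msb_bits
    msv = band >> low                  # high bit planes (lossless branch)
    lsv = band & ((1 << low) - 1)      # low bit planes (lossy branch)
    return msv, lsv

def recompose(msv: np.ndarray, lsv: np.ndarray, msb_bits: int = 4):
    # Reassemble the original pixel from its two parts.
    return (msv.astype(np.uint8) << (8 - msb_bits)) | lsv

band = np.array([[200, 37], [129, 255]], dtype=np.uint8)
msv, lsv = decompose(band)
```

Recomposing `msv` and `lsv` recovers the original band exactly, which is what lets the two halves be compressed by different (lossless vs. lossy) pipelines.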
Color image compression is a good way to encode digital images by decreasing the number of bits needed to represent the image. The main objective is to reduce storage space, reduce transmission costs, and maintain good quality. In the current research work, a simple and effective methodology is proposed for compressing color art digital images and obtaining a low bit rate: the matrix resulting from the scalar quantization process (reducing the number of bits from 24 to 8) is compressed using displacement coding, and the remainder is then compressed using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and
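The two core steps named above, 24-to-8-bit scalar quantization followed by LZW, can be sketched as follows. The 3-3-2 bit allocation for the quantizer is an assumption (a common scheme for 8-bit color indices), and the LZW encoder is a generic textbook version, not the paper's exact implementation.

```python
# Hedged sketch: scalar-quantize a 24-bit RGB pixel down to 8 bits
# (assumed 3-3-2 allocation: 3 bits red, 3 green, 2 blue), then
# compress the resulting byte stream with a generic LZW encoder.
def quantize_332(r: int, g: int, b: int) -> int:
    return ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6)

def lzw_encode(data: bytes) -> list[int]:
    # Dictionary starts with all single bytes; new phrases get
    # codes 256, 257, ... as they are discovered.
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                       # extend the current phrase
        else:
            out.append(dictionary[w])    # emit code for known prefix
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

Repetitive quantized-index streams are exactly where LZW pays off, since repeated pixel runs collapse into single dictionary codes.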
The research aims to identify the traits or characteristics of women in terms of external appearance, motives for behavior, feelings, mood, and ability. It moreover examines women's relationships with others as presented by the Iraqi satirical television show “State of Melon”. The researcher adopted the survey approach, using the method of content analysis to study the research sample represented by the “State of Melon” show, which was broadcast on a group of channels: Hona Baghdad Satellite Channel, Asia Satellite Channel, Dijla Satellite Channel, and UTV Satellite Channel. For this, the researcher used Margaret Gallagher's model to analyze the image of women in
The study includes the collection of data about cholera from six health centers across nine locations covering 2,500 km² and a population of 750,000 individuals. The average rate of infection for the six centers during 2000-2003 was recorded. There were 3,007 cases of diarrhea diagnosed as cholera caused by Vibrio cholerae. The percentage of infection was 14.7% for males and 13.2% for females. The percentage of infection for children less than one year old was 6.1%, while it was 6.9% for ages 1-5 years and 14.5% for ages over 5 years. The total percentage of patients who stayed in hospital was 7.7% (4.2% for males and 3.4% for females). The bacteria were isolated and identified from 7 cases in the Central Laboratory for Health in Baghdad. In
This paper aims to explain the effect of workplace respect on employee performance at the Abu Ghraib Dairy Factory (AGDF). To achieve the research aim, the analytical and descriptive approach was chosen, using a questionnaire tool for collecting data. It covers 22 items: ten for the workplace respect variable and twelve for the employee performance variable. The research population comprised human resources working at AGDF in Baghdad within two administrative levels (top and middle), selected via a purposive stratified sampling approach. Seventy questionnaire forms were distributed and 65 were received; however, six of them had missing data and were not included in the final data analysis. The main results are t
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have too little data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is fed a significant amount of labeled data to automatically learn representations; ultimately, more data generates a better DL model, and performance is also application dependent. This issue is the main barrier for
The main aim of image compression is to reduce an image's size so that it can be transmitted and stored; therefore many methods have appeared to compress images, one of which is the Multilayer Perceptron (MLP). The MLP method is an artificial neural network based on the back-propagation algorithm for compressing the image. Since this algorithm depends only on the number of neurons in the hidden layer, that alone is not enough to reach the desired results, so we must also consider the criteria on which the compression process depends to obtain the best results. In our research, we trained a group of TIFF images of size (256*256) and compressed them using MLP for each
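The MLP-with-back-propagation scheme described above can be sketched as a small autoencoder: image blocks are mapped through a narrow hidden layer (the compressed representation) and back to their original size, with the weights trained by back-propagation. The layer sizes, learning rate, and synthetic training blocks below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hedged sketch: an MLP autoencoder trained with back-propagation to
# compress 8x8 image blocks (64 values) into 16 hidden activations.
# All hyperparameters here are assumptions for illustration.
rng = np.random.default_rng(0)
n_in, n_hid = 64, 16

# Synthetic low-rank stand-ins for 8x8 image blocks, scaled into [0, 1].
latent = rng.random((100, 8))
blocks = latent @ rng.random((8, n_in))
blocks /= blocks.max()

W1 = rng.normal(0, 0.1, (n_in, n_hid))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hid, n_in))   # decoder weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(x):
    return sigmoid(sigmoid(x @ W1) @ W2)

mse_before = float(np.mean((reconstruct(blocks) - blocks) ** 2))
lr, n = 0.5, blocks.shape[0]
for _ in range(500):
    h = sigmoid(blocks @ W1)             # compressed representation
    y = sigmoid(h @ W2)                  # reconstruction
    d_y = (y - blocks) * y * (1 - y)     # backprop through output layer
    d_h = (d_y @ W2.T) * h * (1 - h)     # backprop through hidden layer
    W2 -= lr / n * (h.T @ d_y)
    W1 -= lr / n * (blocks.T @ d_h)
mse_after = float(np.mean((reconstruct(blocks) - blocks) ** 2))
```

The 16 hidden activations are the "compressed" form of each 64-value block; the ratio of hidden to input units controls the trade-off between compression rate and reconstruction quality that the abstract alludes to.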