A common approach to color image compression starts by transforming the red, green, and blue (RGB) color model into a desired color model, then applying compression techniques, and finally retransforming the result back into the RGB model. In this paper, a new color image compression method based on multilevel block truncation coding (MBTC) and vector quantization is presented. By exploiting the human visual system's response to color, a bit allocation process is implemented to distribute the encoding bits more effectively.

To improve the efficiency of vector quantization (VQ), modifications have been implemented. To combine the computational simplicity and edge-preservation properties of MBTC with the high compression ratio and good subjective performance of the modified VQ, a hybrid MBTC-modified VQ color image compression method is presented. The analysis results indicate that the performance of the suggested method is better: the reconstructed images are less distorted and are compressed at a higher ratio (59:1).
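
As an illustration of the block truncation coding stage only, the sketch below shows basic single-level BTC on one grayscale block; the paper's MBTC extends this to multiple quantization levels, and the HVS-guided bit allocation and modified VQ stages are not shown. All names are illustrative and not taken from the paper.

```python
import numpy as np

def btc_encode(block):
    """Basic block truncation coding: quantize a block to two levels
    chosen to preserve the block mean and standard deviation."""
    mean = block.mean()
    std = block.std()
    bitmap = block >= mean                 # 1 bit per pixel
    q = bitmap.sum()                       # number of "high" pixels
    m = block.size
    if q in (0, m):                        # flat block: a single level suffices
        low = high = mean
    else:
        low = mean - std * np.sqrt(q / (m - q))
        high = mean + std * np.sqrt((m - q) / q)
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    """Rebuild the block from the bitmap and the two output levels."""
    return np.where(bitmap, high, low)

block = np.array([[121, 114,  56,  47],
                  [ 37, 200, 247, 255],
                  [ 16,   0,  12, 169],
                  [ 43,   5,   7, 251]], dtype=float)
bitmap, low, high = btc_encode(block)
reconstructed = btc_decode(bitmap, low, high)
```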

In today's world, digital image storage and transmission play an essential role, as images are heavily involved in data transfer. Digital images usually require large storage space and transmission bandwidth, so image compression is important in data communication. This paper discusses a unique and novel lossy image compression approach. Exactly 50% of the image pixels are encoded, and the other 50% are excluded. The method uses a block approach. The pixels of each block are transformed with a novel transform. Pixel nibbles are mapped as a single bit in a transform table, generating more zeros, which helps achieve compression. Later, the inverse transform is applied during reconstruction, and a single bit value from the table is rem...

In recent years, images have been used widely by online social network providers and numerous organizations such as governments, police departments, colleges, universities, and private companies, and they are held in vast databases. Thus, efficient storage of such images is advantageous, and their compression is an appealing application. Image compression generally represents the significant image information compactly in a smaller number of bytes, while insignificant image information (redundancy) is removed; for this reason, image compression plays an important role in data transfer and storage, especially due to the data explosion that is increasing significantly. It is a challenging task, since there are highly complex unknown correlations...

The wavelet transform has become a useful computational tool for a variety of signal and image processing applications. The aim of this paper is to present a comparative study of various wavelet filters. Eleven different wavelet filters (Haar, Mallat, Symlets, Integer, Conflict, Daubechies 1, Daubechies 2, Daubechies 4, Daubechies 7, Daubechies 12, and Daubechies 20) are used to compress seven 256x256 true-color images as samples. Image quality parameters such as peak signal-to-noise ratio (PSNR) and normalized mean square error have been used to evaluate the performance of the wavelet filters. In our work, PSNR is used as a measure of accuracy performance...
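
For reference, the two quality metrics mentioned above can be computed as in this generic sketch of their standard definitions for 8-bit images (peak value 255); it is not code from the paper, and the function names are illustrative.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')                # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def nmse(original, reconstructed):
    """Mean square error normalized by the energy of the original image."""
    original = original.astype(float)
    diff = original - reconstructed.astype(float)
    return np.sum(diff ** 2) / np.sum(original ** 2)
```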

The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing image content. Data may be compressed by reducing the redundancy in the original data, but this makes the data more prone to errors. In this paper, image compression is based on a new method created for image compression called the Five Modulus Method (FMM). The new method consists of converting each pixel value in a (4x4, 8x8, or 16x16) block into a multiple of 5 for each of the R, G, and B arrays. After that, the new values can be divided by 5 to obtain values that are 6 bits long for each pixel, which take less storage space than the original 8-bit values.
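
A minimal sketch of the FMM idea as described above: each pixel is rounded to the nearest multiple of 5 and divided by 5, so every code (0 to 51) fits in 6 bits instead of 8. Applying it per block and per R/G/B array is assumed, and the function names are illustrative rather than the paper's.

```python
import numpy as np

def fmm_encode(channel):
    """Five Modulus Method: round each pixel to the nearest multiple of 5,
    then divide by 5 so the result (0..51) fits in 6 bits instead of 8."""
    rounded = 5 * np.round(channel.astype(float) / 5.0)
    rounded = np.clip(rounded, 0, 255)
    return (rounded / 5).astype(np.uint8)      # 6-bit codes

def fmm_decode(codes):
    """Reconstruct the (lossy) channel by multiplying the codes back by 5."""
    return (codes.astype(np.uint16) * 5).clip(0, 255).astype(np.uint8)

# Applied independently to the R, G, and B arrays of each block.
r_block = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
codes = fmm_encode(r_block)
r_reconstructed = fmm_decode(codes)
```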

In this study, an efficient compression system is introduced. It is based on the wavelet transform and two types of 3-dimensional (3D) surface representation: Cubic Bezier Interpolation (CBI) and 1st-order polynomial approximation. Each is applied at a different scale of the image: CBI is applied over wide areas of the image in order to prune the image components that show large-scale variation, while the 1st-order polynomial is applied over small areas of the residue component (i.e., after subtracting the cubic Bezier surface from the image) in order to prune the locally smooth components and obtain better compression gain. The produced cubic Bezier surface is subtracted from the image signal to obtain the residue component. Then, t...
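
As a sketch of the 1st-order polynomial stage only, the snippet below fits a plane z = a + b*x + c*y to a small residue block by least squares and keeps just the three coefficients; the cubic Bezier surface fitting and wavelet stages are omitted, and the names are assumptions rather than the paper's.

```python
import numpy as np

def fit_plane(residue_block):
    """Least-squares fit of z = a + b*x + c*y to a 2D residue block.
    Returns the three coefficients (a, b, c)."""
    h, w = residue_block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, residue_block.ravel(), rcond=None)
    return coeffs

def eval_plane(coeffs, shape):
    """Evaluate the fitted plane over a block of the given shape."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    a, b, c = coeffs
    return a + b * xs + c * ys

residue = np.random.randn(8, 8)
coeffs = fit_plane(residue)
smooth_component = eval_plane(coeffs, residue.shape)   # locally smooth part to prune
```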

The rapid development of telemedicine services and the requirements for exchanging medical information between physicians, consultants, and health institutions have made the protection of patients' information an important priority for any future e-health system. The protection of medical information, including the cover (i.e., the medical image), has requirements that differ slightly from those for protecting other information. It is necessary to preserve the cover faithfully because of its importance on the receiving side, as medical staff use this information to provide a diagnosis and save a patient's life. If the cover is tampered with, the goal of telemedicine cannot be achieved. Therefore, this work provides an in...

To maintain a sustained competitive position in the contemporary environment of the knowledge economy, organizations, as open social systems, must have the ability to learn and know how to adapt to rapid changes properly so that organizational objectives are achieved efficiently and effectively. A multilevel approach is adopted, proposing that organizational learning suffers from a lack of interest in the strategic competitive performance of the organization. This remains implicit in almost all models of organizational learning, and there is little focus on how learning organizations achieve sustainable competitive advantage. A dynamic model that captures t...

The principal goal guiding any designed encryption algorithm must be security against unauthorized attackers. Within the last decade, there has been a vast increase in the communication of digital computer data in both the private and public sectors. Much of this information has significant value and therefore requires protection by a strong cipher algorithm. Such an algorithm defines the mathematical steps required to transform data into a cryptographic cipher and to transform the cipher back to the original form. Performance and security level are the main characteristics that differentiate one encryption algorithm from another. This paper suggests a new technique to enhance the performance of the Data E...

Compressing an image and reconstructing it without degrading its original quality is one of the challenges that still exists today. A coding system that considers both quality and compression rate is implemented in this work. The implemented system applies a high synthetic entropy coding schema to store the compressed image in the smallest possible size without affecting its original quality. This coding schema is applied with two transform-based techniques, one with the Discrete Cosine Transform and the other with the Discrete Wavelet Transform. The implemented system was tested with different standard color images, and the obtained results with different evaluation metrics are shown. A comparison was made with some previous rel...
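
To make the transform stage concrete, here is a generic sketch of an 8x8 block DCT followed by uniform quantization; the entropy coding schema itself is not shown, and nothing here is taken from the paper's implementation (the quantization step and names are assumptions).

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def transform_block(block, q_step=16):
    """2D DCT of a square block followed by uniform quantization."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    return np.round(coeffs / q_step).astype(int)   # many zeros for smooth blocks

def inverse_block(q_coeffs, q_step=16):
    """Dequantize and apply the inverse 2D DCT."""
    C = dct_matrix(q_coeffs.shape[0])
    return C.T @ (q_coeffs * q_step) @ C

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0  # level-shifted block
quantized = transform_block(block)
reconstructed = inverse_block(quantized) + 128.0
```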

Iris research is focused on developing techniques for identifying and locating relevant biometric features with accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system when used as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method is based on a number of steps, starting from converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the origin...
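
The bit-plane step mentioned above can be sketched as follows: a grayscale eye image is decomposed into its eight bit planes and only the most significant ones are kept for the later parameterization. How many planes count as "most significant", and all names here, are assumptions rather than the paper's.

```python
import numpy as np

def bit_planes(gray):
    """Decompose an 8-bit grayscale image into its 8 bit planes
    (index 0 = least significant, index 7 = most significant)."""
    return [((gray >> k) & 1).astype(np.uint8) for k in range(8)]

def significant_planes(gray, keep=2):
    """Keep only the `keep` most significant bit planes."""
    return bit_planes(gray)[-keep:]

eye = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in for an eye image
msb_planes = significant_planes(eye, keep=2)
```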