The searching process using a binary codebook of the combined Block Truncation Coding (BTC) and Vector Quantization (VQ) method, i.e. a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopt a new method that rotates each binary code word in this codebook from 90° to 270° in 90° steps. Then, we classify each code word according to its angle into four types of binary codebooks (i.e. Pour, Flat, Vertical, or Zigzag). The proposed scheme decreases the time of the coding procedure, with very small distortion per block, by designing a small binary codebook and then rotating each block in it. Moreover, it can improve the efficiency of the coding process even further by decreasing the bit rate (i.e. increasing the compression ratio).
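The rotation step can be sketched as follows; this is a minimal illustration using NumPy's `rot90`, where `expand_codebook` and the 4×4 demo code word are hypothetical names and data, not the authors' actual codebook:

```python
import numpy as np

def expand_codebook(codebook):
    """Rotate each binary code word through 0, 90, 180 and 270 degrees,
    expanding a small codebook into four oriented versions per word."""
    expanded = []
    for cw in codebook:
        for k in range(4):              # k quarter-turns: 0, 90, 180, 270 degrees
            expanded.append(np.rot90(cw, k))
    return expanded

# an illustrative 4x4 binary code word with a vertical edge
cw = np.array([[1, 1, 0, 0]] * 4)
book = expand_codebook([cw])            # 4 oriented versions of cw
```

A search can then restrict itself to the sub-codebook whose orientation class matches the input block, instead of scanning every code word.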
The past years have seen a rapid development in the area of image compression techniques, mainly due to the need for fast and efficient techniques for storage and transmission of data among individuals. Compression is the process of representing data in a compact form rather than in its original, uncompacted form. In this paper, an integer implementation of Arithmetic Coding (AC) and the Discrete Cosine Transform (DCT) were applied to color images. The DCT was applied using the YCbCr color model. The transformed image was then quantized with the standard quantization tables for luminance and chrominance. The quantized coefficients were scanned in zigzag order and the output was encoded using AC. The results showed a decent compression ratio.
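The zigzag-scan step of the pipeline above can be sketched as follows; this is a generic JPEG-style zigzag over a block of quantized coefficients, with the function name and the 4×4 demo block as illustrative assumptions rather than the paper's code:

```python
import numpy as np

def zigzag(block):
    """Scan a 2-D block of quantized DCT coefficients in zigzag order,
    so low-frequency coefficients come first in the output sequence."""
    h, w = block.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1],                           # anti-diagonal index
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))  # direction alternates per diagonal
    return np.array([block[i, j] for i, j in order])

coeffs = np.arange(16).reshape(4, 4)   # stand-in for a quantized block
seq = zigzag(coeffs)
```

Ordering coefficients this way groups the high-frequency values, which are mostly zero after quantization, into long runs that the arithmetic coder can compress efficiently.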
Currently, with the huge increase in modern communication and network applications, the speed of transforming and storing data in compact forms is a pressing issue. Every day an enormous amount of images is stored and shared among people, especially in the social media realm. Unfortunately, even with these marvelous applications, the limited size of transmitted data is still the main restriction; essentially all these applications utilize the well-known Joint Photographic Experts Group (JPEG) standard techniques. Likewise, the construction of universally accepted standard compression systems is urgently required to play a key role in this immense revolution. This review is concerned with different
In this paper, a simple medical image compression technique is proposed, based on utilizing the residual of an autoregressive (AR) model along with bit-plane slicing (BPS) to exploit spatial redundancy efficiently. The results showed that the compression performance of the proposed technique improved by about a factor of two on average compared to the traditional autoregressive model, while preserving image quality, since only the significant layers with high contribution to the image are considered.
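The bit-plane slicing step can be illustrated as follows; this is a minimal sketch of splitting an 8-bit image into binary planes, where the function names are assumptions and the AR modelling of the residual is omitted:

```python
import numpy as np

def bit_planes(img):
    """Slice an 8-bit image into its 8 binary bit planes.
    Plane 7 is the most significant (largest contribution to the image)."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(8)]

def reconstruct(planes):
    """Rebuild the image from its bit planes (inverse of bit_planes)."""
    return sum(p.astype(np.uint16) << b
               for b, p in enumerate(planes)).astype(np.uint8)

img = np.array([[200, 3], [128, 255]], dtype=np.uint8)
planes = bit_planes(img)
```

Keeping only the most significant planes discards the layers with the smallest contribution to pixel values, which is how the quality/size trade-off described above is controlled.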
Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but suffers from a long encoding time. Fractal image compression requires partitioning the image into range blocks. In this work, we introduce an improved partitioning process based on a merge approach, since some range blocks are connected to others. This paper presents a method to reduce the encoding time of this technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the visual quality acceptable.
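A merge criterion of this kind can be sketched as follows; mean and variance are used here as example statistical measures, and the threshold values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def similar(b1, b2, mean_tol=2.0, var_tol=5.0):
    """Decide whether two adjacent range blocks can be merged,
    using mean and variance as the statistical measures.
    Thresholds are illustrative, not tuned values."""
    return (abs(b1.mean() - b2.mean()) < mean_tol and
            abs(b1.var() - b2.var()) < var_tol)

flat_a = np.full((4, 4), 100.0)        # two nearly identical flat blocks
flat_b = np.full((4, 4), 101.0)
textured = 10.0 * np.arange(16.0).reshape(4, 4)
```

Merging statistically similar neighbours reduces the number of range blocks, and hence the number of domain-block comparisons the encoder must perform.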
In this paper, we describe a new method for image denoising. We analyze properties of the multiwavelet coefficients of natural images, and we also suggest a method for computing the multiwavelet transform using a first-order approximation. This paper describes a simple and effective model for noise removal: a new technique for retrieving the image by estimating it from the noisy observation. The proposed algorithm combines soft-thresholding with a mean filter, applied concurrently to the noisy image after dividing it into blocks of equal size; the concurrent processing increases the performance of the enhancement process and decreases the time needed to apply the proposed algorithm.
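The two ingredients named above, soft-thresholding and mean filtering, can be sketched as follows; these are pure-NumPy illustrations, and the multiwavelet transform itself and the block-parallel scheduling are omitted:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold transform coefficients: shrink magnitudes toward
    zero by t, setting anything below t to exactly zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def mean_filter(block):
    """3x3 mean filter via edge padding (simple sketch, no SciPy)."""
    p = np.pad(block, 1, mode='edge')
    h, w = block.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
```

Each equal-sized block of the noisy image would be processed independently with this pair of operations, which is what makes the concurrent (per-block) execution possible.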