Image Compression based on Fixed Predictor Multiresolution Thresholding of Linear Polynomial Nearlossless Techniques

Image compression is a serious issue in computer storage and transmission that makes efficient use of the redundancy embedded within an image itself; in addition, it may exploit human vision or perception limitations to discard imperceivable information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques, and the second stage incorporates a near-lossless compression scheme on top of the first stage. The tested results of both stages are promising, implicitly enhancing the performance of the traditional polynomial model in terms of compression ratio while preserving image quality.
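The predictor-plus-residual decomposition described above can be sketched as follows. This is a minimal illustration using a plain west-neighbour fixed predictor; the paper's actual polynomial model, multiresolution base, and thresholding are not reproduced here, and the function names are hypothetical:

```python
import numpy as np

def predict_and_residual(img):
    """West-neighbour fixed linear predictor: each pixel is predicted
    from the pixel to its left, and the (usually small) residuals are
    what get encoded. The first column has no west neighbour, so its
    raw values pass through as residuals."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]       # west-neighbour prediction
    return img - pred               # residual = actual - predicted

def reconstruct(residual):
    """Invert the predictor: cumulative sum along each row."""
    return np.cumsum(residual, axis=1)

img = np.array([[10, 12, 13], [20, 21, 23]], dtype=np.uint8)
res = predict_and_residual(img)     # residuals: [[10, 2, 1], [20, 1, 2]]
```

For smooth image regions the residuals cluster near zero, which is what makes them cheaper to encode than the raw pixel values.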

Publication Date
Sat Nov 28 2020
Journal Name
Iraqi Journal Of Science
Color Image Compression System by using Block Categorization Based on Spatial Details and DCT Followed by Improved Entropy Encoder

In this paper, a new high-performance lossy compression technique based on the DCT is proposed. The image is partitioned into blocks of size N×N (where N is a multiple of 2), and each block is categorized as high frequency (uncorrelated) or low frequency (correlated) according to its spatial details. This is done by calculating the energy of the block as the absolute sum of the differential pulse code modulation (DPCM) differences between pixels, and comparing it against a specified threshold value to determine the level of correlation. The image blocks are scanned and converted into 1D vectors using a horizontal scan order. Then, the 1D-DCT is applied to each vector to produce transform coefficients. The transformed coefficients will be quantized …
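The block-categorization step can be sketched as below; the threshold value here is an assumed placeholder, not the value used in the paper:

```python
import numpy as np

THRESHOLD = 20  # assumed placeholder; the paper's threshold is not given here

def block_energy(block):
    """Absolute sum of DPCM (previous-pixel) differences over the
    block scanned in horizontal (row-major) order."""
    v = block.astype(np.int32).ravel()      # 1D vector, horizontal scan
    return int(np.abs(np.diff(v)).sum())

def classify(block, threshold=THRESHOLD):
    """'low' = correlated (smooth) block, 'high' = uncorrelated block."""
    return 'high' if block_energy(block) > threshold else 'low'

smooth = np.full((4, 4), 128, dtype=np.uint8)          # flat block
noisy  = np.array([[0, 255], [255, 0]], dtype=np.uint8)  # busy block
```

A flat block has zero DPCM energy and is classified as correlated, while a block with large pixel-to-pixel jumps exceeds the threshold and is treated as uncorrelated.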

Publication Date
Sat Jul 01 2017
Journal Name
Diyala Journal For Pure Science
Publication Date
Sat Oct 30 2021
Journal Name
Iraqi Journal Of Science
Recursive Prediction for Lossless Image Compression

This paper introduces an algorithm for lossless compression of natural and medical images. It is based on utilizing various causal fixed predictors of one or two dimensions to remove the correlation, or spatial redundancy, embedded between image pixel values; a recursive polynomial model of a linear base is then applied.

The experimental results of the proposed compression method are promising in terms of preserving the details and the quality of the reconstructed images, as well as improving the compression ratio, compared with the results of a traditional linear predictive coding system.
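The causal fixed predictors mentioned above can be illustrated with a small sketch: each pixel is predicted only from already-decoded neighbours (west and north), falling back to a 1-D predictor on the borders. The specific predictor set used in the paper is not reproduced here:

```python
import numpy as np

def residuals_2d(img):
    """Predict each pixel as the mean of its west and north neighbours
    (a simple causal 2-D fixed predictor) and return the residuals.
    Border pixels fall back to a 1-D (single-neighbour) predictor."""
    img = img.astype(np.int32)
    h, w = img.shape
    res = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            if y == 0 and x == 0:
                p = 0                                     # no causal neighbour
            elif y == 0:
                p = img[y, x - 1]                         # 1-D: west only
            elif x == 0:
                p = img[y - 1, x]                         # 1-D: north only
            else:
                p = (img[y, x - 1] + img[y - 1, x]) // 2  # 2-D predictor
            res[y, x] = img[y, x] - p
    return res

img = np.array([[10, 10], [10, 12]], dtype=np.uint8)
res = residuals_2d(img)        # residuals: [[10, 0], [0, 2]]
```

Because every prediction uses only causal (already-decoded) pixels, the decoder can replay the same predictions and recover the image exactly, which is what makes the scheme lossless.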

Publication Date
Wed Jan 01 2020
Journal Name
International Journal Of Software & Hardware Research In Engineering
Publication Date
Mon Jul 01 2019
Journal Name
International Journal Of Computer Science And Mobile Computing
Publication Date
Sun Sep 24 2023
Journal Name
Journal Of Al-qadisiyah For Computer Science And Mathematics
Iris Data Compression Based on Hexa-Data Coding

Iris research is focused on developing techniques for identifying and locating relevant biometric features and on accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which, in turn, reduces the effectiveness of the system when used as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method is based on a number of steps, starting from converting the grayscale eye image to a bit-plane representation and selecting the most significant bit planes, followed by a parameterization of the iris location, resulting in an accurate segmentation of the iris from the origin…
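The bit-plane conversion step the abstract describes can be sketched as follows; the selection criteria for "most significant" planes in the paper are not reproduced here:

```python
import numpy as np

def bit_plane(img, plane):
    """Extract one bit plane (0 = LSB, 7 = MSB) of an 8-bit grayscale
    image as a binary 0/1 array."""
    return (img >> plane) & 1

eye = np.array([[200, 30], [130, 5]], dtype=np.uint8)  # toy "eye image"
msb = bit_plane(eye, 7)   # most significant bit plane: [[1, 0], [1, 0]]
```

The high-order planes carry the coarse intensity structure (e.g. the dark pupil against the brighter sclera), which is why segmentation can work on a few significant planes instead of the full 8-bit image.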

Publication Date
Sat Oct 30 2021
Journal Name
Iraqi Journal Of Science
Small Binary Codebook Design for Image Compression Depending on Rotating Blocks

The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e. a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method of rotating each binary code word in this codebook from 90° to 270° in steps of 90°. Then, we categorized each code word depending on its angle into four types of binary codebooks (i.e. Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s…
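The rotation step can be sketched as below: one stored code word stands in for all four of its 90°-step orientations, so the codebook can stay small. This is only an illustration of the rotation itself, not the paper's categorization rules:

```python
import numpy as np

def rotations(codeword):
    """Return the binary code word rotated by 90, 180 and 270 degrees
    (np.rot90 rotates counter-clockwise), so one stored word covers
    all four orientations."""
    return [np.rot90(codeword, k) for k in (1, 2, 3)]

cw = np.array([[1, 1], [0, 0]], dtype=np.uint8)   # a tiny binary code word
r90, r180, r270 = rotations(cw)
```

A full search then only needs to compare an input vector against the small base codebook plus these derived orientations, rather than against four times as many stored words.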

Publication Date
Sun Nov 01 2020
Journal Name
Iop Conference Series: Materials Science And Engineering
Developed JPEG Algorithm Applied in Image Compression
JPEG is the most popular image compression and encoding technique and is widely used in many applications (images, videos, and 3D animations). Researchers are therefore very interested in developing this technique further to compress images at higher compression ratios while keeping image quality as high as possible. For this reason, in this paper we introduce a developed JPEG algorithm based on a fast DCT that removes most of the zeros and keeps their positions in the transformed block. Additionally, arithmetic coding is applied rather than Huffman coding. The results showed that the proposed developed JPEG algorithm yields better image quality than the traditional JPEG technique.
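The "remove the zeros, keep their positions" step can be sketched as below: the non-zero quantized coefficients are stored together with a 0/1 position mask so the block is exactly recoverable. This is a sketch of that single step under assumed names, not the paper's full codec:

```python
import numpy as np

def pack_block(q):
    """Drop the zeros of a quantized coefficient block, keeping a 0/1
    position mask so the block can be rebuilt exactly."""
    mask = (q != 0).astype(np.uint8)
    return q[q != 0], mask            # non-zero values + position map

def unpack_block(values, mask):
    """Rebuild the full block by scattering the values back into the
    positions marked in the mask (row-major order)."""
    q = np.zeros(mask.shape, dtype=values.dtype)
    q[mask == 1] = values
    return q

q = np.array([[5, 0, 0], [0, -3, 0]])    # toy quantized DCT block
vals, mask = pack_block(q)                # vals: [5, -3]
```

Since quantized DCT blocks are dominated by zeros, the values-plus-mask pair is typically much smaller than the raw block, and the mask itself compresses well under the subsequent arithmetic coder.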
Publication Date
Sun Jun 11 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Artificial Neural Network for TIFF Image Compression

The main aim of image compression is to reduce the image size so that it can be transmitted and stored efficiently; therefore, many methods have appeared to compress images. One of these methods is the Multilayer Perceptron (MLP), an artificial neural network based on the back-propagation algorithm, used here for compressing the image. Since this algorithm depends on the number of neurons in the hidden layer alone, which is not quite enough to reach the desired results, we also have to take into consideration the standards on which the compression process depends to get the best results. In our research, we trained a group of TIFF images of size 256×256 and compressed them by using an MLP for each…
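The bottleneck idea behind MLP-based compression can be sketched as below: the hidden layer has fewer neurons than the input, so its activations form the compressed representation. The weights here are random placeholders; in the paper they are learned with back-propagation on the training images, and the layer sizes are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP compressor: 16 inputs -> 4 hidden neurons (the compressed
# code) -> 16 outputs. Random weights stand in for trained ones.
W_enc = rng.standard_normal((16, 4))
W_dec = rng.standard_normal((4, 16))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def compress(patch):
    """Hidden-layer activations: 4 values stored instead of 16."""
    return sigmoid(patch @ W_enc)

def decompress(code):
    """Output layer reconstructs the 16-value patch from the code."""
    return sigmoid(code @ W_dec)

patch = rng.random(16)          # a flattened 4x4 image patch in [0, 1]
code = compress(patch)          # 4:1 reduction in stored values
```

Training adjusts `W_enc` and `W_dec` so that `decompress(compress(patch))` approximates the original patch; the compression ratio is set by the hidden-layer width.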
