Articles
Hybrid DWT-DCT compression algorithm & a new flipping block with an adaptive RLE method for high medical image compression ratio

A huge number of medical images is generated daily, demanding more storage capacity and more bandwidth for transfer over networks. A hybrid DWT-DCT compression algorithm is applied to compress medical images by exploiting the features of both techniques. The image is converted to the YCbCr color model, and Discrete Wavelet Transform (DWT) coding decomposes each band into four subbands (LL, HL, LH, and HH). The LL subband is transformed into low- and high-frequency components using the Discrete Cosine Transform (DCT) and then quantized by the scalar quantization that is applied to all image bands; the quantization parameters are halved for the luminance band and kept unchanged for the chrominance bands to preserve image quality. A zigzag scan is applied to the quantized coefficients, and the output is encoded using DPCM, a shift optimizer, and shift coding for the DC coefficients, while adaptive RLE, a shift optimizer, and then shift coding are applied to the AC coefficients. The other subbands (LH, HL, and HH) are compressed using scalar quantization, quadtree coding, a shift optimizer, and shift coding. In this paper, a new flipping block with an adaptive RLE is proposed and applied for image enhancement. After the DCT and scalar quantization stages, a huge number of zeros is produced alongside relatively few other values, so an adaptive RLE is used to encode these runs of zeros, which yields further compression. Standard medical images commonly used as research test samples, such as CT scan, X-ray, and MRI images, were selected as testing material. The results showed a high compression ratio with high-quality reconstructed images.
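The run-length stage described above can be illustrated with a minimal sketch. This is not the authors' implementation (the helper names are hypothetical); it only shows the core idea that, after quantization, long runs of zero coefficients are collapsed into `(0, run_length)` pairs while nonzero values pass through:

```python
def rle_zeros(coeffs):
    """Collapse runs of zeros into (0, run_length) pairs; keep nonzeros literal."""
    out, i = [], 0
    while i < len(coeffs):
        if coeffs[i] == 0:
            run = 0
            while i < len(coeffs) and coeffs[i] == 0:
                run += 1
                i += 1
            out.append((0, run))            # one token for the whole zero run
        else:
            out.append(coeffs[i])
            i += 1
    return out

def rle_decode(tokens):
    """Exact inverse of rle_zeros: expand (0, n) tokens back into n zeros."""
    out = []
    for t in tokens:
        if isinstance(t, tuple):
            out.extend([0] * t[1])
        else:
            out.append(t)
    return out
```

On a typical post-quantization coefficient stream the token list is much shorter than the input, which is where the extra compression comes from.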

Publication Date
Fri May 17 2013
Journal Name
International Journal Of Computer Applications
Fast Lossless Compression of Medical Images based on Polynomial

In this paper, a fast lossless compression method for medical images is introduced. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error caused by the polynomial approximation. Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can achieve promising performance.
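The polynomial stage can be sketched as follows. This is a deliberately simplified version (hypothetical helper names, a first-order fit over a 1-D block, and a rounded residue), not the paper's exact method:

```python
def fit_line(block):
    """Least-squares first-order polynomial y = a*x + b over one 1-D block."""
    n = len(block)
    xs = range(n)
    sx, sy = sum(xs), sum(block)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, block))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def encode_block(block):
    """Return the polynomial coefficients and the residue of one block.

    The residue (prediction error) is what run-length and Huffman coding
    operate on in the paper; rounding it here keeps the sketch simple.
    """
    a, b = fit_line(block)
    residue = [round(y - (a * x + b)) for x, y in enumerate(block)]
    return (a, b), residue
```

Smooth blocks produce residues dominated by zeros and small values, which is exactly what makes the subsequent run-length and Huffman stages effective.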

Publication Date
Sat May 30 2020
Journal Name
Neuroquantology Journal
The Effect of Re-Use of Lossy JPEG Compression Algorithm on the Quality of Satellite Image

In this study, an analysis of re-using the lossy JPEG algorithm on the quality of satellite imagery is presented. The standard JPEG compression algorithm is adopted and applied using the IrfanView program, with JPEG quality factors in the range 50-100. Based on the calculated variation in satellite image quality, the maximum number of re-uses of the lossy JPEG algorithm adopted in this study is 50. The degradation of image quality with respect to the JPEG quality factor and the number of times the JPEG algorithm is re-used to store the satellite image is analyzed.
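The re-compression experiment can be reproduced in outline with Pillow instead of IrfanView. This is a sketch under that assumption; the synthetic test pattern, the quality factor of 75, and the ten iterations are all illustrative choices, not the study's settings:

```python
import io
import math
from PIL import Image

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio between two flat pixel sequences."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# synthetic 64x64 grayscale pattern standing in for a satellite image
original = Image.new("L", (64, 64))
original.putdata([(x * y) % 256 for y in range(64) for x in range(64)])

current = original
for _ in range(10):                 # re-use the lossy JPEG algorithm 10 times
    buf = io.BytesIO()
    current.save(buf, format="JPEG", quality=75)
    buf.seek(0)
    current = Image.open(buf).convert("L")
    current.load()                  # force decode before buf goes away

quality = psnr(list(original.getdata()), list(current.getdata()))
```

Plotting `quality` against the iteration count and the JPEG quality factor reproduces the kind of degradation curve the study analyzes.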

Publication Date
Sat Oct 30 2021
Journal Name
Iraqi Journal Of Science
Small Binary Codebook Design for Image Compression Depending on Rotating Blocks

The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method with Vector Quantization (VQ), i.e., a full codebook search for each input image vector to find the best-matched code word, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method by rotating each binary code word in this codebook from 90° to 270° in steps of 90°. Then, we classified each code word, depending on its angle, into four types of binary codebooks (i.e., Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s…
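The rotation idea can be sketched in a few lines. The helper names are illustrative, and the paper's four orientation classes (Pour, Flat, Vertical, Zigzag) are not reproduced here; the sketch only shows how one binary code word yields its 90°, 180°, and 270° variants:

```python
def rot90(block):
    # rotate a square binary code word 90 degrees clockwise
    return [list(row) for row in zip(*block[::-1])]

def orientations(code_word):
    # the code word together with its 90, 180, and 270 degree rotations
    out = [code_word]
    for _ in range(3):
        out.append(rot90(out[-1]))
    return out
```

Because all four rotations are derived from one stored code word, the codebook stays small while each stored entry effectively covers four orientations during the search.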

Publication Date
Wed Mar 10 2021
Journal Name
Baghdad Science Journal
Improving Fractal Image Compression Scheme through Quantization Operation

We explore the transform coefficients of fractal coding and exploit a new method to improve the compression capabilities of these schemes. In most standard encoder/decoder systems, quantization/de-quantization is managed as a separate step; here we introduce a new method in which it operates simultaneously with the transform stage. Additional compression is achieved by this method while preserving high image quality.
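For context, a uniform scalar quantizer of the kind applied to transform coefficients can be sketched as follows. The step size is illustrative and this is not the paper's integrated scheme, only the baseline operation it builds on:

```python
def quantize(coeffs, step):
    # map each transform coefficient to the index of its nearest multiple of `step`
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    # reconstruct coefficients; the error per coefficient is at most step / 2
    return [i * step for i in indices]
```

Larger step sizes shrink the index alphabet (more compression) at the cost of larger reconstruction error, which is the trade-off the quantization operation controls.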

Publication Date
Sat Jun 01 2019
Journal Name
International Journal Of Computer Science And Mobile Computing
Hierarchal Polynomial Coding of Grayscale Lossless Image Compression

Publication Date
Wed Jun 01 2022
Journal Name
V. International Scientific Congress Of Pure, Applied And Technological Sciences
Lightweight Image Compression Using Polynomial and Transform Coding

Publication Date
Mon Jun 05 2023
Journal Name
Journal Of Engineering
Image Compression Using 3-D Two-Level Technique

In this paper, three techniques for image compression are implemented: a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform technique. Daubechies and Haar filters are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured for each level. To obtain good compression, image data properties were measured, such as image entropy (He) and percent root-…
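A two-level 3-D wavelet decomposition can be sketched with the averaging form of the Haar filter pair; this is an illustration of the subband structure, not the paper's Daubechies/multi-wavelet implementation:

```python
import numpy as np

def haar_1d(a, axis):
    # averaging Haar filter pair along one axis: (low, high) half-size outputs
    even = np.take(a, range(0, a.shape[axis], 2), axis=axis)
    odd = np.take(a, range(1, a.shape[axis], 2), axis=axis)
    return (even + odd) / 2.0, (even - odd) / 2.0

def dwt3_level(a):
    # one 3-D decomposition level: filter along each axis in turn -> 8 subbands
    bands = {"": a}
    for axis in range(3):
        step = {}
        for name, data in bands.items():
            lo, hi = haar_1d(data, axis)
            step[name + "L"], step[name + "H"] = lo, hi
        bands = step
    return bands

vol = np.arange(512, dtype=float).reshape(8, 8, 8)  # toy 8x8x8 volume
level1 = dwt3_level(vol)               # first level: 8 subbands of 4x4x4
level2 = dwt3_level(level1["LLL"])     # second level applied to the LLL band
```

Each extra level concentrates more of the signal energy into the small LLL approximation band, which is why the compression ratio is measured per level.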

Publication Date
Thu Feb 07 2019
Journal Name
Journal Of The College Of Education For Women
EFFICIENCY SPIHT IN COMPRESSION AND QUALITY OF IMAGE

Image compression is an important tool to reduce the bandwidth and storage requirements of practical image systems; compression techniques are essential to meet the growing demands on storage space and transmission time. A discrete-time wavelet-transform-based image codec using Set Partitioning In Hierarchical Trees (SPIHT) is implemented in this paper. Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Maximum Difference (MD) are used to measure the picture quality of the reconstructed image; MSE and PSNR are the most common picture quality measures. Different kinds of test images are assessed in this work at different compression ratios. The results show the high efficiency of the SPIHT algorithm…
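The three quality measures named above have standard definitions and can be written directly; flat grayscale pixel sequences and a peak value of 255 are assumed here:

```python
import math

def mse(a, b):
    """Mean Square Error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

def max_diff(a, b):
    """Maximum Difference: largest absolute per-pixel error."""
    return max(abs(x - y) for x, y in zip(a, b))
```

MSE and PSNR summarize average distortion, while MD bounds the worst-case per-pixel error, which is why the three are usually reported together.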

Publication Date
Sat Jul 19 2025
Journal Name
Journal Of Engineering
Image Compression Using 3-D Two-Level Techniques

In this paper, three techniques for image compression are implemented: a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform technique. Daubechies and Haar filters are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured for each level. To obtain good compression, image data properties were measured, such as image entropy (He) and percent root-…

Publication Date
Sat Apr 15 2023
Journal Name
Journal Of Robotics
A New Proposed Hybrid Learning Approach with Features for Extraction of Image Classification

Image classification is the process of finding common features in images from various classes and using them to categorize and label the images. The main problems of the image classification process are the abundance of images, the high complexity of the data, and the shortage of labeled data, which present the key obstacles. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new approach of "hybrid learning" by combining deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class…
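The hybrid pipeline (deep features in, a classical classifier on top) can be sketched with synthetic vectors standing in for VGG-16 features and a nearest-centroid rule standing in for the study's classifiers; both stand-ins are assumptions for illustration, not the study's method:

```python
import random

def extract_features(seed, cls, dim=8):
    # stand-in for VGG-16 convolutional features: images of class `cls`
    # cluster around the point (10*cls, ..., 10*cls) in feature space
    rng = random.Random(seed)
    return [10.0 * cls + rng.gauss(0, 1) for _ in range(dim)]

def train_centroids(samples):
    # minimal "machine learning classifier": per-class mean feature vector
    return {
        cls: [sum(f[i] for f in feats) / len(feats) for i in range(len(feats[0]))]
        for cls, feats in samples.items()
    }

def classify(feat, centroids):
    # label of the nearest centroid in squared Euclidean distance
    return min(
        centroids,
        key=lambda c: sum((a - b) ** 2 for a, b in zip(feat, centroids[c])),
    )

# "training": five feature vectors per class, three classes
samples = {c: [extract_features(100 * c + i, c) for i in range(5)] for c in (0, 1, 2)}
centroids = train_centroids(samples)
label = classify(extract_features(999, 2), centroids)
```

In the real pipeline, `extract_features` would be replaced by a forward pass through a pretrained VGG-16 truncated before its classification head, and the centroid rule by the chosen machine learning classifier.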
