In this study, an efficient compression system based on the wavelet transform and two types of 3D surface representation, Cubic Bezier Interpolation (CBI) and first-order polynomial approximation, is introduced. Each representation is applied at a different scale of the image: CBI is applied over wide areas to prune components that show large-scale variation, while the first-order polynomial is applied over small areas of the residue component (i.e., after subtracting the cubic Bezier surface from the image) to prune locally smooth components and obtain better compression gain. The produced cubic Bezier surface is subtracted from the image signal to obtain the residue component, and the bi-orthogonal wavelet transform is then applied to this Bezier residue. The resulting transform coefficients are quantized using progressive scalar quantization; the first-order polynomial is fitted to the quantized LL subband to produce a polynomial surface, which is subtracted from the LL subband to obtain the residue (high-frequency) component. Finally, the quantized values are represented using quad-tree encoding to prune sparse blocks, followed by a high-order shift-coding algorithm to handle the remaining statistical redundancy and attain efficient compression performance. The conducted tests indicated that the introduced system leads to promising compression gain.
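A minimal sketch of the first stages (Bezier surface fitting, residue computation, bi-orthogonal wavelet analysis) might look as follows; the 4x4 control grid sampled coarsely from the image and the bior4.4 wavelet are assumptions for illustration, not details taken from the paper:

```python
import numpy as np
import pywt  # PyWavelets

def bezier_surface(ctrl, h, w):
    """Evaluate a bicubic Bezier surface from a 4x4 control grid."""
    def bernstein(t):
        # Cubic Bernstein basis B_{i,3}(t), shape (len(t), 4)
        return np.stack([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3], axis=1)
    Bu = bernstein(np.linspace(0, 1, h))          # (h, 4)
    Bv = bernstein(np.linspace(0, 1, w))          # (w, 4)
    return Bu @ ctrl @ Bv.T                       # (h, w) surface

def bezier_residue(img):
    """Fit a coarse Bezier surface (control points sampled from the image,
    an assumed strategy) and return the residue plus its wavelet subbands."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, 4).astype(int)
    xs = np.linspace(0, w - 1, 4).astype(int)
    ctrl = img[np.ix_(ys, xs)].astype(float)
    surface = bezier_surface(ctrl, h, w)
    residue = img.astype(float) - surface
    LL, (LH, HL, HH) = pywt.dwt2(residue, 'bior4.4')  # bi-orthogonal wavelet
    return surface, residue, (LL, LH, HL, HH)
```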
This article presents a polynomial-based image compression scheme that uses the YUV color model to represent color content and two-dimensional first-order polynomial coding with a block size that varies according to the correlation between neighboring pixels. The polynomial residual of each band is split into two parts: a most-important (big) part and a least-important (small) part. Owing to the significant subjective importance of the big group, lossless compression (based on run-length spatial coding) is used to represent it, while a lossy scheme, based on an error-limited adaptive coding system and transform coding, is used to approximately represent the small group.
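A rough sketch of the per-block first-order fit and the big/small residual split follows; the block handling, the threshold value, and the magnitude-based split rule are assumptions, not the paper's stated parameters:

```python
import numpy as np

def fit_plane_block(block):
    """Least-squares fit of a first-order 2D polynomial
    z = a0 + a1*x + a2*y to one image block; returns the
    coefficients and the residual block."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coef, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    residual = block - (A @ coef).reshape(h, w)
    return coef, residual

def split_residual(residual, threshold=8.0):
    """Assumed split rule: 'big' residuals (above threshold) go to the
    lossless run-length coder, 'small' ones to the lossy, error-limited coder."""
    big = np.where(np.abs(residual) >= threshold, residual, 0)
    small = residual - big
    return big, small
```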
In today's world, digital image storage and transmission play an essential role, as images account for much of the data being transferred. Digital images usually require large storage space and transmission bandwidth, so image compression is important in data communication. This paper discusses a unique and novel lossy image compression approach in which exactly 50% of the image pixels are encoded and the other 50% are excluded. The method uses a block approach: the pixels of each block are transformed with a novel transform, and pixel nibbles are mapped to single bits in a transform table, generating more zeros and thereby helping to achieve compression. Later, the inverse transform is applied in reconstruction, and each single-bit value from the table is remapped back to its pixel nibble.
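One plausible reading of the "exactly 50% of pixels" scheme is a checkerboard subsampling; the sketch below is that interpretation only (the paper's actual pixel-selection rule and its nibble transform are not reproduced), with excluded pixels estimated from their encoded 4-connected neighbors:

```python
import numpy as np

def checkerboard_split(img):
    """Assumed interpretation: keep pixels on one checkerboard parity,
    drop the other half."""
    mask = (np.indices(img.shape).sum(axis=0) % 2 == 0)
    return img[mask], mask  # kept pixels (flattened) and their positions

def checkerboard_reconstruct(kept, mask):
    """Rebuild the image: each excluded pixel is estimated as the mean of
    its encoded 4-connected neighbors (a plausible, not stated, rule)."""
    h, w = mask.shape
    out = np.zeros((h, w), dtype=float)
    out[mask] = kept
    padv = np.pad(out, 1)                      # zero-padded values
    padm = np.pad(mask.astype(float), 1)       # which neighbors are encoded
    s = padv[:-2, 1:-1] + padv[2:, 1:-1] + padv[1:-1, :-2] + padv[1:-1, 2:]
    c = padm[:-2, 1:-1] + padm[2:, 1:-1] + padm[1:-1, :-2] + padm[1:-1, 2:]
    out[~mask] = (s / np.maximum(c, 1))[~mask]
    return out
```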
In this paper, an algorithm that can embed more data than regular spatial-domain methods is introduced. The secret data are compressed using Huffman coding, and this compressed data is then embedded using the Laplacian sharpening method. Laplacian filters are used to determine effective hiding places; based on a threshold value, the positions with the highest responses from these filters are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding while simultaneously increasing the security of the algorithm by hiding the data at the strongest and least noticeable edge locations.
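A compact sketch of this pipeline is given below. LSB embedding at the strongest Laplacian responses is one plausible reading of the paper's "hiding places"; the threshold value and the exact embedding rule are assumptions:

```python
import heapq
from collections import Counter
import numpy as np
from scipy.ndimage import laplace

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code table for the secret bytes
    (assumes at least two distinct byte values)."""
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]

def embed(cover, secret: bytes, thresh=20.0):
    """Hide Huffman-coded bits in the LSBs of the strongest edge pixels;
    edge strength comes from a Laplacian filter. Assumes a uint8 cover
    image with enough above-threshold pixels to hold all the bits."""
    code = huffman_code(secret)
    bits = ''.join(code[b] for b in secret)
    strength = np.abs(laplace(cover.astype(float)))
    ys, xs = np.where(strength > thresh)         # candidate positions
    order = np.argsort(-strength[ys, xs])        # strongest edges first
    stego = cover.copy()
    for bit, k in zip(bits, order):
        y, x = ys[k], xs[k]
        stego[y, x] = (stego[y, x] & 0xFE) | int(bit)  # set pixel LSB
    return stego, code
```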
Like the digital watermark, which has been highlighted in previous studies, the quantum watermark aims to protect the copyright of an image and to validate its ownership using visible or invisible logos embedded in the cover image. In this paper, we propose a method to embed a logo image in a cover image based on quantum fields, where a certain amount of texture is encapsulated to encode the logo image before it is embedded in the cover image. The method also involves wavelet transforms, such as the Haar basis transformation, and geometric transformations. This combination of methods achieves a high degree of security and robustness for the watermarking technique. The experimental results are reported in terms of Peak Signal-to-Noise Ratio (PSNR) values.
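The quantum encoding stage cannot be reproduced classically, but the Haar-wavelet embedding idea admits a simple classical analogue; this is a sketch of that analogue only, with the embedding strength and the choice of the HH subband as assumptions:

```python
import numpy as np
import pywt

def embed_logo(cover, logo, alpha=0.05):
    """Classical analogue of the scheme: add a (pre-resized) logo into the
    Haar wavelet detail coefficients of the cover image. Assumption: the
    logo is already scaled to the HH subband's shape with values in [0, 1]."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    HH_marked = HH + alpha * logo * np.abs(HH).max()
    return pywt.idwt2((LL, (LH, HL, HH_marked)), 'haar')
```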
This paper introduces a method of image enhancement that combines the wavelet and multiwavelet transformations. A new technique for image enhancement using a single smoothing filter is proposed.
A critically-sampled preprocessing scheme is used for computing the multiwavelet, with a 2nd-norm approximation used to speed up the procedures needed for this computation.
An improvement was achieved with the proposed method in comparison with the conventional method.
The performance of this technique was evaluated by computer using the Visual Basic 6 package.
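As a rough illustration of the general idea (the paper's multiwavelet computation and critically-sampled preprocessing are not reproduced), a single-level wavelet enhancement with one smoothing filter might look like this; the Haar wavelet, 3x3 mean filter, and detail gain are assumptions:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def wavelet_enhance(img, gain=1.5):
    """Smooth the approximation band with a single smoothing filter and
    amplify the detail bands before reconstruction."""
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), 'haar')
    LL = uniform_filter(LL, size=3)              # the single smoothing filter
    details = tuple(gain * d for d in (LH, HL, HH))
    return pywt.idwt2((LL, details), 'haar')
```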
The denoising of a natural image corrupted by Gaussian noise is a classical problem in signal and image processing. Much work has been done in the field of wavelet thresholding, but most of it has focused on statistical modeling of wavelet coefficients and the optimal choice of thresholds. This paper describes a new method for suppressing noise in images by fusing the stationary wavelet denoising technique with an adaptive Wiener filter: the Wiener filter is applied to the approximation coefficients only, while the thresholding technique is applied to the detail coefficients of the transform, and the final denoised image is obtained by combining the two results.
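A minimal sketch of this fusion, assuming a Haar stationary wavelet, one decomposition level, soft thresholding, and a 3x3 Wiener window (none of which are stated in the abstract):

```python
import numpy as np
import pywt
from scipy.signal import wiener

def swt_wiener_denoise(img, level=1, thresh=10.0):
    """Wiener-filter the approximation band, soft-threshold the detail
    bands, then combine via the inverse stationary wavelet transform.
    Note: image dimensions must be divisible by 2**level for swt2."""
    coeffs = pywt.swt2(img.astype(float), 'haar', level=level)
    den = [(wiener(cA, mysize=3),
            tuple(pywt.threshold(d, thresh, mode='soft') for d in dets))
           for cA, dets in coeffs]
    return pywt.iswt2(den, 'haar')
```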
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages: the first preprocesses the data, the second extracts features based on the Discrete Wavelet Transform (DWT), and the third performs classification with a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used, and the proposed system achieved high classification accuracy rates of 99.1% on the MADBase database and 99.9% on the MNIST database.
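The spiking-network stage is beyond a short sketch, but the DWT feature-extraction stage can be illustrated as follows; the Haar wavelet, two decomposition levels, and the use of the approximation band as the feature vector are assumptions:

```python
import numpy as np
import pywt

def dwt_features(digit_img):
    """Feature extraction for one digit image: a 2-level Haar DWT,
    keeping the coarse approximation band as a normalized feature
    vector (a 7x7 band for a 28x28 MNIST/MADBase input)."""
    coeffs = pywt.wavedec2(digit_img.astype(float), 'haar', level=2)
    cA2 = coeffs[0]
    return cA2.ravel() / np.linalg.norm(cA2)
```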
The technique of integrating complementary details from two or more input images is known as image fusion; the fused image is more informative and more complete than any of the original inputs. This paper illustrates the implementation and evaluation of fusion techniques applied to satellite images: a high-resolution Panchromatic (Pan) image and a Multispectral (MS) image. A new algorithm is proposed to fuse the Pan and the low-resolution MS images by combining the IHS transform and the Haar wavelet transform. The paper first clarifies classical fusion using the IHS transform and the Haar wavelet transform individually, and then proposes a new strategy that combines the two methods. The performance of the proposed method is evaluated.
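A sketch of the combined IHS/Haar idea is given below: take the intensity of the MS image, fuse it with the Pan band in the Haar wavelet domain (approximation from MS, details from Pan), and inject the fused intensity back. The mean-of-bands intensity is a simplification of the IHS transform, and co-registered, equal-size, even-dimension inputs are assumed:

```python
import numpy as np
import pywt

def ihs_haar_fuse(ms_rgb, pan):
    """Fuse an MS image (H, W, 3) with a Pan band (H, W) of the same size."""
    I = ms_rgb.mean(axis=2)                        # simplified intensity
    LL_ms, _ = pywt.dwt2(I, 'haar')                # approximation from MS
    _, det_pan = pywt.dwt2(pan.astype(float), 'haar')  # details from Pan
    fused_I = pywt.idwt2((LL_ms, det_pan), 'haar')
    return ms_rgb + (fused_I - I)[..., None]       # inject detail per band
```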
Accurate detection of electrocardiographic (ECG) features is an important requirement for medical purposes, so an accurate algorithm is needed to detect them. This paper proposes an approach to classifying cardiac arrhythmias versus a normal ECG signal based on wavelet decomposition and the ID3 classification algorithm. First, the ECG signals are denoised using the Discrete Wavelet Transform (DWT); second, ECG features are extracted from the processed signal. The Interactive Dichotomizer 3 (ID3) algorithm is then applied to classify the different arrhythmias, including the normal case. The Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database is used to evaluate the ID3 algorithm.
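A sketch of the denoising stage and the classifier stage follows; the db4 wavelet, decomposition level, and threshold are assumptions, and scikit-learn's entropy-criterion decision tree stands in for the paper's ID3 implementation (both split on information gain):

```python
import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

def denoise_ecg(signal, wavelet='db4', level=4, thresh=0.1):
    """DWT denoising: soft-threshold the detail coefficients
    and reconstruct the signal."""
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    coeffs[1:] = [pywt.threshold(c, thresh * np.max(np.abs(c)), mode='soft')
                  for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# ID3-style classifier: information gain via the entropy criterion.
clf = DecisionTreeClassifier(criterion='entropy')
# clf.fit(features, labels)  # features extracted from the denoised beats
```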