In this paper, a method is proposed to increase the compression ratio of color images by dividing the image into non-overlapping blocks and applying a different compression ratio to each block, depending on the importance of the information it contains. In regions that contain important information the compression ratio is reduced to prevent loss of that information, while in smooth regions with no important information a high compression ratio is used. The proposed method shows better results when compared with classical methods (wavelet and DCT).
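A minimal sketch of the block-wise idea, assuming a variance-based importance test and two illustrative quality levels (the block size, threshold, and quality values are not the paper's parameters):

    import numpy as np

    def adaptive_block_quality(gray, block=16, var_threshold=100.0):
        """Assign a per-block quality factor: high for detailed blocks, low for smooth ones."""
        h, w = gray.shape
        quality = np.zeros((h // block, w // block), dtype=int)
        for i in range(0, block * (h // block), block):
            for j in range(0, block * (w // block), block):
                patch = gray[i:i + block, j:j + block].astype(float)
                important = patch.var() > var_threshold        # assumed importance measure
                quality[i // block, j // block] = 90 if important else 40
        return quality

The quality map would then drive whichever block-wise coder is used, so detailed blocks are compressed lightly and smooth blocks heavily.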
A new approach is presented in this study to determine the optimal edge detection threshold value. The approach is based on extracting small homogeneous blocks from targets with unequal means. From these blocks, a small image with known edges is generated (the edges are the lines between adjoining blocks), so these simulated edges can be taken as true edges. The true simulated edges are then compared with the edges detected in the small generated image using different threshold values. The comparison is based on computing the mean square error between the simulated edge image and the edge image produced by the edge detection methods. The mean square error is computed for the total edge image (Er), for the edge region...
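A small sketch of how such a test image and its ground-truth edge map could be built and scanned over thresholds (the block means, the Sobel detector, and the threshold grid are illustrative assumptions, not the paper's choices):

    import numpy as np
    from scipy import ndimage

    def best_threshold(block_means=(40, 120, 200, 70), block=32,
                       thresholds=np.linspace(0.05, 0.95, 19)):
        # Tile homogeneous blocks with unequal means; the seams are the known "true" edges.
        img = np.hstack([np.full((block, block), m, dtype=float) for m in block_means])
        true_edges = np.zeros(img.shape, dtype=float)
        for k in range(1, len(block_means)):
            true_edges[:, k * block - 1:k * block + 1] = 1.0
        grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
        grad /= grad.max()                                     # normalize edge response to [0, 1]
        errors = [np.mean((true_edges - (grad > t)) ** 2) for t in thresholds]
        return thresholds[int(np.argmin(errors))]              # threshold with minimum MSE (Er)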
Video copyright protection is the most generally acknowledged method of preventing data piracy. This paper proposes a blind video copyright protection technique based on the Fast Walsh Hadamard Transform (FWHT), the Discrete Wavelet Transform (DWT), and the Arnold map. The proposed method chooses only the frames with maximum and minimum energy features to host the watermark. It also exploits the advantages of both the FWHT and the DWT for watermark embedding. The Arnold map encrypts the watermark before the embedding process and decrypts it after extraction. The results show that the proposed method achieves a fast embedding time, good transparency, and robustness against various attacks.
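A minimal sketch of the Arnold cat map scrambling applied to the watermark (the iteration count is an assumed key; descrambling continues iterating until the map's period for the given size is reached):

    import numpy as np

    def arnold_scramble(watermark, iterations=10):
        """Apply the Arnold cat map to a square N x N watermark a fixed number of times."""
        n = watermark.shape[0]
        out = watermark.copy()
        for _ in range(iterations):
            nxt = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    # Arnold map: (x, y) -> (x + y, x + 2y) mod N
                    nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = nxt
        return out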
Some problems need to be solved in image compression to make the process workable and more efficient. Much work has been done in the field of lossy image compression based on the wavelet transform and the Discrete Cosine Transform (DCT). In this paper, an efficient image compression scheme is proposed based on a common encoding transform scheme. It consists of the following steps: 1) a bi-orthogonal (tap 9/7) wavelet transform to split the image data into sub-bands, 2) a DCT to de-correlate the data, 3) scalar quantization of the combined transform stage's output, followed by mapping to positive values, and 4) LZW encoding to produce the compressed data. The peak signal-to-noise ratio (PSNR), compression ratio (CR), and compression gain (CG) measures were used to evaluate the proposed scheme.
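A rough sketch of that encoding chain under stated assumptions: PyWavelets' 'bior4.4' is used as the 9/7 filter pair, and zlib stands in for the LZW entropy stage, so this is not the paper's exact coder:

    import numpy as np
    import pywt, zlib
    from scipy.fft import dctn

    def encode(gray, q_step=8.0):
        # 1) bi-orthogonal 9/7 wavelet ('bior4.4' in PyWavelets) splits the image into sub-bands
        ll, (lh, hl, hh) = pywt.dwt2(gray.astype(float), 'bior4.4')
        stream = bytearray()
        for band in (ll, lh, hl, hh):
            coeffs = dctn(band, norm='ortho')                  # 2) DCT to de-correlate the band
            q = np.round(coeffs / q_step).astype(np.int32)     # 3) scalar quantization
            q_pos = np.where(q >= 0, 2 * q, -2 * q - 1)        #    map signed values to positive
            stream += q_pos.astype(np.uint32).tobytes()
        return zlib.compress(bytes(stream))                    # 4) zlib stands in for the LZW stage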
Data compression is a very important process for reducing the size of large data to be stored or transmitted, and parametric curves such as the Bezier curve are a suitable way to represent the gradual change and variability of this data. The Ridgelet transform solves problems found in the wavelet transform and can compress an image well, but when it is used together with the Bezier curve, the quality of the compressed image becomes very good. In this paper, a new compression method is proposed using the Bezier curve with the Ridgelet transform on RGB images. The results show that the proposed method gives good performance in both subjective and objective experiments. The PSNR values (34.2365, 33.4323, and 33.0987) were increased in the proposed...
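The Ridgelet stage has no standard library routine, so only the Bezier building block is sketched here; fitting a cubic curve per row of coefficients and keeping only the control points is an illustrative assumption, not the paper's exact scheme:

    import numpy as np

    def cubic_bezier(p0, p1, p2, p3, samples=32):
        """Evaluate a cubic Bezier curve from four control points (scalars or vectors)."""
        p0, p1, p2, p3 = (np.atleast_1d(np.asarray(p, dtype=float)) for p in (p0, p1, p2, p3))
        t = np.linspace(0.0, 1.0, samples)[:, None]
        return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

Compression comes from storing the four control points of each fitted curve instead of the full run of samples they approximate.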
Image compression is a serious issue in computer storage and transmission; it simply makes efficient use of the redundancy embedded within an image itself and, in addition, may exploit the limitations of human vision and perception to reduce the imperceivable information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes the lossy predictor model along with a multiresolution base and thresholding techniques, and the second stage incorporates the near-lossless compression...
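A minimal sketch of the model-plus-residual idea behind polynomial coding, assuming a first-order (plane) model per block and a simple uniform residual quantizer; the paper's predictor, multiresolution base, and thresholding details are not reproduced:

    import numpy as np

    def polynomial_code_block(block, q_step=4.0):
        """Fit a plane a0 + a1*x + a2*y to the block; keep the coefficients plus a quantized residual."""
        h, w = block.shape
        yy, xx = np.mgrid[0:h, 0:w]
        A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
        coeffs, *_ = np.linalg.lstsq(A, block.astype(float).ravel(), rcond=None)
        model = (A @ coeffs).reshape(h, w)
        residual_q = np.round((block - model) / q_step).astype(np.int16)   # lossy residual part
        return coeffs, residual_q

    def polynomial_decode_block(coeffs, residual_q, q_step=4.0):
        h, w = residual_q.shape
        yy, xx = np.mgrid[0:h, 0:w]
        model = coeffs[0] + coeffs[1] * xx + coeffs[2] * yy
        return model + residual_q * q_step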
... Show MoreImage compression plays an important role in reducing the size and storage of data while increasing the speed of its transmission through the Internet significantly. Image compression is an important research topic for several decades and recently, with the great successes achieved by deep learning in many areas of image processing, especially image compression, and its use is increasing Gradually in the field of image compression. The deep learning neural network has also achieved great success in the field of processing and compressing various images of different sizes. In this paper, we present a structure for image compression based on the use of a Convolutional AutoEncoder (CAE) for deep learning, inspired by the diversity of human eye
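A minimal PyTorch sketch of a convolutional autoencoder of this kind (the layer sizes, channel counts, and activations are illustrative assumptions, not the paper's architecture):

    import torch.nn as nn

    class CAE(nn.Module):
        """A small convolutional autoencoder; the encoder output is the compact code."""
        def __init__(self, code_channels=16):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, code_channels, 3, stride=2, padding=1), nn.ReLU())
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(code_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, x):
            code = self.encoder(x)      # compressed latent representation
            return self.decoder(code)   # reconstructed image

Training minimizes a reconstruction loss (for example MSE) between the input image and the decoder output, and the downsampled code is what gets stored or transmitted.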