In this paper, three techniques for image compression are implemented. The proposed techniques are a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform. Daubechies and Haar wavelets are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured at each level. To assess compression quality, image data properties are measured: image entropy (He), percent root-mean-square difference (PRD %), energy retained (Er), and peak signal-to-noise ratio (PSNR). Based on the testing results, a comparison between the three techniques is presented. CR is the same in all three techniques and is largest at the 2nd level of the 3-D transform. The hybrid technique has the highest PSNR values at the 1st and 2nd levels of 3-D and the lowest PRD % values, so the 3-D two-level hybrid is the best of the three techniques for image compression.
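The quality measures named above follow standard textbook definitions; as a reference, here is a minimal Python sketch (not the paper's code) computing them, assuming 8-bit grayscale images held in NumPy arrays.

```python
# Standard image-quality metrics: PSNR, PRD %, energy retained,
# entropy, and compression ratio. Assumes 8-bit grayscale inputs.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def prd_percent(original, reconstructed):
    """Percent root-mean-square difference."""
    x = original.astype(float)
    y = reconstructed.astype(float)
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def energy_retained(original, reconstructed):
    """Percentage of signal energy preserved by the reconstruction."""
    x = original.astype(float)
    y = reconstructed.astype(float)
    return 100.0 * np.sum(y ** 2) / np.sum(x ** 2)

def entropy(image):
    """Shannon entropy (bits/pixel) of an 8-bit image histogram."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original divided by size of the compressed data."""
    return original_bits / compressed_bits
```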
Today, the use of iris recognition is expanding globally as the most accurate and reliable biometric feature in terms of uniqueness and robustness. Reducing or compressing the large databases of iris images has therefore become an urgent requirement. In general, image compression is the process of removing insignificant or redundant information from the image details, which implicitly makes efficient use of the redundancy embedded within the image itself. In addition, it may exploit the limitations of human vision or perception to reduce imperceptible information.
This paper deals with reducing the size of an image, namely reducing the number of bits required in representing the
In today's world, digital image storage and transmission play an essential role, as images are heavily involved in data transfer. Digital images usually require large storage space and transmission bandwidth, so image compression is important in data communication. This paper discusses a unique and novel lossy image compression approach. Exactly 50% of the image pixels are encoded, and the other 50% are excluded. The method uses a block approach: the pixels of each block are transformed with a novel transform, and pixel nibbles are mapped to single bits in a transform table, generating more zeros, which helps achieve compression. Later, the inverse transform is applied in reconstruction, and a single bit value from the table is rem
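The paper's transform and table mapping are its own contribution and are not reproduced here; the following purely hypothetical Python sketch only illustrates the general shape of a 50%-pixel-selection codec, using an assumed checkerboard mask to pick the encoded half and simple neighbour averaging to estimate the excluded half.

```python
# Hypothetical illustration only: NOT the paper's transform or table.
# Half the pixels are kept (checkerboard pattern); the other half is
# estimated at decode time by averaging horizontal neighbours.
import numpy as np

def encode_half(image):
    """Keep only pixels where (row + col) is even (a checkerboard)."""
    mask = (np.indices(image.shape).sum(axis=0) % 2) == 0
    return image[mask], mask

def reconstruct(kept, mask, shape):
    """Restore kept pixels, then fill the excluded half by averaging
    the horizontal neighbours (np.roll wraps at borders; acceptable
    for a sketch)."""
    out = np.zeros(shape, dtype=float)
    out[mask] = kept
    missing = ~mask
    left = np.roll(out, 1, axis=1)
    right = np.roll(out, -1, axis=1)
    out[missing] = (left[missing] + right[missing]) / 2.0
    return out.astype(np.uint8)
```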
The problem of bi-level programming is to minimize or maximize an objective function while another objective function appears within the constraints. This problem has received a great deal of attention in the programming community due to the proliferation of applications and the use of evolutionary algorithms in addressing this kind of problem. Two non-linear bi-level programming methods are used in this paper. The goal is to reach the optimal solution through simulation using the Monte Carlo method with different small and large sample sizes. The research found that the Branch and Bound algorithm was preferable for solving the non-linear bi-level programming problem because it produced better results.
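As a concrete illustration of the simulation approach (not the paper's actual test problems), the following Python sketch applies naive Monte Carlo sampling to a toy non-linear bi-level problem: the leader's decision x is sampled, the follower's best response y is approximated by inner sampling, and the best pair under the upper-level objective is kept. Both objective functions here are invented for demonstration.

```python
# Toy Monte Carlo bi-level search. The leader samples x; for each x,
# the follower's optimal y is approximated by sampling the inner
# objective; the best (x, y) under the upper objective is retained.
import numpy as np

rng = np.random.default_rng(0)

def follower_best(x, n_inner=200):
    """Approximate argmin_y of the lower-level objective for fixed x."""
    ys = rng.uniform(-2.0, 2.0, n_inner)
    inner = (ys - x) ** 2 + np.sin(3 * ys)      # example lower-level objective
    return ys[np.argmin(inner)]

def monte_carlo_bilevel(n_outer=500):
    best_x, best_val = None, np.inf
    for x in rng.uniform(-2.0, 2.0, n_outer):
        y = follower_best(x)
        upper = (x - 1.0) ** 2 + (y + 0.5) ** 2  # example upper-level objective
        if upper < best_val:
            best_x, best_val = x, upper
    return best_x, best_val

print(monte_carlo_bilevel())
```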
Currently, with the huge increase in modern communication and network applications, the speed of transmission and the storage of data in compact forms are pressing issues. An enormous number of images are stored and shared among people every moment, especially in the social media realm. Unfortunately, even with these marvelous applications, the limited size of transmitted data is still the main restriction, and essentially all of these applications use the well-known Joint Photographic Experts Group (JPEG) standard techniques; the construction of universally accepted standard compression systems is therefore urgently required to play a key role in this immense revolution. This review is concerned with Different
A number of compression schemes have been put forward to achieve high compression factors with high image quality at low computational time. In this paper, a combined transform coding scheme is proposed, based on the discrete wavelet transform (DWT) and the discrete cosine transform (DCT) with an added enhancement, the sliding run-length encoding (SRLE) technique, to further improve compression. The advantages of the wavelet and discrete cosine transforms are utilized to encode the image. The first step transforms the color components of the image from RGB to YUV planes to take advantage of the existing spectral correlation and consequently gain more compression. DWT is then applied to the Y, U and V col
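As an orientation aid (not the authors' implementation; the SRLE step is their own addition and is omitted, and the Haar wavelet here is an arbitrary choice), a minimal Python sketch of the combined-transform front end might look as follows, assuming the pywt and scipy packages are available.

```python
# Combined-transform front end: RGB -> YUV conversion, one DWT level,
# then a 2-D DCT on the low-frequency approximation band.
import numpy as np
import pywt
from scipy.fftpack import dct

def rgb_to_yuv(rgb):
    """ITU-R BT.601 RGB -> YUV conversion for a float (H, W, 3) array."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb @ m.T

def transform_plane(plane):
    """One DWT level on a single color plane, then a 2-D orthonormal
    DCT on the approximation band; detail bands pass through."""
    cA, (cH, cV, cD) = pywt.dwt2(plane, 'haar')
    cA_dct = dct(dct(cA, axis=0, norm='ortho'), axis=1, norm='ortho')
    return cA_dct, (cH, cV, cD)
```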
Images are important digital information used in many Internet of Things (IoT) applications such as transport, healthcare, agriculture, military, vehicles, and wildlife. An image also has important characteristics such as large size, strong correlation, and huge redundancy; therefore, encrypting it with a single-key Advanced Encryption Standard (AES) over IoT communication technologies makes it vulnerable to many threats, since pixels that have the same values are encrypted to other pixels with the same values when the same key is used. The contribution of this work is to increase the security of the transferred image. This paper proposes a multiple-key AES algorithm (MECCAES) to improve the security of the tran
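To make the single-key weakness concrete: under AES-ECB with one key, identical pixel blocks encrypt to identical ciphertext blocks. The following minimal Python sketch is an assumed illustration of the general multiple-key idea, not the MECCAES algorithm itself; it rotates through several keys so that identical blocks no longer match, and it assumes the pycryptodome package.

```python
# Rotate through multiple AES-128 keys, one per 16-byte block, so that
# identical plaintext blocks produce different ciphertext blocks.
from Crypto.Cipher import AES

def encrypt_blocks(data, keys):
    """Encrypt each 16-byte block with the next key in rotation."""
    assert len(data) % 16 == 0, "pad image bytes to a 16-byte multiple"
    out = bytearray()
    for i in range(0, len(data), 16):
        key = keys[(i // 16) % len(keys)]
        out += AES.new(key, AES.MODE_ECB).encrypt(data[i:i + 16])
    return bytes(out)

keys = [bytes([k]) * 16 for k in (1, 2, 3)]   # demo 128-bit keys only
ct = encrypt_blocks(b"\x00" * 32, keys)
print(ct[:16] == ct[16:])  # False: same plaintext block, different key
```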
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level, while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample. These files are very small compared to the size of the original signals. The compression ratio is calculated from the size of th
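The Levinson-Durbin recursion itself is standard; a textbook-style Python sketch (not the paper's code) that returns the LP coefficients, reflection coefficients, and final prediction-error power from an autocorrelation sequence is given below.

```python
# Levinson-Durbin recursion: solves the Toeplitz normal equations for
# an order-p linear predictor given autocorrelation values r[0..p].
import numpy as np

def levinson_durbin(r, order):
    """Return (a, k, err): LP coefficients a[0..order] with a[0] = 1,
    reflection coefficients k[0..order-1], and prediction-error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k[i - 1] = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k[i - 1] * a_prev[i - j]
        a[i] = k[i - 1]
        err *= 1.0 - k[i - 1] ** 2
    return a, k, err
```

In practice r would be the autocorrelation of a windowed frame, e.g. np.correlate(frame, frame, 'full')[len(frame) - 1 : len(frame) + order].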
This study examines and compares the image quality of degenerative cervical spine diseases under three MRI sequences: the two-dimensional T2-weighted turbo spin echo (2D T2W TSE), the three-dimensional T2-weighted turbo spin echo (3D T2W TSE), and the T2 turbo field echo (T2_TFE). Thirty-three patients diagnosed with degenerative cervical spine diseases were involved in this study; their ages ranged from 40 to 60 years. The images were produced with a 1.5 Tesla MRI device using the (2D T2W TSE, 3D T2W TSE, and T2_TFE) sequences in the sagittal plane. Image quality was examined by objective and subjective assessments. The MRI image characteristics of the cervical spine (C4-C5, C5-C6, C6-C7) showed significant difference
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content, and extracting edge information is the essential process in most pictorial pattern recognition problems. A new edge detection technique has been introduced in this research for detecting boundaries.
The selection of typical lossy techniques for encoding edge video images is also discussed in this research. The concentration is devoted to the Block Truncation Coding technique and the Discrete Cosine Transform (DCT) coding technique, in order to reduce the volume of pictorial data which one may need to store or transmit.
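For reference, classic Block Truncation Coding in its standard Delp-Mitchell form (a generic sketch, not necessarily this paper's exact scheme) reduces each block to a one-bit-per-pixel bitmap plus two reconstruction levels chosen to preserve the block's mean and standard deviation:

```python
# Classic Block Truncation Coding for one block (e.g. 4x4): encode as
# a bitmap plus two levels preserving the block mean and std.
import numpy as np

def btc_encode(block):
    """Encode one block -> (bitmap, low, high)."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q = bitmap.sum()                      # number of pixels above the mean
    n = block.size
    if q in (0, n):                       # flat block: a single level suffices
        return bitmap, m, m
    low = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    """Rebuild the block by selecting a level per bitmap bit."""
    return np.where(bitmap, high, low)
```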