In this study, an analysis of the effect of re-using the JPEG lossy algorithm on the quality of satellite imagery is presented. The standard JPEG compression algorithm is adopted and applied using the IrfanView program, with JPEG quality factors ranging from 50 to 100. Depending on the calculated variation in satellite image quality, the JPEG lossy algorithm is re-used up to 50 times in this study. The degradation of image quality with respect to the JPEG quality factor and to the number of times the JPEG algorithm is re-used to store the satellite image is analyzed.
FG Mohammed, HM Al-Dabbas, Science International, 2018
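As a rough illustration of the experiment described above, the following Python sketch re-saves an image repeatedly at a fixed JPEG quality factor and tracks the PSNR against the original. It uses Pillow rather than IrfanView, and the file name and quality value are placeholders, not details taken from the paper.

```python
# A minimal sketch of the re-compression experiment, using Pillow
# (assumed available); file name and quality factor are illustrative.
import io
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two 8-bit images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def recompress(image: Image.Image, quality: int, rounds: int):
    """Re-save the image `rounds` times at the given JPEG quality,
    reporting PSNR against the original after each round."""
    reference = np.asarray(image.convert("RGB"))
    current = image.convert("RGB")
    for i in range(1, rounds + 1):
        buf = io.BytesIO()
        current.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        current = Image.open(buf).convert("RGB")
        yield i, psnr(reference, np.asarray(current))

# Example: 50 rounds at quality factor 75 (an illustrative value
# inside the paper's 50-100 range).
# for round_no, value in recompress(Image.open("satellite.png"), 75, 50):
#     print(round_no, value)
```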
Fractal image compression offers desirable properties such as fast decoding and very good rate-distortion curves, but it suffers from a high encoding time. Fractal image compression requires a partitioning of the image into range blocks. In this work, we introduce an improved partitioning process by means of a merge approach, since some ranges are connected to others. This paper presents a method that reduces the encoding time of the technique by reducing the number of range blocks based on computing statistical measures between them. Experimental results on standard images show that the proposed method decreases the encoding time while keeping the visual quality acceptable.
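The merge idea can be sketched as follows. This is an illustrative Python fragment, not the authors' exact algorithm; the block size, the choice of statistical measures (mean and variance), and the tolerance values are all assumptions.

```python
# Illustrative sketch: reduce the range-block pool by merging blocks
# whose statistical measures are close; thresholds are hypothetical.
import numpy as np

def block_stats(img, size=8):
    """Split a grayscale image into size x size range blocks and return
    (top, left, mean, variance) for each block."""
    h, w = img.shape
    stats = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            block = img[y:y+size, x:x+size].astype(np.float64)
            stats.append((y, x, block.mean(), block.var()))
    return stats

def merge_similar(stats, mean_tol=2.0, var_tol=4.0):
    """Greedily group blocks whose mean/variance differ by less than
    the tolerances; each group can then be encoded once, shrinking
    the range pool and hence the encoding time."""
    groups = []
    for s in stats:
        for g in groups:
            rep = g[0]
            if abs(s[2] - rep[2]) < mean_tol and abs(s[3] - rep[3]) < var_tol:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups
```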
This research studies the influence of increasing the number of Gaussian points, and the style of their distribution over a circular exit pupil, on the accuracy of numerical calculations of the point spread function for an ideal optical system and for a system having a focus error of 0.25λ or 0.5λ. It was shown that the accuracy of the results depends on the type of point distribution on the exit pupil. The accuracy also increases with the number of points (N) and with the amount of aberration, which in turn requires an increase in N.
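A minimal numerical sketch of such a calculation is given below: it evaluates the diffraction integral for a circular pupil as a sum over N sample points, with a defocus aberration W(r) = d·r² expressed in wavelengths (d = 0, 0.25, 0.5). The random-disk sampling used here is only one possible point distribution, and all names and values are illustrative, not the study's setup.

```python
# Sketch: PSF of a circular pupil via a sum over N pupil points,
# with defocus W(r) = d * r^2 in wavelengths (d = 0, 0.25, 0.5).
import numpy as np

def pupil_points(n, rng=np.random.default_rng(0)):
    """n quasi-uniform points on the unit disk (illustrative choice)."""
    r = np.sqrt(rng.random(n))           # sqrt gives uniform area density
    t = 2 * np.pi * rng.random(n)
    return r * np.cos(t), r * np.sin(t)

def psf(u, v, n=2000, defocus=0.25):
    """Normalized intensity at image-plane coordinate (u, v) in
    canonical diffraction units."""
    x, y = pupil_points(n)
    w = defocus * (x**2 + y**2)          # defocus aberration in waves
    phase = 2 * np.pi * (w - (u * x + v * y))
    amp = np.exp(1j * phase).mean()
    return np.abs(amp) ** 2

# The on-axis value drops as the defocus grows:
# print(psf(0, 0, defocus=0.0), psf(0, 0, defocus=0.25), psf(0, 0, defocus=0.5))
```

With d = 0 the on-axis value is exactly 1, which gives a quick sanity check on the sampling; increasing N tightens the estimate, in line with the abstract's conclusion.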
This article presents a polynomial-based image compression scheme, which consists of using the (YUV) color model to represent color content and using two-dimensional first-order polynomial coding with a block size that varies according to the correlation between neighboring pixels. The residual part of the polynomial for all bands is separated into two parts: a most important (big) part and a least important (small) part. Owing to the significant subjective importance of the big group, lossless compression (based on run-length spatial coding) is used to represent it. Furthermore, a lossy compression scheme is utilized to approximately represent the small group; it is based on an error-limited adaptive coding system and transform coding.
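The first-order (planar) coding step can be sketched for a single block as below; only the polynomial fit and the big/small residual split are shown, and the block size, threshold, and function names are hypothetical rather than the paper's parameters.

```python
# Sketch of first-order polynomial coding for one block: fit
# p(x, y) = a0 + a1*x + a2*y by least squares, keep the residual,
# then split it by magnitude; the threshold is hypothetical.
import numpy as np

def plane_fit(block: np.ndarray):
    """Return the polynomial coefficients and the residual block."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix for a0 + a1*x + a2*y
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.astype(np.float64).ravel(), rcond=None)
    residual = block - (A @ coeffs).reshape(h, w)
    return coeffs, residual

def split_residual(residual, threshold=8.0):
    """Separate the 'big' (subjectively important) and 'small'
    residual values, as the scheme describes."""
    big = np.where(np.abs(residual) >= threshold, residual, 0)
    small = residual - big
    return big, small
```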
In order to increase the level of security, this system encrypts the secret image before sending it through the internet to the recipient (by the Blowfish method). The Blowfish method is known for its efficient security; nevertheless, its encryption time is long. In this research we apply a smoothing filter to the secret image, which decreases its size and consequently the encryption and decryption times. After encryption, the secret image is hidden inside another image, called the cover image, using one of two methods: "Two-LSB" or "Hiding most bits in blue pixels". Finally, we compare the results of the two methods to determine which one is better to use according to the PSNR measure.
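A minimal sketch of the Two-LSB hiding step might look like the following (the Blowfish encryption and smoothing filter are omitted); the array names and bit-layout convention are assumptions, not the paper's exact procedure.

```python
# Sketch: write two secret bits into the two least significant bits
# of each cover byte, and read them back out.
import numpy as np

def embed_two_lsb(cover: np.ndarray, secret_bits: np.ndarray) -> np.ndarray:
    """cover: uint8 image array; secret_bits: flat array of 0/1 values
    whose length is a multiple of 2 and fits in the cover."""
    flat = cover.ravel().copy()
    pairs = secret_bits.reshape(-1, 2)
    assert len(pairs) <= flat.size, "secret too large for this cover"
    for i, (b1, b0) in enumerate(pairs):
        flat[i] = (flat[i] & 0b11111100) | (b1 << 1) | b0
    return flat.reshape(cover.shape)

def extract_two_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits hidden bits."""
    flat = stego.ravel()
    out = []
    for i in range(n_bits // 2):
        out.extend([(flat[i] >> 1) & 1, flat[i] & 1])
    return np.array(out[:n_bits], dtype=np.uint8)
```

Because only the two lowest bits of each byte change, the distortion per pixel is at most 3 gray levels, which is what keeps the PSNR of the stego image high.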
Designing machines and equipment for post-harvest operations on agricultural products requires information about their physical properties. The aim of this work was to evaluate a new approach to predicting the moisture content of bean and corn seeds, based on measuring their dimensions by image analysis and using artificial neural networks (ANN). Experimental tests were carried out at three levels of wet-basis seed moisture content: 9, 13, and 17%. The analysis of the results showed a direct relationship between the wet-basis moisture content and the main dimensions of the seeds. Based on the statistical analysis of the seed material, it was shown that these characteristics can be used to predict the moisture content.
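As a hedged illustration of the prediction step only, the sketch below fits a small MLP mapping three assumed seed dimensions to moisture content. The data here is synthetic; the study measured real dimensions by image analysis, and its network architecture may differ.

```python
# Sketch: MLP regression from seed dimensions (length, width,
# thickness; assumed features) to wet-basis moisture content.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
dims = rng.normal(loc=[10.0, 6.0, 4.0], scale=0.5, size=(300, 3))   # mm
moisture = 5 + 0.8 * dims.sum(axis=1) + rng.normal(0, 0.3, 300)     # % w.b.

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(dims[:250], moisture[:250])
print("R^2 on held-out seeds:", model.score(dims[250:], moisture[250:]))
```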
The need for image compression is continually renewed because of its importance in reducing the volume of data, which in turn can be stored in less space and transferred more quickly through communication channels.
In this paper a low-cost lossy color image compression scheme is introduced. The RGB image data is transformed to the YUV color space, and the chromatic bands U and V are down-sampled in a decimation step. The bi-orthogonal wavelet transform is used to decompose each color sub-band separately. Then, the Discrete Cosine Transform (DCT) is used to encode the Low-Low (LL) sub-band, while the other wavelet sub-bands are coded using scalar quantization. Finally, a quad-tree coding process is applied to the outcomes of the DCT and the quantized wavelet sub-bands.
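The analysis stage of this pipeline might be sketched as follows, assuming PyWavelets and SciPy are available; the wavelet name ('bior2.2'), the quantization step, and the YUV matrix are illustrative choices, not the paper's parameters, and the quad-tree entropy stage is omitted.

```python
# Sketch of the analysis stage: RGB -> YUV, chroma decimation,
# bi-orthogonal wavelet decomposition, DCT on LL, scalar
# quantization of the remaining sub-bands.
import numpy as np
import pywt
from scipy.fft import dctn

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.147, -0.289, 0.436],
                  [0.615, -0.515, -0.100]])
    return rgb.astype(np.float64) @ m.T

def analyze(rgb: np.ndarray, q_step=8.0):
    yuv = rgb_to_yuv(rgb)
    y = yuv[..., 0]
    u = yuv[::2, ::2, 1]                  # chroma decimation by 2
    v = yuv[::2, ::2, 2]
    coded = {}
    for name, band in (("Y", y), ("U", u), ("V", v)):
        ll, (lh, hl, hh) = pywt.dwt2(band, "bior2.2")   # bi-orthogonal DWT
        coded[name] = {
            "LL_dct": dctn(ll, norm="ortho"),           # DCT on the LL band
            "LH": np.round(lh / q_step),                # scalar quantization
            "HL": np.round(hl / q_step),
            "HH": np.round(hh / q_step),
        }
    return coded
```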
Raw satellite images are high in resolution, especially the multispectral images captured by remote sensing satellites. Hence, a suitable compression technique for such images should be chosen carefully, especially on board small satellites, because of their limited resources. This paper presents an overview and classification of the major and state-of-the-art compression techniques utilized in most space missions launched during the last few decades, such as the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) based compression techniques. The pros and cons of the onboard compression methods are presented, giving their specifications and showing the differences among them to provide a unified overview.
The main aim of image compression is to reduce an image's size so that it can be transferred and stored more easily; many methods have therefore appeared to compress images, one of which is the Multilayer Perceptron. The Multilayer Perceptron (MLP) method is an artificial neural network based on the back-propagation algorithm for compressing the image. Since this algorithm depends only on the number of neurons in the hidden layer, that alone is not enough to reach the desired results; we must also take into consideration the standards on which the compression process depends in order to get the best results. In our research we trained a group of TIFF images of size (256*256) and compressed each of them using the MLP.
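In the spirit of this approach, the sketch below trains a bottleneck MLP to reproduce 8x8 image blocks, so the hidden-layer activations serve as the compressed code; the hidden-layer size and block size are assumptions, and the paper's exact configuration may differ.

```python
# Sketch of MLP-based block compression: an identity mapping through
# a narrow hidden layer (16 units per 64-pixel block, i.e. roughly
# 4:1 compression before entropy coding).
import numpy as np
from sklearn.neural_network import MLPRegressor

def image_blocks(img: np.ndarray, size=8) -> np.ndarray:
    """Flatten non-overlapping size x size blocks, scaled to [0, 1]."""
    h, w = img.shape
    blocks = [img[y:y+size, x:x+size].ravel() / 255.0
              for y in range(0, h - size + 1, size)
              for x in range(0, w - size + 1, size)]
    return np.array(blocks)

# Usage on a 256x256 grayscale array, as in the paper's test set:
# X = image_blocks(img)
# coder = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000)
# coder.fit(X, X)                  # learn to reconstruct each block
# reconstructed = coder.predict(X)
```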