Image compression is an important problem in computer storage and transmission; it makes efficient use of the redundancy embedded within an image itself and may additionally exploit the limitations of human vision or perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes the lossy predictor model along with a multiresolution base and thresholding techniques, and the second stage incorporates a near-lossless compression scheme into the first stage. The test results of both stages are promising, implicitly enhancing the performance of the traditional polynomial model in terms of compression ratio while preserving image quality.
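As a rough illustration of the modelling idea behind polynomial coding (the image is approximated by a per-block polynomial model, and the residual is what remains after prediction), the following is a minimal sketch, not the paper's implementation; the first-order plane model, the 8x8 block size, and the residual threshold are illustrative assumptions.

```python
import numpy as np

def polynomial_model_block(block):
    """Fit a first-order polynomial (plane) a0 + a1*x + a2*y to one block
    and return the model coefficients and the residual block."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix of the plane model, one row per pixel.
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    predicted = (A @ coeffs).reshape(h, w)
    residual = block.astype(float) - predicted
    return coeffs, residual

def compress_image(img, block=8, threshold=4.0):
    """Split the image into blocks, model each block, and threshold small
    residual values to zero (a simple stand-in for the lossy stage)."""
    h, w = img.shape
    models, residuals = [], np.zeros_like(img, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            coeffs, res = polynomial_model_block(tile)
            res[np.abs(res) < threshold] = 0.0  # drop imperceptible detail
            models.append(coeffs)
            residuals[y:y + block, x:x + block] = res
    return np.asarray(models), residuals
```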
In the reverse engineering approach, a massive amount of point data is gathered during data acquisition, which leads to larger file sizes and longer data handling times. In addition, fitting surfaces to these data points is time-consuming and demands particular skills. In the present work, a method for obtaining the control points of any profile is presented. Several image modification steps are explained using the SolidWorks program, and a parametric equation of the proposed profile is derived using the Bezier technique with the adopted control points. Finally, the proposed profile was machined using a 3-axis CNC milling machine, and a comparison of dimensions was carried out betwe...
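For context, a Bezier curve of degree n with control points P_i is the parametric curve P(t) = sum over i of C(n,i) t^i (1-t)^(n-i) P_i for t in [0, 1]. The sketch below is a minimal, generic evaluation of that equation, not the paper's derived profile; the example control points are hypothetical.

```python
import numpy as np
from math import comb

def bezier_curve(control_points, samples=100):
    """Evaluate P(t) = sum_i B_{i,n}(t) * P_i using the Bernstein form."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1                      # curve degree
    t = np.linspace(0.0, 1.0, samples)
    curve = np.zeros((samples, pts.shape[1]))
    for i, p in enumerate(pts):
        bernstein = comb(n, i) * t**i * (1.0 - t)**(n - i)
        curve += bernstein[:, None] * p
    return curve

# Hypothetical control points for a 2D profile (units: mm).
profile = bezier_curve([(0, 0), (10, 25), (30, 25), (40, 0)])
```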
Groupwise non-rigid image alignment is a difficult non-linear optimization problem involving many parameters and often large datasets. Previous methods have explored various metrics and optimization strategies. Good results have previously been achieved with simple metrics, but these require complex optimization, often with many unintuitive parameters that need careful tuning for each dataset. In this chapter, the problem is restructured to use a simpler, iterative optimization algorithm with very few free parameters. The warps are refined using an iterative Levenberg-Marquardt minimization to the mean, based on updating the locations of a small number of points and incorporating a stiffness constraint. This optimization approach is eff...
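The following is a minimal sketch of the kind of iterative minimization-to-the-mean update described above, not the chapter's actual implementation; the stiffness weight, the per-example least-squares formulation, and the use of scipy.optimize.least_squares in Levenberg-Marquardt mode are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_points(points_per_example, stiffness=0.1, iterations=5):
    """Iteratively move each example's control points toward the current mean
    configuration while a simple stiffness term resists large displacements.
    All examples are assumed to share the same number of points."""
    shapes = [np.asarray(p, dtype=float) for p in points_per_example]
    for _ in range(iterations):
        mean_shape = np.mean(shapes, axis=0)
        for k, shape in enumerate(shapes):
            ref = shape.copy()

            def residuals(flat, ref=ref):
                pts = flat.reshape(ref.shape)
                fit = (pts - mean_shape).ravel()          # match the mean
                stiff = stiffness * (pts - ref).ravel()   # stiffness constraint
                return np.concatenate([fit, stiff])

            result = least_squares(residuals, shape.ravel(), method="lm")
            shapes[k] = result.x.reshape(ref.shape)
    return shapes
```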
One of the significant stages in computer vision is image segmentation, which is fundamental for different applications, for example robot control and military target recognition, as well as image analysis in remote sensing. Previous studies have dealt with improving the classification of all types of data, whether text, audio, or images; one of the latest built a simple, effective, high-accuracy model capable of classifying emotions from speech data, while several others dealt with improving textual grouping. In this study, we seek to improve image segmentation using a novel approach based on two methods used to segment the images. The first ...
With the increasing development of digital media and communication, the need for protection and security methods has become a very important factor, since the exchange and transmission of data over communication channels demands efforts to protect these data from unauthorized access.
This paper presents a new method to protect color images from unauthorized access using watermarking. The watermarking algorithm hides the encoded mark image in the frequency domain using the Discrete Cosine Transform. The main principle of the algorithm is to embed the encoded mark in the frequency domain of the cover color image. The watermark image bits are spread by repeating the mark and arranging it in an encoded manner, which gives the algorithm greater robustness and security. The propos...
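As a rough illustration of DCT-domain watermark embedding with bit repetition (a generic sketch, not the paper's algorithm; the 8x8 block size, embedding strength, and coefficient position are assumptions):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_watermark(channel, mark_bits, alpha=8.0, block=8):
    """Embed one watermark bit per 8x8 block of a single image channel by
    nudging a mid-frequency DCT coefficient up or down by `alpha`.
    The mark is repeated so that it spreads over the whole channel."""
    out = channel.astype(float).copy()
    h, w = out.shape
    n_blocks = (h // block) * (w // block)
    bits = iter(np.tile(mark_bits, n_blocks))   # repeat the mark
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            bit = next(bits)
            coeffs = dct2(out[y:y + block, x:x + block])
            coeffs[3, 4] += alpha if bit else -alpha  # mid-frequency position
            out[y:y + block, x:x + block] = idct2(coeffs)
    return out
```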