In this paper, an algorithm is introduced that can embed more data than conventional spatial-domain methods. The secret data are first compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method.
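As an illustration of the compression step, a minimal Huffman table builder might look like the following sketch (the function name and tie-breaking scheme are ours, not the paper's):

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a prefix-code table for the bytes in `data` by repeatedly
    merging the two least-frequent subtrees."""
    freq = Counter(data)
    # heap entries: (frequency, unique tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate case: one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

secret = b"example secret message"
codes = huffman_codes(secret)
bitstream = "".join(codes[b] for b in secret)   # bits to be embedded
```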
Laplace filters are used to determine effective hiding places; based on a threshold value, the locations with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding the data in the strongest and least noticeable edge locations.
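A minimal sketch of this location-selection step, assuming a standard 3x3 Laplacian kernel and SciPy for the convolution (the threshold value and function name are illustrative):

```python
import numpy as np
from scipy import ndimage

# one common 3x3 Laplacian kernel
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def embedding_positions(cover: np.ndarray, threshold: float) -> np.ndarray:
    """Return (row, col) indices whose Laplacian response exceeds the threshold,
    i.e. the strong-edge pixels used as hiding places."""
    response = np.abs(ndimage.convolve(cover.astype(float), LAPLACIAN, mode="reflect"))
    return np.argwhere(response > threshold)
```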
The performance of the proposed algorithm is evaluated with standard measures: Peak Signal-to-Noise Ratio (PSNR) to measure distortion, the similarity correlation between the cover image and the watermarked image, and Bit Error Rate (BER) to measure robustness. The sensitivity of the watermarked image to attacks is also investigated. The attacks applied are Laplacian sharpening, median filtering, salt-and-pepper noise, and rotation. The results show that the proposed algorithm can resist Laplacian sharpening with any sharpening parameter k, and it also achieves good results against several of the other attack types.
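For reference, the two objective measures can be computed as in the following sketch (a peak value of 255 is assumed for 8-bit images):

```python
import numpy as np

def psnr(cover: np.ndarray, marked: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio (dB) between the cover and watermarked images."""
    mse = np.mean((cover.astype(float) - marked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ber(embedded_bits: np.ndarray, extracted_bits: np.ndarray) -> float:
    """Bit Error Rate between the embedded and the extracted watermark bits."""
    return float(np.mean(embedded_bits != extracted_bits))
```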
A new approach is presented in this study to determine the optimal edge-detection threshold value. The approach is based on extracting small homogeneous blocks from targets with unequal means. From these blocks a small image with known edges is generated (the edges are the lines between the adjacent blocks), so these simulated edges can be taken as true edges. The true simulated edges are then compared with the edges detected in the small generated image using different threshold values. The comparison is based on computing the mean square error between the simulated edge image and the edge image produced by the edge-detector methods. The mean square error is computed for the total edge image (Er), for edge regions …
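A minimal sketch of the threshold search described above, assuming a gradient (or Laplacian) magnitude image and a binary map of the simulated true edges (all names are illustrative):

```python
import numpy as np

def best_threshold(gradient: np.ndarray, true_edges: np.ndarray, candidates) -> float:
    """Return the candidate threshold whose binary edge map has the smallest
    mean square error against the simulated true-edge image."""
    errors = [np.mean(((gradient > t).astype(float) - true_edges.astype(float)) ** 2)
              for t in candidates]
    return candidates[int(np.argmin(errors))]
```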
Due to the widespread use of digital images, the rapid evolution of computer science, and in particular the use of images on social networks, attention has turned to securing these images and protecting them against attackers, and many techniques have been proposed to achieve this goal. In this paper we propose a new chaotic method to enhance AES (Advanced Encryption Standard) by eliminating the MixColumns transformation to reduce time consumption, and by using a palmprint biometric together with the Lorenz chaotic system to strengthen authentication and security of the image; the chaotic system adds more sensitivity to the encryption and to the authentication of the system.
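The paper's exact construction is not given in this excerpt, but a chaotic keystream can be derived from the Lorenz system roughly as in this sketch (Euler integration, the classical parameters sigma=10, rho=28, beta=8/3, and the byte quantisation are assumptions):

```python
import numpy as np

def lorenz_keystream(n: int, x0=0.1, y0=0.0, z0=0.0,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.01) -> np.ndarray:
    """Generate n pseudo-random bytes from a Lorenz trajectory (simple Euler steps)."""
    x, y, z = x0, y0, z0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        out[i] = int(abs(x) * 1e6) % 256    # one of many possible quantisations
    return out
```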
This paper determines the difference between a first, natural image and a second, infected image using logic gates. The proposed algorithm was applied first to binary images, then to gray-scale images, and finally to color images. At the start of the algorithm the images are processed by applying convolution to images padded with zeros, to obtain more visible features; the images are then enhanced with an edge-detection filter (Laplacian operator) and smoothed with a mean filter. To determine the change between the original image and the injured one, logic gates, specifically XOR gates, are applied. Applying the technique to tooth decay, this comparison can locate inj…
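The core comparison can be illustrated with a small sketch (the array names are ours):

```python
import numpy as np

def xor_difference(reference: np.ndarray, test: np.ndarray) -> np.ndarray:
    """Binary map of the pixels that differ between the reference and test images."""
    return np.logical_xor(reference > 0, test > 0)

# toy example: the changed pixels mark candidate injury locations
healthy = np.array([[0, 1, 1],
                    [0, 0, 1]])
infected = np.array([[0, 1, 0],
                     [1, 0, 1]])
changed = np.argwhere(xor_difference(healthy, infected))   # -> [[0, 2], [1, 0]]
```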
Image processing is an important source for image analysis, used to obtain variable parameters such as intensity. In the present work a relation between the intensity and the number of pixels in the image has been found, and from this relation we obtain in this paper the inten…
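One common way to express the relation between intensity and pixel count is the intensity histogram; a minimal sketch (assuming an 8-bit gray-scale image) is:

```python
import numpy as np

def intensity_histogram(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """Count how many pixels take each intensity value."""
    hist, _ = np.histogram(image.ravel(), bins=levels, range=(0, levels))
    return hist
```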
The wavelet transform has become a useful computational tool for a variety of signal and image processing applications.
The aim of this paper is to present a comparative study of various wavelet filters. Eleven different wavelet filters (Haar, Mallat, Symlets, Integer, Conflict, Daubechies 1, Daubechies 2, Daubechies 4, Daubechies 7, Daubechies 12 and Daubechies 20) are used to compress seven true-color 256x256 images as samples. Image-quality parameters such as peak signal-to-noise ratio (PSNR) and normalized mean square error are used to evaluate the performance of the wavelet filters.
In our work PSNR is used as the measure of accuracy performance…
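An illustrative sketch of such an evaluation, using PyWavelets on a single-channel image and keeping only the largest coefficients (the wavelet name, decomposition level and kept fraction are assumptions, not the paper's settings):

```python
import numpy as np
import pywt

def compress_and_score(image: np.ndarray, wavelet: str = "haar", keep: float = 0.05) -> float:
    """Zero all but the largest `keep` fraction of wavelet coefficients,
    reconstruct, and return the PSNR of the reconstruction in dB."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    cutoff = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < cutoff] = 0.0
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
    rec = rec[:image.shape[0], :image.shape[1]]
    mse = np.mean((image.astype(float) - rec) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```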
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content. Extracting edge information is the essential process in most pictorial pattern-recognition problems. A new edge-detection technique for detecting boundaries is introduced in this research.
The selection of typical lossy techniques for encoding edge video images is also discussed. The concentration is devoted to the Block Truncation coding technique and the Discrete Cosine Transform (DCT) coding technique. In order to reduce the volume of pictorial data that one may need to store or transmit, …
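As a pointer to how Block Truncation coding works, a sketch of the per-block quantisation is given below (standard two-level BTC, not necessarily the exact variant used here):

```python
import numpy as np

def btc_block(block: np.ndarray) -> np.ndarray:
    """Two-level Block Truncation Coding of one image block: pixels at or above
    the block mean go to a high level, the rest to a low level, with the two
    levels chosen to preserve the block mean and standard deviation."""
    mean, std = block.mean(), block.std()
    mask = block >= mean
    above, below = mask.sum(), block.size - mask.sum()
    if above == 0 or below == 0:          # flat block: nothing to quantise
        return np.full(block.shape, mean)
    low = mean - std * np.sqrt(above / below)
    high = mean + std * np.sqrt(below / above)
    return np.where(mask, high, low)
```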
In the reverse-engineering approach, a massive amount of point data is gathered during data acquisition, which leads to larger file sizes and longer data-handling times. In addition, fitting surfaces to these data points is time-consuming and demands particular skills. In the present work a method for obtaining the control points of any profile is presented. Several image-modification steps are explained using the SolidWorks program, and a parametric equation of the proposed profile is derived using the Bezier technique with the adopted control points. Finally, the proposed profile was machined using a 3-axis CNC milling machine, and a comparison of dimensions was carried out betwe…
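A minimal sketch of evaluating a Bezier profile from a set of control points (de Casteljau's algorithm; the function name and sampling count are illustrative):

```python
import numpy as np

def bezier_curve(control_points: np.ndarray, n_samples: int = 100) -> np.ndarray:
    """Sample a Bezier curve of arbitrary degree from its control points
    using de Casteljau's repeated linear interpolation."""
    ts = np.linspace(0.0, 1.0, n_samples)
    curve = np.empty((n_samples, control_points.shape[1]))
    for k, t in enumerate(ts):
        pts = control_points.astype(float).copy()
        while len(pts) > 1:
            pts = (1.0 - t) * pts[:-1] + t * pts[1:]
        curve[k] = pts[0]
    return curve
```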
One of the significant stages in computer vision is image segmentation, which is fundamental for various applications, for example robot control and military target recognition, as well as image analysis in remote-sensing applications. Studies have dealt with improving the classification of all types of data, whether text, audio, or images; one of the latest studies built a simple, effective, high-accuracy model capable of classifying emotions from speech data, while several other studies dealt with improving textual grouping. In this study, we seek to improve image-segmentation classification using a novel approach that depends on two methods used to segment the images. The first…
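The two segmentation methods are not named in the excerpt above; purely as an illustration of pixel-clustering segmentation (not the authors' method), a plain k-means sketch is:

```python
import numpy as np

def kmeans_segment(image: np.ndarray, k: int = 3, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Segment an H x W x C image by clustering its pixel values with plain k-means."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(image.shape[:-1])
```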