In this research paper, a new blind and robust fingerprint image watermarking scheme is presented that combines the dual-tree complex wavelet transform (DTCWT) and discrete cosine transform (DCT) domains. The main concern is to provide a solution that reduces the impact of geometric attacks, since fingerprint features can be distorted by the embedded watermark, and fingerprint rotations and displacements produce multiple feature sets. The watermark bits are embedded through a differential process applied to two DCT-transformed sub-vectors. These sub-vectors are obtained by sub-sampling the real and imaginary parts of the DTCWT coefficients of the host fingerprint image. At the extraction stage, the difference between the corresponding sub-vectors of the watermarked fingerprint image directly yields the embedded watermark sequence, so the original fingerprint image is not required. The proposed technique is evaluated on 80 fingerprint images from 10 persons, drawn from the CASIA-V5-DB and FVC2002-DB2 fingerprint databases; for each person, eight fingerprints are set as the template, and the watermark is inserted into each image. The obtained results are then compared with those of other geometrically robust techniques. The comparison shows that the proposed technique is more robust against common image processing operations and geometric attacks such as cropping, resizing, and rotation.
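The core differential idea can be sketched on generic 1-D sub-vectors. This is a minimal illustration, not the paper's exact scheme: the DTCWT sub-sampling step is omitted, the sub-vectors are plain arrays, and the function names and the `strength` parameter are hypothetical. Each bit is written as the sign of the difference between matching DCT coefficients of the two sub-vectors, so extraction is blind.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    j = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

def embed_bits(v1, v2, bits, strength=5.0):
    """Embed each bit as the sign of the difference between matching
    DCT coefficients of the two sub-vectors (DC term skipped)."""
    m = dct_matrix(len(v1))
    c1, c2 = m @ v1, m @ v2
    for i, b in enumerate(bits, start=1):
        mean = (c1[i] + c2[i]) / 2
        diff = max(abs(c1[i] - c2[i]) / 2, strength)  # margin for robustness
        if b == 0:
            diff = -diff
        c1[i], c2[i] = mean + diff, mean - diff
    return m.T @ c1, m.T @ c2

def extract_bits(w1, w2, n_bits):
    """Blind extraction: only the watermarked sub-vectors are needed."""
    m = dct_matrix(len(w1))
    c1, c2 = m @ w1, m @ w2
    return [1 if c1[i] > c2[i] else 0 for i in range(1, n_bits + 1)]
```

Because only the sign of a coefficient difference carries each bit, mild global distortions that shift both sub-vectors similarly leave the difference, and hence the watermark, intact.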
Recognizing cars is a highly difficult task because vehicles from the same manufacturer vary widely in appearance; the car logo is therefore the most prominent indicator of the manufacturer. Captured logo images suffer from several problems, such as complex backgrounds, differences in size and shape, noise, and varying lighting conditions. To solve these problems, this paper presents an effective technique for extracting and recognizing the logo that identifies a car. The proposed method consists of four stages. First, the k-medoids clustering method is applied to extract the logo and remove the background and noise. Second, the logo image is converted to grayscale and then to a binary image
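The k-medoids step can be sketched on a 1-D array of pixel intensities. This is a minimal PAM-style implementation for illustration only; the paper's feature space, distance measure, and parameters are not specified here, so the function name and settings are assumptions.

```python
import numpy as np

def k_medoids(points, k, iters=20, seed=0):
    """Minimal k-medoids on a 1-D array: medoids are actual data
    points, which makes the clustering robust to outliers (noise)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    medoids = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest medoid
        labels = np.argmin(np.abs(pts[:, None] - medoids[None, :]), axis=1)
        new = medoids.copy()
        for j in range(k):
            cluster = pts[labels == j]
            if len(cluster):
                # medoid = member minimising total distance to the cluster
                costs = np.abs(cluster[:, None] - cluster[None, :]).sum(axis=1)
                new[j] = cluster[np.argmin(costs)]
        if np.array_equal(new, medoids):
            break
        medoids = new
    labels = np.argmin(np.abs(pts[:, None] - medoids[None, :]), axis=1)
    return medoids, labels
```

Clustering intensities this way separates dark logo pixels from a bright background (or vice versa), which is one plausible reading of the extraction stage described above.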
Nowadays, vast numbers of digital images are used and transferred via the Internet, which has become a primary source of information in several domains in recent years. Image blur, caused by object movement or camera shake, is one of the most common and difficult challenges in image processing. De-blurring is the process of restoring the sharp original image, and many techniques have been proposed and a large number of research papers published on removing blur from images. This paper presents a review of recent de-blurring papers published between 2017 and 2020, focusing on various strategies for improving image de-blurring software.
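As context for the reviewed strategies, the classical frequency-domain baseline is Wiener deconvolution for a known blur kernel. This is a standard textbook sketch, not a method from any specific reviewed paper; the function name and regularisation constant are assumptions.

```python
import numpy as np

def wiener_deblur(blurred, kernel, k=0.01):
    """Wiener deconvolution: invert a known blur kernel in the
    frequency domain while damping noise amplification."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # conj(H) / (|H|^2 + k): approaches 1/H where |H| is large,
    # but stays bounded where |H| is small (noise-dominated bands)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

The constant `k` trades ringing against residual blur; real de-blurring pipelines estimate it from the noise level rather than fixing it by hand.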
Various document types play an influential role in many of our daily activities, so preserving their integrity is an important matter. Such documents take various forms, including text, video, sound, and images; the authentication of the latter is the concern of this paper. Images can be handled spatially, by modifying their pixel values directly, or spectrally, by adjusting selected transform coefficients. Because of the flexibility of the spectral (frequency) domain in handling data, its coefficients are used here for watermark embedding. The integer wavelet transform (IWT), which is a wavelet transform based on the lifting scheme,
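The lifting-scheme construction referred to above can be sketched with the integer Haar transform, the simplest lifting wavelet. This is a generic illustration, not necessarily the paper's exact filter bank: the predict step stores the difference of a pixel pair, the update step stores their integer average, and the whole mapping is exactly reversible, which is what makes the IWT attractive for watermarking.

```python
def iwt_haar(pixels):
    """Integer Haar wavelet via lifting: maps integers to integers,
    so forward + inverse loses no information to rounding."""
    approx, detail = [], []
    for a, b in zip(pixels[0::2], pixels[1::2]):
        d = a - b           # predict: difference of the pair
        s = b + (d >> 1)    # update: integer average (floor)
        approx.append(s)
        detail.append(d)
    return approx, detail

def iwt_haar_inverse(approx, detail):
    """Exact inverse: undo the update, then the predict step."""
    out = []
    for s, d in zip(approx, detail):
        b = s - (d >> 1)
        a = d + b
        out.extend([a, b])
    return out
```

A watermark embedded in the `detail` coefficients survives the inverse transform bit-exactly, with no floating-point drift.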
Image content verification confirms the validity of an image, i.e., it tests whether the image has undergone any alteration since it was created. Digital watermarking has become a promising technique for image content verification owing to its strong performance and its ability to identify tampering.
In this study, a new scheme for image verification based on two-dimensional chaotic maps and the Discrete Wavelet Transform (DWT) is introduced. The Arnold transform is first applied to the host image (H) as a scrambling pretreatment stage; the scrambled host image is then partitioned into sub-blocks of size 2×2, and a 2D DWT is applied to each
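The scrambling pretreatment can be sketched with one iteration of the Arnold cat map on a square image, (x, y) → (x + y, x + 2y) mod N. This is the standard formulation of the transform; the iteration count the paper uses, and whether it is keyed, are not specified here.

```python
import numpy as np

def arnold_scramble(img):
    """One iteration of the Arnold cat map on a square image."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[(x + y) % n, (x + 2 * y) % n] = img[x, y]
    return out

def arnold_unscramble(img):
    """Exact inverse: read each pixel back from its mapped position."""
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[x, y] = img[(x + y) % n, (x + 2 * y) % n]
    return out
```

Because the map's matrix has determinant 1, it is a bijection on the pixel grid: every pixel lands in a unique position, and repeated application eventually returns the original image (the map is periodic).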
A new de-blurring technique is proposed in order to reduce or remove blur in images. The proposed filter is built from Lagrange interpolation, adjusted by fuzzy rules and supported by a wavelet decomposition technique. The resulting Wavelet Lagrange Fuzzy filter gives good results for both fully and partially blurred regions in images.
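The Lagrange-interpolation building block of such a filter can be sketched in isolation; the fuzzy-rule adjustment and wavelet stages are omitted, and this function is a generic textbook formulation rather than the paper's filter.

```python
def lagrange_predict(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    sample points (xs, ys) at position x - e.g. to estimate a pixel
    value from its neighbours along a scan line."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)  # i-th basis polynomial
        total += term
    return total
```

With n sample points the polynomial is exact for any polynomial signal of degree below n, which is why it serves as a local predictor of the un-blurred value.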
Even though image retrieval has been one of the most important research areas over the last two decades, there is still room for improvement, since current systems do not yet satisfy many users. Two of the major aspects needing improvement are the accuracy and the speed of the image retrieval system, both to achieve user satisfaction and to make the system suitable for all platforms. In this work, the proposed retrieval system uses features with spatial information to analyze the visual content of the image. The feature extraction process is then followed by the fuzzy c-means (FCM) clustering algorithm, which reduces the search space and speeds up retrieval. The experimental results show t
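The FCM stage can be sketched as follows. This is a minimal, standard formulation of fuzzy c-means; the paper's feature vectors, fuzzifier value, and stopping rule are not specified, so the defaults below are assumptions.

```python
import numpy as np

def fuzzy_c_means(data, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means on an (n_samples, n_features) array.
    Returns cluster centers and the fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
    for _ in range(iters):
        w = u ** m                             # fuzzified weights
        centers = (w.T @ data) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # guard against zero distance
        inv = d ** (-2.0 / (m - 1.0))          # standard membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u
```

At query time, only images whose dominant cluster matches the query's cluster need to be compared, which is how the clustering shrinks the search space.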
Abstract
The phenomenon of extreme values (maximum or rare values) is an important one, and two sampling techniques are used to deal with it: the peaks-over-threshold (POT) technique and the annual maximum (AM) technique. The extreme value (Gumbel) distribution is fitted to the AM sample, and the generalized Pareto and exponential distributions to the POT sample. The cross-entropy algorithm was applied in two of its variants: the first estimates using order statistics, and the second using order statistics and the likelihood ratio; a third method is proposed by the researcher. The MSE comparison criterion for the estimated parameters and the probability density function of each of the distributions were
... Show MoreIn this paper, an algorithm through which we can embed more data than the
regular methods under spatial domain is introduced. We compressed the secret data
using Huffman coding and then this compressed data is embedded using laplacian
sharpening method.
We used Laplace filters to determine the effective hiding places, then based on
threshold value we found the places with the highest values acquired from these filters
for embedding the watermark. In this work our aim is increasing the capacity of
information which is to be embedded by using Huffman code and at the same time
increasing the security of the algorithm by hiding data in the places that have highest
values of edges and less noticeable.
The perform
The computer vision branch of artificial intelligence is concerned with developing algorithms for analyzing video image content. Extracting edge information is the essential process in most pictorial pattern recognition problems, and a new edge detection technique for detecting boundaries is introduced in this research.
The selection of suitable lossy techniques for encoding edge video images is also discussed in this research, with the focus on the Block-Truncation coding technique and the Discrete Cosine Transform (DCT) coding technique, both of which reduce the volume of pictorial data that one may need to store or transmit.
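The Block-Truncation coding step can be illustrated with the classic two-level, moment-preserving quantiser; this is the standard textbook formulation, not necessarily the exact variant used in this research.

```python
import numpy as np

def btc_block(block):
    """Block-Truncation Coding of one block: keep a 1-bit plane plus
    two levels chosen to preserve the block's mean and variance."""
    mean, std = block.mean(), block.std()
    mask = block >= mean                      # 1 bit per pixel
    q, n = int(mask.sum()), block.size
    if q in (0, n):                           # flat block: one level
        return mask, mean, mean
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return mask, low, high

def btc_decode(mask, low, high):
    """Reconstruct the block from its bit plane and two levels."""
    return np.where(mask, high, low)
```

For a 4×4 block of 8-bit pixels, the encoded form is 16 bits of mask plus two levels, versus 128 bits raw, which is where the compression comes from.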