Fractal image compression (FIC) represents an image using affine transformations. The main concern for researchers in the discipline of fractal image compression is to decrease the encoding time needed to compress image data. The basic premise is that each portion of the image is similar to other portions of the same image, and many models have been developed around this idea. The presence of fractals was initially noticed and handled using the Iterated Function System (IFS), which is used for encoding images. In this paper, a review of fractal image compression and its variants is presented along with other techniques. A summary of contributions is given to assess the achievements of fractal image compression, specifically for block indexing methods based on moment descriptors. The block indexing method classifies the domain and range blocks using moments to generate an invariant descriptor that reduces the long encoding time. A comparison is performed between the block indexing technique and other fractal image compression techniques on the Lena image to determine the importance of block indexing in saving encoding time and achieving a better compression ratio while maintaining image quality.
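As a hedged illustration of the block indexing idea described above, the sketch below groups blocks by a quantized moment signature so that a range block is only matched against domain blocks in the same bin. The choice of second-order central moments as the descriptor and the rounding-based quantization are assumptions for illustration, not the reviewed papers' exact moment sets.

```python
import numpy as np

def block_moment_descriptor(block):
    """Second-order central moments of a block, used here as an
    illustrative invariant-style descriptor (an assumption, not an
    exact reproduction of the reviewed methods)."""
    block = block.astype(np.float64)
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = block.sum() + 1e-12                      # total mass
    cx = (x * block).sum() / m00                   # centroid
    cy = (y * block).sum() / m00
    mu20 = (((x - cx) ** 2) * block).sum() / m00
    mu02 = (((y - cy) ** 2) * block).sum() / m00
    mu11 = ((x - cx) * (y - cy) * block).sum() / m00
    return np.array([mu20 + mu02, mu11])

def index_blocks(blocks, decimals=1):
    """Quantize each block's descriptor into a dictionary key so that a
    range block is compared only with domain blocks in the same bin,
    which is what cuts down the long exhaustive search."""
    index = {}
    for i, b in enumerate(blocks):
        key = tuple(np.round(block_moment_descriptor(b), decimals))
        index.setdefault(key, []).append(i)
    return index
```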
Background: Multiple sclerosis (MS) is an inflammatory disease of the central nervous system in which the myelin sheaths are injured. The prevalence of MS is growing, and it affects young people; it is also more common in females than in males. Oxidative stress is a state of imbalance between oxidants (free radicals and reactive oxygen species (ROS)) and antioxidants in a living system, in which either the oxidants are elevated, the antioxidants are reduced, or sometimes both. ROS and oxidative stress have been implicated in the progression of many degenerative diseases, which makes them important in unraveling the remaining mysteries of MS. In this review article, some of the proposed mechanisms that link oxidative stress to MS are discussed.
Over the last few decades, many advances in computer technology, software programming, and application development have been adopted by diverse engineering disciplines. These developments focus mainly on artificial intelligence techniques. Accordingly, a number of definitions are provided that address the concept of artificial intelligence from different viewpoints. This paper presents current applications of artificial intelligence (AI) that facilitate cost management in civil engineering projects. An evaluation of artificial intelligence in its particular sub-branches is provided; these branches or techniques have contributed to the creation of a sizable group of models.
Image enhancement techniques have recently become one of the most significant topics in the field of digital image processing. The basic problem addressed by enhancement methods is how to remove noise or improve the details of a digital image. In the current research, a method for digital image de-noising and detail sharpening/highlighting is proposed. The proposed approach uses a fuzzy logic technique to process each pixel of the entire image and then decides whether it is noisy or needs further processing for highlighting. This decision is made by examining the degree of association with neighboring elements using a fuzzy algorithm. The proposed de-noising approach was evaluated on several standard images after corrupting them with impulse noise.
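The following minimal sketch, which does not reproduce the paper's exact membership functions, illustrates the kind of per-pixel fuzzy decision described above: a pixel's deviation from its 3x3 neighborhood median yields a fuzzy "noisiness" degree, strongly noisy pixels are replaced by the median, and the remaining pixels receive gentle Laplacian sharpening. The thresholds `noise_thr` and `detail_thr` are illustrative parameters.

```python
import numpy as np

def fuzzy_denoise_sharpen(img, noise_thr=40.0, detail_thr=10.0):
    """Illustrative per-pixel fuzzy decision: de-noise likely impulse
    pixels, mildly sharpen the rest (a sketch, not the paper's rules)."""
    img = img.astype(np.float64)
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = img[i-1:i+2, j-1:j+2]
            med = np.median(window)
            dev = abs(img[i, j] - med)
            # triangular membership: 0 below detail_thr, 1 above noise_thr
            mu_noisy = np.clip((dev - detail_thr) / (noise_thr - detail_thr), 0.0, 1.0)
            if mu_noisy > 0.5:
                out[i, j] = med                      # treat as impulse noise
            else:
                lap = window.sum() - 9 * img[i, j]   # 8-neighbor Laplacian
                out[i, j] = img[i, j] - 0.1 * lap    # gentle sharpening
    return np.clip(out, 0, 255).astype(np.uint8)
```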
Background: Determination of sex and estimation of stature from the skeleton are vital to medicolegal investigations. The skull is composed of hard tissue and is the best preserved part of the skeleton after death; hence, in many cases it is the only part available for forensic examination. The lateral cephalogram is ideal for skull examination because it shows various anatomical points in a single radiograph. This study was undertaken to evaluate the accuracy of a digital cephalometric system as a quick, easy and reproducible supplementary tool for sex determination in Iraqi samples across different age ranges, using certain linear and angular craniofacial measurements to predict sex. Materials and Method: The sample consisted of 113 true lateral cephalograms.
Steganography conceals information by embedding data within cover media, and it can be categorized into two main domains: spatial and frequency. This paper presents two distinct methods. The first operates in the spatial domain and utilizes the least significant bits (LSBs) to conceal a secret message. The second operates in the frequency domain and hides the secret message within the LSBs of the middle-frequency band of the discrete cosine transform (DCT) coefficients. Unlike other available methods, which embed data in sequential order with a fixed amount, these methods enhance obfuscation by utilizing two layers of randomness: random pixel embedding and random bit embedding within each pixel.
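A minimal sketch of the spatial-domain layer with randomized pixel positions is shown below, assuming a seeded NumPy generator acts as the shared secret; the paper's exact key handling, the random bit position within each pixel, and the DCT variant are not reproduced here.

```python
import numpy as np

def embed_lsb_random(cover, message_bits, seed=1234):
    """Sketch of the spatial-domain variant: a seeded PRNG picks pixel
    positions in random order and each chosen pixel's least significant
    bit is replaced by one message bit (the seed is an assumed secret)."""
    flat = cover.flatten().astype(np.uint8)
    rng = np.random.default_rng(seed)
    positions = rng.permutation(flat.size)[:len(message_bits)]
    for pos, bit in zip(positions, message_bits):
        flat[pos] = (flat[pos] & 0xFE) | bit   # clear LSB, set message bit
    return flat.reshape(cover.shape)

def extract_lsb_random(stego, n_bits, seed=1234):
    """Receiver regenerates the same random positions from the seed."""
    flat = stego.flatten()
    rng = np.random.default_rng(seed)
    positions = rng.permutation(flat.size)[:n_bits]
    return [int(flat[pos] & 1) for pos in positions]
```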
In this paper, the behavior of the quality of the gradient applied to an image as a function of noise error is presented. The cross correlation coefficient (ccc) between the derivative of the original image before and after introducing noise error shows a dramatic decline compared with that of the corresponding images before taking derivatives. Mathematical equations have been constructed to describe the relation between the ccc and the noise parameter.
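The sketch below reproduces this kind of measurement under stated assumptions (additive Gaussian noise, NumPy finite-difference gradients): it computes the ccc between the original and noisy image and between their gradient magnitudes, where the latter is expected to fall much faster as the noise level grows.

```python
import numpy as np

def ccc(a, b):
    """Cross correlation coefficient (Pearson) between two images."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def gradient_ccc_under_noise(img, sigma=10.0, seed=0):
    """Compare ccc(image, noisy image) against ccc of their gradient
    magnitudes; sigma is an assumed additive Gaussian noise level."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    gx, gy = np.gradient(img.astype(np.float64))
    nx, ny = np.gradient(noisy)
    return ccc(img, noisy), ccc(np.hypot(gx, gy), np.hypot(nx, ny))
```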
Like the digital watermark, which has been highlighted in previous studies, the quantum watermark aims to protect the copyright of an image and to validate its ownership using visible or invisible logos embedded in the cover image. In this paper, we propose a method for including an image logo in a cover image based on quantum fields, where a certain amount of texture is encapsulated to encode the logo image before it is included in the cover image. The method also involves wavelet transforms such as the Haar basis transformation, as well as geometric transformations. This combination of methods achieves a high degree of security and robustness for the watermarking technique. The experimental results are reported in terms of the Peak Signal-to-Noise Ratio (PSNR).
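Since the quantum encoding itself cannot be sketched briefly, the following classical illustration covers only the Haar wavelet step mentioned above: a one-level Haar decomposition is computed, binary logo bits are additively embedded into the diagonal detail sub-band, and the image is reconstructed. The choice of sub-band and the strength `alpha` are assumptions, not the paper's stated parameters.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (assumes even image dimensions)."""
    a = img[0::2, 0::2].astype(np.float64)
    b = img[0::2, 1::2].astype(np.float64)
    c = img[1::2, 0::2].astype(np.float64)
    d = img[1::2, 1::2].astype(np.float64)
    return ((a + b + c + d) / 2.0, (a - b + c - d) / 2.0,
            (a + b - c - d) / 2.0, (a - b - c + d) / 2.0)

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def embed_watermark(cover, logo_bits, alpha=8.0):
    """Additively embed binary logo bits into the HH sub-band
    (sub-band choice and alpha are illustrative assumptions)."""
    ll, lh, hl, hh = haar2d(cover)
    flat = hh.ravel()
    for i, bit in enumerate(logo_bits[:flat.size]):
        flat[i] += alpha if bit else -alpha
    return ihaar2d(ll, lh, hl, flat.reshape(hh.shape))
```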
In this paper, an algorithm that can embed more data than regular spatial-domain methods is introduced. The secret data are first compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method. Laplacian filters are used to determine the effective hiding places; based on a threshold value, the locations with the highest values acquired from these filters are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding data in places with the highest edge values, where changes are less noticeable.
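A hedged sketch of the location-selection step follows: a 3x3 Laplacian response is computed for every interior pixel, and positions whose absolute response exceeds a threshold are returned strongest first as candidate embedding locations (the Huffman compression of the payload is standard and omitted). The threshold and the ordering rule are assumptions rather than the paper's exact values.

```python
import numpy as np

def laplacian_embedding_positions(cover, threshold, n_needed):
    """Return up to n_needed (row, col) positions whose Laplacian edge
    response exceeds the threshold, strongest first; these are the
    'less noticeable' places where LSB changes would be hidden."""
    img = cover.astype(np.float64)
    # 8-neighbor Laplacian, kernel [[1,1,1],[1,-8,1],[1,1,1]], no SciPy needed
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (
        img[:-2, :-2] + img[:-2, 1:-1] + img[:-2, 2:] +
        img[1:-1, :-2] - 8 * img[1:-1, 1:-1] + img[1:-1, 2:] +
        img[2:, :-2] + img[2:, 1:-1] + img[2:, 2:]
    )
    resp = np.abs(lap)
    candidates = np.argwhere(resp > threshold)
    order = np.argsort(-resp[candidates[:, 0], candidates[:, 1]])
    return [tuple(p) for p in candidates[order][:n_needed]]
```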
This paper suggests two methods of recognition that depend on extracting principal component analysis (PCA) features in the wavelet (multi-wavelet) domain. In the first method, the recognition space is enlarged by calculating the eigenstructure of the diagonal sub-image details at five depths of the wavelet transform. The effective eigen range selected here represents the basis for image recognition. In the second method, an invariant wavelet space at all projections is obtained: a new recursive form that represents an invariant space for any image resolution obtained from the wavelet transform is adopted.
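As a hedged illustration of the first method's feature extraction, the sketch below collects the diagonal (HH) detail coefficients of a Haar decomposition at five depths and computes a PCA eigen-basis over them. The use of plain Haar rather than the paper's multi-wavelet, the flattening/concatenation of the sub-bands, and the number of retained components are assumptions.

```python
import numpy as np

def diagonal_details(img, depth=5):
    """Diagonal (HH) detail coefficients at each of `depth` Haar levels,
    flattened and concatenated (assumes image sides divisible by 2**depth)."""
    cur = img.astype(np.float64)
    feats = []
    for _ in range(depth):
        a, b = cur[0::2, 0::2], cur[0::2, 1::2]
        c, d = cur[1::2, 0::2], cur[1::2, 1::2]
        hh = (a - b - c + d) / 2.0           # diagonal detail sub-band
        feats.append(hh.ravel())
        cur = (a + b + c + d) / 2.0          # approximation feeds next level
    return np.concatenate(feats)

def pca_basis(train_images, depth=5, n_components=20):
    """Eigenstructure (PCA via SVD) of the stacked diagonal-detail
    features; n_components is an illustrative choice."""
    X = np.stack([diagonal_details(im, depth) for im in train_images])
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(img, mean, basis, depth=5):
    """Project a new image into the eigen space for matching."""
    return basis @ (diagonal_details(img, depth) - mean)
```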