The technique of integrating complementary details from two or more input images is known as image fusion. The fused image is more informative and more complete than any of the original input images. This paper illustrates the implementation and evaluation of fusion techniques applied to satellite images: a high-resolution panchromatic (Pan) image and a multispectral (MS) image. A new algorithm is proposed to fuse the Pan and low-resolution MS images by combining the IHS transform and the Haar wavelet transform. Firstly, the paper reviews classical fusion using the IHS transform and the Haar wavelet transform individually. Secondly, it proposes a new strategy that combines the two methods. The performance of the proposed method is evaluated with assessment parameters such as Mean Square Error and Peak Signal to Noise Ratio. Experimental results show that the proposed algorithm performs better than classical fusion by the IHS transform.
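A minimal sketch of the two assessment parameters named above, Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR), assuming 8-bit images supplied as NumPy arrays of equal shape; the function names are illustrative, not taken from the paper.

```python
import numpy as np

def mse(reference, fused):
    """Mean Square Error between a reference image and the fused result."""
    reference = reference.astype(np.float64)
    fused = fused.astype(np.float64)
    return np.mean((reference - fused) ** 2)

def psnr(reference, fused, peak=255.0):
    """Peak Signal to Noise Ratio in dB; higher values indicate a closer match."""
    error = mse(reference, fused)
    if error == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / error)
```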
The present work studies the effect of using an automatic thresholding technique to convert the feature edges of images into binary images in order to separate objects from their background. The feature edges of the sampled images are obtained with first-order edge detection operators (Roberts, Prewitt and Sobel) and second-order edge detection operators (Laplacian operators). The optimum automatic threshold is calculated using the fast Otsu method. The study is applied to a personal image (Roben) and a satellite image to examine the compatibility of this procedure with two different kinds of images, and the obtained results are discussed.
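A minimal sketch of the described procedure under common assumptions: a Sobel edge magnitude is computed and then binarized with an Otsu threshold derived from its histogram. Array and function names are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_magnitude(gray):
    """Gradient magnitude of a 2-D grayscale image using Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    gx = convolve(gray.astype(np.float64), kx)
    gy = convolve(gray.astype(np.float64), ky)
    return np.hypot(gx, gy)

def otsu_threshold(values, bins=256):
    """Threshold that maximises the between-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(values, bins=bins)
    prob = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(prob)               # weight of the lower class
    mu = np.cumsum(prob * centers)     # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

def binarize_edges(gray):
    """Edge magnitude -> automatic Otsu threshold -> binary edge map."""
    mag = sobel_magnitude(gray)
    return mag > otsu_threshold(mag)
```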
It is known that images differ from texts in many aspects, such as high redundancy and correlation, local structure, and capacity and frequency characteristics. As a result, traditional encryption methods cannot be applied directly to images. In this paper we present a method for designing a simple and efficient chaotic system using differences in the output sequence. To meet the requirements of image encryption, we create a new coding system for linear and nonlinear structures based on the generation of a new key derived from chaotic maps.
The design uses a family of chaotic maps, including the 1D Chebyshev map, whose behaviour depends on the chosen parameters, to obtain a good random appearance. The output is tested with several measures, including the complexity of the…
LK Abood, RA Ali, M Maliki, International Journal of Science and Research, 2015.
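An illustrative sketch, not the authors' exact design: a keystream is generated by iterating the 1D Chebyshev map x_{n+1} = cos(k·arccos(x_n)) and XORed with the image bytes, with the map parameter k and the initial value x0 standing in for the secret key.

```python
import numpy as np

def chebyshev_keystream(length, k=4.0, x0=0.3):
    """Byte stream from the 1D Chebyshev map; k and x0 act as the key (illustrative)."""
    x = x0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = np.cos(k * np.arccos(x))            # chaotic value stays in [-1, 1]
        stream[i] = int(((x + 1.0) / 2.0) * 255.0) & 0xFF
    return stream

def xor_encrypt(image, k=4.0, x0=0.3):
    """Encrypt (or decrypt, since XOR is symmetric) an 8-bit image array."""
    flat = image.ravel()
    stream = chebyshev_keystream(flat.size, k, x0)
    return np.bitwise_xor(flat, stream).reshape(image.shape)
```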
Data compression offers an attractive approach to reducing communication costs by using the available bandwidth effectively, so it makes sense to pursue research on algorithms that make the most effective use of the network. It is also important to consider security, since the data being transmitted is vulnerable to attacks. The basic aim of this work is to develop a module that combines compression and encryption on the same set of data, performing the two operations simultaneously. This is achieved by embedding encryption into compression algorithms, since cryptographic ciphers and entropy coders bear a certain resemblance in the sense of secrecy. First, in the secure compression module, the given text is p…
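An illustrative sketch of coupling compression with encryption on one buffer; the abstract embeds encryption inside the entropy coder itself, whereas this sketch simply follows zlib compression with a hash-derived keystream XOR, so all names and choices here are assumptions.

```python
import hashlib
import zlib

def keystream(key: bytes):
    """Derive an endless byte stream from the key by chained SHA-256 hashing."""
    block = hashlib.sha256(key).digest()
    while True:
        yield from block
        block = hashlib.sha256(block).digest()

def compress_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Compress first, then mask the compressed bytes with the keystream."""
    compressed = zlib.compress(plaintext)
    return bytes(b ^ k for b, k in zip(compressed, keystream(key)))

def decrypt_decompress(ciphertext: bytes, key: bytes) -> bytes:
    """Reverse the pipeline: unmask, then decompress."""
    compressed = bytes(b ^ k for b, k in zip(ciphertext, keystream(key)))
    return zlib.decompress(compressed)
```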
Long memory analysis is one of the most active areas in econometrics and time series, where various methods have been introduced to identify and estimate the long memory parameter in partially integrated time series. One of the most common models used to represent time series with long memory is the ARFIMA (Autoregressive Fractionally Integrated Moving Average) model, in which the differencing order is a fractional number called the fractional parameter. To analyze and fit the ARFIMA model, the fractional parameter must be estimated, and there are many methods for estimating it. In this research, the estimation methods were divided into indirect methods, where the Hurst parameter is estimated first…
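A small sketch of fractional differencing, the operation behind the ARFIMA model's fractional parameter d: the filter (1 − B)^d is expanded with the standard coefficient recursion and applied to a NumPy series. This is background illustration only, not one of the estimation methods compared in the paper.

```python
import numpy as np

def fracdiff_weights(d, n):
    """First n coefficients of (1 - B)^d via the standard recursion."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(series, d):
    """Fractionally difference a series; 0 < d < 0.5 corresponds to long memory."""
    series = np.asarray(series, dtype=float)
    w = fracdiff_weights(d, series.size)
    out = np.empty(series.size)
    for t in range(series.size):
        out[t] = np.dot(w[: t + 1], series[t::-1])   # sum of pi_k * x_{t-k}
    return out
```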
Identifying people by their ears has recently received considerable attention in the literature. Accurate segmentation of the ear region is vital for making successful person identification decisions. This paper presents an effective approach for ear region segmentation from color ear images. Firstly, the RGB color model was converted to the HSV color model. Secondly, thresholding was utilized to segment the ear region. Finally, morphological operations were applied to remove small islands and fill the gaps. The proposed method was tested on a database of 105 ear images taken from the right sides of 105 subjects. The experimental results of the proposed approach on a variety of ear images revealed that this approach…
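A minimal OpenCV sketch of the pipeline described above (RGB to HSV, thresholding, then morphological opening and closing to remove small islands and fill gaps); the HSV bounds are illustrative placeholders, not the paper's values.

```python
import cv2
import numpy as np

def segment_ear(bgr_image, lower=(0, 30, 60), upper=(25, 180, 255)):
    """Return a binary mask of the candidate ear region (illustrative thresholds)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small islands
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill gaps
    return mask
```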
Digital images are open to many manipulations owing to robust image editing tools and the falling cost of compact cameras and mobile phones. Image credibility has therefore become doubtful, particularly where photographs carry weight, for instance in news reports and insurance claims in a criminal court. Image forensic methods therefore assess the integrity of an image by applying various highly technical methods established in the literature. The present work deals with copy-move forgery images from the Media Integration and Communication Center Forgery (MICC-F2000) dataset, detecting and revealing the areas of the image that have been tampered with; the image is sectioned into non-overlapping blocks using Simple…
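A generic copy-move matching sketch, not the paper's exact method (whose block segmentation step is truncated above): the grayscale image is divided into blocks, each block is described by a few low-frequency DCT coefficients, the descriptors are sorted lexicographically, and near-identical neighbours are flagged as candidate duplicated regions.

```python
import numpy as np
from scipy.fft import dctn

def block_descriptors(gray, block=16, keep=4):
    """Low-frequency DCT descriptors for non-overlapping blocks."""
    h, w = gray.shape
    feats, positions = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray[y:y + block, x:x + block].astype(float)
            feats.append(dctn(patch, norm="ortho")[:keep, :keep].ravel())
            positions.append((y, x))
    return np.array(feats), positions

def suspicious_pairs(gray, block=16, tol=1.0):
    """Pairs of block positions whose descriptors are nearly identical."""
    feats, positions = block_descriptors(gray, block)
    order = np.lexsort(feats.T[::-1])          # lexicographic sort of descriptors
    pairs = []
    for a, b in zip(order[:-1], order[1:]):
        if np.linalg.norm(feats[a] - feats[b]) < tol:
            pairs.append((positions[a], positions[b]))
    return pairs
```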
Semantic segmentation is effective in numerous object classification tasks such as autonomous vehicles and scene understanding. With advances in deep learning, considerable effort has gone into applying deep learning algorithms to semantic segmentation. Most algorithms achieve the required accuracy while compromising on storage and computational requirements. This work showcases an implementation of a Convolutional Neural Network (CNN) using the Discrete Cosine Transform (DCT), which exhibits exceptional energy compaction properties. The proposed Adaptive Weight Wiener Filter (AWWF) rearranges the DCT coefficients by truncating the high-frequency coefficients. The AWWF-DCT model reinstates the convolutional l…
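A small sketch of the energy-compaction idea referenced above: the 2-D DCT of a feature map is computed, only the low-frequency corner is kept, and the map is reconstructed. The adaptive Wiener weighting of the proposed AWWF is not reproduced here; the keep fraction is an illustrative parameter.

```python
import numpy as np
from scipy.fft import dctn, idctn

def truncate_dct(feature_map, keep_fraction=0.25):
    """Zero out high-frequency DCT coefficients and reconstruct the map."""
    coeffs = dctn(feature_map, norm="ortho")
    h, w = coeffs.shape
    kh, kw = max(1, int(h * keep_fraction)), max(1, int(w * keep_fraction))
    mask = np.zeros_like(coeffs)
    mask[:kh, :kw] = 1.0          # low frequencies sit in the top-left corner
    return idctn(coeffs * mask, norm="ortho")
```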
The Normalized Difference Vegetation Index (NDVI) is commonly used as a measure of land surface greenness, based on the assumption that the NDVI value is positively proportional to the amount of green vegetation in an image pixel area. The Landsat NDVI data set, derived from remote sensing information, is used to estimate the area of plant cover in a region west of Baghdad during 1990-2001. The results show that between 1990 and 2001 the plant area in the region increased from 44760.25 hectares to 75410.67 hectares; the vegetation area increased over the period while the exposed area decreased.
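A minimal sketch of the NDVI computation, NDVI = (NIR − RED) / (NIR + RED), plus an area estimate, assuming co-registered near-infrared and red reflectance arrays; the greenness threshold and pixel area are illustrative parameters, not values from the study.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI from near-infrared and red bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def vegetated_area(ndvi_map, pixel_area_ha, threshold=0.2):
    """Hectares of pixels whose NDVI exceeds an illustrative greenness threshold."""
    return np.count_nonzero(ndvi_map > threshold) * pixel_area_ha
```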
A signature is a special identifier that confirms a person's identity and distinguishes him or her from others. The main goal of this paper is to present a detailed study of the spatial density distribution method and the effect of a mass-based segmentation algorithm on its performance when used to recognize handwritten signatures in an offline mode. The methodology is based on dividing the signature image into tiles that reflect the shape and geometry of the signature, and then extracting five spatial features from each tile. The features include the mass of each tile and the relative mean and relative standard deviation of the vertical and horizontal projections of that tile. In the clas…
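A sketch of the five per-tile features described above, under the assumption that the relative mean and relative standard deviation of the projections refer to the centroid and spread of each projection expressed relative to the tile size; the grid size and normalisation are illustrative, not the paper's exact definitions.

```python
import numpy as np

def tile_features(binary_signature, rows=4, cols=8):
    """Five features per tile: relative mass plus centroid and spread of the
    vertical and horizontal projections, each relative to the tile dimensions."""
    h, w = binary_signature.shape
    th, tw = h // rows, w // cols
    features = []
    for r in range(rows):
        for c in range(cols):
            tile = binary_signature[r * th:(r + 1) * th,
                                    c * tw:(c + 1) * tw].astype(float)
            mass = tile.sum()
            v_proj = tile.sum(axis=0)   # per-column counts (vertical projection)
            h_proj = tile.sum(axis=1)   # per-row counts (horizontal projection)
            xs, ys = np.arange(tw), np.arange(th)
            if mass > 0:
                mean_x = np.dot(xs, v_proj) / mass
                mean_y = np.dot(ys, h_proj) / mass
                std_x = np.sqrt(np.dot((xs - mean_x) ** 2, v_proj) / mass)
                std_y = np.sqrt(np.dot((ys - mean_y) ** 2, h_proj) / mass)
            else:
                mean_x = mean_y = std_x = std_y = 0.0
            features.append([mass / tile.size, mean_x / tw, std_x / tw,
                             mean_y / th, std_y / th])
    return np.array(features)
```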