In this paper, a new fusion method is proposed to fuse multiple satellite images, acquired over different ranges of the electromagnetic spectrum, into a single grayscale image. The proposed method is based on the discrete wavelet transform using pyramid and packet bases. The fusion process is performed with two different fusion rules: the low-frequency part is remapped using PCA based on the covariance matrix and the correlation matrix, while the high-frequency parts are fused using different rules (addition, selecting the higher coefficient, replacement); the fused image is then obtained by applying the inverse discrete wavelet transform. The experimental results show the validity of the proposed method in fusing such images with equal representation, compared with the general wavelet fusion method, which fuses only the high-frequency parts.
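A minimal sketch of this pipeline, assuming two co-registered grayscale inputs of equal size; PyWavelets' pyramid decomposition stands in for the paper's DWT (a packet-basis variant would use pywt.WaveletPacket2D), and the fusion-rule choices shown are illustrative:

```python
import numpy as np
import pywt

def pca_weights(a, b):
    # Dominant eigenvector of the 2x2 covariance matrix gives the mixing
    # weights; np.corrcoef(a.ravel(), b.ravel()) yields the
    # correlation-matrix variant mentioned in the abstract.
    c = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(c)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def fuse(img1, img2, wavelet="db2", level=2):
    c1 = pywt.wavedec2(img1, wavelet, level=level)
    c2 = pywt.wavedec2(img2, wavelet, level=level)
    w1, w2 = pca_weights(c1[0], c2[0])
    fused = [w1 * c1[0] + w2 * c2[0]]          # low frequency: PCA remap
    for d1, d2 in zip(c1[1:], c2[1:]):
        # high frequency: "select the higher" rule; the addition and
        # replacement rules are one-line substitutions here
        fused.append(tuple(np.where(np.abs(h1) >= np.abs(h2), h1, h2)
                           for h1, h2 in zip(d1, d2)))
    return pywt.waverec2(fused, wavelet)       # inverse DWT restores image
```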
There are many face recognition techniques that compare the desired face image with a set of face images stored in a database. Most of these techniques fail if the face images are exposed to high-density noise, so it is necessary to find a robust method to recognize a face image corrupted by such noise. In this work, a face recognition algorithm is suggested that combines a de-noising filter with PCA. Many studies have shown that PCA can cope with noisy images and provides dimensionality reduction; however, when face images are exposed to high noise, PCA alone does little to remove it, and adding a strong filter helps to improve the recognition results.
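A hedged sketch of the filter-plus-PCA combination: the median filter is an assumption standing in for the unspecified de-noising filter (a usual choice against high-density impulse noise), as is the nearest-neighbour matching in eigenspace.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.decomposition import PCA

def train(gallery):                          # gallery: (n_faces, h, w) array
    flat = gallery.reshape(len(gallery), -1)
    pca = PCA(n_components=min(50, len(gallery) - 1)).fit(flat)
    return pca, pca.transform(flat)          # eigenspace + projected gallery

def recognize(probe, pca, gallery_proj):
    clean = median_filter(probe, size=3)     # de-noising stage
    q = pca.transform(clean.reshape(1, -1))  # PCA dimensionality reduction
    dists = np.linalg.norm(gallery_proj - q, axis=1)
    return int(np.argmin(dists))             # index of closest gallery face
```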
Digital images are widely used in computer applications. This paper introduces a proposed image zooming method based on the inverse slantlet transform and image scaling. The slantlet transform (SLT) is based on the principle of designing different filters for different scales.
First, the SLT is applied to the color image to transform it into the slantlet domain, where the large coefficients mainly represent the signal and the smaller ones represent the noise. These coefficients are suitably modified, the image is scaled up by a factor of 2x2 using box and Bartlett filters, and the inverse slantlet transform is then applied to the modified coefficients to obtain the reconstructed image.
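Slantlet transform implementations are not packaged in mainstream Python libraries, so the sketch below covers only the scaling stage the abstract names: 2x up-sampling with the box (zero-order) and Bartlett (triangular) kernels.

```python
import numpy as np

def upscale2_box(img):
    # Box zoom: every pixel becomes a 2x2 block (zero-order hold).
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def upscale2_bartlett(img):
    # Bartlett zoom: zero insertion followed by separable convolution with
    # the triangular kernel [0.5, 1, 0.5] (i.e. linear interpolation).
    h, w = img.shape
    up = np.zeros((2 * h, 2 * w), dtype=float)
    up[::2, ::2] = img
    k = np.array([0.5, 1.0, 0.5])
    up = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, up)
```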
Steganography is a means of hiding information within a more obvious form of communication. It exploits the use of host data to hide a piece of information in such a way that it is imperceptible to a human observer. The major goals of effective steganography are high embedding capacity, imperceptibility, and robustness. This paper introduces a scheme for hiding secret images that can be as large as 25% of the host image data. The proposed algorithm applies the orthogonal discrete cosine transform to the host image, and a scaling factor (a) in the frequency domain controls the quality of the stego images. Experimental results of secret image recovery after applying JPEG coding to the stego images are included.
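The embedding equation is not spelled out in the abstract, so this sketch assumes simple additive embedding in the global DCT domain with scaling factor a, and non-blind extraction (the decoder holds the original host); function names are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(host, secret, a=0.05):
    pad = np.zeros_like(host, dtype=float)
    pad[:secret.shape[0], :secret.shape[1]] = secret   # zero-pad the secret
    H = dctn(host.astype(float), norm="ortho")
    S = dctn(pad, norm="ortho")
    return idctn(H + a * S, norm="ortho")              # stego image

def extract(stego, host, shape, a=0.05):
    D = dctn(stego.astype(float), norm="ortho") \
        - dctn(host.astype(float), norm="ortho")
    rec = idctn(D / a, norm="ortho")
    return rec[:shape[0], :shape[1]]                   # recovered secret
```

A smaller a improves the stego image's imperceptibility at the cost of the recovered secret's robustness to JPEG coding, which is the trade-off the scaling factor controls.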
New data for the fusion power density have been obtained for the T-3He and T-T fusion reactions; power density is a substantial quantity in research on fusion energy generation and in ignition calculations for magnetically confined systems. In the current work, the thermonuclear reactivities, the power densities of fusion reactors, and the ignition condition are obtained using a new and accurate cross-section formula. The maximum fusion power densities for the T-3He and T-T reactions are 1.1×10⁷ W/m³ at T = 700 keV and 4.7×10⁶ W/m³ at T = 500 keV, respectively, while Z_eff is suggested to be 1.44 for the two reactions. Bremsstrahlung radiation has also been determined with a view to reaching self-sustaining reactors; the Bremsstrahlung values are 4.5×
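For reference, such a calculation rests on standard expressions (the paper's new cross-section formula is not reproduced here). For two reacting species with densities $n_1$ and $n_2$,

$$
P_{\mathrm{fus}} = \frac{n_1 n_2}{1+\delta_{12}}\,\langle\sigma v\rangle\,E_{\mathrm{fus}},
\qquad
P_{\mathrm{br}} \propto Z_{\mathrm{eff}}\, n_e^{2}\, \sqrt{T_e},
$$

where $\delta_{12} = 1$ for identical species (as in T-T) and $0$ otherwise, $\langle\sigma v\rangle$ is the thermal reactivity, and $E_{\mathrm{fus}}$ is the energy released per reaction; self-sustainment requires the retained charged-particle power to offset the bremsstrahlung loss.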
In today's world, digital image storage and transmission play an essential role, as images are heavily involved in data transfer. Digital images usually require large storage space and transmission bandwidth, so image compression is important in data communication. This paper discusses a unique and novel lossy image compression approach: exactly 50% of the image pixels are encoded and the other 50% are excluded. The method uses a block approach; the pixels of each block are transformed with a novel transform, and pixel nibbles are mapped to a single bit in a transform table, generating more zeros, which helps achieve compression. Later, the inverse transform is applied in reconstruction, and a single bit value from the table is remapped back to a pixel nibble.
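The abstract is truncated, so the following is only one plausible reading of the "encode 50% of pixels" idea: keep a checkerboard half of the pixels and rebuild the dropped half from the mean of its kept 4-neighbours. The paper's block transform and nibble-to-bit table are not reproduced here.

```python
import numpy as np

def encode_half(img):
    mask = (np.indices(img.shape).sum(axis=0) % 2) == 0   # checkerboard
    return img[mask], mask                                # keep 50% of pixels

def decode_half(kept, mask):
    out = np.zeros(mask.shape, dtype=float)
    out[mask] = kept
    pad = np.pad(out, 1)
    neigh = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
    cm = np.pad(mask, 1).astype(float)
    cnt = cm[:-2, 1:-1] + cm[2:, 1:-1] + cm[1:-1, :-2] + cm[1:-1, 2:]
    out[~mask] = (neigh / np.maximum(cnt, 1))[~mask]      # interpolate rest
    return out
```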
Speech is the first invented way of communication, used by humans ages before the invention of writing. In this paper, a method is proposed for speech analysis that extracts features using the multiwavelet transform (repeated row preprocessing). The proposed system relies on the Euclidean differences of the multiwavelet transform coefficients to determine the best features for speech recognition. Each sample value in the reference file is computed by taking the average of four samples of the same data (four speakers for the same phoneme). The input data are compared with every frame value in the reference file using the Euclidean distance, and the frame with the minimum distance is said to be the "best match". Simulation
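Multiwavelet transforms with repeated-row preprocessing are not available in common Python packages, so the feature vectors in this sketch are assumed to be precomputed; it shows only the Euclidean "best match" search the abstract describes.

```python
import numpy as np

def best_match(frame_features, reference):
    # reference: (n_phonemes, n_features); each row is the average of the
    # four speakers' samples for one phoneme, as in the reference file.
    dists = np.linalg.norm(reference - frame_features, axis=1)
    idx = int(np.argmin(dists))       # frame with minimum Euclidean distance
    return idx, dists[idx]
```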
The complexity of multimedia content is increasing significantly in the current world, which leads to an exigent demand for highly effective systems that satisfy human needs. To this day, the handwritten signature is considered an important means of evidencing identity in banks and businesses, so many works have tried to develop methods for recognition purposes. This paper introduces an efficient technique for offline signature recognition that depends on extracting local features using the Haar wavelet subbands and energy. Three different sets of features are obtained by partitioning the signature image into non-overlapping blocks, where different block sizes are used. The CEDAR signature database is used as the dataset for evaluating the proposed technique.
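A sketch of the described local features: Haar DWT subbands partitioned into non-overlapping blocks, with per-block energy as the descriptor. The block size of 8 is an illustrative choice; the paper evaluates several sizes.

```python
import numpy as np
import pywt

def block_energy(sub, bs=8):
    h, w = (sub.shape[0] // bs) * bs, (sub.shape[1] // bs) * bs
    blocks = sub[:h, :w].reshape(h // bs, bs, w // bs, bs)
    return (blocks ** 2).sum(axis=(1, 3)).ravel()   # energy per block

def signature_features(img, bs=8):
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")       # Haar subbands
    return np.concatenate([block_energy(s, bs) for s in (cA, cH, cV, cD)])
```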
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while discarding the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, the reflection coefficients, and the prediction error. The compressed files contain the LP coefficients and the previous sample, and are very small compared with the original signals. The compression ratio is calculated from the sizes of the compressed and original files.
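A hedged sketch of this chain: keep only the wavelet approximation coefficients, then fit the linear predictor to them with the Levinson-Durbin recursion. The decomposition level and LP order are illustrative choices.

```python
import numpy as np
import pywt

def levinson_durbin(r, order):
    # Classic recursion on the autocorrelation sequence r[0..order]; returns
    # LP coefficients a, reflection coefficients refl, and final error err.
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    refl = np.zeros(order)
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / err
        refl[i - 1] = k
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a, refl, err

def compress(signal, wavelet="db4", level=3, order=10):
    approx = pywt.wavedec(signal, wavelet, level=level)[0]  # keep cA only
    r = np.correlate(approx, approx, mode="full")[len(approx) - 1:]
    return levinson_durbin(r[:order + 1], order)  # small LP-coefficient file
```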