The denoising of natural images corrupted by Gaussian noise is a classical problem in signal and image processing. Much work has been done on wavelet thresholding, but most of it has focused on statistical modeling of wavelet coefficients and the optimal choice of thresholds. This paper describes a new method for suppressing noise in images by fusing the stationary wavelet denoising technique with an adaptive Wiener filter. The Wiener filter is applied to the image reconstructed from the approximation coefficients only, while thresholding is applied to the detail coefficients of the transform; the final denoised image is obtained by combining the two results. The proposed method was implemented in MATLAB R2010a on color images contaminated by white Gaussian noise. Compared with the stationary wavelet and Wiener filter algorithms alone, the experimental results show that the proposed method provides better subjective and objective quality, achieving up to 3.5 dB of PSNR improvement.
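A minimal sketch of this fusion idea, assuming PyWavelets for the stationary wavelet transform and SciPy's Wiener filter; the function name, the universal-threshold rule, and the per-level placement of the Wiener step are illustrative choices, not the paper's MATLAB implementation:

```python
import numpy as np
import pywt
from scipy.signal import wiener

def swt_wiener_denoise(channel, wavelet="db4", level=2, sigma=None):
    """Denoise one 2-D channel; side lengths must be divisible by 2**level."""
    coeffs = pywt.swt2(channel, wavelet, level=level)
    if sigma is None:
        # Robust noise estimate from the finest diagonal detail band.
        sigma = np.median(np.abs(coeffs[-1][1][2])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(channel.size))  # universal threshold
    fused = []
    for cA, (cH, cV, cD) in coeffs:
        cA = wiener(cA, mysize=5)  # adaptive Wiener on the approximation band
        cH, cV, cD = (pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD))
        fused.append((cA, (cH, cV, cD)))
    return pywt.iswt2(fused, wavelet)
```

For a color image, the routine would be applied per channel (for example, on the luminance channel after a YCbCr conversion) before computing PSNR against the clean reference.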
Image combination is a technique that fuses two or more medical images taken under different conditions or with different imaging devices into a single image containing more complete information. This study relied on mathematical, statistical, and spatial techniques to fuse MRI images captured at the two acquisition times (T1, T2), and applied a supervised classification method based on minimum distance before and after the combination process. The quality of the resulting image was then examined using statistical criteria derived from edge analysis. The results identify the best techniques for the combination process and determine the exact details within each class and between classes.
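For reference, minimum-distance classification assigns each pixel to the class whose mean feature vector is nearest. A small numpy sketch under that reading (the pixel array and training labels are hypothetical inputs, not the study's data):

```python
import numpy as np

def class_means(pixels, labels, n_classes):
    """Mean feature vector per class from labeled training pixels (N, bands)."""
    return np.array([pixels[labels == c].mean(axis=0) for c in range(n_classes)])

def minimum_distance_classify(pixels, means):
    """Assign each pixel to the class with the nearest mean (Euclidean distance)."""
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Running the same classifier on the pre-fusion and post-fusion images makes the before/after accuracy comparison the study describes straightforward.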
In the current research work, a method to reduce the color levels of the pixels within digital images is proposed. The strategy is based on the self-organizing map (SOM) neural network. The efficiency of the method was compared with well-known methods such as Floyd-Steinberg (halftone) dithering and octree (quadtree) quantization. Experimental results have shown that adjusting the sampling factor can produce higher-quality images with only slightly longer run times, or somewhat better quality with shorter running times, than existing methods. This observation refutes the claim that neural network methods are necessarily slow, even when they give the best results. The generated quantization map can be exploited for color image compression, classification …
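A compact, numpy-only sketch of 1-D SOM color quantization that illustrates the role of the sampling factor; all parameter values are illustrative defaults, not the paper's settings:

```python
import numpy as np

def som_quantize(image, n_colors=16, steps=20000, sampling=4, seed=0):
    """Quantize an (H, W, 3) uint8 image to n_colors with a 1-D SOM."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(np.float64)
    train = pixels[::sampling]                      # the sampling factor
    codebook = rng.uniform(0, 255, (n_colors, 3))   # initial palette
    for t in range(steps):
        x = train[rng.integers(len(train))]
        w = int(((codebook - x) ** 2).sum(axis=1).argmin())  # winning neuron
        frac = 1.0 - t / steps
        lr = 0.5 * frac                             # decaying learning rate
        radius = max(1, int(n_colors // 4 * frac))  # shrinking neighborhood
        for j in range(max(0, w - radius), min(n_colors, w + radius + 1)):
            h = np.exp(-((j - w) ** 2) / (2.0 * radius ** 2))
            codebook[j] += lr * h * (x - codebook[j])
    idx = ((pixels[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    return codebook[idx].reshape(image.shape).astype(np.uint8)
```

A larger sampling step trains on fewer pixels and runs faster at some cost in palette quality, which is the speed/quality trade-off the results above describe.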
Subcutaneous vascularization has become a new solution for identity management over the past few years. Systems based on dorsal hand veins are particularly promising for high-security settings. A dorsal hand vein recognition system comprises the following steps: acquiring images from the database and preprocessing them, locating the region of interest, and extracting and recognizing information from the dorsal hand vein pattern. This paper reviews several techniques for obtaining the dorsal hand vein area and identifying a person, providing a comprehensive survey of existing approaches. It aims to build on the improvements in system accuracy reported in previous studies and …
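A generic skeleton of the preprocessing and pattern-extraction stages that such pipelines share, using OpenCV; this is an illustrative composite, not any specific system surveyed here:

```python
import cv2

def preprocess(gray):
    # CLAHE boosts local contrast in near-infrared vein images.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.medianBlur(clahe.apply(gray), 5)   # median blur removes speckle

def extract_vein_pattern(roi):
    # Adaptive thresholding exposes the darker vein network; the block
    # size and offset are illustrative values that depend on the sensor.
    return cv2.adaptiveThreshold(roi, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 21, 4)
```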
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five enhancement techniques: normalization, histogram equalization, binarization, skeletonization, and fusion. The normalization process standardized the pixel intensity, which facilitated the processing of subsequent image enhancement stages. Subsequently, the histogram equalization technique increased the contrast of the images. Furthermore, the binarization and skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one-pixel-wide ridges …
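A hedged sketch of the first four stages, assuming OpenCV and scikit-image; the target mean and variance are placeholder values, and the concluding fusion step is application-specific and omitted:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def enhance_fingerprint(gray, target_mean=128.0, target_var=1000.0):
    g = gray.astype(np.float64)
    # Normalization: rescale to a prescribed mean and variance.
    g = target_mean + (g - g.mean()) * np.sqrt(target_var / (g.var() + 1e-9))
    g = np.clip(g, 0, 255).astype(np.uint8)
    g = cv2.equalizeHist(g)                        # histogram equalization
    # Binarization: Otsu separates ridges (dark) from valleys (bright).
    _, binary = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    skeleton = skeletonize(binary > 0)             # one-pixel-wide ridges
    return skeleton.astype(np.uint8) * 255
```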
In this paper, membrane-based computing image segmentation, both region-based and edge-based, is proposed for medical images, involving two types of neighborhood relations between pixels. These neighborhood relations, namely the 4-adjacency and 8-adjacency of a membrane computing approach, construct a family of tissue-like P systems for segmenting actual 2D medical images in a constant number of steps; the two types of adjacency were compared on different hardware platforms. The process involves the generation of membrane-based segmentation rules for 2D medical images. The rules are written in the P-Lingua format and appended to the input image for visualization. The findings show that the neighborhood relations between pixels …
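Not P-Lingua itself, but a plain-numpy analogue of the adjacency rule: a pixel is marked as an edge when any 4- or 8-neighbor carries a different label, mirroring how the tissue-like P system rules fire on neighboring membranes (borders wrap via np.roll here, a simplification):

```python
import numpy as np

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # 4-adjacency offsets
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # 8-adjacency offsets

def edge_pixels(labels, neighborhood=N4):
    """Boolean mask of pixels with at least one differently labeled neighbor."""
    edges = np.zeros(labels.shape, dtype=bool)
    for dy, dx in neighborhood:
        shifted = np.roll(np.roll(labels, dy, axis=0), dx, axis=1)
        edges |= shifted != labels
    return edges
```

Comparing `edge_pixels(labels, N4)` against `edge_pixels(labels, N8)` reproduces, in spirit, the 4- versus 8-adjacency comparison described above.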
The main aim of image compression is to reduce image size for transmission and storage; accordingly, many compression methods have been developed. One of these is the multilayer perceptron (MLP), an artificial neural network trained with the back-propagation algorithm. Because the algorithm depends on more than just the number of neurons in the hidden layer, that choice alone is not enough to reach the desired results; the criteria on which the compression process depends must also be considered to obtain the best results. In this research, a group of TIFF images of size 256×256 was trained and compressed using the MLP for each …
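A tiny numpy autoencoder illustrating MLP block compression in this spirit: flattened 8×8 blocks (64 inputs) are squeezed through a hidden layer of k neurons whose activations form the compressed code. The hyperparameters are illustrative, not the paper's settings:

```python
import numpy as np

def train_mlp_codec(blocks, k=16, lr=0.01, epochs=200, seed=0):
    """blocks: (N, 64) array of flattened 8x8 patches scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (64, k)); b1 = np.zeros(k)   # encoder weights
    W2 = rng.normal(0, 0.1, (k, 64)); b2 = np.zeros(64)  # decoder weights
    for _ in range(epochs):
        h = np.tanh(blocks @ W1 + b1)      # hidden code: k values per block
        out = h @ W2 + b2                  # linear reconstruction
        err = out - blocks                 # reconstruction error
        gW2 = h.T @ err / len(blocks)      # back-propagation, output layer
        gh = (err @ W2.T) * (1 - h ** 2)   # back-propagation through tanh
        gW1 = blocks.T @ gh / len(blocks)
        W2 -= lr * gW2; b2 -= lr * err.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gh.mean(axis=0)
    return W1, b1, W2, b2
```

Storing the k hidden activations instead of 64 pixel values gives a 64/k compression ratio per block before any entropy coding.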
JPEG is the most popular image compression and encoding technique, widely used in many applications (images, videos, and 3D animations). Meanwhile, researchers are keen to extend this technique to compress images at higher compression ratios while preserving image quality as much as possible. For this reason, this paper introduces an improved JPEG codec based on a fast DCT that removes most zero coefficients while keeping their positions within each transformed block. Additionally, arithmetic coding is applied rather than Huffman coding. The results show that the proposed JPEG algorithm yields better image quality than the traditional JPEG technique.
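A hedged sketch of the block stage described above, assuming SciPy's DCT and the standard JPEG luminance quantization table; the arithmetic coder that would compress the resulting (position, value) stream is omitted:

```python
import numpy as np
from scipy.fftpack import dct

# Standard JPEG luminance quantization table.
Q = np.array([[16, 11, 10, 16,  24,  40,  51,  61],
              [12, 12, 14, 19,  26,  58,  60,  55],
              [14, 13, 16, 24,  40,  57,  69,  56],
              [14, 17, 22, 29,  51,  87,  80,  62],
              [18, 22, 37, 56,  68, 109, 103,  77],
              [24, 35, 55, 64,  81, 104, 113,  92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

def dct2(block):
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def encode_block(block):
    """Quantize an 8x8 block; keep only non-zero coefficients and positions."""
    coeffs = np.round(dct2(block - 128.0) / Q).astype(np.int32)
    pos = np.flatnonzero(coeffs)          # positions of the retained non-zeros
    return pos, coeffs.flat[pos]
```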
Fuzzy Based Clustering for Grayscale Image Steganalysis
Abstract
The problem of missing data represents a major obstacle for researchers in the process of data analysis, since it recurs in all fields of study, including social, medical, astronomical, and clinical experiments.
The presence of such a problem within the data under study may negatively influence the analysis and lead to misleading conclusions, as the results can carry considerable bias. Despite the efficiency of wavelet methods, they too are affected by missing data, in addition to the resulting loss of estimation accuracy …