This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method includes five enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization, and Fusion. The Normalization process standardizes the pixel intensities, which facilitates the subsequent image enhancement stages. The Histogram Equalization technique then increases the contrast of the images. Furthermore, the Binarization and Skeletonization techniques are implemented to differentiate between the ridge and valley structures and to obtain one-pixel-wide lines. Finally, the Fusion technique merges the results of the Histogram Equalization process with those of the Skeletonization process to obtain the new high-contrast images. The proposed method was tested on images of varying quality from the National Institute of Standards and Technology (NIST) Special Database 14. The experimental results are very encouraging, and the proposed enhancement method proved effective in improving images of different quality.
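A minimal sketch of such a five-stage pipeline is given below, assuming OpenCV and scikit-image; the normalization targets (m0, v0) and the weighted-average fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a five-stage fingerprint enhancement pipeline:
# normalization, histogram equalization, binarization, skeletonization, fusion.
import cv2
import numpy as np
from skimage.morphology import skeletonize

def enhance_fingerprint(img, m0=128.0, v0=100.0):
    img = img.astype(np.float64)
    # 1. Normalization: pull each pixel toward a desired mean m0 / variance v0.
    m, v = img.mean(), img.var()
    norm = m0 + np.sqrt(v0 * (img - m) ** 2 / v) * np.sign(img - m)
    norm = np.clip(norm, 0, 255).astype(np.uint8)
    # 2. Histogram equalization: spread intensities to raise global contrast.
    equ = cv2.equalizeHist(norm)
    # 3. Binarization (Otsu): separate dark ridges from bright valleys;
    #    the inverted threshold makes ridges the white foreground.
    _, binary = cv2.threshold(equ, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # 4. Skeletonization: thin the ridges down to one-pixel-wide lines.
    skeleton = skeletonize(binary > 0).astype(np.uint8) * 255
    # 5. Fusion: merge the equalized image with the skeleton (a simple
    #    weighted average here; the paper's exact fusion rule may differ).
    return cv2.addWeighted(equ, 0.5, skeleton, 0.5, 0)
```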
This study explores the challenges faced by Artificial Intelligence (AI) systems in generating image captions, a task that requires effective integration of computer vision and natural language processing techniques. A comparative analysis is conducted between traditional approaches (such as retrieval-based methods and linguistic templates) and modern approaches based on deep learning (such as encoder-decoder models, attention mechanisms, and transformers). Theoretical results show that modern models perform better in terms of accuracy and the ability to generate more complex descriptions, while traditional methods excel in speed and simplicity. The paper proposes a hybrid framework that combines the advantages of both approaches, where conventional methods produce ...
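As a rough illustration of the encoder-decoder pattern discussed above, the following PyTorch sketch conditions an LSTM decoder on image features; the layer sizes and names are assumptions, not the paper's architecture.

```python
# Minimal encoder-decoder captioning sketch: CNN features initialize the
# hidden state of an LSTM that decodes a token sequence.
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Project encoder (e.g. CNN) features into the decoder's state space.
        self.feat_proj = nn.Linear(feat_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, tokens):
        # feats: (batch, feat_dim); tokens: (batch, seq_len) word ids.
        h0 = torch.tanh(self.feat_proj(feats)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(hidden)  # per-step vocabulary logits
```

For example, `CaptionModel(vocab_size=10000)` applied to features of shape (batch, 2048) and token ids of shape (batch, seq_len) yields logits of shape (batch, seq_len, 10000).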
A simple and novel method was developed by combining dispersive liquid-liquid microextraction with UV spectrophotometry for the preconcentration and determination of trace amounts of malathion. The presented method is based on using a small volume of ethylene chloride as the extraction solvent, dissolved in ethanol as the dispersive solvent; the binary solution was then rapidly injected by syringe into the water sample containing malathion. The important parameters, such as the type and volume of the extraction and disperser solvents, the effect of extraction time and rate, the effect of salt addition, and the reaction conditions, were studied. Under the optimum conditions, the calibration graph was linear in the range of 2-100 ng mL-1 of malathion.
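As a rough illustration of the quantification step, the sketch below fits the linear calibration graph mentioned above with NumPy; the absorbance readings are made-up placeholders, and only the fitting procedure itself is meaningful.

```python
# Illustrative linear calibration fit (absorbance vs. concentration) and its
# inversion for quantifying an unknown sample. Data are placeholders, not
# measurements from the paper.
import numpy as np

conc = np.array([2.0, 10.0, 25.0, 50.0, 75.0, 100.0])        # ng/mL (placeholder)
absorbance = np.array([0.02, 0.09, 0.22, 0.45, 0.68, 0.90])  # placeholder

slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"A = {slope:.4f} * C + {intercept:.4f}, r^2 = {r**2:.4f}")

def predict_concentration(a):
    # Invert the calibration line: C = (A - intercept) / slope.
    return (a - intercept) / slope
```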
In recent years there has been a profound evolution in computer science and technology, incorporating several fields. Within this evolution, Content-Based Image Retrieval (CBIR) belongs to the image processing field. Several image retrieval methods can now easily extract features as a result of progress in image retrieval techniques. Finding efficient image retrieval tools has therefore become an extensive area of concern for researchers. Image retrieval refers to a system used to search for and retrieve images from a huge database of digital images. In this paper, the author focuses on recommending a fresh method for image retrieval. For multiple representations of an image in a Convolutional Neural Network (CNN), ...
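A hedged sketch of CNN-based feature extraction for CBIR is shown below, using a pretrained torchvision ResNet-50 as the backbone; the model choice and cosine-similarity ranking are assumptions rather than the author's exact method.

```python
# CNN feature extraction + cosine-similarity ranking for image retrieval.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Drop the classifier head so the network outputs a 2048-d feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=1)  # unit-length descriptor

def rank(query_vec, db_vecs):
    # On L2-normalized vectors, cosine similarity is just a dot product;
    # return database indices sorted from most to least similar.
    return (db_vecs @ query_vec.T).squeeze(1).argsort(descending=True)
```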
Data concealment has emerged as an area of deep and wide research interest, endeavouring to hide secret data in a covert and stealthy manner so as to avoid detection, by embedding the secret data into inconspicuous-looking cover media. These cover media may be images or videos that conceal the messages while still retaining their visual quality. Over the past ten years, there have been numerous studies on various image steganographic methods, emphasising payload and image quality. Nevertheless, a trade-off exists between the two indicators, and mediating a more favourable reconciliation between them is a daunting and problematic task. Additionally, the current ...
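For illustration, the sketch below implements classic least-significant-bit (LSB) embedding, one of the simplest payload/quality trade-off schemes in image steganography; it is a generic example, not the specific method of any cited work.

```python
# LSB steganography: hide a bit stream in the least-significant bit plane.
import numpy as np

def embed_lsb(cover, payload_bits):
    """Hide a 0/1 bit array in the LSB plane of a uint8 cover image."""
    flat = cover.flatten().copy()
    if payload_bits.size > flat.size:
        raise ValueError("payload exceeds cover capacity")
    # Clear each carrier pixel's LSB, then write the payload bit into it.
    flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits hidden bits from the stego image."""
    return stego.flatten()[:n_bits] & 1
```

Each embedded bit perturbs a pixel by at most one gray level, which keeps distortion (and thus PSNR) favourable, while capacity is bounded at one bit per pixel; richer schemes trade between exactly these two quantities.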
Compressing an image and reconstructing it without degrading its original quality is one of the challenges that still exists nowadays. A coding system that considers both quality and compression rate is implemented in this work. The implemented system applies a highly synthetic entropy coding scheme to store the compressed image at the smallest possible size without affecting its original quality. This coding scheme is applied with two transform-based techniques, one using the Discrete Cosine Transform and the other using the Discrete Wavelet Transform. The implemented system was tested with different standard color images, and the results obtained with different evaluation metrics are shown. A comparison was made with some previous related ...
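A minimal sketch of the transform-based side of such a system is given below: 8x8 block DCT with uniform quantization, plus a first-order entropy estimate of the resulting coefficient stream. The quantization step and entropy measure are illustrative assumptions, not the paper's scheme.

```python
# Block DCT + uniform quantization, with a first-order entropy estimate of
# the coefficients (a lower bound on what an entropy coder could achieve).
import numpy as np
from scipy.fft import dctn

def block_dct_quantize(img, q=16, bs=8):
    h, w = img.shape
    coeffs = np.zeros_like(img, dtype=np.int32)
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            block = img[i:i+bs, j:j+bs].astype(np.float64) - 128.0
            c = dctn(block, norm="ortho")     # 2-D orthonormal DCT
            coeffs[i:i+bs, j:j+bs] = np.round(c / q)  # uniform quantization
    return coeffs

def entropy_bits_per_symbol(coeffs):
    # Shannon entropy of the symbol histogram: the average code length a
    # Huffman/arithmetic coder could approach for this stream.
    _, counts = np.unique(coeffs, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```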
... Show MoreThis paper includes a comparison between denoising techniques by using statistical approach, principal component analysis with local pixel grouping (PCA-LPG), this procedure is iterated second time to further improve the denoising performance, and other enhancement filters were used. Like adaptive Wiener low pass-filter to a grayscale image that has been degraded by constant power additive noise, based on statistics estimated from a local neighborhood of each pixel. Performs Median filter of the input noisy image, each output pixel contains the Median value in the M-by-N neighborhood around the corresponding pixel in the input image, Gaussian low pass-filter and Order-statistic filter also be used.
Experimental results shows LPG-
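The baseline filters named above can be reproduced with SciPy roughly as follows; PCA-LPG itself (patch grouping plus PCA shrinkage) is more involved and is not shown, and the window sizes are assumptions.

```python
# Reference denoising filters compared against PCA-LPG in the text.
import numpy as np
from scipy import ndimage
from scipy.signal import wiener

def baseline_denoisers(noisy):
    return {
        # Adaptive Wiener filter: local mean/variance statistics per window.
        "wiener": wiener(noisy.astype(np.float64), mysize=5),
        # Median filter: each output pixel is the median of its MxN window.
        "median": ndimage.median_filter(noisy, size=5),
        # Gaussian low-pass filter.
        "gaussian": ndimage.gaussian_filter(noisy, sigma=1.0),
        # Order-statistic (rank) filter, here the 25th percentile per window.
        "order_stat": ndimage.percentile_filter(noisy, percentile=25, size=5),
    }
```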
Ultrasound has been used as a diagnostic modality for many intraocular diseases due to its safety, low cost, real-time operation, and wide availability. Unfortunately, ultrasound images suffer from tissue-dependent speckle artifacts. In this work, we offer a method to reduce speckle noise and improve ultrasound images so as to raise human diagnostic performance. This method combines the undecimated wavelet transform (UDWT) with a wavelet coefficient mapping function, where the UDWT is used to eliminate the noise and the wavelet coefficient mapping function is used to enhance the contrast of the denoised images obtained from the first component. This method can be used not only as a means of improving the visual quality of medical images but also as a preprocessing ...
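A hedged sketch of this pipeline using PyWavelets is shown below: a stationary (undecimated) wavelet transform, soft-thresholding of the detail coefficients to suppress speckle, and a simple gain as the coefficient mapping. The threshold, gain, and wavelet choice are assumptions, not the authors' exact mapping function.

```python
# UDWT despeckling with a simple coefficient mapping (threshold + gain).
import numpy as np
import pywt

def udwt_enhance(img, wavelet="db4", level=2, thresh=10.0, gain=1.5):
    # Note: swt2 requires image dimensions divisible by 2**level.
    coeffs = pywt.swt2(img.astype(np.float64), wavelet, level=level)
    mapped = []
    for approx, (ch, cv, cd) in coeffs:
        # Soft-threshold detail coefficients to suppress speckle noise,
        # then amplify the survivors to enhance edge contrast.
        details = tuple(
            gain * pywt.threshold(d, thresh, mode="soft") for d in (ch, cv, cd)
        )
        mapped.append((approx, details))
    return pywt.iswt2(mapped, wavelet)
```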
The hydrogen (H2) gas sensing mechanism of RF-sputtered zinc oxide (ZnO), with and without annealing, was studied in the range of 50-200 ppm. The X-ray diffraction (XRD) results showed that the Zn metal was completely converted to ZnO with a polycrystalline structure. The I-V characteristics of the device (Pt/ZnO/Pt) were measured at room temperature before and after annealing at 450 °C for 4 h, from which a linear relationship was observed. The sensors had a maximum response to H2 at 350 °C for the annealed ZnO and showed stable behavior for detecting H2 gas in the range of 50 to 200 ppm. The annealed film exhibited high ...
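For context, resistive gas sensors such as this Pt/ZnO/Pt device are commonly characterized by the response ratio S = R_air / R_gas; the tiny sketch below computes it with placeholder resistance values, not measurements from the paper.

```python
# Response ratio of a resistive gas sensor: resistance in air divided by
# resistance under the target gas (values below are placeholders).
def sensor_response(r_air_ohm, r_gas_ohm):
    return r_air_ohm / r_gas_ohm

print(sensor_response(r_air_ohm=1.2e6, r_gas_ohm=3.0e5))  # -> 4.0
```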