This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method comprised five enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization, and Fusion. The Normalization process standardized the pixel intensities, which facilitated the subsequent enhancement stages. The Histogram Equalization technique then increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were applied to differentiate between the ridge and valley structures and to obtain one-pixel-wide ridge lines. Finally, the Fusion technique merged the results of the Histogram Equalization and Skeletonization processes to produce the new high-contrast images. The proposed method was tested on images of varying quality from the National Institute of Standards and Technology (NIST) Special Database 14. The experimental results are very encouraging, and the enhancement method proved effective in improving images of varying quality.
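As an illustration of the pipeline described above, the following minimal NumPy sketch implements the normalization, histogram-equalization, binarization, and fusion stages on a synthetic patch. The target mean/variance and the fusion weight are illustrative assumptions, not values from the paper; skeletonization, omitted here, would typically use a morphological thinning routine such as `skimage.morphology.skeletonize`.

```python
import numpy as np

def normalize(img, target_mean=100.0, target_var=100.0):
    # Standardize pixel intensities to a desired mean and variance
    # (classic fingerprint normalization; target values are illustrative).
    m, v = img.mean(), max(img.var(), 1e-9)
    return target_mean + np.sign(img - m) * np.sqrt(target_var * (img - m) ** 2 / v)

def hist_equalize(img):
    # Spread intensities via the cumulative histogram (256 bins).
    img8 = np.clip(img, 0, 255).astype(np.uint8)
    cdf = np.bincount(img8.ravel(), minlength=256).cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    return (cdf[img8] * 255).astype(np.uint8)

def binarize(img):
    # Global mean threshold separates ridges (1) from valleys (0).
    return (img > img.mean()).astype(np.uint8)

def fuse(a, b, alpha=0.5):
    # Pixel-wise weighted fusion of two enhancement results.
    return (alpha * a.astype(float) + (1 - alpha) * b.astype(float)).astype(np.uint8)

# Demo on a synthetic patch standing in for a fingerprint image.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64)).astype(float)
eq = hist_equalize(normalize(patch))
ridges = binarize(eq)        # skeletonization of `ridges` would follow here,
                             # e.g. with skimage.morphology.skeletonize
fused = fuse(eq, ridges * 255)
```

The same stages would be applied to real grayscale fingerprint scans; the synthetic patch only demonstrates the data flow between the steps.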
Today, the use of iris recognition is expanding globally, as it is among the most accurate and reliable biometric features in terms of uniqueness and robustness. This makes the reduction, or compression, of large databases of iris images an urgent requirement. In general, image compression is the process of removing insignificant or redundant information from image details, exploiting the redundancy embedded within the image itself. In addition, it may exploit the limitations of human visual perception to discard imperceptible information.
This paper deals with reducing the size of an image, namely reducing the number of bits required to represent the
Huge numbers of medical images are generated, creating a need for more storage capacity and bandwidth to transfer them over networks. A hybrid DWT-DCT compression algorithm is applied to compress medical images by exploiting the features of both techniques. Discrete Wavelet Transform (DWT) coding is applied to the image in the YCbCr color model, decomposing each image band into four subbands (LL, HL, LH, and HH). The LL subband is transformed into low- and high-frequency components using the Discrete Cosine Transform (DCT) and then quantized by the scalar quantization applied to all image bands; the quantization parameters were halved for the luminance band but kept unchanged for the chrominance bands to preserve image quality, the zig
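A minimal sketch of the hybrid scheme on a single band, assuming a one-level Haar wavelet for the DWT stage and an illustrative quantization step of 8 (the paper's actual filters and quantization parameters are not reproduced here):

```python
import numpy as np

def haar_fwd(x):
    # One-level 2-D Haar DWT: split into LL, LH, HL, HH subbands.
    lo, hi = (x[:, 0::2] + x[:, 1::2]) / 2, (x[:, 0::2] - x[:, 1::2]) / 2
    return ((lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2,
            (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2)

def haar_inv(ll, lh, hl, hh):
    # Exact inverse of haar_fwd (perfect reconstruction without quantization).
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def dct_mat(n):
    # Orthonormal DCT-II matrix.
    k, i = np.arange(n)[:, None], np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

# Compress one band: DWT, then DCT + scalar quantization of the LL subband.
rng = np.random.default_rng(1)
band = rng.integers(0, 256, size=(16, 16)).astype(float)
ll, lh, hl, hh = haar_fwd(band)
C = dct_mat(ll.shape[0])
coeffs = C @ ll @ C.T                  # 2-D DCT of the LL subband
step = 8.0                             # scalar quantization step (illustrative)
coeffs_q = np.round(coeffs / step) * step
recon = haar_inv(C.T @ coeffs_q @ C, lh, hl, hh)
```

In a full codec the quantized coefficients would then be scanned and entropy-coded; here the reconstruction simply shows that quantizing only the LL-band DCT coefficients preserves the image to within the quantization error.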
In this paper, a simple medical image compression technique is proposed, based on utilizing the residual of an autoregressive (AR) model along with bit-plane slicing (BPS) to exploit spatial redundancy efficiently. The results showed that the compression performance of the proposed technique improved by about a factor of two on average compared to the traditional autoregressive approach, while preserving image quality, since only the significant bit planes, those contributing most to the image, are retained.
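The AR-residual-plus-bit-plane-slicing idea can be sketched as follows; the first-order left-neighbour predictor and the choice of four retained planes are simplifying assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def residual(img):
    # First-order predictor (a minimal stand-in for the AR model):
    # each pixel is predicted by its left neighbour.
    res = img.astype(np.int32)
    res[:, 1:] = res[:, 1:] - img[:, :-1].astype(np.int32)
    return res                        # values in [-255, 255]

def keep_top_planes(res, n_planes=4):
    # Bit-plane slicing: offset residuals to 9-bit non-negative values,
    # then keep only the n most significant planes.
    shifted = res + 255               # now in [0, 510], i.e. 9 bits
    mask = ~((1 << (9 - n_planes)) - 1)
    return (shifted & mask) - 255

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(32, 32))
sliced = keep_top_planes(residual(img))
```

Because the predictor decorrelates neighbouring pixels, the residual is concentrated near zero, and discarding the least significant planes removes exactly the low-contribution layers the abstract refers to.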
With the rapid development of smart devices, people's lives have become easier, especially for visually disabled or special-needs people. Recent achievements in the fields of machine learning and deep learning enable people to identify and recognise the surrounding environment. In this study, the efficiency and high performance of a deep learning architecture are used to build an image classification system for both indoor and outdoor environments. The proposed methodology starts with collecting two datasets (indoor and outdoor) from different separate sources. In the second step, the collected dataset is split into training, validation, and test sets. The pre-trained GoogleNet and MobileNet-V2 models are trained using the indoor and outdoor sets.
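The dataset-splitting step can be sketched as below; the 70/15/15 ratios are hypothetical, as the abstract does not state the actual proportions, and fine-tuning GoogleNet or MobileNet-V2 itself would be done in a deep learning framework and is omitted:

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=0):
    # Shuffle reproducibly, then partition into train / validation / test.
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Example: 100 image identifiers split into the three subsets.
train, val, test = split_dataset(range(100))
```

Fixing the shuffle seed keeps the three subsets disjoint and reproducible across training runs, which matters when comparing the two pre-trained models on identical data.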
In this work, the effect of chlorinated rubber (additive I), zeolite 3A with chlorinated rubber (additive II), zeolite 4A with chlorinated rubber (additive III), and zeolite 5A with chlorinated rubber (additive IV) on the flammability of epoxy resin was studied at weight ratios of 2, 4, 7, 10, and 12% by preparing films of 130 × 130 × 3 mm in dimensions. Three standard test methods were used to measure flame retardation: ASTM D-2863, ASTM D-635, and ASTM D-3014. The results obtained from these tests indicated that all the additives are effective, and that additive IV has the highest efficiency as a flame retardant.
In this study, a new pyrophosphate technique was validated. The technique was applied to single-nucleotide polymorphisms (SNPs), which were diagnosed using a one-base extension reaction. Three Mycobacterium tuberculosis genes were chosen: rpoB, inhA, and katG. Fifty-four specimens were used in this study: fifty-three confirmed as drug-resistant by the Iraqi Institute of Chest and Respiratory Diseases in Baghdad, and one used as a negative control. In this technique, a specific primer was used in each aliquot, whose short 3′-OH end, complementary to the target base of the gene, was hybridized to the single-stranded DNA template. Then, the Taq polymerase enzyme and one of either α-thio-dATP, dTTP, dGTP, or dCTP
This article investigates how an appropriate chaotic map (Logistic, Tent, Henon, Sine, ...) should be selected for image encryption, taking into consideration its advantages and disadvantages. Does the selection of an appropriate map depend on the image properties? The proposed system shows how relevant properties of the image influence the evaluation of the selected chaotic map. The first section discusses the main principles of chaos theory and its applicability to image encryption, including various sorts of chaotic maps and their mathematics. This research also explores the factors that determine the security and efficiency of such a map. Hence, the approach presents a practical standpoint on the extent to which certain chaotic maps will bec
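As a concrete example of the kind of map under evaluation, the sketch below uses the logistic map in its chaotic regime (r = 3.99) to generate a keystream for a simple XOR cipher; the parameters x0 and r stand in for the secret key and are illustrative only, not the article's scheme:

```python
import numpy as np

def logistic_keystream(n, x0=0.3173, r=3.99):
    # Iterate the logistic map x <- r*x*(1-x) in its chaotic regime
    # and quantize each state to a byte.
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) & 0xFF
    return ks

def xor_encrypt(img, x0=0.3173, r=3.99):
    # XOR with the chaotic keystream; applying the same call decrypts.
    ks = logistic_keystream(img.size, x0, r).reshape(img.shape)
    return img ^ ks

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
enc = xor_encrypt(img)
dec = xor_encrypt(enc)
```

The sensitivity of the orbit to x0 and r is what makes the keystream key-dependent; evaluating how image statistics interact with a given map's orbit is exactly the selection question the article raises.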
In this paper, several algorithms combining Partial Update LMS (PU LMS) methods with a previously proposed algorithm, the New Variable Length LMS (NVLLMS), have been developed. The new sets of proposed algorithms were then applied to an Acoustic Echo Cancellation (AEC) system in order to reduce the number of filter coefficients, decrease the convergence time, and enhance performance in terms of Mean Square Error (MSE) and Echo Return Loss Enhancement (ERLE). These proposed algorithms use the ERLE to control the variation of the filter's coefficient length. In addition, a time-varying step size is used. The total number of coefficients required was reduced by about 18%, 10%, 6%
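For context, a minimal full-update LMS echo canceller is sketched below on a synthetic echo path; the tap count, step size, and echo path are illustrative assumptions, and the partial-update and variable-length mechanisms of the proposed algorithms build on this baseline:

```python
import numpy as np

def lms_echo_cancel(x, d, n_taps=8, mu=0.02):
    # Standard (full-update) LMS: adapt w so that w . x_k tracks the
    # microphone signal d, leaving the residual echo e = d - y.
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for k in range(n_taps - 1, len(d)):
        xk = x[k - n_taps + 1:k + 1][::-1]   # most recent far-end samples
        e[k] = d[k] - w @ xk
        w += mu * e[k] * xk                  # gradient-descent weight update
    return w, e

# Synthetic far-end signal and room echo path (illustrative, not from the paper).
rng = np.random.default_rng(4)
x = rng.standard_normal(4000)                # far-end (loudspeaker) signal
h = np.array([0.5, -0.3, 0.1])               # room echo path
d = np.convolve(x, h)[:len(x)]               # echoed signal at the microphone
w, e = lms_echo_cancel(x, d)
erle = 10 * np.log10(np.mean(d**2) / np.mean(e[-500:]**2))
```

A partial-update variant would adapt only a subset of `w` per iteration, and the NVLLMS-style mechanism would grow or shrink `n_taps` based on the measured ERLE, which is the quantity computed on the last line.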