Artificial intelligence (AI) is entering many fields of life, one of which is biometric authentication. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identifying individuals from palm print images using Orange software, in which a hybrid of AI methods, combining Deep Learning (DL) with traditional Machine Learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep learning model was utilized for image resizing and feature extraction. Finally, different ML classifiers were tested for recognition based on the extracted features. The effectiveness of each classifier was assessed using various performance metrics. The results show that the proposed system works well and all the methods achieved good results; however, the best results were obtained by the Support Vector Machine (SVM) with a linear kernel.
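A minimal sketch of this pipeline, assuming torchvision's pretrained SqueezeNet as a fixed feature extractor and scikit-learn's linear SVM as the classifier; `train_paths` and `labels` are hypothetical inputs, and the 224x224 resize and ImageNet normalization are standard conventions rather than the paper's exact settings:

```python
import torch
import torchvision.transforms as T
from torchvision.models import squeezenet1_1
from sklearn.svm import SVC
from PIL import Image

model = squeezenet1_1(weights="DEFAULT")
model.classifier = torch.nn.Identity()   # drop the 1000-class head, keep features
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),                # resizing step of the pipeline
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract(path):
    """Return a flat SqueezeNet feature vector for one palm print image."""
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return model(x).flatten().numpy()

# X = [extract(p) for p in train_paths]        # train_paths, labels: your data
# clf = SVC(kernel="linear").fit(X, labels)    # the best-performing classifier
```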
Autoimmunity is a philosophical term drawn from the life sciences that refers to the unnatural behaviour of an individual. It arises when the defences of an organism turn against its own tissues. Ordinarily, the immune system protects the body against invading cells by means of white blood cells and the antibodies they produce. Nevertheless, when an autoimmune disease attacks, it causes perilous actions akin to suicide. Philosophically, Jacques Derrida (1930-2004) calls autoimmunity a double suicide, because it harms both the self and the other: the organism disarms its own defences, as the immune system can no longer provide protection. From a literary perspective, Derrida described autoimmunity as deconstruction for over forty years.
Nowadays, still images are used everywhere in the digital world. Shortages of storage capacity and transmission bandwidth make efficient compression solutions essential. A revolutionary mathematical tool, the wavelet transform, has already shown its power in image processing. The major topic of this paper is improving the compression of still images by the multiwavelet transform, based on estimating the multiwavelet coefficients of the high-frequency sub-bands by interpolation instead of sending all multiwavelet coefficients. Good results were obtained when comparing the proposed approach with other compression methods.
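A minimal sketch of the coefficient-estimation idea, using pywt's scalar Haar wavelet as a stand-in for a true multiwavelet transform (which pywt does not provide): the finest-level detail sub-bands are dropped at the encoder and re-estimated at the decoder by interpolating the coarser ones; the 0.5 decay factor and the random stand-in image are assumptions:

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

img = np.random.default_rng(0).random((256, 256))   # stand-in for a real image

cA2, lvl2, lvl1 = pywt.wavedec2(img, "haar", level=2)

# Encoder: transmit only cA2 and the level-2 details, dropping level-1 details.
# Decoder: estimate the dropped level-1 details by interpolating level-2 ones.
est1 = tuple(zoom(c, 2, order=1) * 0.5 for c in lvl2)   # 0.5: assumed decay

recon = pywt.waverec2([cA2, lvl2, est1], "haar")
print(np.abs(recon - img).mean())                # rough reconstruction error
```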
The tasseled cap transformation (TCT) is a useful tool for compressing spectral data into a few bands associated with physical scene characteristics, with minimal information loss. The TCT was originally derived for the Landsat Multi-Spectral Scanner (MSS) launched in 1972 and has been widely adapted to modern sensors. In this study, we derived the TCT coefficients for the Operational Land Imager (OLI) sensor on board Landsat-8, using a scene acquired on 28 September 2013. A new classification method is presented; the method is based on dividing the scatterplot between the Greenness and the Brightness of the TCT into regions corresponding to their reflectance values. The results of this paper suggest that the TCT coefficients derived from the OLI bands in September are the
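As an illustration of how such coefficients are applied, the sketch below computes Brightness and Greenness from a six-band OLI reflectance stack; the coefficient values are the published Landsat-8 set of Baig et al. (2014), used as stand-ins for the coefficients derived in this paper, and the stack itself is random placeholder data:

```python
import numpy as np

# Rows: Brightness, Greenness; columns: OLI bands 2-7 (blue .. SWIR-2).
# Values: Baig et al. (2014), stand-ins for the coefficients derived here.
TCT = np.array([
    [ 0.3029,  0.2786,  0.4733, 0.5599, 0.5080,  0.1872],   # Brightness
    [-0.2941, -0.2430, -0.5424, 0.7276, 0.0713, -0.1608],   # Greenness
])

def tct(bands):
    """bands: (6, rows, cols) reflectance stack -> (2, rows, cols) B/G planes."""
    bg = TCT @ bands.reshape(6, -1)
    return bg.reshape(2, *bands.shape[1:])

bands = np.random.rand(6, 100, 100)       # placeholder reflectance scene
brightness, greenness = tct(bands)

# Toy version of the scatterplot partition; thresholds are arbitrary placeholders.
cls = np.zeros(brightness.shape, dtype=np.uint8)
cls[greenness > 0.1] = 1                              # vegetation-like region
cls[(greenness <= 0.1) & (brightness > 0.5)] = 2      # bright-soil-like region
```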
The denoising of a natural image corrupted by Gaussian noise is a classic problem in signal and image processing. Much work has been done in the field of wavelet thresholding, but most of it has focused on statistical modeling of wavelet coefficients and the optimal choice of thresholds. This paper describes a new method for the suppression of noise in images that fuses the stationary wavelet denoising technique with an adaptive Wiener filter. The Wiener filter is applied to the image reconstructed from the approximation coefficients only, while the thresholding technique is applied to the detail coefficients of the transform; the final denoised image is obtained by combining the two results. The proposed method was applied using
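One reasonable reading of this fusion, sketched with pywt's stationary wavelet transform and scipy's adaptive Wiener filter: the coarsest approximation sub-band is Wiener-filtered, the detail sub-bands are soft-thresholded with the universal threshold, and the inverse transform combines the two; the db4 wavelet, two levels, and 5x5 window are assumptions:

```python
import numpy as np
import pywt
from scipy.signal import wiener

def denoise(noisy, wavelet="db4", level=2):
    """noisy: 2-D float array whose sides are divisible by 2**level."""
    coeffs = pywt.swt2(noisy, wavelet, level=level)    # [(cA, (cH, cV, cD)), ...]
    out = []
    for i, (cA, (cH, cV, cD)) in enumerate(coeffs):
        sigma = np.median(np.abs(cD)) / 0.6745         # robust noise estimate
        t = sigma * np.sqrt(2 * np.log(cD.size))       # universal threshold
        if i == 0:                                     # coarsest level: the only
            cA = wiener(cA, mysize=5)                  # cA used in reconstruction
        out.append((cA, tuple(pywt.threshold(c, t, mode="soft")
                              for c in (cH, cV, cD))))
    return pywt.iswt2(out, wavelet)

# denoised = denoise(noisy)   # noisy: your Gaussian-corrupted image
```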
A new de-blurring technique is proposed in order to reduce or remove blur in images. The proposed filter was designed from the Lagrange interpolation calculation, adjusted by fuzzy rules, and supported by a wavelet decomposition technique. The proposed Wavelet Lagrange Fuzzy filter gives good results for fully and partially blurred regions in images.
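The abstract leaves the fuzzy rules and wavelet stage unspecified; the sketch below shows only the Lagrange building block, estimating a pixel from its row neighbours with a cubic Lagrange interpolant (positions and values are made-up examples):

```python
import numpy as np

def lagrange_estimate(xs, ys, x):
    """Evaluate the Lagrange polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)   # Lagrange basis polynomial L_i(x)
        total += w * yi
    return total

# Estimate a degraded pixel at x = 2 from its four row neighbours.
print(lagrange_estimate([0, 1, 3, 4], np.array([10.0, 14.0, 18.0, 22.0]), 2))
```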
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images. It is a challenging task to abstract the visual features of images with the support of textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shapes. In this paper, an effective CBIC technique is presented which uses texture and statistical features of the images. The statistical features, or moments of colors (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected in a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering, as sketched below.
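A minimal sketch of the colour-moment features named above, flattened into the one-dimensional array that the GA would then cluster; the RGB layout and the random placeholder image are assumptions:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def colour_moments(img):
    """img: (H, W, 3) RGB array -> 15 features (5 moments x 3 channels)."""
    feats = []
    for ch in range(3):
        v = img[..., ch].astype(float).ravel()
        feats += [v.mean(), skew(v), v.std(), kurtosis(v), v.var()]
    return np.array(feats)          # the 1-D array handed to the GA clusterer

img = np.random.randint(0, 256, (64, 64, 3))   # placeholder image
print(colour_moments(img))
```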
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital media.
The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e. a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method of rotating each binary code word in this codebook through 90° to 270° in steps of 90°. Then, we organized each code word depending on its angle into four types of binary codebooks (i.e. Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s
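A minimal sketch of the rotation step, assuming 4x4 binary code words and numpy's rot90 for the 90°, 180°, and 270° versions; the grouping into a canonical orientation is an illustrative reading of how angle-sorted sub-codebooks shrink the search:

```python
import numpy as np

def rotations(word):
    """The 90-, 180-, and 270-degree rotations of a binary code word."""
    return [np.rot90(word, k) for k in range(1, 4)]

def canonical(word):
    """Pick one representative orientation so matching searches fewer words."""
    return min([word] + rotations(word), key=lambda w: tuple(w.ravel()))

word = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=np.uint8)
print(canonical(word))
```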
The principal goal guiding any designed encryption algorithm must be security against unauthorized attackers. Within the last decade, there has been a vast increase in the communication of digital computer data in both the private and public sectors. Much of this information has significant value and therefore requires protection by a strong cipher algorithm. Such an algorithm defines the mathematical steps required to transform data into a cryptographic cipher and also to transform the cipher back to the original form. Performance and security level are the main characteristics that differentiate one encryption algorithm from another. This paper suggests a new technique to enhance the performance of the Data E
Although Wiener filtering is the optimal tradeoff between inverse filtering and noise smoothing, when the blurring filter is singular, Wiener filtering actually amplifies the noise. This suggests that a denoising step is needed to remove the amplified noise. A wavelet-based denoising scheme provides a natural technique for this purpose.
In this paper, a new image restoration scheme is proposed; the scheme contains two separate steps: Fourier-domain inverse filtering and wavelet-domain image denoising. The first stage is Wiener filtering of the input image; the filtered image is then input to an adaptive-threshold wavelet
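A two-stage sketch in that spirit, using scipy's local adaptive Wiener filter as a stand-in for the Fourier-domain Wiener deconvolution of the paper (which would also require the blur kernel), followed by soft-thresholding of the wavelet detail coefficients; the db4 wavelet and 5x5 window are assumptions:

```python
import numpy as np
import pywt
from scipy.signal import wiener

def restore(degraded, wavelet="db4"):
    stage1 = wiener(degraded, mysize=5)            # stage 1: Wiener filtering
    cA, (cH, cV, cD) = pywt.dwt2(stage1, wavelet)  # stage 2: wavelet denoising
    sigma = np.median(np.abs(cD)) / 0.6745         # robust noise estimate
    t = sigma * np.sqrt(2 * np.log(degraded.size)) # adaptive (universal) threshold
    details = tuple(pywt.threshold(c, t, mode="soft") for c in (cH, cV, cD))
    return pywt.idwt2((cA, details), wavelet)

# restored = restore(degraded)   # degraded: 2-D float array
```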