Artificial intelligence (AI) is entering many fields of life, one of which is biometric authentication. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identify individuals from palm print images using Orange software, in which a hybrid of AI methods, Deep Learning (DL) and traditional Machine Learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep learning model was utilized for image resizing and feature extraction. Finally, different ML classifiers were tested for recognition based on the extracted features, and the effectiveness of each classifier was assessed using various performance metrics. The results show that the proposed system performs well and all methods achieved good results; however, the best results were obtained with the Support Vector Machine (SVM) using a linear kernel.
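For illustration, a minimal sketch of such a pipeline outside Orange (using PyTorch's SqueezeNet as a fixed feature extractor and a linear-kernel SVM from scikit-learn) might look as follows; the dataset layout, batch size, split, and preprocessing parameters are assumptions rather than the paper's settings.

```python
# Hedged sketch: SqueezeNet features + linear SVM, not the authors' Orange workflow.
import torch
import torchvision.transforms as T
from torchvision import datasets, models
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pre-processing: resize palm print images to SqueezeNet's expected input size.
transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("palmprints/", transform=transform)  # assumed folder layout: one subfolder per subject
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=False)

# Feature extraction: pretrained SqueezeNet without its classifier head.
squeezenet = models.squeezenet1_1(weights="IMAGENET1K_V1").eval()
features, labels = [], []
with torch.no_grad():
    for x, y in loader:
        f = squeezenet.features(x)                                   # convolutional feature maps
        f = torch.nn.functional.adaptive_avg_pool2d(f, 1).flatten(1)  # one vector per image
        features.append(f)
        labels.append(y)
X = torch.cat(features).numpy()
y = torch.cat(labels).numpy()

# Classification: linear-kernel SVM, the best performer reported in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```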
Machine learning (ML) is a key component within the broader field of artificial intelligence (AI) that employs statistical methods to empower computers with the ability to learn and make decisions autonomously, without the need for explicit programming. It is founded on the concept that computers can acquire knowledge from data, identify patterns, and draw conclusions with minimal human intervention. The main categories of ML include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Supervised learning involves training models using labelled datasets and comprises two primary forms: classification and regression. Regression is used for continuous output, while classification is employed for discrete, categorical output.
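As a toy illustration of these two forms of supervised learning, the following sketch fits a classifier on a categorical target and a regressor on a continuous one; the scikit-learn bundled datasets are chosen purely for illustration and are not related to the paper's data.

```python
# Minimal illustration of classification (discrete target) vs. regression (continuous target).
from sklearn.datasets import load_iris, load_diabetes
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: predict a discrete class label (iris species).
Xc, yc = load_iris(return_X_y=True)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xc_tr, yc_tr)
print("classification accuracy:", clf.score(Xc_te, yc_te))

# Regression: predict a continuous value (disease progression score).
Xr, yr = load_diabetes(return_X_y=True)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = LinearRegression().fit(Xr_tr, yr_tr)
print("regression R^2:", reg.score(Xr_te, yr_te))
```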
Interface bonding between asphalt layers has been a topic of international investigation over the last thirty years. In this context, a number of researchers have developed their own techniques and used them to examine the characteristics of pavement interfaces. It is clear that test findings will not always be comparable, owing to the lack of a globally standardized methodology for interface bonding. Moreover, several studies have shown that factors such as temperature, loading conditions, and materials have an impact on interface properties. This study aims to address this problem by thoroughly investigating interface bond testing that might serve as a basis for a uniform strategy. First, a general explanation of how the bonding strength
Significant advances in automated glaucoma detection have been made through the employment of Machine Learning (ML) and Deep Learning (DL) methods, an overview of which is provided in this paper. What sets the current literature review apart is its exclusive focus on these techniques for glaucoma detection, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for filtering the selected papers. To achieve this, an advanced search was conducted in the Scopus database, specifically looking for research papers published in 2023 with the keywords "glaucoma detection", "machine learning", and "deep learning". Among the many papers found, those focusing
In this paper, a visible image watermarking algorithm based on the biorthogonal wavelet transform is proposed. A binary watermark (logo) is embedded in a grayscale host image using the coefficient bands of the host image in the biorthogonal transform domain. The logo can be embedded in the top-left corner or spread over the whole host image. A scaling value (α) in the frequency domain is introduced to control the perceptibility of the watermark in the watermarked image. Experimental results show that the algorithm yields a visible logo with no losses in the recovery process of the original image, as supported by the calculated PSNR values. Good robustness against attempts to remove the watermark was also shown.
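As a rough illustration of this embedding idea (not the paper's exact algorithm), the sketch below uses PyWavelets with a biorthogonal wavelet to add a binary logo to the approximation band of a host image; the file names, the wavelet "bior2.2", and α = 0.3 are assumptions, and the logo is assumed to fit inside the approximation band.

```python
# Hedged sketch of visible watermarking in the biorthogonal wavelet domain.
import numpy as np
import pywt
from PIL import Image

alpha = 0.3  # scaling value controlling how visible the logo appears

host = np.asarray(Image.open("host.png").convert("L"), dtype=float)
logo = (np.asarray(Image.open("logo.png").convert("L")) > 127).astype(float)  # binary logo

# Biorthogonal wavelet transform of the host image.
LL, (LH, HL, HH) = pywt.dwt2(host, "bior2.2")

# Place the logo in the top-left corner of the approximation band.
h, w = logo.shape
LL[:h, :w] = LL[:h, :w] + alpha * LL[:h, :w].max() * logo

# Inverse transform gives the visibly watermarked image.
watermarked = pywt.idwt2((LL, (LH, HL, HH)), "bior2.2")
Image.fromarray(np.clip(watermarked, 0, 255).astype(np.uint8)).save("watermarked.png")
```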
WA Shukur, FA Abdullatif, Ibn Al-Haitham Journal For Pure and Applied Sciences, 2011. With the widespread use of the internet and the increasing value of information, steganography has become very important for communication. Over the years, different types of digital cover have been used to hide information as a covert channel; images are among the most important digital covers used in steganography because they are widely exchanged on the internet without raising suspicion.
The basic solution for overcoming the difficult issues related to the huge size of digital images is to employ image compression techniques that reduce image size for efficient storage and fast transmission. In this paper, a new pixel-based scheme is proposed for grayscale image compression that implicitly utilizes a hybrid of a spatial-modelling technique based on minimum residuals and the transform technique of the Discrete Wavelet Transform (DWT), and that also mixes lossless and lossy techniques to ensure high performance in terms of compression ratio and quality. The proposed technique has been applied to a set of standard test images, and the results obtained are significantly encouraging compared with the Joint Photographic Experts Group (JPEG) standard.
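The following is a rough, simplified sketch of this kind of hybrid scheme: a spatial predictor produces residuals, the residuals are wavelet transformed, and small detail coefficients are dropped (the lossy part). It is an illustrative interpretation under assumed parameters (left-neighbour prediction, Haar wavelet, threshold 5.0), not the authors' exact algorithm.

```python
# Hedged sketch: spatial prediction residuals + DWT thresholding for grayscale compression.
import numpy as np
import pywt
from PIL import Image

img = np.asarray(Image.open("test.png").convert("L"), dtype=float)

# Spatial modelling: predict each pixel from its left neighbour and keep the residual.
pred = np.zeros_like(img)
pred[:, 1:] = img[:, :-1]
residual = img - pred

# Transform stage: 2-level DWT of the residual, then hard-threshold the detail bands.
coeffs = pywt.wavedec2(residual, "haar", level=2)
thr = 5.0  # assumed threshold controlling the lossy/quality trade-off
coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="hard") for d in band) for band in coeffs[1:]
]

# Reconstruction: inverse DWT of the residual, then undo the prediction column by column.
rec_res = pywt.waverec2(coeffs, "haar")[: img.shape[0], : img.shape[1]]
rec = np.zeros_like(img)
rec[:, 0] = rec_res[:, 0]
for j in range(1, img.shape[1]):
    rec[:, j] = rec[:, j - 1] + rec_res[:, j]

mse = np.mean((img - rec) ** 2)
print("PSNR (dB):", 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf"))
```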
JPEG is the most popular image compression and encoding technique and is widely used in many applications (images, videos, and 3D animations). Meanwhile, researchers are very interested in developing this widely adopted technique to compress images at higher compression ratios while preserving image quality as much as possible. For this reason, in this paper we introduce a developed JPEG based on a fast DCT that removes most of the zero coefficients while keeping their positions in the transformed block. Additionally, arithmetic coding is applied rather than Huffman coding. The results show that the proposed developed JPEG algorithm achieves better image quality than the traditional JPEG technique.
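The sketch below illustrates the zero-removal idea on a single 8x8 block: the block is DCT-transformed and quantized, then only the non-zero coefficients and their positions are kept. The uniform quantization step and the synthetic input block are placeholders, and the final arithmetic coder is only indicated by a comment; this is not the paper's implementation.

```python
# Hedged sketch: block DCT, quantization, and keeping only non-zero coefficients with positions.
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    # Separable 2-D DCT-II, the transform JPEG is built on.
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def encode_block(block, q=16):
    coeffs = np.round(dct2(block) / q).astype(int)   # coarse uniform quantization (assumed)
    positions = np.flatnonzero(coeffs)               # indices of the surviving coefficients
    values = coeffs.flat[positions]
    # An arithmetic coder would compress (positions, values) here instead of Huffman coding.
    return positions, values

# Example: encode one synthetic 8x8 block.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
positions, values = encode_block(block)
print(len(values), "non-zero coefficients kept out of 64")
```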