Artificial intelligence (AI) is now entering many areas of everyday life, one of which is biometric authentication. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identifying individuals from palm print images using the Orange software, in which a hybrid of AI methods, Deep Learning (DL) and traditional Machine Learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep learning model was utilized for image resizing and feature extraction. Finally, different ML classifiers were tested for recognition based on the extracted features. The effectiveness of each classifier was assessed using various performance metrics. The results show that the proposed system works well and that all the methods achieved good results; however, the best results were obtained by the Support Vector Machine (SVM) with a linear kernel.
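The abstract above describes a SqueezeNet-features-plus-linear-SVM pipeline built in Orange. The following is a minimal sketch of the same idea using torchvision and scikit-learn as stand-ins for Orange; the dataset path and folder-per-subject layout are hypothetical placeholders, not details from the paper.

```python
# Sketch of the SqueezeNet-features + linear-SVM pipeline described above.
# The paper uses Orange; torchvision and scikit-learn stand in here, and the
# palm-print dataset path/layout ("palmprints/") is a hypothetical placeholder.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Pre-processing: resize to SqueezeNet's expected input and normalise.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Feature extraction: pretrained SqueezeNet, classifier head unused.
squeezenet = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
squeezenet.eval()

def extract_features(dataset_dir="palmprints/"):  # one folder per subject (assumed layout)
    data = ImageFolder(dataset_dir, transform=preprocess)
    loader = DataLoader(data, batch_size=32, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            f = squeezenet.features(x)                                   # convolutional feature maps
            f = torch.flatten(torch.nn.functional.adaptive_avg_pool2d(f, 1), 1)
            feats.append(f)
            labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Classification: linear-kernel SVM, the best performer reported above.
X, y = extract_features()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```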
NeighShrink is an efficient image denoising algorithm based on the discrete wavelet transform (DWT). Its disadvantage is that it uses a suboptimal universal threshold and an identical neighbouring window size in all wavelet subbands. Dengwen and Wengang proposed an improved method, which can determine an optimal threshold and neighbouring window size for every subband using Stein's unbiased risk estimate (SURE). Its denoising performance is considerably superior to NeighShrink and also outperforms SURE-LET, an up-to-date denoising algorithm based on SURE. In this paper, different wavelet transform families are used with this improved method; the results show that the Haar wavelet has the lowest performance among the tested wavelet families.
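For reference, below is a minimal sketch of baseline NeighShrink using PyWavelets: each detail coefficient is shrunk by max(0, 1 - lambda^2 / S^2), where S^2 is the sum of squared coefficients over a small neighbouring window and lambda is the universal threshold. The SURE-optimised per-subband threshold and window of the improved method are not reproduced; the window size and noise estimate here are simplified assumptions.

```python
# Minimal sketch of baseline NeighShrink with the universal threshold.
# The SURE-based per-subband optimisation of the improved method is not shown.
import numpy as np
import pywt

def neighshrink(noisy, wavelet="db8", level=3, win=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail subband (MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    lam2 = (sigma * np.sqrt(2.0 * np.log(noisy.size))) ** 2   # universal threshold squared
    pad = win // 2
    new_coeffs = [coeffs[0]]                                   # keep the approximation band
    for details in coeffs[1:]:
        shrunk = []
        for d in details:
            padded = np.pad(d, pad, mode="reflect")
            s2 = np.zeros_like(d)
            # Sum of squared coefficients over the neighbouring window.
            for i in range(win):
                for j in range(win):
                    s2 += padded[i:i + d.shape[0], j:j + d.shape[1]] ** 2
            shrink = np.clip(1.0 - lam2 / np.maximum(s2, 1e-12), 0.0, None)
            shrunk.append(d * shrink)
        new_coeffs.append(tuple(shrunk))
    return pywt.waverec2(new_coeffs, wavelet)
```

Swapping the `wavelet` argument (e.g. "haar", "db8", "sym8") is the kind of family comparison the abstract describes.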
This research is a result of other studies made about the Iraqi public and its relationship with different state institutions; until recently, such studies were almost non-existent. The main characteristic that distinguishes scientific research is that it involves a specific problem that needs to be studied and analysed from multiple aspects. Identifying the problem means limiting the topic to what the researcher wants to deal with, rather than to the topics the title may suggest but which the researcher does not want to deal with. The problem of this research is the absence of thoughtful and planned scientific programs to build a positive mental image of the institutions of the modern state in general and the House of Representatives…
This study aims to describe the image of women and to reveal how they are presented in TV series. The research is based on the survey method, using content analysis as its tool. The research sample consists of the TV series produced by the IMN and broadcast in 2014; Margaret Gallagher's model was used to analyse the content of the series in accordance with frame analysis theory.
The study found that women are under-represented compared with men in Iraqi TV drama. It also found that the series presented women within personal, social, political, and economic frames in a stereotyping manner, focusing on the characteristics traditionally attributed to women, such as showing her obedient to the…
Background and Aim: Due to the rapid growth of data communication and multimedia system applications, security has become a critical issue in the communication and storage of images. This study aims to improve encryption and decryption for various types of images by decreasing time consumption and strengthening security. Methodology: An algorithm is proposed for encrypting images based on the Carlisle Adams and Stafford Tavares (CAST) block cipher algorithm with 3D and 2D logistic maps. A chaotic function that increases the randomness in the encrypted data and images, thereby breaking the relation sequence through the encryption procedure, is introduced. The time is decreased by using three secure and private S-Boxes rather than using si…
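As a rough illustration of the chaotic component only, the sketch below uses a 1D logistic map keystream for diffusion and a logistic-map-driven pixel permutation for confusion. The CAST block cipher, the S-Box construction, and the exact 3D/2D maps of the proposed algorithm are not reproduced; the map parameters and key handling are illustrative assumptions.

```python
# Sketch of a chaotic-map confusion/diffusion step (not the paper's full CAST-based scheme).
import numpy as np

def logistic_keystream(x0, r, n):
    """1D logistic map x_{k+1} = r * x_k * (1 - x_k), quantised to bytes."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1.0 - x)
        xs[k] = x
    return (xs * 255).astype(np.uint8)

def chaotic_permutation(x0, r, n):
    """Pixel shuffle order obtained by sorting a logistic-map sequence."""
    vals = logistic_keystream(x0, r, n).astype(np.float64) + np.arange(n) * 1e-9
    return np.argsort(vals)

def encrypt(image, key=(0.3141, 3.9999)):          # illustrative key, not from the paper
    flat = image.astype(np.uint8).ravel()
    x0, r = key
    perm = chaotic_permutation(x0, r, flat.size)    # confusion: scramble pixel positions
    ks = logistic_keystream(1.0 - x0, r, flat.size) # diffusion: XOR keystream
    cipher = flat[perm] ^ ks
    return cipher.reshape(image.shape), perm

def decrypt(cipher, perm, key=(0.3141, 3.9999)):
    x0, r = key
    ks = logistic_keystream(1.0 - x0, r, cipher.size)
    scrambled = cipher.ravel() ^ ks                 # undo the XOR keystream
    flat = np.empty_like(scrambled)
    flat[perm] = scrambled                          # undo the permutation
    return flat.reshape(cipher.shape)
```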
The novels addressed in this research, including those with an ideological and political orientation, carry a negative image of the Kurds without any attempt at understanding or empathy, or at separating politics from the people. These novels distorted the image, speaking like the tongue of the former authority, such as Freedom Heads Bagged, Happy Sorrows Tuesdays by Jassim Alrassif, and Under the Dogs' Skies by Salah Salah. The rest of the novels (Life Is a Moment by Salam Ibrahim, The Country Night by Jassim Halawi, and The Rib by Hameed Aleqabi) contain scenes carrying a negative image among many other social images, some positive, and can be described as neutral novels. We can…
A common approach to color image compression starts by transforming the red, green, and blue (RGB) color model into a desired color model, then applying compression techniques, and finally transforming the results back into the RGB model. In this paper, a new color image compression method based on multilevel block truncation coding (MBTC) and vector quantization is presented. By exploiting the human visual system's response to color, a bit allocation process is implemented to distribute the bits for encoding in a more effective way. To improve the performance efficiency of vector quantization (VQ), modifications have been implemented. To combine the simple computational and edge-preservation properties of MBTC with high c…
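As background for the MBTC component, the sketch below shows classic single-level block truncation coding on one channel: each block keeps a 1-bit mask plus two reconstruction levels chosen to preserve the block mean and variance. The multilevel extension, the colour bit-allocation, and the VQ stage described above are not reproduced here.

```python
# Sketch of classic single-level block truncation coding (BTC) on a greyscale channel.
import numpy as np

def btc_encode_block(block):
    m = block.mean()
    mask = block >= m                      # 1 bit per pixel
    q = mask.sum()
    n = block.size
    if q in (0, n):                        # flat block: both levels equal the mean
        return mask, m, m
    sd = block.std()
    a = m - sd * np.sqrt(q / (n - q))      # low reconstruction level
    b = m + sd * np.sqrt((n - q) / q)      # high reconstruction level
    return mask, a, b

def btc_decode_block(mask, a, b):
    return np.where(mask, b, a)

def btc(image, bs=4):
    h, w = image.shape
    out = image.astype(float).copy()       # border remainder is left unchanged
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            mask, a, b = btc_encode_block(image[i:i+bs, j:j+bs].astype(float))
            out[i:i+bs, j:j+bs] = btc_decode_block(mask, a, b)
    return out
```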
Cryptography can be thought of as a toolbox in which potential attackers gain access to various computing resources and technologies to try to compute key values. In modern cryptography, the strength of the encryption algorithm is determined only by the size of the key. Therefore, our goal is to create a strong key value with a minimum bit length that will be useful in lightweight encryption. Using elliptic curve cryptography (ECC) with a Rubik's cube and image density, the image colors are combined and distorted, and by using the chaotic logistic map and image density with a secret key, the Rubik's cubes for the image are encrypted, yielding an image that is secure against attacks. ECC itself is a powerful algorithm that generates a pair of p…
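To illustrate the Rubik's-cube-style scrambling idea only, the sketch below cyclically rotates image rows and columns by amounts drawn from a logistic-map sequence. The ECC key generation, the image-density keying, and the exact scheme of the abstract are not reproduced; the key parameters are illustrative assumptions.

```python
# Sketch of Rubik's-cube-style row/column scrambling driven by a logistic map.
import numpy as np

def logistic_shifts(x0, r, count, modulo):
    x, shifts = x0, []
    for _ in range(count):
        x = r * x * (1.0 - x)
        shifts.append(int(x * 1e6) % modulo)
    return shifts

def rubik_scramble(image, key=(0.7, 3.99), rounds=2, inverse=False):
    img = image.copy()
    h, w = img.shape[:2]
    x0, r = key
    row_shifts = logistic_shifts(x0, r, h, w)
    col_shifts = logistic_shifts(1.0 - x0, r, w, h)
    sign = -1 if inverse else 1
    for _ in range(rounds):
        if not inverse:
            for i in range(h):                      # rotate each row like a cube layer
                img[i] = np.roll(img[i], sign * row_shifts[i], axis=0)
            for j in range(w):                      # then rotate each column
                img[:, j] = np.roll(img[:, j], sign * col_shifts[j], axis=0)
        else:                                       # undo columns first, then rows
            for j in range(w):
                img[:, j] = np.roll(img[:, j], sign * col_shifts[j], axis=0)
            for i in range(h):
                img[i] = np.roll(img[i], sign * row_shifts[i], axis=0)
    return img
```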
This research includes automated lineament extraction using the PCI Geomatica program, based on a satellite image, and lineament analysis using a GIS program. The analysis included density analysis, length-density analysis, and intersection-density analysis. When the slope map for the study area was calculated, a relationship was found between slope and lineament density.
Lineament density increases in regions that have high slope values, showing that lineaments play an important role in the classification process, as they isolate one class from another; this was observed clearly in Iranian territory. The results also show that one of the lineaments hits the shoulders of the Galal Badra dam and the areas surrounding the dam, so this should be taken into consideration…
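As a rough illustration of the length-density analysis mentioned above, the sketch below accumulates lineament length into a regular grid and reports kilometres of lineament per square kilometre per cell. The PCI Geomatica extraction and the GIS density tools are not reproduced; the cell size and sampling step are illustrative assumptions.

```python
# Sketch of a lineament length-density grid from polylines of map coordinates.
import numpy as np

def length_density(lineaments, extent, cell=500.0, step=10.0):
    """lineaments: list of (N, 2) arrays of map coordinates in metres.
    extent: (xmin, ymin, xmax, ymax). Returns km of lineament per km^2 per cell."""
    xmin, ymin, xmax, ymax = extent
    nx = int(np.ceil((xmax - xmin) / cell))
    ny = int(np.ceil((ymax - ymin) / cell))
    grid = np.zeros((ny, nx))
    for line in lineaments:
        for (x0, y0), (x1, y1) in zip(line[:-1], line[1:]):
            seg_len = np.hypot(x1 - x0, y1 - y0)
            n = max(int(seg_len / step), 1)
            for t in np.linspace(0.0, 1.0, n, endpoint=False):
                xi = int((x0 + t * (x1 - x0) - xmin) // cell)
                yi = int((y0 + t * (y1 - y0) - ymin) // cell)
                if 0 <= xi < nx and 0 <= yi < ny:
                    grid[yi, xi] += seg_len / n     # metres contributed to this cell
    return grid / 1000.0 / (cell / 1000.0) ** 2     # convert to km per km^2
```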
Image compression is one of the data compression types applied to digital images in order to reduce their high cost for storage and/or transmission. Image compression algorithms may take the benefit of visual sensitivity and statistical properties of image data to deliver superior results in comparison with generic data compression schemes, which are used for other digital data. In the first approach, the input image is divided into blocks, each of which is 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are converted first into a string; then, encoded by using a lossless and dictionary-based algorithm known as arithmetic coding. The more occurrence of the pixels values is codded in few bits compare with pixel values of less occurre
Groupwise non-rigid image alignment is a difficult non-linear optimization problem involving many parameters and often large datasets. Previous methods have explored various metrics and optimization strategies. Good results have previously been achieved with simple metrics that require complex optimization, often with many unintuitive parameters needing careful tuning for each dataset. In this chapter, the problem is restructured to use a simpler, iterative optimization algorithm with very few free parameters. The warps are refined using an iterative Levenberg-Marquardt minimization to the mean, based on updating the locations of a small number of points and incorporating a stiffness constraint. This optimization approach is eff…
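To make the "minimise to the mean with a stiffness constraint" idea concrete, the sketch below refines each example's control points with a Levenberg-Marquardt step toward the current mean shape, adding a simple penalty that keeps neighbouring points moving together. The image warping and intensity terms of the full method are not reproduced; the residual design and parameters here are illustrative assumptions.

```python
# Sketch of iterative Levenberg-Marquardt refinement of control points toward the mean.
import numpy as np
from scipy.optimize import least_squares

def refine_to_mean(point_sets, stiffness=1.0, iterations=5):
    """point_sets: array of shape (n_examples, n_points, 2)."""
    pts = point_sets.astype(float).copy()
    for _ in range(iterations):
        mean_shape = pts.mean(axis=0)                   # current group mean shape
        for k in range(len(pts)):
            x0 = pts[k].ravel()

            def residuals(x, base=pts[k].copy(), mean=mean_shape):
                p = x.reshape(-1, 2)
                data = (p - mean).ravel()               # match the mean shape
                move = p - base
                stiff = stiffness * (move[1:] - move[:-1]).ravel()  # stiffness constraint
                return np.concatenate([data, stiff])

            sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt step
            pts[k] = sol.x.reshape(-1, 2)
    return pts
```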