Iris recognition occupies an important rank among biometric modalities as a result of its accuracy and efficiency. The aim of this paper is to propose an improved iris identification system based on the fusion of the scale-invariant feature transform (SIFT) and the local binary pattern (LBP) for feature extraction. Several steps were applied. Firstly, each input image was converted to grayscale. Secondly, the iris was localized using the circular Hough transform. Thirdly, the iris region was normalized from Cartesian to polar coordinates using Daugman's rubber sheet model, followed by histogram equalization to enhance the iris region. Finally, the features were extracted using SIFT and LBP. The sigma and threshold values that achieved the highest recognition rate were used for feature extraction. The system was implemented in MATLAB 2013, and matching was performed using the city block distance. The iris recognition system was built using iris images of 30 individuals from the CASIA v4.0 database; each individual has 20 captures for the left and right eyes, for a total of 600 images. The main findings show that the recognition rates of the proposed system are 98.67% for left eyes and 96.66% for right eyes among the thirty subjects.
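The final matching step, nearest-neighbour comparison under the city block (Manhattan, L1) distance, can be sketched as follows. The flat feature-vector template format and the identity labels are illustrative assumptions, not the paper's actual data structures:

```python
def city_block_distance(a, b):
    """City block (Manhattan, L1) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def match_iris(probe, gallery):
    """Return the gallery identity whose stored template is closest to the probe.

    `gallery` maps an identity label to its feature vector; both the labels
    and the flat-vector template format are assumptions made for illustration.
    """
    best_id, best_dist = None, float("inf")
    for identity, template in gallery.items():
        d = city_block_distance(probe, template)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id, best_dist
```

For example, with a gallery `{"subject_01": [1, 2, 3], "subject_02": [4, 5, 6]}`, a probe `[1, 2, 4]` matches `subject_01` at distance 1.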
Voting is an important procedure in democratic societies in different countries, including Iraq. Electronic voting (e-voting) is becoming more prevalent because it reduces administrative costs and burdens. Existing voting systems suffer from many limitations that affect the electoral process, for example fraud, tampering with ballot boxes, long delays in announcing results, and the difficulty of reaching polling stations. Over the last decade, blockchain and smart contract technologies have gained widespread adoption in various sectors, such as cryptocurrencies, finance, banking, and most notably e-voting systems. If utilized properly, these technologies demonstrate promising properties, such as security, privacy, and transparency…
Image compression is a serious issue in computer storage and transmission; it simply makes efficient use of the redundancy embedded within an image itself and, in addition, may exploit human vision or perception limitations to reduce imperceivable information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with multiresolution base and thresholding techniques, and the second stage incorporates near-lossless compression…
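The model-plus-residual idea behind polynomial coding can be illustrated with a first-order (planar) polynomial fitted to an image block. The block size, the first-order model, and the least-squares coefficient formulas are illustrative assumptions, not the paper's exact scheme:

```python
def poly_model_block(block):
    """Fit a first-order polynomial model a0 + a1*(x-xc) + a2*(y-yc)
    to a square image block and return its coefficients and residual.

    Illustrative sketch of the modelling-plus-residual concept only;
    the coefficients are the standard least-squares / moment estimates.
    """
    n = len(block)                      # block is an n x n list of rows
    xc = yc = (n - 1) / 2.0             # block centre
    vals = [(x, y, block[y][x]) for y in range(n) for x in range(n)]
    a0 = sum(v for _, _, v in vals) / (n * n)
    sx = sum((x - xc) ** 2 for x, _, _ in vals)
    sy = sum((y - yc) ** 2 for _, y, _ in vals)
    a1 = sum((x - xc) * v for x, _, v in vals) / sx
    a2 = sum((y - yc) * v for _, y, v in vals) / sy
    # Residual = actual pixel minus the fitted planar model
    residual = [[block[y][x] - (a0 + a1 * (x - xc) + a2 * (y - yc))
                 for x in range(n)] for y in range(n)]
    return (a0, a1, a2), residual
```

A perfectly planar block such as `[[1, 2], [3, 4]]` yields an all-zero residual, so only the three coefficients need to be coded; thresholding the residual is what makes the scheme lossy or near-lossless.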
A new modified differential evolution algorithm, DE-BEA, is proposed to improve the reliability of the standard DE/current-to-rand/1/bin by implementing a new mutation scheme inspired by the bacterial evolutionary algorithm (BEA). The crossover and selection schemes of the DE method are also modified to fit the new DE-BEA mechanism. The new scheme diversifies the population by applying to all individuals a segment-based scheme that generates multiple copies (clones) of each individual one by one and applies the BEA segment-wise mechanism. These new steps are embedded in the DE/current-to-rand/bin scheme. The performance of the new algorithm has been compared with several DE variants over eighteen benchmark functions, including sever…
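For reference, the baseline DE/current-to-rand/1 mutation that DE-BEA builds on can be sketched as below. The scaling factors F and K and the list-of-lists population layout are illustrative assumptions; the BEA segment-wise modification itself is not shown:

```python
import random

def de_current_to_rand_1(pop, i, F=0.8, K=0.5):
    """One DE/current-to-rand/1 mutant for individual i (illustrative sketch).

    u = x_i + K*(x_r1 - x_i) + F*(x_r2 - x_r3), where r1, r2, r3 are
    distinct random indices different from i.
    """
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(idx, 3)
    xi, x1, x2, x3 = pop[i], pop[r1], pop[r2], pop[r3]
    return [xi[d] + K * (x1[d] - xi[d]) + F * (x2[d] - x3[d])
            for d in range(len(xi))]
```

The population must contain at least four individuals so that three donors distinct from i can be drawn.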
The synthesis of a new substituted cobalt phthalocyanine (CoPc) was carried out using naphthalene-1,4,5,8-tetracarboxylic acid dianhydride (NDI) as the starting material, employing a dry-process method. A metal oxide (MO) alloy of 60% Ni3O4–40% Co3O4 was functionalized with multiwall carbon nanotubes (F-MWCNTs) to produce the (F-MWCNTs/MO) nanocomposite (E2) and mixed with CoPc to yield (F-MWCNTs/CoPc/MO) (E3). These composites were investigated using different analytical and spectrophotometric methods such as 1H-NMR (0-18 ppm), FTIR spectroscopy in the range of 400-4000 cm-1, powder X-ray diffraction (PXRD, 2θ = 10-80°), Raman spectroscopy (0-4000 cm-1), and UV-Visible…
Image contrast enhancement methods have been a topic of interest in digital image processing for various applications such as satellite imaging, recognition, medical imaging, and stereo vision. This paper studies a technique for image enhancement utilizing Adaptive Histogram Equalization and Weighted Gamma Correction to handle the radiometric conditions and illumination variations of stereo image pairs. In the proposed method, the stereo pair images are segmented with weighted distribution into sub-histograms, supported by Histogram Equalization (HE) mapping or gamma correction and guided filtering. The experimental results show that the examined techniques outperform the original image in ev…
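The two building blocks named above can be sketched in their standard forms. This is plain global histogram equalization and power-law gamma correction, not the paper's weighted/adaptive variants, and the list-of-rows image layout is an assumption for illustration:

```python
def gamma_correct(img, gamma):
    """Power-law gamma correction for an 8-bit grayscale image (list of rows)."""
    return [[round(255 * (v / 255) ** gamma) for v in row] for row in img]

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    flat = [v for row in img for v in row]
    hist = [0] * 256
    for v in flat:
        hist[v] += 1
    # Cumulative distribution function of the intensity histogram
    cdf, running = [0] * 256, 0
    for i in range(256):
        running += hist[i]
        cdf[i] = running
    cdf_min, n = min(c for c in cdf if c > 0), len(flat)
    # Map the CDF onto the full 0..255 range to spread the contrast
    lut = [round(255 * (cdf[i] - cdf_min) / (n - cdf_min)) if n > cdf_min else 0
           for i in range(256)]
    return [[lut[v] for v in row] for row in img]
```

Gamma below 1 brightens mid-tones (e.g. intensity 64 maps to 128 at gamma 0.5), while equalization stretches the cumulative histogram across the full dynamic range.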
A three-dimensional (3D) model extraction represents the best way to reflect reality in all its details. This explains the tendency of many scientific disciplines to make measurements, calculations, and monitoring in various fields using such models. Although there are many ways to produce a 3D model, such as images, integration techniques, and laser scanning, the quality of their products is not the same in terms of accuracy and detail. This article aims to assess the accuracy of 3D point-cloud models produced from close-range images and laser-scan data, based on Agisoft PhotoScan and CloudCompare software, to determine the compatibility of both datasets for several applications. College of Science…
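Comparing two point clouds, as done in CloudCompare, is commonly based on a cloud-to-cloud (C2C) nearest-neighbour distance; a brute-force sketch of that idea is below. Real tools use spatial indexing (octrees/KD-trees) rather than this O(n²) loop, and the mean-distance summary statistic is an illustrative choice:

```python
def cloud_to_cloud_distance(cloud_a, cloud_b):
    """Mean nearest-neighbour (C2C-style) distance from cloud_a to cloud_b.

    Each cloud is a list of (x, y, z) tuples. Brute-force illustration only.
    """
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    total = 0.0
    for p in cloud_a:
        # Distance from p to its closest point in the reference cloud
        total += min(dist2(p, q) for q in cloud_b) ** 0.5
    return total / len(cloud_a)
```

Two identical clouds yield a distance of zero, while a cloud shifted by 1 m along one axis yields a mean distance of 1 m, which is the kind of discrepancy an accuracy assessment between image-based and laser-scan models would quantify.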
This paper presents an improved technique based on the Ant Colony Optimization (ACO) algorithm. The procedure is applied to a Single Machine Infinite Bus (SMIB) system with a power system stabilizer (PSS) at three different loading regimes. The simulations were performed using MATLAB. The results show that with the Improved Ant Colony Optimization (IACO) the system gives better performance with fewer iterations compared with a previous modification of ACO. In addition, the probability of selecting an arc depends on the best ant's performance and the evaporation rate.
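The two ACO mechanisms the abstract mentions, probabilistic arc selection and pheromone evaporation, can be sketched in their standard textbook forms. This is the classic rule, not the paper's IACO modification, and the alpha/beta/rho values are illustrative defaults:

```python
import random

def select_arc(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Roulette-wheel arc selection: p_j is proportional to tau_j^alpha * eta_j^beta."""
    weights = [t ** alpha * h ** beta for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if acc >= r:
            return j
    return len(weights) - 1

def evaporate(pheromone, rho=0.5):
    """Standard pheromone evaporation: tau <- (1 - rho) * tau."""
    return [(1 - rho) * t for t in pheromone]
```

In the improved variant described above, the selection probability would additionally be biased by the best-performing ant, which the standard rule shown here does not do.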
Currently, the prominence of the automatic multi-document summarization task stems from the rapid increase of information on the Internet. Automatic document summarization technology is progressing and may offer a solution to the problem of information overload.
An automatic text summarization system faces the challenge of producing a high-quality summary. In this study, the design of a generic text summarization model based on sentence extraction has been recast as an explicit optimization model with a more semantic measure reflecting two significant objectives individually: content coverage and diversity, when generating summaries from multiple documents. The two proposed models have then been coupled and def…
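The coverage-versus-diversity trade-off in extractive summarization can be illustrated with a simple greedy selection scheme. This sketch, including the average-similarity coverage score and the max-similarity redundancy penalty, is a common baseline formulation and only loosely related to the paper's actual optimization model:

```python
def greedy_summary(sentences, sim, k):
    """Greedily pick k sentences balancing coverage and diversity.

    sim(a, b) -> similarity in [0, 1]. A sentence's score is its average
    similarity to all sentences (content coverage) minus its maximum
    similarity to already-selected sentences (redundancy penalty, which
    enforces diversity). Illustrative baseline only.
    """
    n = len(sentences)
    coverage = [sum(sim(s, t) for t in sentences) / n for s in sentences]
    selected = []
    while len(selected) < k and len(selected) < n:
        best, best_score = None, float("-inf")
        for i, s in enumerate(sentences):
            if i in selected:
                continue
            redundancy = max((sim(s, sentences[j]) for j in selected),
                             default=0.0)
            score = coverage[i] - redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [sentences[i] for i in selected]
```

Any similarity function can be plugged in; a word-overlap (Jaccard) measure is the simplest choice, while the semantic measures the study refers to would replace it.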