In information security, fingerprint verification is one of the most widely used approaches for verifying human identity through a distinctive pattern. Verification works by comparing a pair of fingerprint templates and measuring the similarity between them. Several studies have applied different matching techniques, such as the fuzzy vault and image-filtering approaches, yet these techniques still suffer from imprecise articulation of the biometric's distinctive patterns. Deep learning architectures such as the Convolutional Neural Network (CNN) have been used extensively for image processing and object detection tasks, showing outstanding performance compared with traditional image-filtering techniques. This paper utilizes a specific CNN architecture, AlexNet, for the fingerprint-matching task. Using this architecture, the study extracts the significant features of the fingerprint image, generates a key from those biometric features, and stores it in a reference database. Test fingerprints are then matched against the reference using Cosine similarity and Hamming distance measures. On the FVC2002 database, the proposed method achieved a False Acceptance Rate (FAR) of 2.09% and a False Rejection Rate (FRR) of 2.81%. Comparing these results against studies that used traditional approaches such as the fuzzy vault demonstrates the efficacy of CNNs for fingerprint matching, and underlines the usefulness of Cosine similarity and Hamming distance for the matching step.
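The matching step described here, comparing a probe's feature key against a stored reference with Cosine similarity and Hamming distance, can be sketched as follows. The feature vectors and decision thresholds below are hypothetical stand-ins, not the paper's AlexNet outputs or tuned parameters.

```python
import numpy as np

# Hypothetical 1-D feature vectors, standing in for CNN embeddings of
# two fingerprint images (the paper's actual features come from AlexNet).
reference = np.array([0.9, 0.1, 0.4, 0.8, 0.3, 0.7])
probe     = np.array([0.8, 0.2, 0.5, 0.9, 0.2, 0.6])

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hamming_distance(a, b, threshold=0.5):
    """Fraction of differing bits after binarising each vector at a threshold."""
    bits_a = a >= threshold
    bits_b = b >= threshold
    return float(np.mean(bits_a != bits_b))

sim = cosine_similarity(reference, probe)
dist = hamming_distance(reference, probe)
# A match could be declared when similarity is high and distance is low;
# the cut-off values here are illustrative only.
is_match = sim > 0.95 and dist < 0.2
```

In practice the binarised vector plays the role of the stored key, so the Hamming distance operates on exactly what sits in the reference database, while the cosine measure compares the raw embeddings.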
With the widespread use of the internet, and of social media in particular, an enormous quantity of information is available spanning fields of study such as psychology, entertainment, sociology, business, news, politics, and other aspects of national culture. Data mining methodologies applied to social media can yield revealing views of human behaviour and interaction. This paper examines the application and accuracy of sentiment analysis using a traditional feedforward network and two recurrent neural networks, the gated recurrent unit (GRU) and long short-term memory (LSTM), to identify the differences between them. To test the systems' performance, a set of experiments is applied to two public datasets. The firs
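For readers unfamiliar with the recurrent units compared here, a single GRU time step can be written out directly. This is a minimal NumPy sketch of the standard GRU equations (biases omitted for brevity), with random toy weights rather than trained sentiment-model parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: update gate z, reset gate r, candidate state h~."""
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate activation
    return (1 - z) * h_prev + z * h_tilde           # new hidden state

rng = np.random.default_rng(0)
d_in, d_hid = 4, 3
# Alternate input-to-hidden and hidden-to-hidden weight matrices (Wz, Uz, ...).
params = [rng.standard_normal((d_hid, d_in)) if i % 2 == 0 else
          rng.standard_normal((d_hid, d_hid)) for i in range(6)]

h = np.zeros(d_hid)
for x in rng.standard_normal((5, d_in)):   # a toy sequence of 5 word vectors
    h = gru_step(x, h, *params)
```

The LSTM differs mainly in carrying a separate cell state with input, forget, and output gates; the GRU's two gates make it the lighter of the two units compared in the paper.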
Next-generation networks, such as 5G and 6G, require high capacity, low latency, and high dependability. According to experts, one of the most important features of 5G and 6G networks is network slicing. To enhance Quality of Service (QoS), network operators can now run many instances on the same infrastructure through configurable slicing. Each slice may be assigned a different amount of virtualized network resources, such as connection bandwidth, buffer size, and computing functions. Because network resources are limited, the virtual resources of the slices must be carefully coordinated to meet the differing QoS requirements of users and services. These networks may be modifie
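As a toy illustration of coordinating limited resources across slices, the sketch below divides link bandwidth under a hypothetical policy: a guaranteed minimum per slice plus weighted sharing of the remainder. The slice names, weights, and the policy itself are assumptions for illustration, not the schedulers these networks actually use.

```python
def allocate_bandwidth(total_mbps, slices):
    """Split link capacity across slices: reserve each slice's guaranteed
    minimum, then share the spare capacity in proportion to demand weight."""
    reserved = sum(s["min_mbps"] for s in slices)
    if reserved > total_mbps:
        raise ValueError("guaranteed minimums exceed link capacity")
    spare = total_mbps - reserved
    total_w = sum(s["weight"] for s in slices)
    return {s["name"]: s["min_mbps"] + spare * s["weight"] / total_w
            for s in slices}

slices = [
    {"name": "eMBB",  "min_mbps": 100, "weight": 5},  # high-throughput slice
    {"name": "URLLC", "min_mbps": 50,  "weight": 1},  # low-latency slice
    {"name": "mMTC",  "min_mbps": 20,  "weight": 2},  # massive-IoT slice
]
alloc = allocate_bandwidth(1000, slices)
```

Because the spare capacity is shared exactly, the allocations always sum to the link total, which is the basic feasibility constraint any slicing coordinator must respect.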
... Show MoreThe aim of this paper is to design feed forward neural network to determine the effects of
cold pills and cascades from simulation the problem to system of first order initial value
problem. This problem is typical of the many models of the passage of medication throughout
the body. Designer model is an important part of the process by which dosage levels are set.
A critical factor is the need to keep the levels of medication high enough to be effective, but
not so high that they are dangerous.
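The kind of first-order initial-value system described, medication entering the gastrointestinal tract and passing into the bloodstream, can be illustrated with a classic two-compartment model. The rate constants and dosing schedule below are assumed for illustration, not the paper's values.

```python
# Hypothetical rate constants for the two-compartment cold-pill model.
k1 = 1.386   # clearance rate from the GI tract (1/h), assumed
k2 = 0.1386  # elimination rate from the bloodstream (1/h), assumed
dose_rate = 1.0  # continuous dosing into the GI tract (units/h), assumed

def simulate(hours=24.0, dt=0.001):
    """Forward-Euler integration of the first-order IVP system:
       x' = I - k1*x      (drug amount in the GI tract)
       y' = k1*x - k2*y   (drug amount in the bloodstream)
    with x(0) = y(0) = 0."""
    x = y = 0.0
    for _ in range(int(hours / dt)):
        dx = dose_rate - k1 * x
        dy = k1 * x - k2 * y
        x += dx * dt
        y += dy * dt
    return x, y

x24, y24 = simulate()
# Under constant dosing the levels approach the steady states I/k1 and I/k2,
# which is where the effective-but-safe dosage window must be checked.
```

A trained feedforward network, as in the paper, would approximate the solution curves x(t), y(t) that this numerical integration produces.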
Ad hoc networks are a generation of truly wireless networks that can be constructed easily without any operator. Protocols exist for managing these networks, and a key measure of their effectiveness is Quality of Service (QoS). In this work the QoS performance of MANETs is evaluated by comparing results obtained with the AODV, DSR, OLSR, and TORA routing protocols in the OPNET Modeler, followed by an extensive set of performance experiments for these protocols under a wide variety of settings. The results show that the best protocol depends on the QoS observed under two types of applications (+ve and -ve QoS in the FIS evaluation). The QoS of each protocol varies from one prot
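The fuzzy (FIS) evaluation of QoS mentioned above can be illustrated with a single Mamdani-style rule. The metrics, membership breakpoints, and sample protocol figures below are assumptions for illustration, not the rule base or measurements used in the study.

```python
def falling(x, a, b):
    """Shoulder membership: 1 below a, 0 above b, linear in between ('low')."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def rising(x, a, b):
    """Mirror shoulder: 0 below a, 1 above b ('high')."""
    return 1.0 - falling(x, a, b)

def qos_score(delay_ms, throughput_mbps):
    """Firing strength of the rule: IF delay is low AND throughput is high
    THEN QoS is good, with min() as the fuzzy AND. Breakpoints are assumed."""
    return min(falling(delay_ms, 20.0, 120.0),
               rising(throughput_mbps, 1.0, 10.0))

# Two hypothetical protocols' average measurements from a simulation run:
score_a = qos_score(40.0, 8.0)    # moderate delay, good throughput
score_b = qos_score(100.0, 12.0)  # high delay caps the score
```

A full FIS would aggregate several such rules and defuzzify, but even this one rule shows how a single poor metric (delay for the second protocol) dominates the fuzzy AND.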
Some researchers are interested in the flexible and applicable properties of quadratic functions as activation functions for feedforward neural networks (FNNs). We study the essential approximation rate of any Lebesgue-integrable monotone function by a neural network with quadratic activation functions. The simultaneous degree of essential approximation is also studied. Both estimates are proved to be within the second order of the modulus of smoothness.
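The estimate described can be sketched in standard approximation-theoretic notation. The symbols below are assumed for illustration, not quoted from the paper: $N_n$ denotes a feedforward network with $n$ quadratic-activation units and $\omega_2$ the second-order modulus of smoothness.

```latex
E_n(f)_p \;=\; \inf_{N_n}\, \lVert f - N_n \rVert_p
        \;\le\; C\,\omega_2\!\left(f, \tfrac{1}{n}\right)_p,
\qquad
\omega_2(f,\delta)_p \;=\; \sup_{0<h\le\delta} \lVert \Delta_h^2 f \rVert_p ,
```

where $\Delta_h^2 f(x) = f(x+2h) - 2f(x+h) + f(x)$ is the second-order difference and $C$ is a constant independent of $f$ and $n$.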
The Wisconsin Breast Cancer Dataset (WBCD) was employed to demonstrate the performance of Adaptive Resonance Theory (ART), specifically the supervised ART-I Artificial Neural Network (ANN), in building a smart breast cancer diagnosis system. The model was fed with different learning parameters and data splits. The best result was achieved when it was trained on 50% of the data and tested on the remaining 50%. Classification accuracy was compared with that of other artificial-intelligence algorithms, including a fuzzy classifier, MLP-ANN, and SVM; our model achieved the highest accuracy despite such a low training/testing ratio.
In this paper, we address the general matching of two images, one of which has undergone geometric transformations, to find the correspondence between them. We extend the invariant moments of traditional techniques (moments of inertia) with a new approach that enhances their performance. We test various directional projection moments, extract the differences using the Block Distance Moment (BDM), and evaluate their reliability. Three adaptive strategies for directional projection moments are presented: raster (vertical and horizontal) projection, fan-beam projection, and a new procedure, the square projection method. Our paper starts with the description of a new algorithm that is low
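A minimal sketch of the raster (horizontal and vertical) projections and a first-order projection moment follows. The toy binary image stands in for a block of one of the matched images; the BDM and square-projection specifics are not reproduced here.

```python
import numpy as np

# Toy binary "image" standing in for a block of one of the matched images.
img = np.zeros((8, 8))
img[2:6, 3:7] = 1.0   # a bright rectangular region

# Raster projections: sum intensities along rows (horizontal projection)
# and along columns (vertical projection).
h_proj = img.sum(axis=1)
v_proj = img.sum(axis=0)

def first_moment(profile):
    """Centroid (first-order moment) of a 1-D projection profile."""
    idx = np.arange(len(profile))
    return float((idx * profile).sum() / profile.sum())

cy = first_moment(h_proj)  # centroid row, from the horizontal projection
cx = first_moment(v_proj)  # centroid column, from the vertical projection
```

Higher-order moments of the same profiles, and profiles taken along other directions, give the directional descriptors whose differences a block-distance measure can then compare between the two images.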
Calculating oil density is complicated by the wide range of pressures and temperatures, which are always determined by specific conditions. Calculations that depend on the oil's composition are therefore more accurate and more convenient for such requirements. Analyses of twenty live-oil samples are utilized. The three-parameter Peng-Robinson equation of state is tuned to match measured and calculated oil viscosity. The Lohrenz-Bray-Clark (LBC) viscosity calculation technique is adopted to calculate the viscosity of oil from the given composition, pressure, and temperature for the 20 samples. The tuned equation of state is used to generate oil viscosity values for a range of temperatu
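The LBC technique referenced above computes viscosity from reduced density via a fixed quartic polynomial. Its commonly cited published form (coefficients as quoted in the literature; readers should verify against the original correlation) is

```latex
\left[(\mu - \mu^{*})\,\xi + 10^{-4}\right]^{1/4}
  \;=\; a_1 + a_2\,\rho_r + a_3\,\rho_r^{2} + a_4\,\rho_r^{3} + a_5\,\rho_r^{4},
```

with $a_1 = 0.1023$, $a_2 = 0.023364$, $a_3 = 0.058533$, $a_4 = -0.040758$, $a_5 = 0.0093324$, where $\mu^{*}$ is the low-pressure mixture viscosity, $\rho_r$ the reduced density, and $\xi = T_{pc}^{1/6}\, M^{-1/2}\, p_{pc}^{-2/3}$ the viscosity-reducing parameter of the mixture. Tuning the equation of state shifts the predicted density, and hence $\rho_r$, which is how the match to measured viscosity is achieved.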
The finishing operation of electrochemical finishing (ECF) technology for steel tube was investigated in this study. Experimental procedures included qualitative and quantitative analyses of surface roughness and material removal. The qualitative analyses optimized the finishing of a specific specimen under various design and operating conditions: gap values from 0.2 to 10 mm, electrolyte flow rates from 5 to 15 L/min, finishing times from 1 to 4 min, and applied voltages from 6 to 12 V, to find the surface roughness and material removal at each electrochemical state. The measured material removal for each process state was used to verify its relationship with the finishing time of the workpiece. Electrochemi
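The expected link between material removal and finishing time can be illustrated with Faraday's law of electrolysis, which governs anodic dissolution in electrochemical processes. The material constants and current efficiency below are assumed for illustration, not the study's measurements.

```python
# Theoretical material removal via Faraday's law - an illustrative sketch,
# not the paper's experimental data.
F = 96485.0   # Faraday constant, C/mol
M_FE = 55.85  # molar mass of iron, g/mol (steel approximated as iron)
Z = 2         # dissolution valence, Fe -> Fe2+ assumed
ETA = 0.9     # assumed current efficiency of the process

def removed_mass(current_a, minutes):
    """Mass removed (g) for a given current over a finishing time."""
    charge = current_a * minutes * 60.0      # total charge passed, coulombs
    return ETA * charge * M_FE / (Z * F)

# At constant current, removal grows linearly with finishing time:
m1 = removed_mass(10.0, 1.0)  # 1 min of finishing
m4 = removed_mass(10.0, 4.0)  # 4 min of finishing
```

This linear charge-time relationship is the theoretical baseline against which the measured removal-versus-time trend of the workpiece would be checked.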