In information security, fingerprint verification is one of the most common approaches for verifying human identity through a distinctive pattern. The verification process works by comparing a pair of fingerprint templates and measuring the similarity between them. Several research studies have utilized different techniques for the matching process, such as fuzzy vault and image filtering approaches. Yet these approaches still suffer from imprecise articulation of the biometrics' distinctive patterns. Deep learning architectures such as the Convolutional Neural Network (CNN) have been used extensively for image processing and object detection tasks and have shown outstanding performance compared with traditional image filtering techniques. This paper utilizes a specific CNN architecture, known as AlexNet, for the fingerprint-matching task. Using this architecture, the study extracted the significant features of the fingerprint image, generated a key based on those biometric features, and stored it in a reference database. Then, using Cosine similarity and Hamming distance measures, the test fingerprints were matched against the reference. On the FVC2002 database, the proposed method achieved a False Acceptance Rate (FAR) of 2.09% and a False Rejection Rate (FRR) of 2.81%. Comparing these results against studies that utilized traditional approaches such as the fuzzy vault demonstrates the efficacy of CNN for fingerprint matching, and also emphasizes the usefulness of Cosine similarity and Hamming distance for matching.
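The decision rule described above, accepting a probe fingerprint when it is close to the stored reference under both measures, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors would come from AlexNet's activations, the thresholds here are arbitrary placeholders, and the binary-key derivation is assumed.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two real-valued feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hamming_distance(key_a, key_b):
    """Fraction of positions at which two binary keys differ."""
    key_a, key_b = np.asarray(key_a), np.asarray(key_b)
    return float(np.mean(key_a != key_b))

def match(probe_vec, probe_key, ref_vec, ref_key,
          cos_threshold=0.9, ham_threshold=0.1):
    """Accept the probe only if it is close to the stored reference
    under BOTH similarity measures (placeholder thresholds)."""
    return (cosine_similarity(probe_vec, ref_vec) >= cos_threshold
            and hamming_distance(probe_key, ref_key) <= ham_threshold)
```

A probe is accepted only if both tests pass; in practice the thresholds would be tuned on the FVC2002 data to trade FAR against FRR.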
Because image compression significantly reduces the volume of data, the need for it is permanent: compressed images can be transferred more quickly over communication channels and stored in less memory. In this study, an efficient compression system is suggested; it depends on transform coding (the Discrete Cosine Transform or the bi-orthogonal (tap-9/7) wavelet transform) followed by the LZW compression technique. The suggested scheme was applied to color and gray-scale models; the transform coding is then applied to decompose each color and gray sub-band individually. A quantization process is performed, followed by LZW coding to compress the images. The suggested system was applied on a set of seven stand
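The pipeline this abstract outlines (transform coding, then quantization, then LZW coding) can be sketched as below. This is a generic illustration, not the paper's system: the block size, the quantization step, and the use of a plain orthonormal DCT matrix are assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct2(block):
    """2-D DCT of a square block (the transform-coding step)."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def quantize(coeffs, q=10.0):
    """Uniform scalar quantization of the transform coefficients."""
    return np.round(coeffs / q).astype(np.int64)

def lzw_encode(data):
    """LZW compression of a byte sequence into a list of integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)  # grow the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out
```

Quantization discards small coefficients, which makes the resulting byte stream highly repetitive and therefore well suited to LZW's dictionary coding.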
The objective of this research is to assess the economic feasibility of hydroponics technology by estimating the expected demand for green forage for the years 2021-2031, as well as to identify and analyze project data and information in a way that helps the investor make the appropriate investment decision, in addition to preparing a detailed preliminary technical study for the cultivated barley project focusing on the commercial and financing aspects and on criteria that take risks and uncertainties into account. These indicate the economic feasibility of the project to produce green forage using hydroponics technology. Cultivated barley as a product falls within the blue ocean strategy. Accordingly, the research recommends the necess
Image classification is the process of finding common features in images from various classes and applying them to categorize and label the images. The main problems of the image classification process are the abundance of images, the high complexity of the data, and the shortage of labeled data, which present the key obstacles in image classification. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new approach of "hybrid learning" that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class
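The "hybrid learning" idea, convolutional feature extraction followed by a classical machine-learning classifier, can be illustrated schematically. Note the assumptions: the real pipeline would use VGG-16 activations and one of the paper's classifiers, whereas this sketch stands in a fixed random projection with a ReLU for the feature extractor and a nearest-centroid rule for the classifier.

```python
import numpy as np

def extract_features(images, projection):
    """Stand-in for a convolutional feature extractor (e.g. VGG-16's
    penultimate activations): a fixed linear projection of the flattened
    pixels followed by a ReLU-like nonlinearity."""
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ projection, 0.0)

class NearestCentroid:
    """A minimal classical classifier trained on the extracted features."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # assign each sample to the class with the closest centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```

The point of the hybrid split is that the heavy, pretrained feature extractor is run once, while the lightweight classifier can be retrained cheaply on the fixed features.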
Objectives: To determine the quality of life (QoL) for patients with a permanent pacemaker and to find out the relationship between these patients' QoL and their sociodemographic characteristics such as age, gender, level of education, and occupation.
Methodology: A purposive non-probability sample of (62) patients with a permanent pacemaker was involved in this study. The developed questionnaire consists of (4) parts: 1. demographic data form, 2. disease-related information form, 3. socioeconomic data form, and 4. permanent pacemaker patient's quality of life questionnaire data form. The validity and reliability of the questionnaire were determined through the application of a pilot study. A descriptive statistical a
The analysis of survival and reliability is considered among the topics and methods of vital statistics at the present time because of its importance in various demographic, medical, industrial, and engineering fields. This research focuses on generating random sample data from the generalized gamma (GG) probability distribution using the "inverse transformation" method (ITM). Because the distribution function includes the incomplete gamma integral, classical estimation becomes more difficult, so a numerical approximation method is illustrated and the survival function is then estimated. The survival function was estimated by "Monte Carlo" simulation. The entropy method was used for the
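The two steps named in this abstract, inverse-transformation sampling and Monte Carlo estimation of the survival function, can be illustrated with a distribution whose quantile function is available in closed form. The exponential distribution is used here purely as a stand-in, since inverting the generalized gamma CDF requires exactly the numerical approximation the abstract refers to.

```python
import math
import random

def inverse_transform_exponential(u, rate):
    """Inverse transformation method: map a uniform draw u in [0, 1)
    to an exponential variate via the quantile function F^{-1}(u)."""
    return -math.log(1.0 - u) / rate

def monte_carlo_survival(t, rate, n=100_000, seed=1):
    """Monte Carlo estimate of the survival function S(t) = P(T > t):
    draw n variates and count the fraction exceeding t."""
    rng = random.Random(seed)
    draws = (inverse_transform_exponential(rng.random(), rate) for _ in range(n))
    return sum(1 for x in draws if x > t) / n
```

For the generalized gamma, the only change is that `inverse_transform_exponential` would be replaced by a numerical inversion of the incomplete-gamma CDF; the Monte Carlo step is identical.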
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data that we might come across in real life. Moreover, it can represent a tremendous confirmatory tool for classifying observations based on similarities among them. In this paper, several mixture regression-based methods were conducted under the assumption that the data come from a finite number of components. A comparison of these methods has been made according to their results in estimating component parameters. Observation membership has also been inferred and assessed for these methods. The results showed that the flexible mixture model outperformed the
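The core mixture-model machinery, estimating component parameters and inferring observation membership, can be sketched with a two-component one-dimensional Gaussian mixture fitted by EM. This is a generic illustration, not one of the paper's mixture regression methods; the initialization scheme and the fixed component count are assumptions.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: the E-step computes
    membership probabilities, the M-step re-estimates the parameters."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])            # crude initialization
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities (posterior membership of each point)
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma, resp
```

The returned responsibilities `resp` are precisely the inferred observation memberships the abstract mentions: each row gives the probability that an observation came from each component.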
Evolutionary algorithms (EAs), as global search methods, have proven to be more robust than their counterpart local heuristics for detecting protein complexes in protein-protein interaction (PPI) networks. Typically, the robustness of these EAs comes from their components and parameters: solution representation, selection, crossover, and mutation. Unfortunately, almost all EA-based complex detection methods suggested in the literature were designed with only canonical or traditional components. Further, the topological structure of the protein network is the main information used in the design of almost all such components. The main contribution of this paper is to formulate a more robust E
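The canonical EA components this abstract names, solution representation, selection, crossover, and mutation, can be shown in a minimal generational loop. This is a textbook sketch on bit strings, not a PPI complex detection method; the tournament size, rates, and fitness function are placeholders.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60,
           cx_rate=0.9, mut_rate=0.02, seed=0):
    """Canonical EA over bit-string solutions: tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # selection: best of a random tournament of three
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # one-point crossover
            if rng.random() < cx_rate:
                cut = rng.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [b ^ 1 if rng.random() < mut_rate else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

A complex-detection EA would replace the bit-string representation and fitness with a network-aware encoding and a topology-based objective; the loop structure itself is what is "canonical".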
This paper presents research on the magnetohydrodynamic (MHD) flow of an incompressible generalized Burgers' fluid induced by an accelerating plate and flowing under the action of a pressure gradient, where the no-slip assumption between the wall and the fluid is no longer valid. The fractional calculus approach is introduced to establish the constitutive relationship of the generalized Burgers' fluid. By using the discrete Laplace transform of the sequential fractional derivatives, closed-form solutions for the velocity and shear stress are obtained in terms of the Fox H-function for the following two problems: (i) flow due to a constant pressure gradient, and (ii) flow due to a sinusoidal pressure gradient. The solutions for
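For orientation, the fractional constitutive relationship referred to above is commonly written in the generalized Burgers' fluid literature in the following form; the abstract itself does not state it, so the exact expression here is an assumption:

```latex
\left(1 + \lambda_1 \frac{\partial^\alpha}{\partial t^\alpha}
        + \lambda_2 \frac{\partial^{2\alpha}}{\partial t^{2\alpha}}\right)\tau
= \mu \left(1 + \lambda_3 \frac{\partial^\beta}{\partial t^\beta}
        + \lambda_4 \frac{\partial^{2\beta}}{\partial t^{2\beta}}\right)
  \frac{\partial u}{\partial y}
```

where $\tau$ is the shear stress, $u$ the velocity, $\mu$ the dynamic viscosity, $\lambda_1,\dots,\lambda_4$ material relaxation and retardation constants, and $\partial^\alpha/\partial t^\alpha$ a fractional time derivative with $0 < \alpha, \beta \le 1$; setting $\alpha = \beta = 1$ recovers the classical generalized Burgers' model.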
MPEG-DASH is an adaptive bitrate streaming technology that divides video content into small HTTP-object file segments at different bitrates. With live UHD video streaming, latency is the most important problem. This paper creates a low-delay streaming system using HTTP/2. Based on the network condition, the proposed system adaptively determines the bitrate of segments. The video is coded using the layered H.265/HEVC compression standard and is then tested to investigate the relationship between video quality and bitrate for various HEVC parameters and video motion at each layer/resolution. The system architecture includes the encoder/decoder configurations and how the adaptive video streaming is embedded. The encoder includes compression besi
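A throughput-based bitrate decision of the kind described, picking the segment bitrate the current network condition can sustain, might look like the following. The bitrate ladder, the safety margin, and the rule itself are assumptions for illustration, not the paper's adaptation algorithm.

```python
def select_bitrate(throughput_kbps, ladder=(800, 2000, 6000, 15000), safety=0.8):
    """Pick the highest segment bitrate (kbps) that the measured throughput
    can sustain, with a safety margin to limit rebuffering; a common
    throughput-based DASH heuristic."""
    budget = throughput_kbps * safety
    chosen = ladder[0]          # fall back to the lowest representation
    for rate in ladder:
        if rate <= budget:
            chosen = rate
    return chosen
```

The client re-runs this decision before requesting each segment, so the stream steps up or down the ladder as the measured throughput changes.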
Objective(s): To determine the impact of chemotherapy upon the quality of life of patients with chronic myeloid leukemia in Baghdad city.
Methodology: A descriptive study design was carried out. The study ran from 30 January 2011 to October 2011. A purposive (non-probability) sample consisted of (130) patients with chronic myeloid leukemia who attended Baghdad Teaching Hospital and the National Center for Research and Treatment of Hematology. The sample criteria were patients who were 18 years old and above, excluding patients who suffered from psychological problems and other chronic illnesses. A questionnaire was adopted and developed from the European Organization Research and treatment of Can