Recently, biometric technologies have been widely adopted because their improved security reduces cases of deception and theft. Biometric technologies use physical features and characteristics to identify individuals. The most common biometric modalities are the iris, voice, fingerprint, handwriting, and hand print. In this paper, two biometric recognition technologies, iris recognition and voice recognition, are analyzed and compared. Iris recognition identifies persons by analyzing the main patterns in the iris structure, while voice recognition identifies individuals by their unique voice characteristics, also called a voice print. The comparison results show that the average accuracies of the iris and voice techniques are 99.83% and 98%, respectively. Thus, the iris recognition technique provides higher accuracy and security than the voice recognition technique.
This paper proposes a new methodology for improving network security by introducing an optimised hybrid intrusion detection system (IDS) framework as a middle layer between end devices. It addresses the difficulty of updating databases to uncover the new threats that plague firewalls and detection systems, in addition to big-data challenges. The proposed framework introduces a supervised network IDS based on a deep learning technique, convolutional neural networks (CNN), using the UNSW-NB15 dataset. It implements recursive feature elimination (RFE) with extreme gradient boosting (XGB) to reduce resource and time consumption. Additionally, it reduces bias toward
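The feature-selection step described above can be illustrated with a minimal, dependency-free sketch of recursive feature elimination. The paper ranks features with XGBoost importances; here a simple absolute-correlation score stands in for those importances, so the function names and the scoring rule are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of recursive feature elimination (RFE).
# A correlation-based score stands in for XGBoost feature importances.

def correlation_score(xs, ys):
    """Absolute Pearson correlation between one feature column and the labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def rfe(X, y, n_keep):
    """Drop the least important feature one at a time until n_keep remain.

    X is a list of rows; returns the surviving column indices."""
    remaining = list(range(len(X[0])))
    while len(remaining) > n_keep:
        scores = {j: correlation_score([row[j] for row in X], y)
                  for j in remaining}
        worst = min(remaining, key=lambda j: scores[j])
        remaining.remove(worst)
    return remaining
```

In the paper's pipeline, the columns that survive elimination would then be fed to the CNN classifier, cutting training time and memory use.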
In the present work, an image compression method has been developed by combining the Absolute Moment Block Truncation Coding (AMBTC) algorithm with VQ-based image coding. First, the AMBTC algorithm, together with a condition based on Weber's law, is used to distinguish low-detail and high-detail blocks in the original image. For a low-detail block (i.e., a uniform block such as background), the coder transmits only the block mean over the channel instead of the two reconstruction mean values and the bit map. A high-detail block is coded by the proposed fast encoding algorithm for vector quantization based on the Triangular Inequality Theorem (TIE), and the coder then transmits the two reconstruction mean values (i.e., H and L).
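The AMBTC step described above can be sketched for a single block. The function name and the Weber threshold value are illustrative assumptions; the paper's exact contrast condition may differ, but the structure (mean, bitmap, and two reconstruction levels H and L) follows the standard AMBTC scheme.

```python
# Minimal AMBTC sketch for one flattened pixel block (e.g. 4x4 = 16 values).
# A Weber-style contrast test decides whether the block is "uniform"
# (transmit the mean only) or "detailed" (transmit H, L and the bitmap).

def ambtc_encode(block, weber_threshold=0.02):
    n = len(block)
    mean = sum(block) / n
    bitmap = [1 if p >= mean else 0 for p in block]
    high = [p for p, b in zip(block, bitmap) if b]
    low = [p for p, b in zip(block, bitmap) if not b]
    H = sum(high) / len(high) if high else mean   # mean of the bright pixels
    L = sum(low) / len(low) if low else mean      # mean of the dark pixels
    # Weber's law style condition: contrast relative to background intensity.
    contrast = (H - L) / mean if mean else 0.0
    if contrast < weber_threshold:
        return ("uniform", mean)            # low detail: send the mean only
    return ("detailed", H, L, bitmap)       # high detail: send H, L and bitmap
```

A flat background block collapses to a single transmitted mean, which is where the bit-rate saving over plain AMBTC comes from.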
A signature is a special identifier that confirms a person's identity and distinguishes him or her from others. The main goal of this paper is to present an in-depth study of the spatial density distribution method and the effect of a mass-based segmentation algorithm on its performance when used to recognize handwritten signatures in offline mode. The algorithm divides the signature image into tiles that reflect the shape and geometry of the signature, and then extracts five spatial features from each tile: the mass of the tile, and the relative mean and relative standard deviation of the vertical and horizontal projections of that tile. In the clas
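The per-tile features listed above can be sketched as follows. This is a hypothetical helper, not the paper's code: it computes the plain mean and standard deviation of each projection, whereas the paper's "relative" variants presumably normalize by tile size.

```python
# Illustrative feature extraction for one tile of a binarized signature image.

def tile_features(tile):
    """tile: 2-D list of 0/1 pixels.

    Returns (mass, v_mean, v_std, h_mean, h_std)."""
    rows, cols = len(tile), len(tile[0])
    mass = sum(sum(r) for r in tile)             # number of ink pixels in the tile
    h_proj = [sum(r) for r in tile]              # horizontal projection (per row)
    v_proj = [sum(tile[i][j] for i in range(rows)) for j in range(cols)]

    def mean_std(p):
        m = sum(p) / len(p)
        s = (sum((x - m) ** 2 for x in p) / len(p)) ** 0.5
        return m, s

    h_mean, h_std = mean_std(h_proj)
    v_mean, v_std = mean_std(v_proj)
    return mass, v_mean, v_std, h_mean, h_std
```

Concatenating these five values over all tiles yields the feature vector that the classifier would compare against enrolled signatures.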
This research aims to:
1 – Develop a proposed module in aesthetics for the second stage, Department of Art Education, in light of education theories.
2 – Verify the effect of the proposed module on students' achievement and motivation towards learning the aesthetics material.
To verify the second goal, we formulated these two hypotheses:
1- There are no statistically significant differences at the (0.05) level between the average scores of the experimental group students, who studied according to the proposed module, and the average scores of the control group students, who studied in the usual way, on the achievement test for the aesthetics material.
2- There are no statistically signifi
This article provides acceptance sampling plans for the generalized exponential distribution when the life test is truncated at a pre-determined time. The two parameters (the shape parameter α and the scale parameter λ) are estimated by LSE and WLSE, and the best estimators for various sample sizes are used to find the ratio of the true mean time to the pre-determined time, and to find the smallest possible sample size required to ensure the producer's risk with a pre-fixed probability (1 − P*). The results of the estimation and of the sampling plans are provided in tables.
Keywords: Generalized Exponential Distribution, Acceptance Sampling Plan, Consumer's and Producer's Risks
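The plan-construction logic described above can be sketched numerically. The parameter values below are illustrative, not the paper's tabulated plans: under the generalized exponential distribution the failure probability by truncation time t is F(t) = (1 − e^(−λt))^α, and the smallest n is sought such that observing at most c failures among n items has probability no greater than 1 − P*.

```python
# Sketch of a truncated-life-test acceptance sampling plan.
from math import comb, exp

def failure_prob(t, alpha, lam):
    """CDF of the generalized exponential distribution at time t."""
    return (1.0 - exp(-lam * t)) ** alpha

def smallest_sample_size(p, c, p_star, n_max=1000):
    """Smallest n with P(X <= c) <= 1 - P*, where X ~ Binomial(n, p).

    p is the per-item failure probability by the truncation time,
    c the acceptance number, P* the required confidence."""
    target = 1.0 - p_star
    for n in range(c + 1, n_max + 1):
        accept = sum(comb(n, i) * p**i * (1 - p)**(n - i)
                     for i in range(c + 1))
        if accept <= target:
            return n
    return None  # no plan within n_max items
```

For example, with acceptance number c = 0 and per-item failure probability p = 0.25, confidence P* = 0.95 requires n = 11 items, since 0.75^11 ≈ 0.042 ≤ 0.05 while 0.75^10 ≈ 0.056 is not.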