In recent years, the iris has attracted wide interest in biometric-based systems because it is one of the most accurate biometrics for proving a user's identity, and it therefore provides high security for the systems concerned. This article presents an efficient method for detecting the outer boundary of the iris using a new form of leading-edge detection. The technique is useful for separating two regions with convergent intensity levels in gray-scale images, which is the main difficulty in iris isolation: it is hard to find the border between the lighter gray background (sclera) and the light gray foreground (iris texture). The proposed method estimates the iris radius by searching the two iris halves (right and left) circularly, over a fixed angular interval for each half, to avoid the upper and lower eyelids and eyelashes. After the two radii (one per half) are determined, the final iris radius is taken as the minimum of the two. The method was tested on all samples of the CASIAv4-Interval dataset, which consists of 2639 samples captured from 249 individuals and distributed over 395 classes; outer-boundary detection accuracy was 100%.
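The two-half circular search described above can be sketched as follows. This is a minimal NumPy illustration on a synthetic eye image; the center, angle intervals, and radius range are assumed values for the sketch, not the paper's exact parameters.

```python
import numpy as np

def radius_in_half(img, cx, cy, angles, r_min, r_max):
    """Return the radius with the largest mean intensity jump
    along rays cast at the given angles (one iris half)."""
    radii = np.arange(r_min, r_max)
    profile = np.zeros(len(radii))
    for a in angles:
        xs = (cx + radii * np.cos(a)).astype(int)
        ys = (cy + radii * np.sin(a)).astype(int)
        profile += img[ys, xs]
    grad = np.diff(profile / len(angles))   # boundary = biggest jump
    return radii[np.argmax(grad)]

# Synthetic eye: dark iris disc of radius 30 on a brighter sclera.
img = np.full((200, 200), 200.0)
yy, xx = np.mgrid[0:200, 0:200]
img[(xx - 100) ** 2 + (yy - 100) ** 2 < 30 ** 2] = 90.0

# Search each half over an angular interval that avoids the eyelids.
right = radius_in_half(img, 100, 100, np.deg2rad(np.arange(-30, 31)), 5, 60)
left = radius_in_half(img, 100, 100, np.deg2rad(np.arange(150, 211)), 5, 60)
iris_radius = min(left, right)   # final radius = minimum of the two halves
```

Taking the minimum of the two half-estimates makes the result robust when one half is partially occluded.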
In computer vision, visual object tracking is a significant task for monitoring applications. Tracking an object is essentially a matching problem, and one main difficulty is selecting features and building models that are suitable for distinguishing and tracing the target. The proposed system for continuous feature description and matching in video has three steps. First, a wavelet transform is applied to the image using a Haar filter. Second, interest points are detected in the wavelet image using Features from Accelerated Segment Test (FAST) corner detection. Third, those points are described using Speeded Up Robust Features (SURF).
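The first step above, a Haar wavelet transform, can be sketched without library support: one 2-D decomposition level splits the image into approximation (LL) and detail (LH, HL, HH) sub-bands, and interest points would then be detected on the sub-band image. The `haar2d` name is illustrative.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: returns the LL, LH, HL, HH
    sub-bands, each half the size of the (even-sided) input."""
    img = img.astype(float)
    # 1-D Haar along rows: pairwise averages (low-pass) and differences.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Then the same filtering along columns of each result.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d(img)
# FAST corners would then be detected on the sub-band image (e.g. with
# OpenCV's FastFeatureDetector) and described with SURF descriptors.
```

Detecting FAST corners on the wavelet image rather than the raw frame concentrates the interest points on structure that survives the low-pass filtering.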
Governmental establishments maintain historical data on job applicants for future predictive analysis, improvement of benefits and profits, and development of organizations and institutions. In e-government, a decision about job seekers can be made after mining their information, which leads to beneficial insight. This paper proposes the development and implementation of a system that predicts the job appropriate to an applicant's skills using web content classification algorithms (LogitBoost, J48, PART, Hoeffding Tree, Naive Bayes). Furthermore, the results of the classification algorithms are compared on data sets called "job classification data" sets.
This work proposes a feature extraction algorithm for handwritten Arabic words. The proposed method applies a 4-level discrete wavelet transform (DWT) to the binary image, slides a window over the wavelet space, and computes the standard deviation for each window. The extracted features are classified with multiple Support Vector Machine (SVM) classifiers. The method was evaluated on a data set collected from different writers, and the experimental results show a 94.44% recognition rate.
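The window-wise standard-deviation features described above can be sketched as follows (NumPy only). The 4-level DWT output is replaced here by a placeholder coefficient array, and the window size is an assumed parameter.

```python
import numpy as np

def window_std_features(coeffs, win=4):
    """Slide a non-overlapping win x win window over a 2-D wavelet
    coefficient array and return the standard deviation of each window."""
    h, w = coeffs.shape
    feats = []
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            feats.append(coeffs[r:r + win, c:c + win].std())
    return np.array(feats)

# Stand-in for one DWT sub-band of a binarized word image.
rng = np.random.default_rng(0)
band = rng.random((16, 16))
features = window_std_features(band)   # 16 values for a 16x16 band
# These feature vectors would then be fed to the SVM classifiers.
```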
This paper presents a hybrid energy resources (HER) system consisting of solar PV, storage, and the utility grid. Extracting the maximum power point (MPP) from the PV array in real time under variations in irradiance strength is a challenge, and this work addresses the difficulties of identifying the global MPP, dynamic algorithm behavior, tracking speed, adaptability to changing conditions, and accuracy. A shallow neural network with the deep-learning NARMA-L2 controller is proposed and modeled to predict the reference voltage under different irradiance levels. The dynamic, nonlinear PV behavior is trained so that the controller tracks the maximum power drawn from the PV system in real time.
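For context, the real-time MPP-tracking loop that a neural controller replaces is often written as classical perturb-and-observe. The sketch below shows that baseline, not the paper's NARMA-L2 controller, against a toy single-peak PV power curve; the curve and step size are illustrative.

```python
def pv_power(v):
    """Toy PV curve: single power peak at v = 30 V (illustrative only)."""
    return max(0.0, 90.0 - 0.1 * (v - 30.0) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iters=100):
    """Classical P&O: keep stepping the operating voltage in the
    direction that increased power on the previous step."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()   # settles near the 30 V peak
```

P&O oscillates around the peak by about one step and can be trapped by local maxima under partial shading, which is the weakness that predictive controllers such as NARMA-L2 aim to remove.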
This paper proposes a new method for network self-fault management (NSFM) based on two technologies: an intelligent agent to automate fault-management tasks, and Windows Management Instrumentation (WMI) to identify faults faster when resources are independent (different types of devices). The proposed NSFM reduces the network traffic load by reducing requests and responses between server and client, which achieves less downtime for each node when a fault occurs in the client. The performance of the proposed system is measured by three metrics: efficiency, availability, and reliability. A high average efficiency is obtained, depending on the faults occurring in the system.
Because of the rapid development and use of the Internet as a communication medium, a need emerged for a high level of security during data transmission, and one such way is steganography. This paper reviews Least Significant Bit (LSB) steganography for embedding a text file, with a related image, in a gray-scale image. We also discuss the bit planes: the image is divided into eight bit-plane images which, when combined, yield the actual image. The findings show that the stego-image is indistinguishable to the naked eye from the original cover image when the bit plane used is lower than the fourth, thus achieving the goal of concealing the existence of a connection or hidden data. The Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) were used to measure stego-image quality.
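The bit-plane embedding described above can be sketched as follows: a minimal NumPy implementation of LSB hiding in a chosen bit plane, plus the PSNR quality measure. The cover image and message here are random illustrative data.

```python
import numpy as np

def embed_lsb(cover, bits, plane=0):
    """Hide a bit sequence in the given bit plane of a gray-scale cover."""
    stego = cover.copy().reshape(-1)
    stego[:len(bits)] &= np.uint8(0xFF ^ (1 << plane))       # clear the plane
    stego[:len(bits)] |= np.asarray(bits, np.uint8) << plane  # write the bits
    return stego.reshape(cover.shape)

def extract_lsb(stego, n, plane=0):
    """Read n hidden bits back out of the given bit plane."""
    return (stego.reshape(-1)[:n] >> plane) & 1

def psnr(a, b):
    """Peak Signal-to-Noise Ratio between two 8-bit images, in dB."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255 ** 2 / mse)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
msg = rng.integers(0, 2, 100, dtype=np.uint8)
stego = embed_lsb(cover, msg)
recovered = extract_lsb(stego, 100)
```

Because plane 0 changes each touched pixel by at most 1, the PSNR stays very high, which matches the observation that low-plane embedding is invisible to the naked eye.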
This paper describes a practical study of the impact that learning partners, a Bluetooth broadcasting system, an interactive board, a real-time response system, notepads, free internet access, computer-based examination, an interactive classroom, and similar tools had on undergraduate student performance, achievement, and engagement with lectures. The goal of this study is to test the hypothesis that such learning techniques, tools, and strategies improve student learning, especially among the poorest-performing students. It also gives a practical comparison between the traditional and interactive ways of learning in terms of lecture time, number of tests, types of tests, students' scores, and students' engagement with lectures.
The objective of this work is to mix human biometric characteristics with unique attributes of the computer in order to protect computer networks and resource environments through the development of authentication and authorization techniques. On the human-biometric side, the best methods and algorithms were studied, and the conclusion is that the fingerprint is the best, although it has some flaws. The fingerprint algorithm has been improved so that its performance enhances the clarity of the ridge and valley structures of fingerprint images, taking into account the estimation of the orientation and frequency of nearby ridges. On the computer side, a computer and its components, like a human, have unique attributes.
Disease diagnosis with computer-aided methods has been extensively studied and applied in diagnosing and monitoring several chronic diseases. Early detection and risk assessment of breast disease based on clinical data helps doctors make an early diagnosis and monitor disease progression. The purpose of this study is to exploit a Convolutional Neural Network (CNN) to discriminate breast MRI scans into pathological and healthy. This study presents a fully automated and efficient deep-feature extraction algorithm that exploits the spatial information obtained from both T2W-TSE and STIR MRI sequences to discriminate between pathological and healthy breast MRI scans. The breast MRI scans are preprocessed prior to feature extraction.
Cryptography can be thought of as a toolbox in which potential attackers gain access to various computing resources and technologies to try to compute key values. In modern cryptography, the strength of an encryption algorithm is determined only by the size of the key. Therefore, our goal is to create a strong key value with a minimum bit length that will be useful in lightweight encryption. Using elliptic curve cryptography (ECC) with a Rubik's cube and image density, the image colors are combined and distorted, and by using a chaotic logistic map and image density with a secret key, the Rubik's cubes for the image are encrypted, yielding an image secure against attacks. ECC itself is a powerful algorithm that generates a pair of public and private keys.
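The chaotic-logistic-map stage described above can be sketched as a keystream generator XORed with the image pixels. This illustrates only that one stage, not the ECC key exchange or the Rubik's-cube scrambling, and the key values are illustrative.

```python
import numpy as np

def logistic_keystream(n, x0=0.543, r=3.99):
    """Generate n pseudo-random bytes from the chaotic logistic map
    x_{k+1} = r * x_k * (1 - x_k); x0 and r act as the secret key."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_image(img, key):
    """Encrypt or decrypt an 8-bit image with the logistic keystream."""
    flat = img.reshape(-1)
    return (flat ^ logistic_keystream(flat.size, *key)).reshape(img.shape)

key = (0.543, 3.99)              # illustrative secret key
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
cipher = xor_image(img, key)
plain = xor_image(cipher, key)   # XOR with the same keystream inverts it
```

Because the logistic map is extremely sensitive to x0 and r, even a tiny error in the key produces an entirely different keystream, which is what makes chaotic maps attractive for lightweight image encryption.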