Iris recognition ranks high among biometric approaches as a result of its accuracy and efficiency. The aim of this paper is to propose an improved iris identification system based on the fusion of the scale-invariant feature transform (SIFT) and local binary patterns (LBP) for feature extraction. Several steps are applied. First, the input image is converted to grayscale. Second, the iris is localized using the circular Hough transform. Third, normalization maps the iris region from Cartesian to polar coordinates using Daugman's rubber sheet model, followed by histogram equalization to enhance the iris region. Finally, features are extracted using SIFT and LBP, with the sigma and threshold values that achieved the highest recognition rate. The system was implemented in MATLAB 2013, and matching was performed using the city block distance. The iris recognition system was built using iris images of 30 individuals from the CASIA v4.0 database; each individual has 20 captures covering the left and right eyes, for a total of 600 images. The main findings show that the recognition rates of the proposed system are 98.67% for left eyes and 96.66% for right eyes among the thirty subjects.
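A minimal Python sketch of the pipeline described above (the paper used MATLAB 2013, so the OpenCV/scikit-image calls, the assumed pupil-radius ratio, and the parameter values below are illustrative assumptions, not the paper's settings):

```python
import cv2
import numpy as np
from scipy.spatial.distance import cityblock
from skimage.feature import local_binary_pattern

def localize_iris(gray):
    """Locate the iris boundary with the circular Hough transform."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=gray.shape[0] // 2,
                               param1=100, param2=30,
                               minRadius=20, maxRadius=120)
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest circle
    return x, y, r

def normalize_iris(gray, x, y, r, out_h=64, out_w=256):
    """Daugman-style rubber-sheet normalization: unwrap the annular iris
    region into a fixed-size rectangular (polar) strip."""
    thetas = np.linspace(0, 2 * np.pi, out_w)
    radii = np.linspace(0.3 * r, r, out_h)          # assumed pupil ratio
    xs = (x + np.outer(radii, np.cos(thetas))).astype(np.float32)
    ys = (y + np.outer(radii, np.sin(thetas))).astype(np.float32)
    strip = cv2.remap(gray, xs, ys, cv2.INTER_LINEAR)
    return cv2.equalizeHist(strip)                  # histogram equalization

def extract_features(strip, sigma=1.6, n_points=8, radius=1):
    """Fuse SIFT descriptors with an LBP histogram; sigma is the tunable
    SIFT smoothing parameter mentioned in the abstract."""
    sift = cv2.SIFT_create(sigma=sigma)
    _, desc = sift.detectAndCompute(strip, None)
    sift_vec = desc.mean(axis=0) if desc is not None else np.zeros(128)
    lbp = local_binary_pattern(strip, n_points, radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=n_points + 2,
                               range=(0, n_points + 2), density=True)
    return np.concatenate([sift_vec, lbp_hist])

def match(f1, f2):
    """City block (L1) distance between fused feature vectors."""
    return cityblock(f1, f2)
```

Matching two eyes then reduces to comparing their fused vectors with `match(f1, f2)` and accepting the identity with the smallest distance.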
A total of 41 patients with gastroduodenal symptoms (signs of inflammation, with or without duodenal ulcer) were studied: 21 males (51.2%) and 20 females (48.8%), aged 20 to 80 years, undergoing gastrointestinal endoscopy at the internal disease clinical laboratory of Baghdad Teaching Hospital between February and June 2009. Biopsy specimens of the antrum, gastric fundus, and duodenal bulb were examined by the following methods: rapid urease test, Giemsa-stained sections to detect the bacteria, and haematoxylin and eosin-stained sections for pathological study, which are considered the gold-standard methods. Sera or plasma from these patients were tested by immunochromatography (ICM), serological m…
Liver disease can be defined as a tumor or disorder that affects the liver and deforms its shape. Early detection and diagnosis of the tumor using CT medical images helps the examiner characterize the tumor precisely. This study aims to detect and classify liver tumors using computer-aided image processing and textural analysis to obtain an accurate diagnosis. The method creates a binary mask to separate the liver from the other organs in the CT images. Thresholding is used as an initial segmentation, and the watershed transform is then used as a classification technique to isolate the tumor as either cancer or cyst.
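A brief sketch of how such a threshold-plus-watershed stage could look in Python with scikit-image (the paper gives no code; the Otsu threshold, the morphology sizes, and the peak spacing here are assumptions):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation, feature

def segment_liver_lesions(ct_slice):
    """Threshold-based binary mask followed by watershed separation.
    `ct_slice` is a 2-D grayscale CT slice."""
    # 1. Initial segmentation: Otsu threshold as a binary mask.
    mask = ct_slice > filters.threshold_otsu(ct_slice)
    mask = morphology.remove_small_objects(mask, min_size=500)
    mask = ndi.binary_fill_holes(mask)

    # 2. Watershed on the distance transform, so touching regions split.
    distance = ndi.distance_transform_edt(mask)
    peaks = feature.peak_local_max(distance, labels=mask, min_distance=20)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return segmentation.watershed(-distance, markers, mask=mask)
```

Classifying each isolated region as cancer or cyst would then rely on the textural analysis the paper mentions; that stage is not sketched here.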
 
Face detection is one of the important applications of biometric technology and image processing. Convolutional neural networks (CNNs) have been used with great success in image processing and pattern recognition. In recent years, deep learning techniques, specifically CNNs, have achieved remarkable accuracy in face detection. This study therefore provides a comprehensive analysis of face detection research and applications that use various CNN methods and algorithms. The paper presents ten of the most recent studies and illustrates the performance achieved by each method.
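For illustration, a minimal CNN patch classifier of the kind these detectors build on (a hypothetical PyTorch sketch, not any of the ten surveyed methods):

```python
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    """Minimal CNN scoring a patch as face / non-face; a sliding-window
    or region-proposal stage would turn this into a full detector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 4 * 4, 2)  # for 32x32 inputs

    def forward(self, x):
        x = self.features(x)            # (N, 64, 4, 4) for 32x32 input
        return self.classifier(x.flatten(1))

patch = torch.randn(1, 3, 32, 32)       # one RGB candidate window
logits = TinyFaceNet()(patch)            # face vs. non-face scores
```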
Secure data transmission over the internet can be achieved using steganography, the art and science of concealing information in unremarkable cover media so as not to arouse an observer's suspicion. In this paper, the color cover image is divided into four equal parts, and for each part one channel (red, green, or blue) is selected, depending on which color has the highest ratio in that part. The chosen part is decomposed into four sub-bands {LL, HL, LH, HH} using the discrete wavelet transform. The hidden image is divided into four n×n parts, and the DCT is applied to each part. Finally, the four DCT coefficient parts are embedded in the four high-frequency {HH} sub-bands…
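A hedged sketch of the embedding step for one cover part, using PyWavelets and SciPy (the wavelet choice, the embedding strength `alpha`, and the size handling are my assumptions; the excerpt above does not specify them):

```python
import numpy as np
import pywt
from scipy.fft import dctn

def embed_part(cover_channel, secret_part, alpha=0.05):
    """Embed one DCT-transformed secret part into the HH (diagonal)
    sub-band of the cover channel's single-level DWT."""
    LL, (LH, HL, HH) = pywt.dwt2(cover_channel.astype(float), 'haar')
    secret_dct = dctn(secret_part.astype(float), norm='ortho')
    # Assumes the secret part is at least as large as the HH sub-band.
    HH_stego = HH + alpha * secret_dct[:HH.shape[0], :HH.shape[1]]
    return pywt.idwt2((LL, (LH, HL, HH_stego)), 'haar')
```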
Background: The present article falls within the scope of forensic medicine, specifically forensic genetics. The case was handled in the Genetic-Molecular Laboratory of the Odessa Regional Bureau of Forensic-Medical Examinations, in Ukraine, during January and February of 2014.
Objectives: The aim of our work was to identify an unknown person using Y-chromosome markers and mitochondrial DNA typing.
Materials and methods: The materials available for our procedure were pieces of tissue in paraffin blocks, saved from the corpse of the unknown person; blood from a living male subject, who claimed to be the grandfather; and blood from two females, allegedly the sisters. From all of them we extracted nuclear DNA.
Cryptography is a method used to mask text with an encryption method so that only the authorized user can decrypt and read the message. An intruder may attack in many ways to access the communication channel, such as impersonation, repudiation, denial of service, modification of data, threatening confidentiality, and breaking the availability of services. The high volume of electronic communication between people makes it necessary to ensure that transactions remain confidential, and cryptographic methods give the best solution to this problem. This paper proposes a new cryptography method based on Arabic words, carried out in two steps, where the first step is binary encoding generation used t…
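Since the description of the first step is cut off above, the following is only a hypothetical illustration of what a binary encoding of Arabic text could look like; the codebook is mine, not the paper's:

```python
# Hypothetical binary-encoding step: map each Arabic character to its
# UTF-8 bit string (the paper's actual encoding scheme is truncated above).
def arabic_to_bits(word: str) -> str:
    return ''.join(f'{byte:08b}' for byte in word.encode('utf-8'))

bits = arabic_to_bits('سلام')   # 4 characters -> a 64-bit string
```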
Predicting the network traffic of web pages is an area that has received increasing focus in recent years. Modeling traffic helps find strategies for distributing network loads, identifying user behaviors and malicious traffic, and predicting future trends. Many statistical and intelligent methods have been studied for predicting web traffic from network traffic time series. In this paper, the use of machine learning algorithms to model Wikipedia traffic using Google's time series dataset is studied. Two time series datasets were used for data generalization, a set of machine learning models (XGBoost, logistic regression, linear regression, and random forest) was built, and the performance of the models was compared using the symmetric mean absolute percentage error (SMAPE) and…
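A small sketch of such an evaluation loop, with SMAPE defined explicitly (synthetic stand-in data and an assumed lag window; XGBoost and the other models would plug into the same loop):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2
    return 100 * np.mean(np.abs(forecast - actual) / denom)

def lag_features(series, n_lags=7):
    """Previous n_lags values predict the next value."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

views = np.random.poisson(1000, 200).astype(float)  # stand-in page views
X, y = lag_features(views)
split = int(0.8 * len(X))
for model in (LinearRegression(), RandomForestRegressor(n_estimators=100)):
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print(type(model).__name__, f"SMAPE = {smape(y[split:], pred):.2f}%")
```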
Speech is the essential way for humans to interact with each other or with machines. However, it is always contaminated with different types of environmental noise, so speech enhancement algorithms (SEAs) have emerged as a significant approach in speech processing for suppressing background noise and recovering the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed, based on the minimum mean square error (MMSE) sense. The clean signal is estimated by taking advantage of Laplacian modeling of the speech and noise coefficient distributions in an orthogonal transform domain, the discrete Krawtchouk-Tchebichef transform. The discrete Krawtchouk-Tchebichef transform…
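A simplified sketch of MMSE-style enhancement, with the STFT standing in for the discrete Krawtchouk-Tchebichef transform and a Wiener-type gain standing in for the Laplacian-model estimator (both substitutions are mine, not the paper's):

```python
import numpy as np
from scipy.signal import stft, istft

def mmse_enhance(noisy, fs, noise_frames=10):
    """Gain-based spectral enhancement: estimate the noise PSD from an
    assumed noise-only lead-in, then apply an MMSE-motivated gain."""
    f, t, Y = stft(noisy, fs, nperseg=512)
    noise_psd = np.mean(np.abs(Y[:, :noise_frames]) ** 2,
                        axis=1, keepdims=True)
    snr_post = np.abs(Y) ** 2 / noise_psd       # a posteriori SNR
    prio = np.maximum(snr_post - 1, 1e-3)       # a priori SNR (ML estimate)
    gain = prio / (prio + 1)                    # Wiener-type MMSE gain
    _, clean = istft(gain * Y, fs, nperseg=512)
    return clean
```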