Iris recognition occupies an important rank among biometric approaches owing to its accuracy and efficiency. This paper proposes an iris identification system based on the fusion of the scale-invariant feature transform (SIFT) and local binary patterns (LBP) for feature extraction. Several steps were applied. First, each input image was converted to grayscale. Second, the iris was localized using the circular Hough transform. Third, the segmented iris was normalized by remapping it from Cartesian to polar coordinates with Daugman's rubber sheet model, followed by histogram equalization to enhance the iris region. Finally, features were extracted using SIFT and LBP, with the sigma and threshold values that achieved the highest recognition rate. The system was implemented in MATLAB 2013, and matching was performed with the city block distance. The iris recognition system was built using iris images of 30 individuals from the CASIA v4.0 database, each with 20 captures of the left and right eyes, for a total of 600 images. The main findings show that the proposed system achieves recognition rates of 98.67% for left eyes and 96.66% for right eyes across the thirty subjects.
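As a minimal sketch of the normalization and matching stages described above (not the paper's MATLAB implementation), the rubber-sheet remapping, a basic LBP histogram, and the city block distance can be written in NumPy. The circle parameters, grid sizes, and the synthetic test image are illustrative assumptions:

```python
import numpy as np

def rubber_sheet(img, cx, cy, r_pupil, r_iris, n_theta=64, n_r=16):
    """Daugman rubber-sheet model: map the annular iris region between the
    pupil and iris circles onto a fixed-size rectangular (polar) grid."""
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(0, 1, n_r)
    out = np.zeros((n_r, n_theta))
    for i, r in enumerate(radii):
        for j, t in enumerate(thetas):
            # Interpolate radially between the pupil and iris boundaries.
            rad = r_pupil + r * (r_iris - r_pupil)
            x = cx + rad * np.cos(t)
            y = cy + rad * np.sin(t)
            out[i, j] = img[int(round(y)) % img.shape[0],
                            int(round(x)) % img.shape[1]]
    return out

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern, pooled into a 256-bin histogram."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nbr = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nbr >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def cityblock(h1, h2):
    """City block (L1) distance used for matching feature histograms."""
    return np.abs(h1 - h2).sum()

# Illustrative usage on a synthetic grayscale image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
template = lbp_histogram(rubber_sheet(img, 32, 32, 5, 20))
```

A real system would compute `template` once per enrolled eye and accept a probe when its distance to the stored template falls below a threshold.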
Due to advancements in computer science and technology, impersonation has become more common. Today, biometrics technology is widely used in various aspects of people's lives. Iris recognition, known for its high accuracy and speed, is a significant and challenging field of study. As a result, iris recognition technology and biometric systems are used for security in numerous applications, including human-computer interaction and surveillance systems, and it is crucial to develop advanced models to combat impersonation crimes. This study proposes artificial intelligence models with high accuracy and speed to counter these crimes. The models use linear discriminant analysis (LDA) for feature extraction and mutual info
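To illustrate the LDA feature-extraction step named above (a generic two-class Fisher LDA sketch, not the paper's model), the discriminant direction can be computed in plain NumPy; the synthetic feature vectors and the ridge term are illustrative assumptions:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: w is proportional to Sw^{-1} (mu1 - mu0),
    where Sw is the within-class scatter."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # Small ridge keeps the solve stable when Sw is near-singular.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
    return w / np.linalg.norm(w)

# Synthetic stand-ins for two subjects' iris feature vectors.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (50, 5))
X1 = rng.normal(2.0, 1.0, (50, 5))
w = fisher_lda_direction(X0, X1)
# Projecting onto w separates the two classes along a single axis.
```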
Breast cancer is one of the most common malignant diseases among women. Mammography is at present one of the available methods for the early detection of abnormalities related to breast cancer. Different lesions characteristic of breast cancer, such as masses and calcifications, can be detected through this technique. This paper proposes a computer-aided diagnostic system for the extraction of features such as masses and calcification lesions in mammograms for the early detection of breast cancer. The proposed technique is based on a two-step procedure: (a) an unsupervised segmentation method comprising two stages performed using the minimum distance (MD) criterion, and (b) feature extraction based on Gray
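A minimal sketch of the minimum-distance criterion used in step (a) (assuming it assigns each pixel to the class with the nearest mean intensity; the toy patch and class means below are illustrative, not from the paper):

```python
import numpy as np

def minimum_distance_segment(img, means):
    """Assign every pixel to the class whose mean intensity is nearest
    (the minimum-distance criterion); returns an integer label map."""
    img = np.asarray(img, dtype=float)
    # |pixel - mean_k| for every class k, then pick the smallest.
    d = np.abs(img[..., None] - np.asarray(means, dtype=float))
    return d.argmin(axis=-1)

# Toy mammogram patch: dark background (~10), tissue (~100), bright lesion (~220).
patch = np.full((8, 8), 10.0)
patch[2:6, 2:6] = 100.0
patch[3:5, 3:5] = 220.0
labels = minimum_distance_segment(patch, means=[10, 100, 220])
print(labels[0, 0], labels[2, 2], labels[3, 3])  # → 0 1 2
```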
DeepFake is a concern for celebrities and everyone else because DeepFakes are simple to create. DeepFake images, especially high-quality ones, are difficult to detect by humans, by local descriptors, and by current approaches. On the other hand, video manipulation detection is more tractable than image manipulation detection, and many state-of-the-art systems address it; moreover, detecting video manipulation depends entirely on detecting manipulation in its constituent images. Many works have addressed DeepFake detection in images, but they involve complex mathematical calculations in their preprocessing steps and have many limitations, including that the face must be frontal, the eyes must be open, and the mouth should be open with the teeth visible. Also, the accuracy of their counterfeit detectio
Edge computing has proved to be an effective solution for Internet of Things (IoT)-based systems. Bringing resources closer to the end devices improves network performance and reduces the load on the cloud. On the other hand, edge computing has constraints related to the amount of resources available on the edge servers, which is limited compared with the cloud. In this paper, we propose a Software-Defined Networking (SDN)-based resource allocation and service placement system for multi-edge networks that serve multiple IoT applications. In this system, the resources of the edge servers are monitored using the proposed Edge Server Application (ESA) to determine the state of the edge s
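A toy sketch of the placement decision such a system must make (the server names, resource fields, and the "most free CPU, else cloud" policy are illustrative assumptions, not the paper's ESA algorithm):

```python
# Toy resource state per edge server: free CPU cores and free RAM (MB).
edge_servers = {
    "edge-1": {"cpu": 2, "ram": 1024},
    "edge-2": {"cpu": 8, "ram": 4096},
}

def place_service(servers, cpu_req, ram_req):
    """Pick the feasible edge server with the most free CPU; fall back to
    the cloud when no edge server can host the service."""
    feasible = [(name, res) for name, res in servers.items()
                if res["cpu"] >= cpu_req and res["ram"] >= ram_req]
    if not feasible:
        return "cloud"
    name, res = max(feasible, key=lambda nr: nr[1]["cpu"])
    # Reserve the resources on the chosen server.
    res["cpu"] -= cpu_req
    res["ram"] -= ram_req
    return name

print(place_service(edge_servers, 4, 2048))   # → edge-2
print(place_service(edge_servers, 16, 8192))  # → cloud
```

In an SDN setting, the controller would run this logic centrally, with the monitored resource state pushed to it by each edge server.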
Surface water samples from different locations within the Tigris River's boundaries in Baghdad city were analyzed for drinking purposes. Correlation coefficients among different parameters were determined. An attempt was made to develop linear regression equations to predict the concentrations of water quality constituents having significant correlation coefficients with electrical conductivity (EC). This study aims to produce and validate five regression models using electrical conductivity as a predictor of total hardness (TH), calcium (Ca), chloride (Cl), sulfate (SO4), and total dissolved solids (TDS). The five models showed good to excellent prediction ability for the parameters mentioned above, which is a very
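A sketch of fitting one such single-predictor model by least squares (the (EC, TDS) pairs and the 0.65 slope below are fabricated for illustration only; the real coefficients must come from the river data):

```python
import numpy as np

# Hypothetical (EC, TDS) pairs, constructed here so that TDS = 0.65 * EC.
ec  = np.array([400., 500., 600., 700., 800.])
tds = np.array([260., 325., 390., 455., 520.])

# Fit TDS = a * EC + b by ordinary least squares.
A = np.vstack([ec, np.ones_like(ec)]).T
(a, b), *_ = np.linalg.lstsq(A, tds, rcond=None)
print(f"slope a = {a:.2f}")  # recovers a ≈ 0.65, intercept b ≈ 0
```

The same fit, repeated per constituent (TH, Ca, Cl, SO4, TDS), yields the five EC-based regression models the abstract describes.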
Classification of imbalanced data is an important issue. Many algorithms have been developed for classification, such as Back Propagation (BP) neural networks, decision trees, and Bayesian networks, and they have been used repeatedly in many fields. These algorithms suffer from the problem of imbalanced data, in which some classes have many more instances than others. Imbalanced data result in poor performance and a bias toward one class at the expense of the others. In this paper, we propose three techniques based on the Over-Sampling (O.S.) approach for processing an imbalanced dataset, redistributing it, and converting it into a balanced dataset. These techniques are (Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border
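For context, the baseline SMOTE idea that these techniques build on can be sketched in a few lines: synthesize a new minority sample by interpolating between a minority point and one of its nearest minority neighbours. This is a generic sketch, not the paper's improved variant; the toy minority set and `k` are assumptions:

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE: create n_new synthetic minority samples, each on the
    segment between a random minority point and one of its k nearest
    minority neighbours."""
    rng = rng or np.random.default_rng(0)
    X_min = np.asarray(X_min, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                 # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Four minority points at the corners of the unit square.
X_minority = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
X_syn = smote(X_minority, n_new=6)
print(X_syn.shape)  # → (6, 2)
```

Because every synthetic point lies on a segment between existing minority points, the oversampled class stays inside the region the minority class already occupies.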
Opportunistic fungal infections, due to the immune-compromised status of renal transplant patients, are associated with high rates of morbidity and mortality despite their low incidence. Delay in the identification of invasive fungal infections (IFIs) leads to delayed treatment and results in high mortality in this population. The study aimed to assess the frequency of invasive fungal infection in kidney transplant recipients by conventional and molecular methods. This study included 100 kidney transplant recipients (KTR) (75 males and 25 females), recruited from the Centre of Kidney Diseases and Transplantation in the Medical City of Baghdad. Blood samples were collected during the period from June 2018 to April 2019. Twent
New data for fusion power density have been obtained for the T-3He and T-T fusion reactions. Power density is a substantial quantity in research on fusion energy generation and in ignition calculations for magnetically confined systems. In the current work, thermonuclear reactivities, fusion-reactor power densities, and the ignition condition are evaluated using a new and accurate cross-section formula. The maximum fusion power densities for the T-3He and T-T reactions are 1.1×10^7 W/m^3 at T = 700 keV and 4.7×10^6 W/m^3 at T = 500 keV, respectively, while Z_eff is suggested to be 1.44 for the two reactions. Bremsstrahlung radiation has also been determined with a view to reaching self-sustaining reactors; the Bremsstrahlung values are 4.5×
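The power-density quantity above follows the standard form P = n1 n2 ⟨σv⟩ E_fus for two distinct reacting species. The sketch below evaluates that formula with placeholder densities, reactivity, and energy release; these values are illustrative only and do not reproduce the paper's cross-section parameterisation or its quoted results:

```python
# Fusion power density P = n1 * n2 * <sigma v> * E_fus (distinct species).
# All numerical values below are illustrative placeholders.
E_FUS_J  = 12.1e6 * 1.602e-19  # assumed energy release per reaction, J
n1 = n2  = 1.0e20              # assumed ion densities, m^-3
sigma_v  = 1.0e-22             # assumed reactivity <sigma v>, m^3/s

P = n1 * n2 * sigma_v * E_FUS_J  # W/m^3
print(f"P = {P:.3e} W/m^3")
```

For identical reactants (as in T-T), the n1*n2 factor is replaced by n^2/2 to avoid double-counting reacting pairs.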
Governmental establishments maintain historical data on job applicants for future analysis, prediction, improvement of benefits and profits, and the development of organizations and institutions. In e-government, a decision can be made about job seekers after mining their information, leading to beneficial insight. This paper proposes the development and implementation of a system that predicts the job appropriate to an applicant's skills using web content classification algorithms (LogitBoost, J48, PART, Hoeffding Tree, Naive Bayes). Furthermore, the results of the classification algorithms are compared on datasets called the "job classification data" sets. Experimental results indicate
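As a minimal stand-in for one of the compared classifiers (a tiny Gaussian naive Bayes; the synthetic two-class data are an assumption, not the "job classification data" sets):

```python
import numpy as np

class GaussianNB:
    """Tiny Gaussian naive Bayes: per-class feature means/variances plus a
    class prior, combined via log-likelihoods at prediction time."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self
    def predict(self, X):
        # Log-likelihood of each sample under each class's Gaussian model.
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[(ll + np.log(self.prior)).argmax(axis=1)]

# Two well-separated synthetic classes of "applicant feature" vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(3, 1, (40, 3))])
y = np.repeat([0, 1], 40)
acc = (GaussianNB().fit(X, y).predict(X) == y).mean()
```

Each algorithm in the paper's list would be trained and scored the same way, and the accuracies compared across the datasets.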