The primary objective of this paper is to improve a biometric authentication and classification model using the ear as a distinct part of the face, since the ear is unchanged with time and unaffected by facial expressions. The proposed model is a new scenario for enhancing ear recognition accuracy by modifying the AdaBoost algorithm to optimize adaptive learning. To overcome the limitations of image illumination, occlusion, and image registration, the Scale-Invariant Feature Transform (SIFT) technique was used to extract features. Several consecutive phases were used to improve classification accuracy: image acquisition, preprocessing, filtering, smoothing, and feature extraction. To assess the proposed method's performance, the classification accuracy was compared using different types of classifiers: Naïve Bayesian, KNN, J48, and SVM. The identification accuracy for all the processed databases using the proposed scenario ranges between 93.8% and 97.8%. The system was implemented using MATLAB R2017 on a 2.10 GHz processor with 4 GB RAM.
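As a minimal illustrative sketch of the kind of pipeline described above (not the paper's exact method): SIFT descriptors are pooled into one fixed-length vector per ear image by averaging, and a standard AdaBoost classifier is trained on them. The synthetic images, labels, and the mean-pooling step are assumptions made only for the example.

```python
import cv2                                  # opencv-python (>= 4.4) provides SIFT_create
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

sift = cv2.SIFT_create()

def sift_feature_vector(gray_img):
    """Pool an image's 128-D SIFT descriptors into one fixed-length vector by averaging."""
    _, desc = sift.detectAndCompute(gray_img, None)
    if desc is None:                        # no keypoints detected
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)

# Placeholder data: in practice these would be preprocessed ear images and subject IDs.
images = [np.random.randint(0, 256, (120, 90), dtype=np.uint8) for _ in range(20)]
labels = [i % 4 for i in range(20)]         # four hypothetical subjects

X = np.vstack([sift_feature_vector(im) for im in images])
clf = AdaBoostClassifier(n_estimators=100)  # plain AdaBoost, not the paper's modified variant
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```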
Software testing is a vital part of the software development life cycle. In many cases, the system under test has more than one input, making exhaustive testing of every combination impractical (i.e., the execution time of the test suite can be prohibitively long). Combinatorial testing offers an alternative to exhaustive testing by considering the interaction of input values for every t-way combination of parameters. Combinatorial testing can be divided into three types: uniform strength interaction, variable strength interaction, and input-output based relation (IOR). IOR combinatorial testing only tests the important combinations selected by the tester. Most of the research in combinatorial testing ...
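A minimal illustration of what "t-way interaction coverage" means (this enumerates the interactions to cover, it is not a covering-array generator): for t = 2, every pair of parameters must have every combination of their values exercised by at least one test case. The parameter names and values are made up for the example.

```python
from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux", "macOS"],
    "network": ["wifi", "ethernet"],
}

t = 2
required = set()
for names in combinations(parameters, t):                 # every t-subset of parameters
    for values in product(*(parameters[n] for n in names)):
        required.add(tuple(zip(names, values)))           # one t-way interaction to cover

print(len(required), "pairwise interactions must be covered")
# Exhaustive testing needs 2 * 3 * 2 = 12 test cases; a 2-way covering array
# for these parameters typically needs fewer.
```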
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images. It is a challenging task to abstract the visual features of images with the support of textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, colors, boundaries, and shapes. In this paper, an effective CBIC technique is presented, which uses texture and statistical features of the images. The statistical features, or color moments (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected into a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering.
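A short sketch of the statistical colour-moment features named above: mean, standard deviation, variance, skewness, and kurtosis are computed per colour channel and concatenated into a single one-dimensional feature vector. The GA clustering step itself is not shown, and the random image is only a placeholder.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def color_moment_vector(rgb_img):
    """rgb_img: H x W x 3 uint8 array -> 15-element vector (5 moments x 3 channels)."""
    feats = []
    for c in range(3):
        ch = rgb_img[:, :, c].astype(np.float64).ravel()
        feats += [ch.mean(), ch.std(), ch.var(), skew(ch), kurtosis(ch)]
    return np.asarray(feats)

# Example with a random placeholder image
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(color_moment_vector(img))
```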
Many studies have been published to address the growing issues in wireless communication systems. Space-Time Block Coding (STBC) is an effective and practical MIMO-OFDM application that can address such issues. It is a powerful tool for increasing wireless performance by coding data symbols and transmitting diversity using several antennas. The most significant challenge is to recover the transmitted signal through a time-varying multipath fading channel and to obtain a precise channel estimation to recover the transmitted information symbols. This work considers different pilot patterns for channel estimation and equalization in MIMO-OFDM systems. The pilot patterns fall under two general types, comb and block types, with ...
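A simplified, single-antenna sketch of comb-type pilot-aided channel estimation (the MIMO/STBC details of the paper are omitted): known pilots are placed on every 4th subcarrier, the channel is estimated there by least squares, and interpolated over the data subcarriers before zero-forcing equalisation. Pilot spacing, pilot value, and the toy channel are assumptions for the example.

```python
import numpy as np

N = 64                                    # subcarriers per OFDM symbol
pilot_idx = np.arange(0, N, 4)            # comb-type pattern: every 4th subcarrier
pilot_val = 1 + 1j                        # known pilot symbol (assumed)

X = np.random.choice([-1, 1], N) + 1j * np.random.choice([-1, 1], N)   # QPSK-like data
X[pilot_idx] = pilot_val

h = np.array([0.8, 0.4, 0.2])             # toy multipath channel impulse response
H = np.fft.fft(h, N)                      # true frequency response
Y = H * X + 0.05 * (np.random.randn(N) + 1j * np.random.randn(N))      # received + noise

H_ls = Y[pilot_idx] / X[pilot_idx]        # LS estimate at pilot positions
H_hat = np.interp(np.arange(N), pilot_idx, H_ls.real) \
        + 1j * np.interp(np.arange(N), pilot_idx, H_ls.imag)           # linear interpolation

X_eq = Y / H_hat                          # zero-forcing equalisation of the data symbols
print("mean channel estimation error:", np.mean(np.abs(H - H_hat)))
```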
Fatigue cracking is the most common distress in road pavement. It is mainly due to the increase in the number of load repetitions of vehicles, particularly those with high axle loads, and to environmental conditions. In this study, four-point bending beam fatigue testing has been used for control and modified mixtures under various microstrain levels (250 μƐ, 400 μƐ, and 750 μƐ) at 5 Hz. The main objective of the study is to provide a comparative evaluation of pavement resistance to fatigue cracking between modified and conventional asphalt concrete mixes (under the influence of three percentages of silica fume: 1%, 2%, and 3% by weight of asphalt content), and (chan...
In this paper, Azzalini's method is used to find a weighted distribution derived from the standard Pareto distribution of type I (SPDTI) by inserting the shape parameter (θ) resulting from that method to cover the period (0, 1], which is neglected by the standard distribution. Thus, the proposed distribution is a modification of the Pareto distribution of the first type, where the probability of the random variable lies within that period. The properties of the modified weighted Pareto distribution of type I (MWPDTI), namely the probability density function, cumulative distribution function, reliability function, moments, and the hazard function, are found. The behaviour of the probability density function for the MWPDTI distrib...
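For reference, a sketch of the ingredients named above; the paper's specific weight function and the resulting MWPDTI density are not reproduced here, so the weighted form below is only the generic Azzalini-type construction.

```latex
% Standard Pareto distribution of type I with shape \alpha and scale x_m:
f(x) = \frac{\alpha x_m^{\alpha}}{x^{\alpha+1}}, \qquad
F(x) = 1 - \left(\frac{x_m}{x}\right)^{\alpha}, \qquad x \ge x_m .

% Generic Azzalini-type weighting of a base density f with cdf F and shape
% parameter \theta (the paper's exact construction may differ):
f_{w}(x) = 2\, f(x)\, F(\theta x),
% from which the reliability function R(x) = 1 - F_{w}(x) and the hazard
% function h(x) = f_{w}(x)/R(x) of the weighted distribution follow.
```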
The important aspect of this unconventional approach is that an eco-friendly, commercially available, and straightforward method was used to prepare silver nanoparticles using AgNO3 and a curcumin solution as the reducing agent. Transmission electron microscopy (TEM), X-ray diffraction (XRD), and Fourier-transform infrared spectroscopy (FTIR) were used to characterise these silver nanoparticles (AgNPs). Two types of bacterial isolates were used to assess the antibacterial activity of the silver nanoparticles prepared with the curcumin solution: Gram-negative (Escherichia coli) and Gram-positive (Staphylococcus aureus). The results show that the silver nanoparticles synthesized with the curcumin solution have effective antibacterial activity.
In this article, we aim to define a universal set consisting of the subscripts of the fuzzy differential equation (5), except the two elements and . Subsets of that universal set are defined according to certain conditions. Then, we use the constructed universal set with its subsets to suggest an analytical method that facilitates solving fuzzy initial value problems of any order by using strongly generalized H-differentiability. Also, valid sets with graphs for solutions of fuzzy initial value problems of higher orders are found.
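A minimal numerical sketch (not the paper's method) of how a first-order fuzzy initial value problem is handled under the first form of strongly generalized H-differentiability: for y'(t) = y(t) with a triangular fuzzy initial value, each alpha-cut decouples into two crisp ODEs for the lower and upper endpoints, integrated here with simple Euler steps. The initial value (0.8, 1.0, 1.2) and step count are assumptions for the example.

```python
import numpy as np

def solve_alpha_cut(alpha, t_end=1.0, steps=1000):
    a, b, c = 0.8, 1.0, 1.2                  # triangular fuzzy initial value (assumed)
    y_low = a + alpha * (b - a)              # lower endpoint of the alpha-cut
    y_up  = c - alpha * (c - b)              # upper endpoint of the alpha-cut
    dt = t_end / steps
    for _ in range(steps):                   # under (i)-differentiability: y_low' = y_low, y_up' = y_up
        y_low += dt * y_low
        y_up  += dt * y_up
    return y_low, y_up

for alpha in (0.0, 0.5, 1.0):
    lo, up = solve_alpha_cut(alpha)
    exact_lo = (0.8 + alpha * 0.2) * np.e    # exact solution: endpoint * e^t at t = 1
    exact_up = (1.2 - alpha * 0.2) * np.e
    print(f"alpha={alpha}: [{lo:.4f}, {up:.4f}]  (exact: [{exact_lo:.4f}, {exact_up:.4f}])")
```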
A common problem facing many application models is to extract and combine information from multiple, heterogeneous sources and to derive information of a new quality or abstraction level. New approaches for managing the consistency, uncertainty, or quality of Arabic data and enabling efficient analysis of distributed, heterogeneous sources are still required. This paper presents a new method that combines two algorithms (partitioning and grouping) to transform information in a real-time heterogeneous Arabic database environment.
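A purely illustrative sketch of the partition-then-group idea (the paper's actual algorithms are not reproduced): records from heterogeneous sources are first partitioned by source, then grouped under a normalised key so that equivalent Arabic entries from different sources end up together. The record layout and the normalisation rule are hypothetical.

```python
from collections import defaultdict

records = [                                   # hypothetical heterogeneous input
    {"source": "db1", "name": "محمد", "city": "بغداد"},
    {"source": "db2", "name": "محمد ", "city": "بغداد"},
    {"source": "db1", "name": "علي", "city": "البصرة"},
]

# Phase 1: partition the records by their originating source
partitions = defaultdict(list)
for rec in records:
    partitions[rec["source"]].append(rec)

# Phase 2: group across partitions by a normalised key (here: whitespace-stripped name)
groups = defaultdict(list)
for part in partitions.values():
    for rec in part:
        groups[rec["name"].strip()].append(rec)

for key, grp in groups.items():
    print(key, "->", len(grp), "record(s)")
```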