One of the most popular and legally recognized behavioral biometrics is the individual's signature, which is used for verification and identification in many industries, including business, law, and finance. The purpose of signature verification is to distinguish genuine from forged signatures, a task complicated by cultural and personal variation. In forensic handwriting analysis, handwriting features are analyzed, compared, and evaluated to establish whether or not the writing was produced by a known writer. In contrast to other languages, Arabic makes use of diacritics, ligatures, and overlaps that are unique to it. Because offline Arabic signatures lack dynamic information, it is more difficult to attain high verification accuracy. Moreover, the characteristics of Arabic signatures are not very distinct and are subject to a great deal of variation (feature uncertainty). To address this issue, the proposed work offers a novel method for verifying offline Arabic signatures that employs two layers of verification, as opposed to the single level employed by prior attempts or the many classifiers based on statistical learning theory. A static set of signature features is used for layer-one verification. The output of a neutrosophic logic module is used for layer-two verification, with the accuracy depending on the signature characteristics used in the training dataset and on three membership functions that are unique to each signer, based on the degrees of truth, indeterminacy, and falsity of the signature features. The three memberships of the neutrosophic set are more expressive for decision-making than those of fuzzy sets. The developed model is intended to account for several kinds of uncertainty in describing Arabic signatures, including ambiguity, inconsistency, redundancy, and incompleteness. The experimental results show that the verification system works as intended and successfully reduces the false acceptance rate (FAR) and false rejection rate (FRR).
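As a rough illustration of the layer-two idea, the sketch below scores a signature's features with truth, indeterminacy, and falsity memberships and applies a simple accept/reject rule. It is a minimal sketch under assumed per-signer Gaussian-style memberships; the function names, thresholds, and membership shapes are illustrative and are not taken from the paper.

```python
import numpy as np

def neutrosophic_memberships(feature, mu, sigma, spread):
    """Illustrative truth/indeterminacy/falsity memberships for one signature feature.

    mu, sigma : per-signer statistics assumed to be learned from genuine training signatures
    spread    : extra width of the ambiguous band (an assumption, not a quantity from the paper)
    """
    t = np.exp(-0.5 * ((feature - mu) / sigma) ** 2)                 # truth: closeness to the signer's norm
    wide = np.exp(-0.5 * ((feature - mu) / (sigma + spread)) ** 2)   # wider bell around the norm
    i = max(wide - t, 0.0)                                           # indeterminacy: the ambiguous band
    f = 1.0 - t                                                      # falsity: complement of truth
    return t, i, f

def layer_two_decision(features, stats, t_min=0.6, i_max=0.3, f_max=0.4):
    """Accept only when aggregate truth is high and indeterminacy/falsity stay low."""
    T, I, F = zip(*(neutrosophic_memberships(x, m, s, w) for x, (m, s, w) in zip(features, stats)))
    return np.mean(T) >= t_min and np.mean(I) <= i_max and np.mean(F) <= f_max

# Hypothetical usage: two features of a questioned signature against one signer's statistics.
print(layer_two_decision([12.3, 0.87], [(11.9, 0.8, 0.3), (0.9, 0.05, 0.02)]))
```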
Artificial neural networks provide an important and relatively new methodology for building models for analysis, data evaluation, forecasting, and control without depending on a pre-specified model or a classical statistical method describing the behavior of the statistical phenomenon. The methodology works by training on the data until it reaches a robust, near-optimal model of the phenomenon, and the resulting model can then be reused at any time and under different conditions. The Box-Jenkins (ARMAX) approach was used for comparison. The work in this paper relies on the received power to build a robust model for forecasting, analyzing, and controlling the sold power; the received power comes from
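For the Box-Jenkins comparison, a minimal ARMAX fit could look like the sketch below using statsmodels; the synthetic series, exogenous input, and model order are placeholders rather than the data or specification used in the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Placeholder data: a synthetic received-power series with one exogenous regressor.
rng = np.random.default_rng(0)
exog = rng.normal(size=200).reshape(-1, 1)
power = 50 + 0.8 * exog[:, 0] + np.cumsum(rng.normal(scale=0.1, size=200))

# ARMAX(p, q) is ARIMA(p, 0, q) with exogenous inputs; the order here is illustrative only.
fit = ARIMA(power, exog=exog, order=(2, 0, 1)).fit()

# Forecasting ahead needs future values of the exogenous variable (here the last observed ones are reused).
forecast = fit.forecast(steps=5, exog=exog[-5:])
print(fit.aic, forecast)
```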
In this research, we solved the Boltzmann transport equation numerically in order to calculate transport parameters such as the drift velocity W, D/μ (the ratio of the diffusion coefficient to the mobility), and the momentum-transfer collision frequency νm, for the purpose of determining the magnetic drift velocity WM and the magnetic deflection coefficient ψ for low-energy electrons that move in an electric field E crossed with a magnetic field B (E×B) in nitrogen, argon, helium, and their gas mixtures, as a function of E/N (the ratio of the electric field strength to the gas number density), E/P300 (the ratio of the electric field strength to the gas pressure), and D/μ, covering different ranges of E/P300 at a temperature of 300 K (Kelvin). The results show
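The closed-form Langevin (constant collision frequency) relations below give a feel for how the drift velocity, mobility, collision frequency, and crossed-field drift are related; the paper itself obtains these parameters by solving the Boltzmann equation numerically, so this is only an illustrative approximation with assumed input values.

```python
import numpy as np

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def simple_swarm_estimates(W, E, B):
    """Textbook constant-collision-frequency (Langevin) estimates for an E x B electron swarm.

    W : drift velocity without magnetic field (m/s)
    E : electric field strength (V/m)
    B : magnetic flux density (T)
    """
    mu = W / E                                       # mobility
    nu_m = E_CHARGE * E / (M_ELECTRON * W)           # momentum-transfer collision frequency (momentum balance)
    omega = E_CHARGE * B / M_ELECTRON                # cyclotron frequency
    ratio = omega / nu_m
    W_parallel = mu * E / (1.0 + ratio**2)           # drift component along E
    W_ExB = mu * E * ratio / (1.0 + ratio**2)        # drift component along E x B
    W_M = np.hypot(W_parallel, W_ExB)                # magnitude of the magnetic drift velocity
    return {"mu": mu, "nu_m": nu_m, "W_M": W_M,
            "deflection_angle_deg": np.degrees(np.arctan2(W_ExB, W_parallel))}

# Hypothetical operating point, for illustration only.
print(simple_swarm_estimates(W=5.0e4, E=1.0e3, B=1.0e-3))
```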
In many video and image processing applications, the frames are partitioned into blocks, which are extracted and processed sequentially. In this paper, we propose a fast algorithm for calculating features of overlapping image blocks. We assume the features are projections of the block onto separable 2D basis functions (usually orthogonal polynomials), where we benefit from the symmetry with respect to the spatial variables. The main idea is based on the construction of auxiliary matrices that virtually extend the original image and make it possible to avoid time-consuming computation in loops. These matrices can be pre-calculated, stored, and used repeatedly, since they are independent of the image itself. We validated experimentally the
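The sketch below shows the baseline computation that such a method accelerates: projecting each overlapping block onto a separable 2D basis (here discrete Legendre polynomials as a stand-in) with a plain sliding-window loop. The auxiliary-matrix speed-up itself is not reproduced; the names and the basis choice are illustrative assumptions.

```python
import numpy as np

def legendre_basis(size, order):
    """Columns are sampled Legendre polynomials, one possible orthogonal-polynomial basis."""
    x = np.linspace(-1.0, 1.0, size)
    return np.stack([np.polynomial.legendre.Legendre.basis(k)(x) for k in range(order)], axis=1)

def block_features(block, P, Q):
    """Projection of one image block onto a separable basis: F = P^T * block * Q."""
    return P.T @ block @ Q

def sliding_block_features(image, block, order, step=1):
    """Naive reference: features of all overlapping blocks computed block by block (the baseline loop)."""
    P = legendre_basis(block, order)
    h, w = image.shape
    feats = {}
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            feats[(r, c)] = block_features(image[r:r + block, c:c + block], P, P)
    return feats

img = np.random.default_rng(1).random((32, 32))
print(sliding_block_features(img, block=8, order=4)[(0, 0)].shape)  # (4, 4) feature matrix per block
```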
In this study, we fabricated nanofiltration membranes using the electrospinning technique, employing pure PAN and a PAN/HPMC mixed matrix. PAN nanofibrous membranes with a concentration of 13 wt% were prepared and blended with different concentrations of HPMC in the solvent N,N-dimethylformamide (DMF). We conducted a comprehensive analysis of the membranes' surface morphology, chemical composition, wettability, and porosity and compared the results. The findings indicated that the inclusion of HPMC in the PAN membranes led to a reduction in surface porosity and fiber size. The contact angle decreased, indicating increased surface hydrophilicity, which can enhance flux and reduce fouling tendencies. Subsequently, we evaluated the
Horizontal wells are of great interest to the petroleum industry today because they provide an attractive means of improving both production rate and recovery efficiency. Great improvements in drilling technology have made it possible to drill horizontal wells with complex trajectories extending to significant depths.
The aim of this paper is to present the design aspects of a horizontal well. These aspects include the selection of bit and casing sizes, the determination of casing setting depths and drilling fluid density, casing design, hydraulics, the well profile, and the construction of a drillstring simulator. An Iraqi oil field (the Ajeel field) is selected for the design of a horizontal well to increase productivity. A short-radius horizontal well is suggested
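To make the well-profile aspect concrete, the sketch below computes the build radius and the basic geometry of a single circular build from vertical to horizontal for a given build-up rate; the kickoff depth and build-up rate are hypothetical values, not the actual Ajeel design figures.

```python
import math

def build_radius_ft(build_up_rate_deg_per_100ft):
    """Radius of curvature of the build section (ft): R = (180 / pi) * (100 / BUR).
    A short-radius profile corresponds to a high build-up rate."""
    return (180.0 / math.pi) * (100.0 / build_up_rate_deg_per_100ft)

def build_section(kickoff_tvd_ft, build_up_rate_deg_per_100ft, final_inclination_deg=90.0):
    """Geometry of a simple single-build (vertical to horizontal) profile; illustrative only."""
    r = build_radius_ft(build_up_rate_deg_per_100ft)
    inc = math.radians(final_inclination_deg)
    md_build = final_inclination_deg / build_up_rate_deg_per_100ft * 100.0   # measured depth of the build
    tvd_end = kickoff_tvd_ft + r * math.sin(inc)                             # true vertical depth at end of build
    departure = r * (1.0 - math.cos(inc))                                    # horizontal displacement at end of build
    return {"radius_ft": r, "build_md_ft": md_build,
            "tvd_end_ft": tvd_end, "departure_ft": departure}

# Hypothetical short-radius example: kickoff at 8000 ft TVD with a 60 deg/100 ft build-up rate.
print(build_section(kickoff_tvd_ft=8000.0, build_up_rate_deg_per_100ft=60.0))
```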
Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during the computation of the coefficients at large polynomial sizes, particularly when the KP parameter p deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of KPs at high orders. In particular, the paper discusses the development of a new algorithm and presents a new mathematical model for computing the
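For context, the classical three-term recurrence for Krawtchouk polynomial values is sketched below; it is the standard relation whose numerical breakdown at large orders and extreme p motivates such work, not the new recurrence proposed in the paper.

```python
import numpy as np

def krawtchouk_matrix(N, p, max_order):
    """Values K_n(x; p, N) for n = 0..max_order-1 and x = 0..N via the classical
    three-term recurrence (Askey-scheme normalization):

        p (N - n) K_{n+1}(x) = [p (N - n) + n (1 - p) - x] K_n(x) - n (1 - p) K_{n-1}(x)

    with K_0(x) = 1 and K_1(x) = 1 - x / (p N). This standard form loses accuracy
    at high orders and for p far from 0.5, which is the problem the paper addresses.
    """
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((max_order, N + 1))
    K[0] = 1.0
    if max_order > 1:
        K[1] = 1.0 - x / (p * N)
    for n in range(1, max_order - 1):
        a = p * (N - n)
        K[n + 1] = ((a + n * (1.0 - p) - x) * K[n] - n * (1.0 - p) * K[n - 1]) / a
    return K

K = krawtchouk_matrix(N=20, p=0.5, max_order=5)
print(K[:, :5])
```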
In this paper, we propose a method that uses continuous wavelets to study multivariate fractional Brownian motion through the deviations of the transformed random process, in order to obtain an efficient estimate of the Hurst exponent using eigenvalue regression of the covariance matrix. The results of simulation experiments show that the proposed estimator performs efficiently in terms of bias, but the variance increases as the signal changes from short to long memory, and the MASE increases accordingly. The estimation was carried out by calculating the eigenvalues of the variance-covariance matrix of the Meyer continuous wavelet detail coefficients.
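A simplified, univariate version of wavelet-based Hurst estimation is sketched below using the discrete Meyer wavelet and per-level detail variances; the paper's estimator instead regresses on eigenvalues of the covariance matrix of continuous Meyer wavelet detail coefficients in the multivariate case, so the wavelet, levels, and scaling law used here are assumptions.

```python
import numpy as np
import pywt

def hurst_wavelet(signal, wavelet="dmey", max_level=6):
    """Log-variance wavelet estimate of the Hurst exponent of a fractional-Brownian-motion sample.

    Uses the scaling law Var(d_j) ~ 2^{j(2H+1)} for fBm detail coefficients, so the slope
    of log2(variance) against level j gives 2H + 1.
    """
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=max_level)
    # wavedec returns [cA_L, cD_L, ..., cD_1]; pair each detail array with its level j.
    detail_levels = list(range(max_level, 0, -1))
    log_var = [np.log2(np.var(d)) for d in coeffs[1:]]
    slope, _ = np.polyfit(detail_levels, log_var, 1)
    return (slope - 1.0) / 2.0

# Usage with a cumulative sum of white noise (ordinary Brownian motion, H close to 0.5).
rng = np.random.default_rng(0)
print(hurst_wavelet(np.cumsum(rng.normal(size=4096))))
```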
Objectives: To quantify the reproducibility of the drill calibration process in dynamic-navigation-guided placement of dental implants and to identify the human factors that could affect the precision of this process, in order to improve overall implant placement accuracy. Methods: A set of six drills and four implants were calibrated by three operators following the standard calibration process of NaviDent® (ClaroNav Inc.). The reproducibility of the position of each drill or implant tip was calculated in relation to the pre-planned implants' entry and apex positions. Intra- and inter-operator reliabilities were reported. The effects of drill length and shape on the reproducibility of the calibration process were also investigated.
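As a simple illustration of how such deviations can be quantified, the sketch below computes 3D deviations of repeated tip calibrations from a planned point and a basic spread-about-the-centroid precision measure; the coordinates and the precision metric are hypothetical and are not NaviDent's internal calculation or the study's statistical analysis.

```python
import numpy as np

def tip_deviations(calibrated_tips, planned_point):
    """3D Euclidean deviation of each recorded drill/implant tip position from a planned
    point (entry or apex). Coordinates are assumed to be in mm and in the same frame."""
    tips = np.asarray(calibrated_tips, dtype=float)
    return np.linalg.norm(tips - np.asarray(planned_point, dtype=float), axis=1)

def reproducibility(calibrated_tips):
    """Spread of repeated calibrations of the same drill: mean distance of each repeat
    from the centroid of all repeats (one simple precision measure)."""
    tips = np.asarray(calibrated_tips, dtype=float)
    centroid = tips.mean(axis=0)
    return np.linalg.norm(tips - centroid, axis=1).mean()

# Hypothetical repeated calibrations (mm) of one drill tip by one operator.
repeats = [[0.10, -0.05, 0.20], [0.12, 0.00, 0.18], [0.08, -0.02, 0.22]]
print(reproducibility(repeats), tip_deviations(repeats, planned_point=[0.0, 0.0, 0.0]))
```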