One of the most popular and legally recognized behavioral biometrics is the individual's signature, which is used for verification and identification in many industries, including business, law, and finance. The purpose of signature verification is to distinguish genuine from forged signatures, a task complicated by cultural and personal variation. In forensic handwriting analysis, handwriting features are analyzed, compared, and evaluated to establish whether or not the writing was produced by a known writer. Unlike many other languages, Arabic makes use of diacritics, ligatures, and overlaps that are unique to it. The absence of dynamic information in offline Arabic signatures makes it more difficult to attain high verification accuracy. Moreover, the characteristics of Arabic signatures are not very distinct and are subject to a great deal of variation (feature uncertainty). To address this issue, the proposed work offers a novel method of verifying offline Arabic signatures that employs two layers of verification, as opposed to the single level employed by prior attempts or the many classifiers based on statistical learning theory. A static set of signature features is used for layer-one verification. The output of a neutrosophic logic module is used for layer-two verification, with accuracy depending on the signature characteristics in the training dataset and on three membership functions that are unique to each signer, based on the degrees of truth, indeterminacy, and falsity of the signature features. The three memberships of a neutrosophic set are more expressive for decision-making than those of fuzzy sets. The developed model is designed to account for several kinds of uncertainty in describing Arabic signatures, including ambiguity, inconsistency, redundancy, and incompleteness.
The experimental results show that the verification system works as intended and can successfully reduce the false acceptance rate (FAR) and the false rejection rate (FRR).
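The layer-two idea above can be illustrated with a toy sketch in Python. The `NeutrosophicValue` triple and the fixed thresholds below are hypothetical stand-ins for the per-signer membership functions trained in the paper:

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    """Neutrosophic evaluation of one signature feature: degrees of
    truth (t), indeterminacy (i), and falsity (f), each in [0, 1] and,
    unlike a fuzzy membership, mutually independent."""
    t: float
    i: float
    f: float

def accept(values, t_min=0.7, i_max=0.3, f_max=0.2):
    """Toy layer-two decision: accept only if every feature's truth
    degree is high while indeterminacy and falsity stay low.
    Thresholds are illustrative, not the paper's trained values."""
    return all(v.t >= t_min and v.i <= i_max and v.f <= f_max
               for v in values)

features = [NeutrosophicValue(0.9, 0.1, 0.05),
            NeutrosophicValue(0.8, 0.2, 0.10)]
print(accept(features))  # True
```

The point of the triple is that indeterminacy is tracked separately from falsity, so an ambiguous feature can veto acceptance without being counted as evidence of forgery.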
In this study, experimental work was carried out to study the possibility of using aluminum rubbish (scrap) as a coagulant to remove colloidal particles from oily wastewater by dissolving this scrap in sodium hydroxide solution. The experiments were carried out on simulated oily wastewater prepared at different oil concentrations and hardness levels: (50, 250, 500, and 1000) ppm oil at (2000, 2500, 3000, and 3500) ppm CaCO3, respectively. The initial turbidity values were (203, 290, 770, and 1306) NTU, while the minimum turbidity values obtained from the experiments were (1.67, 1.95, 2.10, and 4.01) NTU at the best sodium aluminate dosages in milliliters of (12, 20, 24, and 28) for
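The reported turbidity values imply very high removal efficiencies. A quick check in Python of the standard jar-test formula, removal % = (initial − final) / initial × 100, using the figures quoted above:

```python
# Turbidity values (NTU) reported in the abstract above.
initial = [203, 290, 770, 1306]
final = [1.67, 1.95, 2.10, 4.01]

for t0, t in zip(initial, final):
    # Standard jar-test removal efficiency.
    eff = (t0 - t) / t0 * 100
    print(f"{t0} NTU -> {t} NTU : {eff:.2f}% removal")
```

All four runs come out above 99% turbidity removal, which is consistent with the abstract's claim that the dissolved-scrap coagulant is effective.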
A graph is a tool that can be used to simplify and solve network problems. Domination is a typical network problem to which graph theory is well suited. A subset of nodes in a network is called dominating if every node is contained in this subset or is connected to a node in it via an edge. Because of the importance of domination in different areas, variant types of domination have been introduced according to the purpose they are used for. In this paper, two domination parameters have been chosen: the first is restrained domination and the second is secure domination. Secure domination, and some types of restrained domination, in one type of tree called the complete ary tree are determined.
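The dominating-set definition above, and the extra condition restrained domination adds (every vertex outside the set must also have a neighbour outside the set), can be sketched as membership checks in Python. The path-graph example is illustrative only, not taken from the paper:

```python
def is_dominating(adj, subset):
    """True if `subset` dominates the graph given as an adjacency
    dict: every vertex is in the subset or has a neighbour in it."""
    s = set(subset)
    return all(v in s or s & set(nbrs) for v, nbrs in adj.items())

def is_restrained_dominating(adj, subset):
    """Restrained domination additionally requires every vertex
    outside the set to have at least one neighbour outside the set."""
    s = set(subset)
    return is_dominating(adj, subset) and all(
        any(n not in s for n in adj[v]) for v in adj if v not in s)

# Path graph 1-2-3-4-5.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(is_dominating(path, {2, 4}))              # True
print(is_restrained_dominating(path, {2, 4}))   # False: vertex 1's only
                                                # neighbour, 2, is inside
print(is_restrained_dominating(path, {1, 2, 5}))  # True
```

The last two calls show why the restrained variant is strictly harder to satisfy: {2, 4} dominates the path but strands vertex 1, whose only neighbour lies inside the set.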
In light of accelerating environmental degradation, the transition to a green economy is an imperative for achieving sustainable development. This study provides a critical analysis of the international legal and institutional framework governing this transition, revealing a significant gap between normative developments and the institutional framework on one hand, and their practical implementation on the other. The transition faces legal obstacles, including reliance on non-binding voluntary commitments and conflicts between environmental obligations and global trade and investment rules. It also reveals a significant financing gap, as financial flows to developing countries continue to lag behind commitments, in add
KE Sharquie, AA Noaimi, RA Flayih, Am J Clin Res Rev, 2020.
The synthesized ligand 3-(2-amino-5-(3,4,5-trimethoxybenzyl)pyrimidin-4-ylamino)-5,5-dimethylcyclohex-2-enone [H1L1] was characterized via Fourier-transform infrared spectroscopy (FTIR), 1H and 13C NMR, mass spectra, CHN analysis, and UV-Vis spectroscopic approaches. Analytical and spectroscopic techniques such as chloride content, micro-analysis, magnetic susceptibility, UV-visible, conductance, and FTIR spectra were used to identify the mixed-ligand complexes (ML13ph) [M = Co(II), Ni(II), Cu(II), Zn(II), and Cd(II); (H1L1) = β-enaminone ligand = L1 and (3ph) = 3-aminophenol = L2]. The results demonstrate that the complexes are produced with a molar ratio M:L1:L2 of (1:1:1). To generate the appropriate compl
A frequently used approach for denoising is the shrinkage of coefficients of the noisy signal's representation in a transform domain. This paper proposes an algorithm based on a hybrid transform (stationary wavelet transform followed by slantlet transform); the slantlet transform is applied to the approximation subband of the stationary wavelet transform. The BlockShrink thresholding technique is applied to the hybrid transform coefficients. This technique can decide the optimal block size and threshold for every wavelet subband by minimizing Stein's unbiased risk estimate (SURE). The proposed algorithm was executed using MATLAB R2010a with natural images contaminated by white Gaussian noise. Numerical results show that our algorithm co
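As a minimal sketch of the shrinkage idea, here is coefficient-wise soft thresholding in Python with NumPy. This is the simpler pointwise rule, not the block-wise, SURE-driven BlockShrink rule used in the paper, and the sample coefficients are illustrative:

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Soft thresholding: shrink each transform coefficient toward
    zero by `thr`, setting small (mostly-noise) coefficients to 0
    while large (mostly-signal) coefficients survive attenuated."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

# Illustrative transform coefficients: two large signal terms
# surrounded by small noise terms.
c = np.array([4.0, -0.2, 0.1, -3.0])
print(soft_threshold(c, 0.3))
```

BlockShrink applies the same shrinkage jointly to blocks of neighbouring coefficients, which is why choosing the block size well (via SURE) matters for the final risk.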
Image recognition is one of the most important applications of information processing. In this paper, a comparison between 3-level techniques for image recognition has been achieved using the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT): stationary-stationary-stationary (sss), stationary-stationary-wavelet (ssw), stationary-wavelet-stationary (sws), stationary-wavelet-wavelet (sww), wavelet-stationary-stationary (wss), wavelet-stationary-wavelet (wsw), wavelet-wavelet-stationary (wws), and wavelet-wavelet-wavelet (www). These techniques have been compared according to the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), compression ratio (CR), and the coding noise e(n) of each third
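The PSNR and RMSE criteria used in the comparison are standard and simple to compute. A minimal NumPy sketch (the 2×2 arrays are illustrative, not the paper's test images):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, for 8-bit images."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

orig = np.array([[0, 255], [128, 64]], dtype=np.uint8)
recon = np.array([[2, 250], [130, 60]], dtype=np.uint8)
print(round(psnr(orig, recon), 2))  # ~37.25 dB
```

Higher PSNR (equivalently, lower RMSE) indicates a reconstruction closer to the original, which is how the eight transform combinations are ranked.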
The feature extraction step plays a major role in proper object classification and recognition. This step depends mainly on correct object detection in the given scene, yet object detection algorithms may produce noise that affects the final object shape. A novel approach is introduced in this paper for filling the holes in the detected object, for better object detection and correct feature extraction. The method is based on the definition of a hole as a black pixel surrounded by a connected boundary region; it therefore tries to find a connected contour region that surrounds the background pixel, using a roadmap racing algorithm. The method shows good results on objects in 2D space.
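The hole definition above (a background pixel enclosed by a connected object boundary) can be sketched with a generic border flood fill in Python/NumPy: background pixels unreachable from the image border must be enclosed, so they are filled. This is a simple BFS stand-in for illustration, not the paper's roadmap racing algorithm:

```python
import numpy as np
from collections import deque

def fill_holes(mask):
    """Fill holes in a binary object mask. Background pixels (0) that
    cannot be reached from the image border by 4-connected moves are
    enclosed by the object, so they are set to 1."""
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    # Seed the BFS with every background pixel on the border.
    q = deque((r, c) for r in range(h) for c in range(w)
              if (r in (0, h - 1) or c in (0, w - 1)) and mask[r, c] == 0)
    for r, c in q:
        outside[r, c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w \
                    and mask[nr, nc] == 0 and not outside[nr, nc]:
                outside[nr, nc] = True
                q.append((nr, nc))
    # Everything not reachable from outside belongs to the object.
    return np.where(outside, 0, 1).astype(mask.dtype)

obj = np.array([[0, 1, 1, 1],
                [0, 1, 0, 1],   # the inner 0 is a hole
                [0, 1, 1, 1]])
print(fill_holes(obj))
```

The inner zero at position (1, 2) is filled because no 4-connected path of background pixels links it to the border, while the left column of zeros stays background.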
Keywords: object filling, object detection, objec