One of the most popular and legally recognized behavioral biometrics is the individual's signature, which is used for verification and identification in many industries, including business, law, and finance. The purpose of signature verification is to distinguish genuine from forged signatures, a task complicated by cultural and personal variation. In forensic handwriting analysis, handwriting features are analyzed, compared, and evaluated to establish whether or not the writing was produced by a known writer. In contrast to other languages, Arabic makes use of diacritics, ligatures, and overlaps that are unique to it. Because offline Arabic signatures carry no dynamic information, achieving high verification accuracy is more difficult. Moreover, the characteristics of Arabic signatures are not well defined and are subject to a great deal of variation (feature uncertainty). To address this issue, the proposed work offers a novel method for verifying offline Arabic signatures that employs two layers of verification, as opposed to the single level employed by prior attempts or the multiple classifiers based on statistical learning theory. A static set of signature features is used for layer-one verification. The output of a neutrosophic logic module is used for layer-two verification, with the accuracy depending on the signature characteristics used in the training dataset and on three membership functions that are unique to each signer, based on the degrees of truth, indeterminacy, and falsity of the signature features. The three memberships of the neutrosophic set are more expressive for decision-making than those of fuzzy sets. The developed model is intended to account for several kinds of uncertainty in describing Arabic signatures, including ambiguity, inconsistency, redundancy, and incompleteness. The experimental results show that the verification system works as intended and successfully reduces the false acceptance rate (FAR) and false rejection rate (FRR).
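As a rough illustration of the layer-two idea, the sketch below maps each signature feature to truth, indeterminacy, and falsity degrees against a signer's profile. The membership shapes, thresholds, and the (mean, sigma) profile here are all hypothetical; the paper learns signer-specific membership functions from the training data.

```python
import numpy as np

def neutrosophic_memberships(x, mu, sigma):
    """Map a signature feature x to (truth, indeterminacy, falsity)
    degrees relative to a signer's trained mean mu and spread sigma.
    These shapes are illustrative, not the paper's learned functions."""
    d = abs(x - mu) / sigma               # normalized deviation from the profile
    t = np.exp(-0.5 * d**2)               # truth: high when the feature matches
    f = 1.0 - np.exp(-0.5 * (d / 2)**2)   # falsity: grows with deviation
    i = 1.0 - abs(t - f)                  # indeterminacy: largest where t and f conflict
    return t, i, f

def accept(features, profile, t_min=0.6, i_max=0.4):
    """Layer-two decision: accept only if aggregate truth is high
    and indeterminacy stays low across all features."""
    t_list, i_list, f_list = zip(*(neutrosophic_memberships(x, m, s)
                                   for x, (m, s) in zip(features, profile)))
    return np.mean(t_list) >= t_min and np.mean(i_list) <= i_max

# toy usage: a signer profile of (mean, sigma) per feature
profile = [(0.8, 0.1), (1.2, 0.2)]
print(accept([0.82, 1.25], profile))   # genuine-looking -> True
print(accept([0.30, 2.00], profile))   # forged-looking  -> False
```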
The objective of image fusion is to merge multiple source images so that the final representation contains more useful information than any single input. In this paper, a weighted average fusion method is proposed. It relies on weights extracted from the source images using the contourlet transform. The extraction is performed by setting the approximation coefficients to zero and then taking the inverse contourlet transform to obtain the details of the images to be fused. The performance of the proposed algorithm has been verified on several grayscale and color test images and compared with existing methods.
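A minimal sketch of the weight-extraction idea follows, using PyWavelets' wavelet transform as a stand-in for the contourlet transform (which has no standard Python implementation); the function names and parameters are illustrative only.

```python
import numpy as np
import pywt

def detail_weight(img, wavelet="db2", level=2):
    """Zero the approximation band and invert the transform, leaving only
    the detail content; its magnitude serves as a per-pixel fusion weight.
    (A wavelet stands in here for the paper's contourlet transform.)"""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])          # discard approximation coefficients
    details = pywt.waverec2(coeffs, wavelet)
    return np.abs(details[:img.shape[0], :img.shape[1]])

def fuse(img_a, img_b, eps=1e-8):
    """Weighted-average fusion: pixels with a stronger detail response
    in one source contribute more to the fused image."""
    w_a, w_b = detail_weight(img_a), detail_weight(img_b)
    return (w_a * img_a + w_b * img_b) / (w_a + w_b + eps)
```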
The present study discusses problem-based learning in the Iraqi classroom. This learner-centered method aims to involve all learners in collaborative activities. To fulfill the aims, the study tests the hypothesis that "there are no statistically significant differences between the achievement of the experimental group and the control group". Thirty learners were selected as the sample of the present study. The Mann-Whitney test for two independent samples was used to analyze the results. The analysis shows that the experimental group, taught according to problem-based learning, obtained higher scores than the control group, taught according to the traditional method.
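For reference, a Mann-Whitney U test for two independent samples can be run as below; the score lists are hypothetical stand-ins, since the study's raw data are not reproduced here.

```python
from scipy.stats import mannwhitneyu

# Hypothetical achievement scores for two groups of 15 learners each;
# the study's actual data are not given in the abstract.
experimental = [78, 85, 90, 72, 88, 81, 79, 93, 84, 77, 86, 80, 75, 89, 82]
control      = [65, 70, 58, 73, 62, 68, 60, 71, 66, 59, 64, 69, 61, 67, 63]

# Two-sided Mann-Whitney U test for two independent samples
stat, p = mannwhitneyu(experimental, control, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
if p < 0.05:
    print("Reject the null hypothesis: the groups differ significantly.")
```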
A simple all-fiber optical sensor based on multimode interference (MMI) for chemical liquid sensing was designed and fabricated. A segment of coreless fiber (CF) was spliced between two single-mode fibers to build a single mode-coreless-single mode (SCS) structure. A broadband source and an optical spectrum analyzer were connected to the ends of the SCS structure. De-ionized water, acetone, and n-hexane were used to test the performance of the sensor. Two factors influencing the sensitivity, namely the length and the diameter of the CF, were investigated. The maximum sensitivity obtained was 340.89 nm/RIU for n-hexane (at an optical spectrum analyzer wavelength resolution of 0.02 nm) when the diameter of the CF was reduced from 125 μm to 60 μm.
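The sensitivity figure quoted above is a wavelength shift per unit refractive-index change; a toy calculation, with made-up wavelength and index values, looks like this:

```python
# Sensitivity of an MMI fiber sensor: shift of the interference dip
# wavelength per unit change in refractive index (nm/RIU).
# The wavelength and index values below are illustrative, not measured data.
def sensitivity(lambda1_nm, lambda2_nm, n1, n2):
    return (lambda2_nm - lambda1_nm) / (n2 - n1)

# e.g. a 3.4 nm dip shift between two liquids differing by 0.01 RIU
print(sensitivity(1550.0, 1553.4, 1.3300, 1.3400))  # -> ~340 nm/RIU
```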
A Digital Elevation Model (DEM) is one of the developed techniques for relief representation. DEM construction is defined as the modeling of the earth's surface from existing data. The DEM is one of the fundamental information requirements widely utilized in GIS data structures. The main aim of this research is to present a methodology for assessing DEM generation methods. The DEM data were extracted from open-source data, e.g. Google Earth, and the tested data were compared with data produced by formal institutions such as the General Directorate of Surveying. The study area was chosen in the south of Iraq (Al-Gharraf / Dhi Qar governorate). The DEM creation methods examined include kriging.
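A common way to assess a DEM against surveyed reference data is the vertical root-mean-square error at checkpoints; a minimal sketch with invented elevation samples:

```python
import numpy as np

# Hypothetical elevation samples (metres) at the same checkpoints:
# one set from the open-source DEM, one from the surveyed reference.
dem_open = np.array([12.4, 15.1, 9.8, 11.2, 14.0])
dem_ref  = np.array([12.0, 15.6, 9.5, 11.9, 13.6])

# Root-mean-square error, the usual vertical-accuracy measure for DEMs
rmse = np.sqrt(np.mean((dem_open - dem_ref) ** 2))
print(f"RMSE = {rmse:.2f} m")
```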
In this work we present a detailed study of an anisotype nGe-pSi heterojunction (HJ) used as a photodetector in the wavelength range 500-1100 nm. I-V characteristics in the dark and under illumination, C-V characteristics, minority-carrier lifetime (MCLT), spectral responsivity, field of view, and linearity were investigated at 300 K. The results showed that the detector has maximum spectral responsivity at λ = 950 nm. The photo-induced open-circuit voltage decay results revealed that the MCLT of the HJ was around 14.4 μs.
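The open-circuit voltage decay (OCVD) method estimates the minority-carrier lifetime from the slope of the voltage decay, roughly τ ≈ (kT/q)/|dVoc/dt| in the linear region; the slope value below is assumed purely for illustration, chosen to land near the reported lifetime:

```python
# Minority-carrier lifetime from open-circuit voltage decay (OCVD):
# tau ~ (kT/q) / |dVoc/dt| in the linear region of the decay.
k_B = 1.380649e-23     # Boltzmann constant, J/K
q   = 1.602176634e-19  # elementary charge, C
T   = 300.0            # temperature, K

dV_dt = 1.8e3          # V/s, magnitude of the Voc decay slope (assumed)
tau = (k_B * T / q) / dV_dt
print(f"tau = {tau * 1e6:.1f} us")  # ~14.4 us with these numbers
```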
Polymer electrolytes were prepared using the solution casting technique. The electrolytes were prepared with a constant proportion of PVA/PVP (50:50) and of ethylene carbonate (EC) to propylene carbonate (PC) (1:1), with different proportions of potassium iodide (KI) (10, 20, 30, 40, 50 wt%) and iodine (I2) at 10 wt% of the salt. Fourier transform infrared (FTIR) studies confirmed the complex formation of the polymer blends. Electrical conductivity was measured with an impedance analyzer in the frequency range 50 Hz–1 MHz and the temperature range 293–343 K. The highest electrical conductivity value of 5.3 × 10⁻³ S/cm was observed for the electrolyte with 50 wt% KI concentration at room temperature.
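Bulk conductivity from an impedance measurement follows σ = t / (R_b · A); the sample geometry and resistance below are assumed for illustration, chosen to land near the reported order of magnitude:

```python
# Ionic conductivity from a complex-impedance measurement:
# sigma = t / (R_b * A), with film thickness t, bulk resistance R_b
# (from the Nyquist-plot intercept), and electrode contact area A.
# The sample geometry below is assumed, not taken from the paper.
t_cm  = 0.012   # film thickness (cm)
R_b   = 2.3     # bulk resistance (ohm)
A_cm2 = 1.0     # electrode contact area (cm^2)

sigma = t_cm / (R_b * A_cm2)
print(f"sigma = {sigma:.2e} S/cm")  # ~5.2e-3 S/cm with these numbers
```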
Data steganography is a technique used to hide data (a secret message) within other data (a cover carrier). It is considered a part of information security. Audio steganography is a type of data steganography in which the secret message is hidden in an audio carrier. This paper proposes an efficient audio steganography method that uses the LSB technique. The proposed method enhances steganography performance by exploiting all carrier samples and balancing hiding capacity against distortion ratio. It suggests an adaptive number of hiding bits for each audio sample depending on the secret message size, the cover carrier size, and the signal-to-noise ratio (SNR). Comparison results show that the proposed method outperforms state-of-the-art methods.
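A bare-bones LSB embedding sketch is shown below; it fixes the number of hidden bits per sample, whereas the proposed method chooses that number adaptively from the message size, cover size, and SNR:

```python
import numpy as np

def embed_lsb(samples, message_bits, n_bits=2):
    """Hide message_bits in the n_bits least significant bits of each
    16-bit audio sample. n_bits is fixed here for brevity; the paper
    adapts it per message size, cover size, and SNR."""
    stego = samples.copy()
    mask = ~np.int16((1 << n_bits) - 1)   # clears the n_bits LSBs
    for i in range(0, len(message_bits), n_bits):
        chunk = message_bits[i:i + n_bits]
        value = int("".join(map(str, chunk)), 2)
        idx = i // n_bits
        stego[idx] = (stego[idx] & mask) | value
    return stego

# toy usage: hide 8 bits in 4 samples at 2 bits per sample
cover = np.array([1000, -2000, 3000, -4000], dtype=np.int16)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(embed_lsb(cover, bits))
```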
An oil spill is a leakage from pipelines, vessels, oil rigs, or tankers that releases petroleum products into the marine environment or onto land, whether occurring naturally or through human action, resulting in severe damage and financial loss. Satellite imagery is one of the powerful tools currently utilized for capturing vital information from the Earth's surface, but the complexity and the vast amount of data make it challenging and time-consuming for humans to process. With the advancement of deep learning techniques, these processes are now computerized, extracting vital information from real-time satellite images. This paper applied three deep-learning algorithms to satellite image classification.
Most human facial emotion recognition systems are assessed solely on accuracy, even though other performance criteria, such as sensitivity, precision, F-measure, and G-mean, are also considered important in the evaluation process. Moreover, the most common problem to be resolved in face emotion recognition systems is feature extraction: traditional manual feature-extraction methods cannot extract features efficiently, producing a redundant set of insignificant features that degrades classification performance. In this work, a new system to recognize human facial emotions from images is proposed, based on HOG (Histograms of Oriented Gradients) features.
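A typical HOG extraction call (here via scikit-image, with common default parameters rather than the paper's settings) looks like:

```python
from skimage import data, color
from skimage.feature import hog

# Extract HOG features from a bundled sample image; the parameter
# values are common defaults, not the paper's configuration.
image = color.rgb2gray(data.astronaut())
features = hog(
    image,
    orientations=9,          # gradient-direction bins per cell
    pixels_per_cell=(8, 8),  # cell size in pixels
    cells_per_block=(2, 2),  # cells per normalization block
)
print(features.shape)  # one flat descriptor vector per image
```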