This investigation proposes an offline signature identification system that applies rotation compensation based on the features stored in the database. The proposed system comprises five principal stages: (1) data acquisition, (2) signature data file loading, (3) signature preprocessing, (4) feature extraction, and (5) feature matching. Feature extraction includes determining the center-point coordinates and the rotation-compensation angle (θ), applying rotation compensation, and determining the discriminating features and statistical conditions. Seven essential feature sets are used to characterize a signature: (i) density (D), (ii) average (A), (iii) standard deviation (S), and their combinations (iv) density and average (DA), (v) density and standard deviation (DS), (vi) average and standard deviation (AS), and (vii) density, average, and standard deviation (DAS). The computed feature values are assembled into a feature vector used to distinguish signatures belonging to different persons. Two Euclidean distance measures are used in the matching stage: (i) the normalized mean absolute distance (nMAD) and (ii) the normalized mean squared distance (nMSD). The suggested system is tested on a public dataset of 612 handwritten signature images. The best recognition rate (98.9%) is achieved using a 21×21 block partition with the density feature set; with the same number of blocks, the maximum verification accuracy obtained is 100%.
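The matching stage above ranks stored templates by nMAD or nMSD. A minimal sketch of that idea, assuming the normalization is simply division by the feature-vector length (the abstract does not specify the exact normalization, and the template values below are hypothetical):

```python
def nMAD(u, v):
    """Normalized mean absolute distance between two feature vectors."""
    assert len(u) == len(v)
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

def nMSD(u, v):
    """Normalized mean squared distance between two feature vectors."""
    assert len(u) == len(v)
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

# Match a query vector against stored templates: smallest distance wins.
templates = {"person_A": [0.2, 0.5, 0.1], "person_B": [0.8, 0.1, 0.4]}
query = [0.25, 0.45, 0.15]
best = min(templates, key=lambda k: nMAD(query, templates[k]))
print(best)  # person_A
```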
A hand gesture recognition system provides a robust and innovative solution for nonverbal communication through human–computer interaction. Deep learning models show excellent potential for recognition applications. To overcome related issues, most previous studies have proposed new model architectures or fine-tuned pre-trained models, and they have relied on a single standard dataset for both training and testing; consequently, the accuracy of these studies is only reasonable. Unlike those works, the current study investigates two deep learning models with intermediate layers for recognizing static hand gesture images. Both models were tested on different datasets, adjusted to suit each dataset, and then trained under different m
The city of Samawah is one of the most important cities in the poverty area identified by the poverty map produced by the Ministry of Planning, despite being an important provincial centre. Although it has great development potential, it was neglected for more than 50 years. This dereliction has caused a series of negative accumulations at the urban levels (environmental, social, and economic). Therefore, the basic idea of this research is to detect some of the challenges that are preventing the growth and development of the city. The methodology of the research is to extrapolate the reality with the analysis of the results, data, and environmental impact assessment of the projec
Human interaction technology based on motion capture (MoCap) systems is a vital tool for human kinematics analysis, with applications in clinical settings, animation, and video games. We introduce a new method for analyzing and estimating dorsal spine movement using a MoCap system. The data captured by the MoCap system are processed and analyzed to estimate the motion kinematics of three primary regions: the shoulders, spine, and hips. This work contributes a non-invasive and anatomically guided framework that enables region-specific analysis of spinal motion, which could serve as a clinical alternative to invasive measurement techniques. The hierarchy of our model consists of five main levels: motion capture system settings, marker data
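A basic building block of this kind of marker-based kinematic analysis is computing the angle formed at one marker by two adjacent markers. A minimal sketch, with hypothetical 3-D marker coordinates standing in for shoulder/spine/hip markers (the paper's actual marker set and hierarchy are not reproduced here):

```python
import math

def angle_deg(p1, p2, p3):
    """Angle at marker p2 formed by segments p2->p1 and p2->p3, in degrees."""
    u = [a - b for a, b in zip(p1, p2)]
    v = [a - b for a, b in zip(p3, p2)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# Perpendicular segments give a right angle.
print(angle_deg((0, 1, 0), (0, 0, 0), (1, 0, 0)))  # 90.0
```

Tracking how such angles change frame by frame yields the region-specific motion curves used in the analysis.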
XML is being incorporated into the foundation of e-business data applications. This paper addresses the problem of the freeform information stored in any organization and how XML, using this new approach, makes the search operation efficient and time-saving. The paper introduces a new solution and methodology developed to capture and manage such unstructured freeform information (multi-information), based on XML schema technologies, neural network concepts, and an object-oriented relational database, in order to provide a practical solution for efficiently managing a multi-freeform information system.
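The core idea of wrapping freeform text in typed XML elements so it becomes searchable by field rather than by raw scan can be sketched as follows. This is an illustrative toy schema, not the schema from the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: each freeform record is wrapped in named <field>
# elements so it can be queried by field instead of scanned as raw text.
doc = ET.fromstring("""
<records>
  <record id="1">
    <field name="topic">invoice dispute</field>
    <field name="note">customer reported a duplicate charge</field>
  </record>
  <record id="2">
    <field name="topic">shipping delay</field>
    <field name="note">package held at customs</field>
  </record>
</records>
""")

def search(root, field, keyword):
    """Return ids of records whose named field contains the keyword."""
    return [r.get("id") for r in root.iter("record")
            if any(keyword in f.text for f in r.iter("field")
                   if f.get("name") == field)]

print(search(doc, "note", "charge"))  # ['1']
```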
The area of character recognition has received considerable attention from researchers all over the world during the last three decades. This research explores the best sets of feature extraction techniques and studies the accuracy of well-known classifiers for Arabic numerals using statistical approaches in two methods, with a comparative study between them. The first method, a linear discriminant function, yields results with accuracy as high as 90% of original grouped cases correctly classified. In the second method, we propose an algorithm; the results show its efficiency, achieving recognition accuracies of 92.9% and 91.4%, which is more efficient than the first method.
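For the first method, a linear discriminant function projects each feature vector onto a weight vector and classifies by a threshold. A minimal two-class Fisher discriminant on 2-D features, with synthetic toy data standing in for numeral features (the paper's actual features and multi-class setup are not reproduced):

```python
def mean2(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(2)]

def fisher_lda(class0, class1):
    """Two-class Fisher linear discriminant on 2-D features (pure Python)."""
    m0, m1 = mean2(class0), mean2(class1)
    # Pooled within-class scatter matrix S (2x2).
    S = [[0.0, 0.0], [0.0, 0.0]]
    for cls, m in ((class0, m0), (class1, m1)):
        for x in cls:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    S[i][j] += d[i] * d[j]
    # w = S^-1 (m1 - m0), via the closed-form 2x2 inverse.
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [(S[1][1] * dm[0] - S[0][1] * dm[1]) / det,
         (-S[1][0] * dm[0] + S[0][0] * dm[1]) / det]
    # Decision threshold: projected midpoint of the class means.
    c = sum(w[i] * (m0[i] + m1[i]) / 2 for i in range(2))
    return w, c

c0 = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy class 0
c1 = [[3.0, 3.0], [4.0, 3.0], [3.0, 4.0], [4.0, 4.0]]  # toy class 1
w, c = fisher_lda(c0, c1)
label = 1 if w[0] * 1.1 + w[1] * 2.0 > c else 0  # classify query (1.1, 2.0)
print(label)  # 0
```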
Methods of speech recognition have been the subject of several studies over the past decade. Speech recognition has been one of the most exciting areas of signal processing. The mixed transform is a useful tool for speech signal processing, developed for its ability to improve feature extraction. Speech recognition includes three important stages: preprocessing, feature extraction, and classification. Recognition accuracy is strongly affected by the feature extraction stage; therefore, different models of the mixed transform for feature extraction were proposed. The recorded isolated words are one-dimensional (1-D) signals, so each 1-D word is first converted into a 2-D form. The second step of the word recognizer requires the
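One common way to turn a 1-D word signal into a 2-D form is to cut it into fixed-length frames and stack them as rows; the sketch below assumes that reading (the abstract does not state which mapping the mixed-transform pipeline uses), with zero-padding so the last row is complete:

```python
def to_2d(signal, width):
    """Reshape a 1-D sample sequence into rows of `width`,
    zero-padding the tail so every row is complete."""
    pad = (-len(signal)) % width
    padded = list(signal) + [0] * pad
    return [padded[i:i + width] for i in range(0, len(padded), width)]

word = [3, 1, 4, 1, 5, 9, 2, 6, 5]  # toy 1-D "word" samples
matrix = to_2d(word, 4)
print(matrix)  # [[3, 1, 4, 1], [5, 9, 2, 6], [5, 0, 0, 0]]
```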
Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. In this paper, three experiments were performed to find the approach that gives the best accuracy. The first extracts only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracts only the first 12 Mel frequency cepstral coefficient (MFCC) features; and the last applies feature fusion between the two feature sets. In all experiments, the features are classified using five classification techniques, which are the Random Forest (RF),
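The three features of the first experiment are simple to compute directly from a frame of samples. A minimal sketch (the toy frame below is illustrative, and the population form of the standard deviation is assumed):

```python
import math

def zcr(x):
    """Zero crossing rate: fraction of consecutive sample pairs with a sign change."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    return crossings / (len(x) - 1)

def mean(x):
    return sum(x) / len(x)

def std(x):
    m = mean(x)
    return math.sqrt(sum((v - m) ** 2 for v in x) / len(x))

frame = [0.5, -0.2, 0.3, 0.4, -0.1, -0.3]   # toy speech frame
features = [zcr(frame), mean(frame), std(frame)]
print(features)
```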
Document analysis of images snapped by a camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image processing techniques used to perform automatic identification of text. Existing image processing techniques need to manage many parameters in order to clearly recognize the text in such pictures, and segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images were first filtered using the Wiener filter, then the active contour algorithm could b
A comparison of double informative and non-informative priors assumed for the parameter of the Rayleigh distribution is considered. Three different sets of double priors are included for the single unknown parameter of the Rayleigh distribution. We have assumed three double priors: the square root inverted gamma (SRIG) with the natural conjugate family of priors, the square root inverted gamma with the non-informative distribution, and the natural conjugate family of priors with the non-informative distribution. The data are generated from the Rayleigh distribution for three cases of sample size (small, medium, and large), and Bayes estimators for the parameter are derived under a squared erro
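Under squared-error loss, the Bayes estimator is the posterior mean. As an illustrative sketch (not the paper's exact prior pairs): writing the Rayleigh likelihood in terms of λ = θ², an inverse-gamma prior InvGamma(a, b) on λ is conjugate, and the posterior mean has a closed form. The hyperparameters, sample size, and seed below are assumptions for demonstration:

```python
import math, random

def rayleigh_sample(theta, n, rng):
    """Draw n Rayleigh(theta) variates by inverse-transform sampling."""
    return [theta * math.sqrt(-2.0 * math.log(rng.random())) for _ in range(n)]

def bayes_lambda(data, a, b):
    """Posterior mean of lambda = theta^2 under an InvGamma(a, b) prior.
    Posterior is InvGamma(a + n, b + T/2) with T = sum(x^2),
    so the mean is (b + T/2) / (a + n - 1)."""
    n = len(data)
    T = sum(x * x for x in data)
    return (b + T / 2.0) / (a + n - 1.0)

rng = random.Random(42)
data = rayleigh_sample(2.0, 500, rng)        # true theta = 2, lambda = 4
lam_hat = bayes_lambda(data, a=2.0, b=1.0)   # illustrative hyperparameters
print(round(math.sqrt(lam_hat), 2))          # point estimate of theta
```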
Purpose: To compare the central corneal thickness (CCT), minimum corneal thickness (MCT), and corneal power measured using a Scheimpflug-Placido device and optical coherence tomography (OCT) in healthy eyes. Study Design: Descriptive observational. Place and Duration of Study: Al-Kindy College of Medicine, University of Baghdad, from June 2021 to April 2022. Methods: A total of 200 eyes of 200 individuals were enrolled in this study. CCT and MCT measurements were carried out using spectral-domain optical coherence tomography (Optovue) and a Scheimpflug-Placido topographer (Sirius). The agreement between the two approaches was assessed using Bland-Altman analysis. Results: Mean age was 28.54 ± 6.6 years, me
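Bland-Altman analysis summarizes agreement between two devices by the mean of the paired differences (bias) and the 95% limits of agreement (bias ± 1.96 SD). A minimal sketch with hypothetical CCT readings (not the study's data):

```python
import math

def bland_altman(x, y):
    """Bland-Altman agreement statistics for paired device measurements:
    mean difference (bias) and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical CCT readings (micrometres) from two devices on the same eyes.
oct_cct    = [540, 552, 531, 560, 545]
sirius_cct = [537, 550, 534, 556, 544]
bias, (lo, hi) = bland_altman(oct_cct, sirius_cct)
print(round(bias, 1))  # 1.4
```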