Due to advancements in computer science and technology, impersonation has become more common. Today, biometrics technology is widely used in many aspects of people's lives, and iris recognition, known for its high accuracy and speed, is a significant and challenging field of study. As a result, iris recognition technology and biometric systems are used for security in numerous applications, including human-computer interaction and surveillance systems, and it is crucial to develop advanced models to combat impersonation crimes. This study proposes artificial intelligence models with high accuracy and speed to counter these crimes. The models use linear discriminant analysis (LDA) for feature extraction and mutual information (MI), together with analysis of variance (ANOVA), for feature selection. Two iris classification systems were developed: one using the LDA output as input to the OneR machine learning algorithm, and an innovative hybrid model based on a one-dimensional convolutional neural network (HM-1DCNN). On the MMU database, the OneR model achieved an accuracy of 94.387%, while the HM-1DCNN model, integrating LDA with MI and ANOVA, achieved 99.9% accuracy. Comparisons with previous studies show that the HM-1DCNN model performs exceptionally well, with at least 1.69% higher accuracy and lower processing time.
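As a rough illustration of the pipeline this abstract describes (MI and ANOVA feature selection, LDA feature extraction, and a simple one-rule classifier), the following sketch uses scikit-learn on synthetic data; the MMU database, the exact OneR implementation (approximated here by a depth-1 decision tree), and the HM-1DCNN architecture are specific to the study and are not reproduced.

# Minimal sketch of an LDA + MI/ANOVA feature-selection pipeline on synthetic data.
# The depth-1 decision tree is only an analogue of the OneR rule learner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, mutual_info_classif, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=450, n_features=64, n_informative=20,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=32),   # MI-based feature selection
    SelectKBest(f_classif, k=16),             # ANOVA F-test selection
    LinearDiscriminantAnalysis(),             # LDA as feature extractor/projector
    DecisionTreeClassifier(max_depth=1),      # one-rule stand-in for OneR
)
pipe.fit(X_tr, y_tr)
print("held-out accuracy:", pipe.score(X_te, y_te))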
The dependable and efficient identification of Qin seal script characters is pivotal to the discovery, preservation, and inheritance of the distinctive cultural values embodied by these artifacts. This paper presents a character recognition model based on histogram of oriented gradients (HOG) features and a support vector machine (SVM) for identifying partial and blurred Qin seal script characters, achieving accurate recognition on a small, imbalanced dataset. First, a dataset of Qin seal script image samples is established, and Gaussian filtering is employed to remove image noise. Subsequently, a gamma transformation adjusts the image brightness and enhances the contrast between font structures and image backgrounds.
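A minimal sketch of the stated preprocessing and recognition steps (Gaussian filtering, gamma transformation, HOG features, SVM classification) might look as follows; the placeholder images, image size, and hyper-parameters are assumptions rather than the paper's settings, and class_weight="balanced" is one common way to cope with the small, imbalanced dataset the abstract mentions.

# Sketch of a Gaussian-filter + gamma-correction + HOG + SVM pipeline on random images.
import numpy as np
from skimage.filters import gaussian
from skimage.exposure import adjust_gamma
from skimage.feature import hog
from sklearn.svm import SVC

def extract_features(img, gamma=0.8, sigma=1.0):
    img = gaussian(img, sigma=sigma)                 # remove image noise
    img = adjust_gamma(img, gamma)                   # adjust brightness/contrast
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))               # HOG descriptor

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))                    # placeholder character images
labels = rng.integers(0, 4, size=40)                 # placeholder class labels

X = np.array([extract_features(im) for im in images])
clf = SVC(kernel="rbf", class_weight="balanced")     # weighting helps imbalance
clf.fit(X, labels)
print(clf.predict(X[:5]))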
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we encounter in real life, and it can serve as a powerful confirmatory tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters, and observation membership was also inferred and assessed for each method. The results showed that the flexible mixture model outperformed the others.
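For illustration, a two-component mixture of linear regressions can be fitted with a simple EM procedure, which also yields the component memberships mentioned above; this sketch on synthetic data is not one of the specific methods compared in the paper (e.g., the flexible mixture model).

# EM for a two-component mixture of linear regressions on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(-3, 3, n)
z = rng.integers(0, 2, n)                                  # hidden component labels
y = np.where(z == 0, 1.0 + 2.0 * x, -1.0 - 0.5 * x) + rng.normal(0, 0.3, n)
X = np.column_stack([np.ones(n), x])                       # intercept + slope design

K = 2
beta = rng.normal(size=(K, 2))                             # regression coefficients
sigma = np.ones(K)                                         # component noise scales
pi = np.full(K, 1.0 / K)                                   # mixing proportions

for _ in range(100):
    # E-step: responsibility of each component for each observation
    mu = X @ beta.T                                        # (n, K) fitted means
    dens = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted least squares and variance update per component
    for k in range(K):
        w = resp[:, k]
        Xw = X * w[:, None]
        beta[k] = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
    pi = resp.mean(axis=0)

print("estimated coefficients per component:\n", beta)
print("inferred memberships of first 10 observations:", resp.argmax(axis=1)[:10])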
The issue of image captioning, which comprises automatic text generation to describe an image's visual content, has become feasible with developments in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach combines pre-trained Convolutional Neural Network (CNN) models with a Long Short-Term Memory (LSTM) network to generate image captions. The process includes two stages: the first stage trains the CNN-LSTM models using baseline hyper-parameters, and the second stage trains the CNN-LSTM models after optimizing and adjusting those hyper-parameters.
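A schematic version of such a CNN-LSTM "merge" captioning model, in which pre-trained CNN image features and an LSTM encoding of the partial caption are combined to predict the next word, is sketched below with Keras; the vocabulary size, caption length, feature dimension, and layer sizes are illustrative assumptions rather than the study's tuned hyper-parameters.

# Schematic CNN-LSTM captioning model (merge architecture) with assumed shapes.
from tensorflow.keras import layers, Model

vocab_size, max_len, feat_dim = 5000, 30, 2048    # assumed sizes

img_in = layers.Input(shape=(feat_dim,))          # pre-extracted CNN image features
img_vec = layers.Dense(256, activation="relu")(layers.Dropout(0.5)(img_in))

cap_in = layers.Input(shape=(max_len,))           # partial caption as word indices
emb = layers.Embedding(vocab_size, 256, mask_zero=True)(cap_in)
seq_vec = layers.LSTM(256)(layers.Dropout(0.5)(emb))

merged = layers.add([img_vec, seq_vec])           # fuse image and text representations
hidden = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(vocab_size, activation="softmax")(hidden)   # next-word distribution

model = Model(inputs=[img_in, cap_in], outputs=out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()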
Intrusion detection systems detect attacks inside computers and networks, where detection must be fast and achieve a high detection rate. Various proposed methods achieve a high detection rate, either by improving the algorithm or by hybridizing it with another algorithm; however, they suffer from long processing times, especially after the algorithm is improved and when dealing with large volumes of traffic data. On the other hand, DNA-sequence-based detection approaches have previously been applied to intrusion detection systems: the achieved detection rates were very low, but the processing time was fast. Feature selection was also used to reduce computation and complexity and thereby speed up the system.
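The point about feature selection reducing computation can be illustrated as follows on synthetic, imbalanced traffic-like data; the DNA-sequence matching stage of the cited approaches is not reproduced, and the classifier and feature counts are arbitrary choices for the sketch.

# Comparing training time and detection accuracy with and without feature selection.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=120, n_informative=15,
                           weights=[0.9, 0.1], random_state=0)   # attacks are rare
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for k in (120, 20):                                # all features vs. selected subset
    sel = SelectKBest(mutual_info_classif, k=k).fit(X_tr, y_tr)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    t0 = time.perf_counter()
    clf.fit(sel.transform(X_tr), y_tr)
    print(f"k={k:3d}  train time={time.perf_counter() - t0:.2f}s  "
          f"detection accuracy={clf.score(sel.transform(X_te), y_te):.3f}")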
ABSTRACT: BACKGROUND: Left ventricular hypertrophy is a significant risk factor for cardiovascular complications such as ischemic heart disease, heart failure, sudden death, atrial fibrillation, and stroke, so an inexpensive, reliable tool is needed to detect this pathology. Different electrocardiographic (ECG) criteria have been investigated; however, the results regarding the accuracy of these criteria are conflicting. OBJECTIVE: To assess the accuracy of three electrocardiographic criteria in the diagnosis of left ventricular hypertrophy in adult patients with hypertension, using echocardiography as the reference test. PATIENTS AND METHODS: This is a hospital-based cross-sectional observational study that included 340 adult patients with a history of hypertension.
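Assessing an ECG criterion against echocardiography as the reference test reduces to a 2x2 table; the small sketch below computes sensitivity, specificity, and overall accuracy from hypothetical counts, not from the study's data.

# Diagnostic accuracy of a binary criterion from a hypothetical 2x2 table.
def diagnostic_accuracy(tp, fp, fn, tn):
    """Return sensitivity, specificity and overall accuracy."""
    sensitivity = tp / (tp + fn)          # criterion positive among echo-confirmed LVH
    specificity = tn / (tn + fp)          # criterion negative among patients without LVH
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# hypothetical counts for one ECG criterion vs. echocardiographic LVH
sens, spec, acc = diagnostic_accuracy(tp=120, fp=30, fn=80, tn=110)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  accuracy={acc:.2f}")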
This paper presents a method to classify colored textural images of skin tissues. Since medical images have high heterogeneity, developing a reliable skin-cancer detection process is difficult, and a single (mono-) fractal dimension is not sufficient to classify images of this nature. Multifractal-based feature vectors are suggested here as an alternative and more effective tool, and multiple color channels are used to obtain more descriptive features. Two multifractal-based sets of features are suggested: the first set measures the local roughness property, while the second set measures the local contrast property. A combination of all the features extracted from the three color models gives the highest classification accuracy of 99.4%.
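As a rough sketch of how multifractal features can be computed for one colour channel, the generalized box-counting dimensions D_q below treat the normalized intensity as a measure; the paper's specific local-roughness and local-contrast feature sets and its colour models are not reproduced, and the random image is a placeholder.

# Generalized box-counting dimensions D_q of a single (placeholder) colour channel.
import numpy as np

def generalized_dimensions(channel, qs=(-2, 0, 2, 4), sizes=(2, 4, 8, 16, 32)):
    """Estimate D_q by regressing log sum(P_i^q) on log(box size)."""
    p = channel.astype(float)
    p /= p.sum()                                        # intensity as a probability measure
    dims = []
    for q in qs:
        logZ, logS = [], []
        for s in sizes:
            h, w = (p.shape[0] // s) * s, (p.shape[1] // s) * s
            boxes = p[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            boxes = boxes[boxes > 0]                    # box measures P_i(s)
            logZ.append(np.log((boxes ** q).sum()))
            logS.append(np.log(s))
        slope = np.polyfit(logS, logZ, 1)[0]            # tau(q) = (q - 1) * D_q
        dims.append(slope / (q - 1))                    # q = 1 (information dim.) omitted
    return np.array(dims)

rng = np.random.default_rng(0)
channel = rng.random((128, 128))                        # placeholder colour channel
print(generalized_dimensions(channel))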
This research studies dual (paired) data models with mixed random parameters, which contain two types of parameters: one random and the other fixed. The random parameter arises from differences in the marginal slopes across the cross sections, while the fixed parameter arises from differences in the constant terms, with the random errors of each cross section characterized by heteroscedasticity of variance in addition to first-order serial correlation. The main objective of this research is to use efficient estimation methods suited to such paired data in the case of small samples, and to achieve this goal the feasible generalized least squares (FGLS) approach was applied.
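As a hedged illustration of feasible-GLS-style estimation under first-order serial correlation, the sketch below uses statsmodels' GLSAR, which iterates between estimating the AR(1) coefficient and re-fitting; the panel structure, the random/fixed parameter decomposition, and the treatment of cross-section-specific variances (which would additionally require weighting) are not reproduced.

# OLS vs. iterated GLSAR (a feasible-GLS estimator for AR(1) errors) on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):                     # AR(1) errors with heteroscedastic shocks
    e[t] = 0.6 * e[t - 1] + rng.normal(scale=0.5 + 0.5 * abs(x[t]))
y = 2.0 + 1.5 * x + e

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                  # ordinary least squares for comparison
fgls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=8)

print("OLS   coefficients:", ols.params)
print("GLSAR coefficients:", fgls.params)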
Background: For many decades, the ECG was the workhorse of non-invasive cardiac testing, and today, although other techniques provide more detail about the structural anomalies in congenital heart diseases, the ECG is likely to remain part of the clinical evaluation of patients with such diseases because it is inexpensive, easy to perform, and, in certain situations, both sensitive and specific.
Objective: This study was carried out to identify the pattern of ECG findings in patients with TOF.
Methods: This is a retrospective study of 200 patients with TOF referred to the Ibn Al-Bitar cardiac center from April 1993 to May 1999. The diagnosis of TOF was established by echocardiographic, catheterization, and angiographic studies.