Alzheimer's disease (AD) increasingly affects the elderly and is a leading cause of death among people aged 65 and over. Deep learning is widely used for the automatic detection and classification of medical images because of its ability to extract image features automatically, yet it still has limitations for accurate medical image classification: the fine edges of medical images are sometimes difficult to extract, and the images may contain distortion. Therefore, this research aims to develop a Computer-Aided Brain Diagnosis (CABD) system that can tell whether a brain scan exhibits indications of Alzheimer's disease. The system employs MRI and feature extraction methods to categorize images. This paper adopts the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, which includes functional MRI and Positron Emission Tomography scans acquired from Alzheimer's patients as well as typical individuals. The proposed technique uses MRI brain scans to discover and categorize traits by combining the Histogram Features Extraction (HFE) technique with Canny edge detection to represent the input image for Convolutional Neural Network (CNN) classification. This strategy keeps track of occurrences of gradient orientation in an image. The experimental results provided an accuracy of 97.7% for classifying ADNI images.
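As a rough illustration of this kind of pipeline, the minimal sketch below (assuming OpenCV and Keras; the image size, Canny thresholds, class labels, and all function names are hypothetical and not taken from the paper) stacks a Canny edge map with a histogram-equalized intensity channel and feeds the two-channel result to a small CNN.

```python
# Illustrative sketch, not the authors' code: combine a Canny edge map with a
# histogram-equalized intensity channel as a two-channel CNN input for MRI slices.
import cv2
import numpy as np
from tensorflow.keras import layers, models

def preprocess_slice(mri_slice):
    """Return a (128, 128, 2) float array: equalized intensities + Canny edges."""
    img = cv2.resize(mri_slice, (128, 128))
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img, threshold1=50, threshold2=150)    # fine edge map
    equalized = cv2.equalizeHist(img)                        # histogram-based contrast
    stacked = np.stack([equalized, edges], axis=-1) / 255.0
    return stacked.astype(np.float32)

def build_cnn(num_classes=3):  # e.g. AD / MCI / normal control (assumed classes)
    model = models.Sequential([
        layers.Input(shape=(128, 128, 2)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```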
Document clustering is the process of organizing a particular electronic corpus of documents into subgroups with similar text features. Formerly, a number of conventional algorithms were applied to perform document clustering. There are current endeavors to enhance clustering performance by employing evolutionary algorithms, and such endeavors have become an emerging topic gaining more attention in recent years. The aim of this paper is to present an up-to-date and self-contained review fully devoted to document clustering via evolutionary algorithms. It first provides a comprehensive inspection of the document clustering model, revealing its various components and related concepts. Then it presents and analyzes the principal research works…
Arabic text categorization for pattern recognition is challenging. We propose, for the first time, a novel holistic method based on clustering for classifying Arabic writers. The categorization is accomplished stage-wise. Firstly, the document images are sectioned into lines, words, and characters. Secondly, structural and statistical features are obtained from the sectioned portions. Thirdly, the F-measure is used to evaluate the performance of the extracted features and their combinations under different linkage methods, distance measures, and numbers of groups. Finally, experiments are conducted on the standard KHATT dataset of Arabic handwritten text, comprising varying samples from 1000 writers. The results in the generation…
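A minimal sketch of such an evaluation loop is given below, assuming SciPy hierarchical clustering and a pairwise F-measure against known writer labels; the specific linkage/distance pairs, group count, and function names are illustrative assumptions, not the paper's protocol.

```python
# Illustrative sketch (assumed workflow, not the paper's code): cluster writer
# feature vectors with several linkage/distance combinations and score each
# grouping with a pairwise F-measure against the true writer identities.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import f1_score

def pairwise_f_measure(labels_true, labels_pred):
    """F-measure over sample pairs: same-cluster decisions vs. same-writer truth."""
    t, p = np.asarray(labels_true), np.asarray(labels_pred)
    iu = np.triu_indices(len(t), k=1)
    same_true = (t[:, None] == t[None, :])[iu]
    same_pred = (p[:, None] == p[None, :])[iu]
    return f1_score(same_true, same_pred)

def evaluate_groupings(features, writer_ids, n_groups=10):
    """Try linkage/distance combinations and report the F-measure of each."""
    scores = {}
    for method, metric in [("single", "cityblock"), ("complete", "cityblock"),
                           ("average", "euclidean"), ("ward", "euclidean")]:
        Z = linkage(features, method=method, metric=metric)
        pred = fcluster(Z, t=n_groups, criterion="maxclust")
        scores[(method, metric)] = pairwise_f_measure(writer_ids, pred)
    return scores
```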
The quality of Global Navigation Satellite System (GNSS) networks is considerably influenced by the configuration of the observed baselines. This study aims to find an optimal configuration for GNSS baselines, in terms of the number and distribution of baselines, that improves the quality criteria of GNSS networks. The first-order design (FOD) problem was applied in this research to optimize the GNSS network baseline configuration, based on the sequential adjustment method to solve its objective functions.
FOD for optimum precision (FOD-p) was the proposed model, based on the design criteria of A-optimality and E-optimality. These design criteria were selected as objective functions of precision, which…
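For orientation, the sketch below shows one common way these two criteria are computed from a candidate configuration's design matrix (A-optimality as the trace of the parameter cofactor matrix, E-optimality as its largest eigenvalue); the function and variable names are illustrative and this is not the paper's implementation.

```python
# Illustrative sketch (assumed formulation, not the paper's code): score a
# candidate set of GNSS baselines by A-optimality and E-optimality.
import numpy as np

def precision_criteria(A, P=None):
    """A: design matrix of the candidate baseline configuration,
    P: observation weight matrix (identity if omitted)."""
    if P is None:
        P = np.eye(A.shape[0])
    N = A.T @ P @ A                            # normal-equation matrix
    Qxx = np.linalg.inv(N)                     # cofactor matrix of the parameters
    a_opt = np.trace(Qxx)                      # A-optimality: average variance
    e_opt = np.max(np.linalg.eigvalsh(Qxx))    # E-optimality: worst-case variance
    return a_opt, e_opt
```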
In this paper, we implement and examine a Simulink model with electroencephalography (EEG) to control several actuators based on brain waves. Such a system is expected to be in great demand, since it can help individuals who are unable to operate control units that require direct physical contact. Initially, ten volunteers with a wide age range (20-66) participated in this study, and statistical measurements were first calculated for all eight channels. The number of channels was then reduced by half according to the activation of brain regions within the utilized protocol, which also decreased the processing time. Consequently, four of the participants (three males and one female) were chosen to examine the Simulink model during…
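A small sketch of the channel-reduction idea is given below, assuming per-channel statistics are computed offline in Python before the Simulink stage; the statistics used, the ranking rule, and all names are assumptions for illustration only.

```python
# Illustrative sketch (assumed preprocessing, not the authors' Simulink model):
# compute per-channel statistics and keep the half of the channels with the
# strongest activity before mapping their signals to actuator commands.
import numpy as np

def select_channels(eeg, keep=4):
    """eeg: array of shape (n_channels, n_samples); returns kept indices and stats."""
    stats = {
        "mean": eeg.mean(axis=1),
        "std": eeg.std(axis=1),
        "power": (eeg ** 2).mean(axis=1),
    }
    ranking = np.argsort(stats["power"])[::-1]   # rank channels by signal power
    return ranking[:keep], stats
```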
Today, with the increasing use of social media, many researchers have become interested in topic extraction from Twitter. Tweets are short, unstructured, and noisy texts, which makes it difficult to find topics in them. Topic modeling algorithms such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) were originally designed to derive topics from long documents such as articles and books, and they are often less effective when applied to short text content like tweets. Luckily, Twitter has many features that represent the interaction between users, and tweets carry rich user-generated hashtags as keywords. In this paper, we exploit the hashtag feature to improve the topics learned…
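One common way to exploit hashtags in this setting is hashtag pooling, sketched below with gensim as an assumed toolkit; the pooling rule and function names are illustrative and not necessarily the paper's exact method.

```python
# Illustrative sketch (assumed pooling scheme, not the paper's exact method):
# pool tweets that share a hashtag into one pseudo-document, then train LDA,
# so the model sees longer documents than individual tweets.
from collections import defaultdict
from gensim import corpora, models

def hashtag_pooled_lda(tweets, num_topics=10):
    """tweets: list of (hashtags, tokens) pairs, e.g. (["#health"], ["flu", "season"])."""
    pools = defaultdict(list)
    for hashtags, tokens in tweets:
        for tag in hashtags or ["<no_tag>"]:   # untagged tweets form their own pool
            pools[tag].extend(tokens)
    docs = list(pools.values())
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    return models.LdaModel(corpus, num_topics=num_topics,
                           id2word=dictionary, passes=10)
```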
Speech is the essential way to interact between humans or between humans and machines. However, it is always contaminated with different types of environmental noise. Therefore, speech enhancement algorithms (SEA) have emerged as a significant approach in the speech processing field to suppress background noise and recover the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed in the minimum mean square error sense. The clean signal is estimated by taking advantage of Laplacian modeling of speech and noise based on the coefficient distribution of an orthogonal transform, the Discrete Krawtchouk-Tchebichef transform. The Discrete Krawtchouk-Tchebichef transform…
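To make the general two-stage MMSE idea concrete, the sketch below uses the STFT as a stand-in transform (the paper's Discrete Krawtchouk-Tchebichef transform and Laplacian estimator are not reproduced here); the noise-estimation rule, gain, and names are illustrative assumptions only.

```python
# Illustrative sketch only: a generic MMSE-style gain in the STFT domain as a
# stand-in for the paper's Discrete Krawtchouk-Tchebichef transform pipeline.
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, fs, noise_frames=10):
    f, t, X = stft(noisy, fs=fs, nperseg=512)
    # Stage 1: estimate the noise power spectrum from leading (speech-free) frames.
    noise_psd = np.mean(np.abs(X[:, :noise_frames]) ** 2, axis=1, keepdims=True)
    # Crude a-priori SNR: a-posteriori SNR minus one, floored at zero.
    snr = np.maximum(np.abs(X) ** 2 / (noise_psd + 1e-12) - 1.0, 0.0)
    # Stage 2: Wiener-type MMSE gain applied to the noisy coefficients.
    gain = snr / (snr + 1.0)
    _, clean = istft(gain * X, fs=fs, nperseg=512)
    return clean
```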
Iris research is focused on developing techniques for identifying and locating relevant biometric features, accurate segmentation, and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which, in turn, reduces the effectiveness of the system when used as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method is based on a number of steps, starting from converting the grayscale eye image to a bit-plane representation and selecting the most significant bit planes, followed by a parameterization of the iris location, resulting in an accurate segmentation of the iris from the original…
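The bit-plane decomposition step can be illustrated with the short sketch below; the number of planes kept and the function name are assumptions, not the paper's parameters.

```python
# Illustrative sketch (assumed details, not the paper's code): decompose a
# grayscale eye image into its bit planes and keep the most significant ones,
# which carry the coarse structure used to localize the iris.
import numpy as np

def significant_bit_planes(gray, keep=3):
    """gray: uint8 image (H, W); returns the `keep` most significant bit planes."""
    planes = [((gray >> b) & 1).astype(np.uint8) for b in range(8)]  # plane 0 = LSB
    return planes[-keep:]   # e.g. bits 5, 6, 7
```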
Metal contents in vegetables are of interest because of issues related to food safety and potential health risks. These metals may perform many biochemical functions in the human body, and some of them are linked with various diseases at high levels. The current study aimed to evaluate the concentration of various metals in commonly consumed local vegetables using ICP-MS. The concentrations of metals in the studied vegetables (tarragon, bay laurel, dill, Syrian mesquite, vine leaves, thyme, arugula, basil, common purslane, and parsley) were found to be in the ranges of 76-778 for Al, 10-333 for B, 4-119 for Ba, 2812-24645 for Ca, 0.1-0.32 for Co, 201-464 for Fe, 3661-46400 for K, 0.31–1.…