Alzheimer's disease (AD) increasingly affects the elderly and is a major cause of death among those 65 and over. Various deep-learning methods are used for automatic diagnosis, yet they have limitations. Deep learning is one of the modern methods used to detect and classify medical images because of its ability to extract image features automatically. However, there are still limitations to using deep learning to accurately classify medical images: extracting the fine edges of medical images is sometimes difficult, and the images may contain distortion. Therefore, this research aims to develop a Computer-Aided Brain Diagnosis (CABD) system that can tell whether a brain scan exhibits indications of Alzheimer's disease. The system employs MRI and feature extraction methods to categorize images. This paper adopts the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, which includes functional MRI and Positron Emission Tomography (PET) scans produced for people with Alzheimer's as well as typical individuals. The proposed technique uses MRI brain scans to discover and categorize traits using the Histogram Features Extraction (HFE) technique combined with Canny edge detection to represent the input image for Convolutional Neural Network (CNN) classification. This strategy tracks instances of gradient orientation in an image. The experimental results gave an accuracy of 97.7% for classifying ADNI images.
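A minimal sketch of the preprocessing this abstract describes, assuming OpenCV: Canny edges and a normalized intensity histogram are extracted from an MRI slice, and intensity plus edge channels are stacked as the CNN input. The file name, Canny thresholds, and 32-bin histogram are illustrative assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def preprocess_mri_slice(path, size=(128, 128)):
    """Load an MRI slice, extract Canny edges and histogram features."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)

    # Canny edge map highlights the fine anatomical boundaries.
    edges = cv2.Canny(img, threshold1=50, threshold2=150)  # assumed thresholds

    # Histogram feature extraction: normalized intensity distribution.
    hist = cv2.calcHist([img], [0], None, [32], [0, 256]).flatten()
    hist /= hist.sum() + 1e-8

    # Stack intensity and edge channels as a two-channel CNN input.
    cnn_input = np.stack([img / 255.0, edges / 255.0], axis=-1)
    return cnn_input.astype(np.float32), hist

# Example usage (hypothetical file name):
# cnn_input, hist_features = preprocess_mri_slice("slice_001.png")
```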
For several applications, it is very important to have an edge detection technique that matches human visual contour perception and is less sensitive to noise. The edge detection algorithm described in this paper is based on the results obtained by Maximum a Posteriori (MAP) and Maximum Entropy (ME) deblurring algorithms. The technique makes a trade-off between sharpening and smoothing the noisy image. One advantage of the described algorithm is that it is less sensitive to noise than the Marr and Geuen techniques, which are considered to be the best edge detection algorithms in terms of matching human visual contour perception.
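The MAP and ME deblurring steps themselves are beyond a short sketch, but the sharpening/smoothing trade-off the abstract mentions can be illustrated as follows. This is not the paper's algorithm; a single assumed parameter `alpha` blends Gaussian smoothing against unsharp-mask sharpening before a generic gradient-based edge detector.

```python
import cv2
import numpy as np

def tradeoff_edges(img, alpha=0.5, sigma=1.5, thresh=40):
    """alpha=0 -> pure smoothing (noise-robust), alpha=1 -> strong sharpening."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)              # noise suppression
    sharpened = cv2.addWeighted(img, 1 + alpha, blurred, -alpha, 0)  # unsharp mask
    mixed = cv2.addWeighted(blurred, 1 - alpha, sharpened, alpha, 0)

    # Gradient magnitude via Sobel operators, then a fixed threshold.
    gx = cv2.Sobel(mixed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(mixed, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return (mag > thresh).astype(np.uint8) * 255
```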
This study presents a new approach to determining the optimal edge detection threshold value. The approach is based on extracting small homogeneous blocks from targets with unequal means. From these blocks, a small image with known edges is generated (the edges being the lines between adjacent blocks), so these simulated edges can be treated as true edges. The true simulated edges are then compared with the edges detected in the small generated image using different threshold values. The comparison is based on computing the mean square error between the simulated edge image and the edge image produced by the edge detector. The mean square error is computed for the total edge image (Er), for the edge region…
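A hedged sketch of this threshold-selection procedure, assuming OpenCV and Canny as the edge detector: homogeneous blocks with unequal means form a small image whose block borders are the known true edges, and the threshold minimizing the mean square error against that simulated truth is selected. Block size, noise level, and the threshold grid are illustrative choices.

```python
import cv2
import numpy as np

def make_block_image(block=32, means=(40, 100, 160, 220)):
    """2x2 grid of homogeneous blocks; the borders are the true edges."""
    img = np.zeros((2 * block, 2 * block), np.uint8)
    img[:block, :block] = means[0]
    img[:block, block:] = means[1]
    img[block:, :block] = means[2]
    img[block:, block:] = means[3]
    truth = np.zeros_like(img)
    truth[block - 1:block + 1, :] = 255   # horizontal border (2 px wide)
    truth[:, block - 1:block + 1] = 255   # vertical border
    return img, truth

img, truth = make_block_image()
noisy = np.clip(img + np.random.normal(0, 8, img.shape), 0, 255).astype(np.uint8)

# Pick the Canny threshold whose edge map has the lowest MSE vs. the truth.
best = min(
    range(10, 200, 10),
    key=lambda t: np.mean((cv2.Canny(noisy, t, 2 * t).astype(float)
                           - truth.astype(float)) ** 2),
)
print("threshold with lowest MSE against simulated edges:", best)
```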
The most distinctive thing a person does is speak. With a word he loves and hates, affirms relationships, and passes from disbelief into faith. With a word he marries and with a word he separates. With a kind word he may reach the heights of heaven and gain the pleasure of God, and with a single word a servant speaks, God may record His pleasure for him or cast him on his face into the fire. Emotions are inflamed with a word, nations are stirred with a word, and relations between states, and even wars, begin and end with a word.
What comes out of a person's mouth is an interpreter that expresses what is stored in his conscience and reveals what lies hidden within him, for it is evidence of…
Some degree of noise is always present in any electronic device that transmits or receives a signal. For televisions, this signal is the broadcast data transmitted over cable or received at the antenna; for digital cameras, the signal is the light that hits the camera sensor. In any case, noise is unavoidable. In this paper, electronic noise has been generated on TV-satellite images by using variable resistors connected to the transmitting cable, and the contrast of edges has been determined. This method has been applied by capturing images from a TV-satellite channel (Al-Arabiya) with different resistors. The results show that increasing the resistance always produced higher noise…
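A rough software analogue of this experiment, with synthetic Gaussian noise standing in for the resistor-induced electronic noise (the paper injects noise in hardware, so the noise model and the file name here are assumptions). Edge contrast is estimated as the mean gradient magnitude along detected edges.

```python
import cv2
import numpy as np

def edge_contrast(img):
    """Mean gradient magnitude sampled along Canny edges."""
    edges = cv2.Canny(img, 50, 150)
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag[edges > 0].mean() if edges.any() else 0.0

frame = cv2.imread("satellite_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical capture
for sigma in (0, 5, 10, 20, 40):  # rising noise level ~ rising resistance
    noisy = np.clip(frame + np.random.normal(0, sigma, frame.shape),
                    0, 255).astype(np.uint8)
    print(f"noise sigma={sigma:>2}: edge contrast={edge_contrast(noisy):.1f}")
```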
The growth of machine learning and image processing methods, along with the availability of medical imaging data, has driven a large increase in the use of machine learning strategies in the medical field. Neural networks, and in recent days convolutional neural networks (CNN) in particular, provide powerful descriptors for computer-aided diagnosis systems. Even so, several issues arise when working with medical images: many medical images have a low signal-to-noise ratio (SNR) compared to scenes obtained with a digital camera, generally exhibit a low spatial resolution, and the contrast between different body tissues tends to be very low, making it difficult to …
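As one illustration of handling the low-SNR, low-contrast images described above, a common preprocessing pair is non-local-means denoising followed by CLAHE contrast equalization. This is a generic remedy sketched with OpenCV, not a step taken from the paper.

```python
import cv2

def enhance_medical_image(img):
    """Denoise a grayscale image, then locally equalize tissue contrast."""
    denoised = cv2.fastNlMeansDenoising(img, h=10)           # h: filter strength (assumed)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)
```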
Finding similarities in texts is important in many areas such as information retrieval, automated article scoring, and short answer categorization. Evaluating short answers is not an easy task due to variation in natural language. Methods for calculating the similarity between texts depend on semantic or grammatical aspects. This paper discusses a method for evaluating short answers using semantic networks to represent the typical (correct) answer and the students' answers. The semantic network of nodes and relationships represents the text (the answers). Moreover, grammatical aspects are captured by measuring the similarity of parts of speech between the answers. In addition, finding hierarchical relationships between nodes in the networks…
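A toy sketch of the representation described above: each answer becomes a small semantic network of (node, relation, node) triples, and similarity is the overlap between the reference network and the student's network. The hand-written triples and the plain Jaccard measure are illustrative assumptions; the paper's extraction, grammatical matching, and hierarchical relationships are richer.

```python
def network_similarity(reference, student):
    """Jaccard overlap between two sets of semantic (node, relation, node) triples."""
    ref, stu = set(reference), set(student)
    return len(ref & stu) / len(ref | stu) if ref | stu else 1.0

# Hypothetical triples for a model answer and a student answer.
reference = {("heart", "pumps", "blood"), ("blood", "carries", "oxygen")}
student = {("heart", "pumps", "blood"), ("blood", "transports", "oxygen")}
print(f"semantic similarity: {network_similarity(reference, student):.2f}")  # -> 0.33
```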
Predicting the network traffic of web pages is an area that has received increased focus in recent years. Modeling traffic helps find strategies for distributing network loads, identifying user behaviors and malicious traffic, and predicting future trends. Many statistical and intelligent methods have been studied for predicting web traffic from network-traffic time series. In this paper, the use of machine learning algorithms to model Wikipedia traffic using Google's time series dataset is studied. Two time-series datasets were used for data generalization, building a set of machine learning models (XGBoost, Logistic Regression, Linear Regression, and Random Forest), and comparing the performance of the models using SMAPE and…
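A minimal sketch of the evaluation step, assuming the standard SMAPE definition and a synthetic page-view series: lagged features feed two of the models named above, and SMAPE compares their forecasts. The data, lag count, and train/test split are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2
    return 100 * np.mean(np.abs(forecast - actual) / np.where(denom == 0, 1, denom))

# Synthetic daily page-view series with a weekly cycle.
rng = np.random.default_rng(0)
views = 1000 + 50 * np.sin(np.arange(400) * 2 * np.pi / 7) + rng.normal(0, 20, 400)
X = np.column_stack([views[i:i - 7] for i in range(7)])  # 7 lag features per day
y = views[7:]
X_train, X_test, y_train, y_test = X[:300], X[300:], y[:300], y[300:]

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    pred = model.fit(X_train, y_train).predict(X_test)
    print(type(model).__name__, f"SMAPE={smape(y_test, pred):.2f}%")
```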
The development of microcontrollers has recently been applied to monitoring and data acquisition. This development has produced various architectures for deploying and interfacing microcontrollers in a network environment. Some existing architectures suffer from redundant resources, extra processing, high cost, and delayed response. This paper presents a flexible, concise architecture for building a distributed networked microcontroller system. The system consists of only one server, working through the internet, and a set of microcontrollers distributed across different sites. Each microcontroller is connected through Ethernet to the internet. In this system, a client requesting data from a certain site is served through just one server that is…
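A schematic sketch of the single-server idea, assuming a plain TCP wire format: the server accepts a client request naming a site, forwards a query to that site's microcontroller, and relays the reply back. The address table, port numbers, and "READ" command are hypothetical stand-ins for the system's actual protocol.

```python
import socket

MCU_ADDRESSES = {"site_a": ("192.168.1.10", 5000),   # hypothetical MCU endpoints
                 "site_b": ("192.168.1.11", 5000)}

def serve_once(listen_port=8080):
    """Accept one client request and relay it to the named site's microcontroller."""
    with socket.socket() as srv:
        srv.bind(("", listen_port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            site = conn.recv(64).decode().strip()              # client names a site
            with socket.create_connection(MCU_ADDRESSES[site], timeout=5) as mcu:
                mcu.sendall(b"READ\n")                         # query the microcontroller
                conn.sendall(mcu.recv(1024))                   # relay the data back
```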
In this paper, an authentication-based fingerprint biometric system is proposed with the personal identity information of name and birthday. The generation of a National Identification Number (NIDN) is proposed by merging fingerprint features with the personal identity information to generate a Quick Response (QR) code image used in the access system. Two approaches are employed: traditional authentication and strong identification with the QR and NIDN information. The system shows an accuracy of 96.153% with a threshold value of 50. The accuracy reaches 100% when the threshold value goes below 50.
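A hedged sketch of the NIDN-generation step, assuming a SHA-256 hash over the fingerprint features and identity fields, and the third-party `qrcode` package (pip install qrcode) for the QR image. The hash construction, 12-digit length, and payload format are assumptions, not the paper's exact scheme.

```python
import hashlib
import qrcode

def make_nidn(fingerprint_features: bytes, name: str, birthday: str) -> str:
    """Merge fingerprint features with identity fields into a fixed-length number."""
    digest = hashlib.sha256(fingerprint_features
                            + f"|{name}|{birthday}".encode()).hexdigest()
    return str(int(digest, 16))[:12]   # assumed 12-digit NIDN

nidn = make_nidn(b"\x01\x23\x45", "Jane Doe", "1990-05-17")  # toy feature bytes
qrcode.make(f"NIDN:{nidn}").save("nidn_qr.png")              # QR image for the access system
```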
Internet usage is at an all-time high in the modern period, and the majority of the population uses the Internet for all types of communication. While this connectivity brings great convenience, hackers have, as a result of this trend, become increasingly focused on attacking systems and networks in numerous ways. When a hacker commits a digital crime, it is examined in a reactive manner, which aids in the identification of the perpetrators. In the modern period, however, one cannot afford to wait for an attack to occur; users expect to be able to predict a cyberattack before it causes damage to the system. This can be accomplished with the assistance of the proactive forensic framework presented in this study. The proposed system combines…
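As one possible concrete reading of "predicting a cyberattack before it causes damage", the sketch below scores incoming connection features against a model of normal traffic. IsolationForest and the feature set are assumptions for illustration only; the truncated abstract does not confirm the framework's actual components.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns (assumed features): packets/s, mean packet size, distinct ports contacted.
normal_traffic = rng.normal([100, 500, 3], [20, 80, 1], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_traffic)

incoming = np.array([[105, 480, 3],      # ordinary session
                     [900, 60, 45]])     # scan-like burst
print(detector.predict(incoming))        # 1 = normal, -1 = flag for proactive forensics
```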