The Enhanced Thematic Mapper Plus (ETM+) sensor carried onboard the Landsat-7 satellite was launched on 15 April 1999. Four years later, imagery collected by this sensor became severely degraded by the failure of the system's Scan Line Corrector (SLC), a radiometric error. The median filter is one of the basic building blocks in many image processing applications. Digital images are often corrupted by impulse noise caused by sensor errors, errors occurring during analog-to-digital conversion, and errors introduced in communication channels. Such noise inevitably changes the intensity of some pixels while leaving others unchanged, so removing it is essential to improving image quality. In this paper, Landsat-7 data were corrected for line-dropout radiometric errors using the median filter method. We studied the median filter and propose a method based on an improved median filtering algorithm [2]. A 3 x 3 median filter was applied to the Landsat-7 image, and the damaged pixels were restored using the ERDAS IMAGINE software.
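As a minimal sketch of the 3 x 3 median-filter correction described above (not the ERDAS IMAGINE workflow itself), the Python snippet below assumes dropout pixels read as zero and replaces only those pixels with the local median:

```python
import numpy as np
from scipy.ndimage import median_filter

def repair_line_dropout(band: np.ndarray, dropout_value: int = 0) -> np.ndarray:
    """Replace dropout pixels (assumed to read as `dropout_value`)
    with the 3x3 median of their neighbourhood."""
    med = median_filter(band, size=3)     # 3x3 median over the whole band
    repaired = band.copy()
    mask = band == dropout_value          # pixels lost to line dropout
    repaired[mask] = med[mask]            # substitute the local median
    return repaired

# Toy band with one dropped scan line
band = np.arange(1, 26, dtype=np.uint8).reshape(5, 5)
band[2, :] = 0                            # simulate a dropped line
print(repair_line_dropout(band))
```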
Agricultural improvement is a national economic issue that depends heavily on productivity. Disease detection in plants plays a significant role in the agriculture field, and accurate prediction of plant disease makes it possible to treat affected leaves as early as possible, limiting the economic loss. This paper uses image processing techniques with a Convolutional Neural Network (CNN), one of the deep learning techniques, to classify and detect plant leaf diseases. The publicly available PlantVillage dataset was used, which consists of 15 classes: 12 disease classes and 3 healthy classes. Data augmentation techniques were used, in addition to dropout and weight regularization.
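A minimal Keras sketch of the kind of CNN pipeline the abstract describes, combining augmentation, dropout, and L2 weight regularization; the layer sizes and input shape are illustrative assumptions, not the paper's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

NUM_CLASSES = 15  # 12 disease classes + 3 healthy classes

# Light data augmentation, as described in the abstract
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),    # input size is an assumption
    augment,
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),                  # dropout against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```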
Secure data communication across networks is constantly threatened by intrusion and abuse. A Network Intrusion Detection System (IDS) is a valuable tool for in-depth defense of computer networks. Most research and applications in the field of intrusion detection have been built by analysing datasets of attack types with batch-learning classifiers. The present study presents an intrusion detection system based on data stream classification. Several data stream algorithms were applied to the CICIDS2017 dataset, which contains several new types of attacks, and the results were evaluated to choose the algorithm that achieves high accuracy with low computation time.
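A minimal sketch of the stream-classification idea using scikit-learn's partial_fit as a stand-in for the paper's stream algorithms; the features and labels below are synthetic placeholders for CICIDS2017 records:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Incremental (stream) learner: sees the data one mini-batch at a time,
# never holding the full dataset in memory as batch learning would.
clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])            # 0 = benign, 1 = attack (assumed labels)

def stream_batches(n_batches=100, batch_size=256, n_features=20):
    rng = np.random.default_rng(0)
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, n_features))   # stand-in features
        y = (X[:, 0] > 0).astype(int)                   # stand-in labels
        yield X, y

for X, y in stream_batches():
    clf.partial_fit(X, y, classes=classes)  # classes required on first call

X_test, y_test = next(stream_batches(n_batches=1))
print("accuracy:", clf.score(X_test, y_test))
```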
Document analysis of images captured by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image processing techniques used to perform automatic identification of text. Existing image processing techniques need to manage many parameters in order to recognize the text in such pictures clearly. Segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images were first filtered using the Wiener filter, then the active contour algorithm could be applied for segmentation.
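A minimal sketch of that two-stage pipeline (Wiener denoising followed by an active contour), using scipy and scikit-image on a sample image; the initial contour location and the snake parameters are illustrative assumptions:

```python
import numpy as np
from scipy.signal import wiener
from skimage import data
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Denoise with a Wiener filter, then fit an active contour (snake)
image = data.text().astype(float)          # sample grayscale document image
denoised = wiener(image, mysize=5)

# Initial contour: an ellipse around an assumed region of interest,
# given as (row, col) coordinates
s = np.linspace(0, 2 * np.pi, 400)
rows, cols = denoised.shape
init = np.column_stack([rows / 2 + 60 * np.sin(s),
                        cols / 2 + 120 * np.cos(s)])

snake = active_contour(gaussian(denoised, sigma=3, preserve_range=True),
                       init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)   # fitted contour coordinates, shape (400, 2)
```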
Fraud includes acts of deception carried out by parties inside and outside companies in order to obtain economic benefits at the expense of those companies. Fraud tends to be committed when three factors are present: opportunity, motivation, and rationalization. Detecting fraud requires indications of its possible existence. Here, Benford's law can play an important role in directing attention toward possible financial fraud in a company's accounting records, reducing the effort and time required to detect and prevent it.
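Benford's law predicts that the leading digit d of naturally occurring amounts appears with probability log10(1 + 1/d). A minimal sketch of the screening idea, comparing observed first-digit frequencies against that expectation with a chi-square test (the sample amounts are synthetic):

```python
import numpy as np
from scipy.stats import chisquare

def benford_check(amounts):
    """Compare first-digit frequencies of `amounts` against Benford's law."""
    digits = np.array([int(str(abs(a)).lstrip("0.")[0]) for a in amounts])
    observed = np.bincount(digits, minlength=10)[1:10]
    expected = np.log10(1 + 1 / np.arange(1, 10)) * len(digits)
    stat, p = chisquare(observed, f_exp=expected)
    return stat, p   # a small p-value flags a suspicious deviation

# Usage: amounts from a multiplicative process roughly follow Benford's law
rng = np.random.default_rng(1)
amounts = np.exp(rng.uniform(0, 10, size=5000))
print(benford_check(amounts))
```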
This work addresses the assignment problem (AP) with fuzzy costs, where the objective is to minimize the total cost. A triangular or trapezoidal fuzzy number was assigned to each fuzzy cost. In addition, the assignment models were applied to linguistic variables that were first converted to quantitative fuzzy data using Yager's ranking method. The results show that the quantitative data have a considerable effect when considered in fuzzy mathematical models.
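A minimal sketch of this approach: defuzzify each triangular fuzzy cost with Yager's ranking index, which for a triangular number (a, b, c) reduces to (a + 2b + c)/4, then solve the resulting crisp assignment problem; the cost matrix below is invented for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def yager_rank(tri):
    """Yager's ranking index of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + 2 * b + c) / 4

# Illustrative 3x3 matrix of triangular fuzzy costs (a, b, c)
fuzzy_costs = [
    [(2, 4, 6), (3, 5, 9), (1, 2, 3)],
    [(4, 6, 8), (1, 3, 5), (2, 5, 8)],
    [(3, 4, 7), (2, 3, 4), (5, 7, 9)],
]

# Defuzzify, then solve the crisp AP with a Hungarian-style solver
crisp = np.array([[yager_rank(c) for c in row] for row in fuzzy_costs])
rows, cols = linear_sum_assignment(crisp)
print(list(zip(rows, cols)), "total cost:", crisp[rows, cols].sum())
```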
Keywords provide the reader with a summary of the contents of a document and play a significant role in information retrieval systems, especially in search engine optimization and bibliographic databases. Furthermore, keywords help to classify the document into its related topic. Keyword extraction has traditionally been done manually, relying on the content of the document or article and the judgment of its author; manual extraction is costly, consumes effort and time, and is error-prone. In this research, an automatic Arabic keyword extraction model based on deep learning algorithms is proposed. The model consists of three main steps: preprocessing, feature extraction, and classification.
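A minimal sketch of the classification step, framing keyword extraction as labelling candidate terms; it uses a simple TF-IDF plus logistic-regression stand-in rather than the paper's deep model, and the tiny training set is invented for illustration (character n-grams are used so the same setup works for Arabic text without a stemmer):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Candidate terms paired with 1 (keyword) / 0 (not a keyword); toy data
candidates = ["deep learning", "the", "keyword extraction",
              "of", "neural network", "and"]
labels = [1, 0, 1, 0, 1, 0]

# Character n-gram features avoid language-specific tokenization
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(candidates, labels)
print(model.predict(["information retrieval", "with"]))
```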
Visual media is a more effective way to deliver information than traditional reading. For that reason, with the wide propagation of multimedia websites, large video archives have emerged and become a main resource for users. This research focuses on the existing development in applying classical phrase search methods to a linked vocal transcript and then retrieving the corresponding video, offering an easier way to search visual media. The system has been implemented using JSP and the Java language to search speech within videos.
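A minimal Python sketch of the retrieval idea (the paper's own implementation is in JSP/Java): match a query phrase against timestamped transcript segments and return the videos and offsets where it occurs; the transcript structure is an assumption:

```python
# Each video has a transcript: a list of (start_seconds, text) segments.
transcripts = {
    "lecture1.mp4": [(0.0, "welcome to the course"),
                     (5.2, "today we discuss phrase search")],
    "news7.mp4":    [(0.0, "breaking news tonight"),
                     (3.8, "phrase search in video archives")],
}

def find_phrase(phrase: str):
    """Return (video, start_time, segment) for every match of `phrase`."""
    phrase = phrase.lower()
    hits = []
    for video, segments in transcripts.items():
        for start, text in segments:
            if phrase in text.lower():
                hits.append((video, start, text))
    return hits

print(find_phrase("phrase search"))
```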