With the rapid development of computers and network technologies, the security of information on the internet is increasingly compromised, and many threats can affect the integrity of such information. Much research has focused on providing solutions to these threats. Machine learning and data mining are widely used in anomaly-detection schemes to decide whether or not malicious activity is taking place on a network. In this paper, a hierarchical classification scheme for an anomaly-based intrusion detection system is proposed. Two levels of feature selection and classification are used. In the first level, a global feature vector is selected for detecting the basic attack categories (DoS, U2R, R2L, and Probe). In the second level, four local feature vectors are selected to determine the sub-class of each attack type. Features are evaluated to measure their discrimination ability among classes. The K-Means clustering algorithm is then used to cluster each class into two clusters. SFFS and ANN are applied in a hierarchical fashion to select the relevant features and classify the query behavior into the proper intrusion type. Experimental evaluation on NSL-KDD, a filtered version of the original KDD99 dataset, has shown that the proposed IDS can achieve good performance in terms of intrusion detection and recognition.
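The sketch below illustrates the two-level idea under stated assumptions: scikit-learn's SequentialFeatureSelector (plain forward selection) stands in for SFFS, an MLP stands in for the ANN, the data is synthetic rather than NSL-KDD, and the way the K-Means clusters feed the second level is one plausible reading of the abstract, not the authors' exact procedure.

```python
# Minimal sketch of a two-level feature-selection + ANN scheme.
# All sizes, column counts, and labels below are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 41))                  # NSL-KDD records have 41 features
y_category = rng.choice(["normal", "dos", "u2r", "r2l", "probe"], size=500)

def train_level(X, y, n_features=5):
    """Select a feature subset with forward selection, then fit an ANN on it."""
    ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300)
    selector = SequentialFeatureSelector(ann, n_features_to_select=n_features, cv=3)
    selector.fit(X, y)
    ann.fit(selector.transform(X), y)
    return selector, ann

# Level 1: global feature vector, coarse attack categories.
sel1, clf1 = train_level(X_train, y_category)

# Level 2: one local feature vector and classifier per attack category,
# where each class is first split into two K-Means clusters.
level2 = {}
for cat in ["dos", "u2r", "r2l", "probe"]:
    mask = (y_category == cat)
    sub_labels = KMeans(n_clusters=2, n_init=10).fit_predict(X_train[mask])
    level2[cat] = train_level(X_train[mask], sub_labels)

# Query classification: level 1 picks the category, level 2 refines it.
q = X_train[:1]
cat = clf1.predict(sel1.transform(q))[0]
if cat != "normal":
    sel2, clf2 = level2[cat]
    sub = clf2.predict(sel2.transform(q))[0]
```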
A substantial concern in exchanging confidential messages over the internet is transmitting information safely. For example, consumers and producers of digital products are keen to know that those products are genuine and distinguishable from worthless copies. The science of data hiding can be defined as the technique of embedding data in an image, audio, or video file in a way that meets the required security constraints. Steganography is a branch of data-concealment science that aims to reach the desired level of security in the exchange of private, sensitive commercial and military data. This research offers a novel technique for steganography based on hiding data inside the clusters that result from fuzzy clustering.
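The abstract is cut off before the embedding procedure is described, so the following is only one plausible reading: cover pixels are grouped by fuzzy c-means, and message bits are hidden in the least significant bits of the pixels most strongly assigned to one chosen cluster. Every name, parameter, and the embedding rule below are hypothetical.

```python
# Illustrative sketch: fuzzy c-means on pixel intensities, then LSB embedding
# inside one cluster. Not the paper's exact scheme.
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100):
    """Basic fuzzy c-means on 1-D samples; returns centers and memberships."""
    rng = np.random.default_rng(0)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return centers, u

cover = np.random.default_rng(1).integers(0, 256, size=64 * 64)   # stand-in grayscale image
_, u = fuzzy_cmeans(cover.astype(float))
host = np.where(u.argmax(axis=0) == 0)[0]                          # pixels of cluster 0
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])                          # secret bits
stego = cover.copy()
stego[host[:bits.size]] = (stego[host[:bits.size]] & ~1) | bits    # LSB embedding
```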
Image segmentation is one of the main and essential goals of digital image processing for digital images; it seeks to partition the studied images into multiple, more useful regions that summarize the regions of interest in satellite images. These are multispectral images supplied by satellites using the principle of remote sensing, which has become one of the important concepts whose applications are relied upon in most necessities of daily life, especially after the rapid developments witnessed
The problem of rapid population growth is one of the main problems affecting countries around the world, as it drives growth in different areas of life: commercial, industrial, social, food, and educational. Therefore, this study examined the amount of potable water consumed using two models built from satellite and aerial images of the Kadhimiya district (block 427) and the Al-Shu'laa district (block 450) in Baghdad city, for the years available at the Secretariat of Baghdad (2005, 2011, 2013, 2015). The characteristics of geographic information systems revealed the spatial patterns of urban creep by determining the roads and buildings to be created, which appear in the imagery for the
Autism is a lifelong developmental deficit that affects how people perceive the world and interact with each other. An estimated one in more than 100 people has autism, and autism affects almost four times as many boys as girls. The commonly used tools for analyzing autism datasets are fMRI, EEG, and, more recently, eye tracking. A preliminary study of the eye-tracking trajectories of the patients studied showed that a rudimentary statistical analysis (principal component analysis) provides interesting results on the statistical parameters examined, such as the time spent in a region of interest. Another study, involving tools from Euclidean and non-Euclidean geometry applied to the patients' eye trajectories, also showed interesting results. In this
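As a minimal sketch of the kind of analysis mentioned above, and assuming each recording is summarized by per-region dwell times (an illustrative feature choice, not the study's actual variables), PCA exposes the dominant directions of variation across recordings:

```python
# Hedged sketch: PCA over hypothetical dwell-time features (seconds per region
# of interest). The data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.gamma(shape=2.0, scale=1.5, size=(30, 5))   # 30 recordings x 5 ROIs

pca = PCA(n_components=2)
scores = pca.fit_transform(features)                        # per-recording component scores
print("explained variance ratio:", pca.explained_variance_ratio_)
```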
Nowadays, people's expression on the Internet is no longer limited to text, especially with the rise of the short-video boom, which has led to the emergence of large amounts of modal data such as text, pictures, audio, and video. Compared with single-modality data, multi-modal data always contains more information. Mining multi-modal information can help computers better understand human emotional characteristics. However, because multi-modal data exhibit obvious dynamic time-series features, it is necessary to solve the dynamic correlation problem within a single modality and between different modalities in the same application scene during the fusion process. To solve this problem, in this paper, a feature extraction framework of
This work implements an electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two methods using two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method surpasses the accuracy of the other methods; its best accuracy on the two datasets is 95.6% and 99.5%, respectively. Finally, from the results, it
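The sketch below mirrors the shape of that pipeline under stated assumptions rather than reproducing the authors' method: Chebyshev polynomial moments stand in for the OP step, SelectKBest stands in for the sparse filter, the EEG epochs are synthetic, and the split/CV settings follow the 80/20 and 5-fold protocol described above.

```python
# Hedged pipeline sketch: OP-style moments -> feature reduction -> SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def op_moments(signal, order=16):
    """Project one EEG epoch onto Chebyshev polynomials; return the coefficients."""
    t = np.linspace(-1, 1, signal.size)
    return np.polynomial.chebyshev.chebfit(t, signal, order)

rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 256))             # 200 synthetic 256-sample epochs
labels = rng.integers(0, 2, size=200)            # two classes
X = np.array([op_moments(e) for e in epochs])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), SVC(kernel="rbf"))
print("5-fold CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
print("held-out accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```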
Deep learning convolutional neural networks have been widely used to recognize or classify voice. Various techniques have been used together with convolutional neural networks to prepare voice data before the training process when developing a classification model. However, not every model produces good classification accuracy, as there are many types of voice and speech. Classification of Arabic alphabet pronunciation is one such voice task, and accurate pronunciation is required when learning to read the Qur'an. Thus, processing the pronunciation and training on the processed data require a specific approach. To overcome this issue, a method based on padding and a deep learning convolutional neural network is proposed to
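One common way to combine padding with a CNN, sketched below under stated assumptions, is to zero-pad variable-length MFCC matrices to a fixed number of frames before feeding them to the network; the frame count, layer sizes, and 28-letter output are illustrative choices, not the authors' architecture.

```python
# Hedged sketch: fixed-length padding of MFCC features plus a small Keras CNN.
import numpy as np
import tensorflow as tf

MAX_FRAMES, N_MFCC, N_LETTERS = 100, 13, 28   # assumed sizes

def pad_mfcc(mfcc, max_frames=MAX_FRAMES):
    """Zero-pad (or truncate) an (n_frames, n_mfcc) matrix to a fixed length."""
    out = np.zeros((max_frames, mfcc.shape[1]), dtype=np.float32)
    n = min(max_frames, mfcc.shape[0])
    out[:n] = mfcc[:n]
    return out

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_FRAMES, N_MFCC, 1)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(N_LETTERS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```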
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages. The first stage preprocesses the data, and the second stage extracts features based on the Discrete Wavelet Transform (DWT). The third stage performs classification and is based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved a high classification accuracy rate, with 99.1% for the MADBase database and 99.9% for the MNIST database.
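The sketch below covers only the feature-extraction stage: single-level Haar DWT sub-bands are concatenated into a feature vector. A plain SVM stands in for the spiking neural network of the third stage, since a full SNN is beyond a short example, and scikit-learn's small 8x8 digits set replaces MNIST/MADBase; none of these substitutions are the paper's actual choices.

```python
# Hedged sketch: 2-D Haar DWT features with a stand-in SVM classifier.
import numpy as np
import pywt
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def dwt_features(img):
    """Single-level 2-D Haar DWT; concatenate approximation and detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return np.concatenate([cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()])

digits = load_digits()
X = np.array([dwt_features(img) for img in digits.images])
X_tr, X_te, y_tr, y_te = train_test_split(X, digits.target, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```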