With the rapid development of computer and network technologies, the security of information on the internet has become compromised, and many threats may affect the integrity of such information. Many researchers have focused their work on providing solutions to these threats. Machine learning and data mining are widely used in anomaly-detection schemes to decide whether or not a malicious activity is taking place on a network. In this paper, a hierarchical classification scheme for an anomaly-based intrusion detection system is proposed. Two levels of feature selection and classification are used. In the first level, a global feature vector for detecting the basic attack categories (DoS, U2R, R2L, and Probe) is selected. In the second level, four local feature vectors for determining the sub-class of each attack type are selected. Features are evaluated to measure their discrimination ability among classes. The K-Means clustering algorithm is then used to cluster each class into two clusters. SFFS and ANN are used on a hierarchical basis to select the relevant features and classify the query behavior into the proper intrusion type. Experimental evaluation on NSL-KDD, a filtered version of the original KDD99 dataset, has shown that the proposed IDS can achieve good performance in terms of intrusion detection and recognition.
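The per-class clustering step described above can be sketched in a few lines. The following is a minimal, self-contained K-Means with k = 2 on one-dimensional toy data; the data values and the min/max centroid initialization are illustrative assumptions, not details taken from the paper:

```python
def kmeans_two_clusters(values, iters=20):
    """Split a list of numbers into two clusters by plain K-Means (k = 2)."""
    # Initialize the two centroids at the min and max of the data.
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        # Update step: move each centroid to its cluster mean.
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return (g0, g1), (c0, c1)

# Toy "connection durations" for one traffic class (illustrative values).
durations = [0.1, 0.2, 0.15, 5.0, 5.2, 4.8]
groups, centroids = kmeans_two_clusters(durations)
```

In the paper's setting, each attack class would be clustered this way in the selected feature space rather than on a single scalar.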
Big data analysis has important applications in many areas, such as sensor networks and connected healthcare. The high volume and velocity of big data bring many challenges to data analysis. One possible solution is to summarize the data and provide a manageable data structure that holds a scalable summarization of the data for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such a
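Entropy discretization, mentioned above, picks cut points on a numeric attribute that make the resulting label partitions as pure as possible. A minimal single-split sketch (the toy values and labels are illustrative, not from the paper's data sets):

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def best_split(values, labels):
    """Find the threshold on `values` minimizing the weighted entropy
    of the two resulting label partitions (single binary cut)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_cut, best_w = None, float("inf")
    for i in range(1, n):
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        w = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
        if w < best_w:
            # Midpoint between adjacent values serves as the cut point.
            best_cut, best_w = (pairs[i - 1][0] + pairs[i][0]) / 2, w
    return best_cut, best_w

values = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
labels = ["a", "a", "a", "b", "b", "b"]
cut, w_entropy = best_split(values, labels)
```

Recursive application of this split (with a stopping criterion) yields a full entropy-based discretization; here the single cut at 6.5 already separates the two classes perfectly.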
The deep learning algorithm has recently achieved a lot of success, especially in the field of computer vision. This research aims to describe the classification method applied to a dataset of multiple types of images (Synthetic Aperture Radar (SAR) images and non-SAR images). In this classification, transfer learning was used, followed by fine-tuning methods. Besides, pre-trained architectures were used on the well-known image database ImageNet. The model VGG16 was used as a feature extractor, and a new classifier was trained based on the extracted features. The dataset consists of five classes, including the SAR image class (houses) and the non-SAR image classes (Cats, Dogs, Horses, and Humans). The Conv
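The frozen-extractor-plus-new-head pattern described above can be illustrated without any deep learning framework. In this toy stand-in, `extract` plays the role of the frozen pre-trained backbone (VGG16 in the paper) and a nearest-centroid classifier plays the new head; the "images", features, and labels are all hypothetical:

```python
def extract(image):
    """Hypothetical frozen feature extractor: mean and max of the pixels.
    Stand-in for running images through a frozen VGG16 backbone."""
    return (sum(image) / len(image), max(image))

def train_head(samples):
    """Fit one centroid per class in the frozen feature space."""
    centroids = {}
    for label, feats in samples.items():
        dims = list(zip(*feats))
        centroids[label] = tuple(sum(d) / len(d) for d in dims)
    return centroids

def predict(centroids, feat):
    """Assign a feature vector to the nearest class centroid."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], feat)))

# Tiny fake "images" (pixel lists); class names are illustrative only.
train = {
    "SAR_house": [extract([200, 210, 190]), extract([205, 215, 195])],
    "cat":       [extract([20, 30, 25]),    extract([15, 35, 20])],
}
head = train_head(train)
```

The key point the sketch preserves is that only the head is trained; the extractor's parameters never change, which is what makes transfer learning cheap on small datasets.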
Correct grading of apple slices can help ensure quality and improve the marketability of the final product, which can impact the overall development of the apple slice industry post-harvest. The study intends to employ the convolutional neural network (CNN) architectures ResNet-18 and DenseNet-201 and classical machine learning (ML) classifiers such as Wide Neural Networks (WNN), Naïve Bayes (NB), and two kernels of support vector machines (SVM) to classify apple slices into different hardness classes based on their RGB values. Our research data showed that the DenseNet-201 features classified by the SVM-Cubic kernel had the highest accuracy and lowest standard deviation (SD) among all the methods we tested, at 89.51 % ± 1.66 %. This
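The "accuracy ± SD" figure reported above is conventionally the mean and sample standard deviation of per-fold accuracies from cross-validation. A minimal sketch of that computation (the fold accuracies below are invented for illustration, not the study's results):

```python
import math

def mean_sd(accuracies):
    """Mean and sample standard deviation of per-fold accuracies (%)."""
    n = len(accuracies)
    mean = sum(accuracies) / n
    # Sample (n - 1) variance, the usual choice for cross-validation folds.
    var = sum((a - mean) ** 2 for a in accuracies) / (n - 1)
    return mean, math.sqrt(var)

# Illustrative per-fold accuracies; NOT the paper's actual fold results.
folds = [88.0, 90.0, 91.0, 89.0, 90.5]
m, sd = mean_sd(folds)
```

A lower SD, as reported for the DenseNet-201 + SVM-Cubic combination, indicates that accuracy is stable across folds rather than driven by one lucky split.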
teen sites in Baghdad are made. The sites are divided into two groups, one in Karkh and the other in Rusafa. Assessing the underground conditions can be accomplished by drilling vertical holes, called exploratory borings, into the ground, obtaining soil samples (disturbed and undisturbed), and testing these samples in a laboratory (the civil engineering laboratory at the University of Baghdad). From the disturbed samples, the tests involved grain size analysis (and subsequent soil classification), Atterberg limits, and chemical tests (organic content, sulphate content, gypsum content, and chloride content). From the undisturbed samples, the tests involved the consolidation test (from which the following parameters can be obtained: initial void ratio eo, compression index cc, swel
Social networking has dominated the whole world by providing a platform for information dissemination. People usually share information without knowing its truthfulness. Nowadays, social networks are used for gaining influence in many fields, such as elections and advertisements. It is not surprising that social media has become a weapon for manipulating sentiments by spreading disinformation. Propaganda is one of the systematic and deliberate attempts used for influencing people for political or religious gains. In this research paper, efforts were made to classify propagandist text from non-propagandist text using supervised machine learning algorithms. Data were collected from news sources from July 2018 to August 2018. After annota
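A common baseline for this kind of supervised text classification is multinomial Naive Bayes over word counts. The sketch below is a generic illustration of that approach, not the paper's specific pipeline; the corpus, labels, and tokens are made up and not drawn from the July-August 2018 news data:

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes with add-one smoothing.
    `docs` maps a label to a list of token lists."""
    vocab = {w for texts in docs.values() for t in texts for w in t}
    total_docs = sum(len(texts) for texts in docs.values())
    model = {}
    for label, texts in docs.items():
        counts = Counter(w for t in texts for w in t)
        denom = sum(counts.values()) + len(vocab)  # add-one smoothing
        model[label] = (math.log(len(texts) / total_docs),           # log prior
                        {w: math.log((counts[w] + 1) / denom) for w in vocab},
                        math.log(1 / denom))                          # unseen word
    return model

def classify(model, tokens):
    def score(label):
        prior, likes, unseen = model[label]
        return prior + sum(likes.get(w, unseen) for w in tokens)
    return max(model, key=score)

# Tiny illustrative corpus (invented class-typical vocabulary).
corpus = {
    "propaganda":     [["enemy", "threat", "glorious", "leader"],
                       ["glorious", "victory", "enemy"]],
    "non_propaganda": [["council", "budget", "meeting"],
                       ["weather", "report", "budget"]],
}
model = train_nb(corpus)
```

Real systems would add tokenization, stop-word handling, and evaluation on held-out annotated data, but the scoring logic is the same.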
Through this research, we have tried to evaluate health programs and their effectiveness in improving the health situation through a study of the reality of health institutions in Baghdad, in order to identify the main reasons affecting the increase in maternal mortality, using two regression models: the "Poisson's Regression Model" and the "Hierarchical Poisson's Regression Model". The study of that indicator (deaths) was carried out through a comparison between the estimation methods of the models used. The "Maximum Likelihood" method was used to estimate the "Poisson's Regression Model", whereas the "Full Maximum Likelihood" method was used for the "Hierarchical Poisson's Regression Model
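The maximum-likelihood idea behind Poisson regression can be shown in its simplest form: for an intercept-only model log(λ) = β₀, the MLE of the rate λ is the sample mean, which gradient ascent on the log-likelihood recovers. The count data below are illustrative, not the study's maternal mortality figures:

```python
import math

def poisson_mle_rate(counts, steps=2000, lr=0.01):
    """Maximum-likelihood fit of an intercept-only Poisson model
    log(lambda) = beta0 by gradient ascent on the log-likelihood
    sum_i (y_i * beta0 - exp(beta0)) (dropping the constant log(y_i!))."""
    beta0 = 0.0
    n = len(counts)
    for _ in range(steps):
        lam = math.exp(beta0)
        grad = sum(counts) - n * lam  # derivative of the log-likelihood
        beta0 += lr * grad / n
    return math.exp(beta0)

# Illustrative monthly death counts (invented for the example).
deaths = [2, 3, 1, 4, 2, 3]
rate = poisson_mle_rate(deaths)
```

With covariates, the same log-likelihood is maximized over a coefficient vector (typically by Newton-Raphson), and the hierarchical variant adds group-level random effects, estimated by full maximum likelihood as the abstract describes.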
Eye detection is used in many applications, such as pattern recognition, biometrics, surveillance systems, and many other systems. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image, depending on two principles: Helmholtz and Gestalt. According to the Helmholtz principle of perception, any observed geometric shape is perceptually "meaningful" if its number of repetitions is very small in an image with a random distribution. To achieve this goal, the Gestalt principle states that humans see things either by grouping similar elements or by recognizing patterns. In general, according to the Gestalt principle, humans see things through genera
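The Helmholtz principle is often made quantitative through an a-contrario "Number of False Alarms" (NFA): a configuration is declared meaningful when its expected number of accidental occurrences under a random model is below 1. This sketch is a generic illustration of that test, not the paper's specific detector; all numbers are assumptions:

```python
import math

def nfa(n_tests, k, n, p):
    """Number of False Alarms for an a-contrario (Helmholtz) test:
    expected count, under a random model, of configurations in which
    at least k of n elements agree, each independently with probability p."""
    tail = sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
    return n_tests * tail

# A configuration is "meaningful" when NFA < 1: too unlikely to be accidental.
# 1000 candidate configurations tested; agreement probability p = 0.1.
rare = nfa(n_tests=1000, k=9, n=10, p=0.1)    # 9 of 10 elements aligned
common = nfa(n_tests=1000, k=2, n=10, p=0.1)  # only 2 of 10 aligned
```

Nine aligned elements out of ten are essentially never accidental (NFA far below 1), while two out of ten are expected hundreds of times by chance, so only the former would be detected as a shape.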
This paper proposes a collaborative system called the Recycle Rewarding System (RRS) and focuses on using information and communication technology (ICT) as a tool to promote greening. The idea behind RRS is to encourage recycling collectors by paying them for the points they earn. In doing so, both industries and individuals reap the economic benefits of such a system. Finally, and more importantly, the system intends to achieve a green environment for the Earth. This paper discusses the design and implementation of the RRS, covering the architectural design, selection of components, and implementation issues. Five modules are used to construct the system, namely: database, data entry, points collecting and recording, points reward
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional algorithms in data mining and machine learning do not scale well with data size. Mining and learning from big data need time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances, as well as data generated from multiple data sources. Data are aggregated at multiple resolutions, and each resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining an
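The multi-resolution idea above can be sketched with hierarchical bins, where each bin stores only (count, sum). Coarser levels keep fewer bins (cheaper to scan) but answer aggregate queries from less detail. The bin counts and data values are illustrative choices, not the structure's actual parameters:

```python
def build_summary(values, levels=3):
    """Aggregate a list of numbers at several resolutions.
    Level 0 keeps the most bins; each coarser level halves them.
    Each bin stores [count, sum], enough to answer mean/count queries."""
    bins = 8  # finest resolution (illustrative choice)
    lo, hi = min(values), max(values)
    summary = []
    for level in range(levels):
        n = bins >> level                 # 8, 4, 2 bins
        w = (hi - lo) / n or 1.0          # bin width at this level
        table = [[0, 0.0] for _ in range(n)]
        for v in values:
            i = min(int((v - lo) / w), n - 1)
            table[i][0] += 1
            table[i][1] += v
        summary.append(table)
    return summary

def mean_at(summary, level):
    """Answer a global-mean query from any resolution level."""
    cnt = sum(c for c, _ in summary[level])
    tot = sum(s for _, s in summary[level])
    return tot / cnt

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
summ = build_summary(data)
```

Because counts and sums are additive, the structure can be updated incrementally as new instances arrive, matching the build-once, update-incrementally design described above; queries touching only coarse levels trade bin-level detail for speed.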