Malicious software (malware) performs malicious functions that compromise a computer system's security. Many methods have been developed to improve the security of computer system resources, among them firewalls, encryption, and Intrusion Detection Systems (IDS). An IDS can detect a newly unrecognized attack attempt and raise an early alarm to inform the system about the suspicious intrusion attempt. This paper proposes a hybrid IDS for detecting intrusions, especially malware, that considers both network packet and host features. The hybrid IDS is designed using Data Mining (DM) classification methods for their ability to detect new, previously unseen intrusions accurately and automatically. It combines anomaly and misuse detection techniques using two DM classifiers, the Interactive Dichotomizer 3 (ID3) classifier and the Naïve Bayesian (NB) classifier, to verify the validity of the proposed system in terms of accuracy. A proposed HybD dataset is used in training and testing the hybrid IDS. Feature selection is applied to consider the intrinsic features in the classification decision; this is accomplished using three different measures: the Association Rules (AR) method, the ReliefF measure, and the Gain Ratio (GR) measure. The NB classifier with the AR method gave the most accurate classification results (99%), with a false positive (FP) rate of 0% and a false negative (FN) rate of 1%.
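One of the three feature-selection measures named above, Gain Ratio, can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the toy protocol/label data is invented for demonstration and does not come from the HybD dataset.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature, labels):
    """Information gain of splitting on `feature`, normalised by split info."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    # Information gain: entropy reduction achieved by the split
    gain = entropy(labels) - sum(len(g) / n * entropy(g) for g in groups.values())
    # Split information penalises features with many distinct values
    split_info = entropy(feature)
    return gain / split_info if split_info else 0.0

# Toy example: a feature that perfectly separates the two classes scores 1.0
feature = ["tcp", "tcp", "udp", "udp"]
labels = ["attack", "attack", "normal", "normal"]
print(gain_ratio(feature, labels))  # 1.0
```

Features are then ranked by this score, and low-scoring ones are dropped before training the classifier.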
Within the framework of big data, energy issues are highly significant. Despite this significance, theoretical studies focusing primarily on the issue of energy within big data analytics in relation to computational intelligence algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligence algorithms, since this is critical in exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligence algorithms in big data analytics. This work highlights that big data analytics using computational intelligence algorithms generates a very high amo
In light of the pandemic that has swept the world, the use of e-learning in educational institutions has become an urgent necessity for continued knowledge communication with students. Educational institutions can benefit from the free tools that Google provides; among these applications is Google Classroom, which is characterized by ease of use. However, the efficiency of using Google Classroom is affected by several variables that previous studies have not examined clearly. This study aimed to investigate the use of Google Classroom as a system for managing e-learning and the factors affecting the performance of students and lecturers. The data of this study were collected from 219 members of the faculty and students at the College of Administra
The need for means of transmitting data in a confidential and secure manner has become one of the most important subjects in the world of communications. Therefore, the search began for what would achieve not only the confidentiality of information sent through means of communication, but also high transmission speed and minimal energy consumption; thus, encryption technology using DNA was developed, which fulfills all these requirements [1]. The proposed system achieves high protection of data sent over the Internet by applying the following objectives: 1. The message is encrypted using one of the DNA methods with a key generated by the Diffie-Hellman Ephemeral algorithm; part of this key is secret, and this makes the pro
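The core idea of DNA-based encoding can be sketched briefly. The following is an illustrative toy, not the paper's scheme: it uses one common convention from the DNA cryptography literature (2-bit pairs mapped to nucleotides A/C/G/T) and a fixed stand-in byte where the Diffie-Hellman-derived key would go.

```python
# One common 2-bit-to-nucleotide convention in DNA cryptography (an assumption
# here; the paper's exact mapping and key schedule are not reproduced).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Mix the message with the key (stand-in for the real cipher step)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def to_dna(data: bytes) -> str:
    """Encode bytes as a nucleotide string, 2 bits per base."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(dna: str) -> bytes:
    """Decode a nucleotide string back to bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

key = b"\x5a"  # stand-in for a key derived via Diffie-Hellman Ephemeral
cipher = to_dna(xor_bytes(b"hi", key))
print(cipher)                                      # a string over A/C/G/T
assert from_dna(cipher) == xor_bytes(b"hi", key)   # encoding round-trips
```

In a real deployment the XOR step would be replaced by the chosen DNA encryption method and the key negotiated per session, which is what makes the ephemeral variant attractive.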
Diabetes is one of the deadliest diseases in the world and can lead to stroke, blindness, organ failure, and amputation of lower limbs. Research indicates that diabetes can be controlled if it is detected at an early stage. Scientists are becoming increasingly interested in classification algorithms for diagnosing diseases. In this study, we analyzed the performance of five classification algorithms, namely naïve Bayes, support vector machine, multilayer perceptron artificial neural network, decision tree, and random forest, using a diabetes dataset that contains the information of 2000 female patients. Various metrics were applied in evaluating the performance of the classifiers, such as precision and area under the c
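The evaluation metrics mentioned above are simple to compute from a confusion matrix. As a minimal sketch (with invented toy labels, not the study's patient data), precision and recall for a positive class can be written as:

```python
def precision(y_true, y_pred, positive="diabetic"):
    """Fraction of positive predictions that are actually positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(y_true, y_pred, positive="diabetic"):
    """Fraction of actual positives that the classifier found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

y_true = ["diabetic", "diabetic", "healthy", "healthy"]
y_pred = ["diabetic", "healthy", "diabetic", "healthy"]
print(precision(y_true, y_pred), recall(y_true, y_pred))  # 0.5 0.5
```

The same counts also feed accuracy and the F-measure, and the ranking of classifiers can change depending on which metric is emphasised.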
Active worms have posed a major security threat to the Internet, and many research efforts have focused on them. This paper is concerned with Internet worms that spread via TCP, which accounts for the majority of Internet traffic. It presents an approach that uses a hybrid of two detection algorithms, behavior-based detection and signature-based detection, to combine the strengths of each. The aim of this study is to provide an effective solution for detecting both ordinary and stealthy worms while preserving speed. The proposal was designed as a distributed collaborative scheme based on the small-world network model to effectively improve system performance.
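The hybrid detection rule can be illustrated schematically. This is a toy sketch under stated assumptions, not the paper's detector: the byte pattern, the connection-rate threshold, and the OR-combination rule are all invented for illustration.

```python
# Illustrative byte pattern only; real worm signatures are curated databases.
SIGNATURES = [b"\x90\x90\x90\x90"]

def signature_match(payload: bytes) -> bool:
    """Signature-based detection: known byte patterns in the payload."""
    return any(sig in payload for sig in SIGNATURES)

def behavior_anomalous(conn_per_sec: float, threshold: float = 50.0) -> bool:
    """Behavior-based detection: a stealthy worm may evade signatures
    but still open connections at an unusually high rate."""
    return conn_per_sec > threshold

def is_worm(payload: bytes, conn_per_sec: float) -> bool:
    # Hybrid rule: either detector alone raises the alarm
    return signature_match(payload) or behavior_anomalous(conn_per_sec)

print(is_worm(b"GET / HTTP/1.1", 3.0))           # False: benign traffic
print(is_worm(b"\x90\x90\x90\x90payload", 3.0))  # True: signature hit
print(is_worm(b"GET / HTTP/1.1", 120.0))         # True: anomalous rate
```

In the distributed scheme, nodes would additionally share alarms with their small-world neighbors so that a detection at one vantage point speeds up detection elsewhere.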
Relationships between related parties are a normal feature of trading and business processes. Entities may perform parts of their activities through subsidiary entities, joint ventures, and associate entities. In these cases, the entity has the ability to influence the financial and operating policies of the investee through control, joint control, or significant influence, which can affect how users of financial statements understand transactions and outstanding balances, including commitments and related-party relationships, when evaluating the entity's operations and assessing the risks and opportunities it faces. The research therefore derives its importance from the importance of the availability
Objective: This research investigates real breast cancer data for Iraqi women, acquired manually from several Iraqi hospitals involved in the early detection of breast cancer. Data mining techniques are used to discover hidden knowledge, unexpected patterns, and new rules from the dataset, which comprises a large number of attributes. Methods: Data mining techniques handle redundant or simply irrelevant attributes in order to discover interesting patterns. The dataset is processed via the Weka (Waikato Environment for Knowledge Analysis) platform. The OneR technique is used as a machine learning classifier to evaluate attribute worthiness according to the class value. Results: The evaluation is performed using
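OneR is simple enough to sketch directly: for each attribute, build a rule that maps each attribute value to its majority class, then keep the attribute whose rule makes the fewest training errors. The toy rows below are invented for illustration and are not the Iraqi breast cancer dataset.

```python
from collections import Counter, defaultdict

def one_r(rows, labels, feature_idx):
    """Build a OneR rule for one attribute: map each value to its majority
    class, and return the rule with its training-error count."""
    by_value = defaultdict(list)
    for row, y in zip(rows, labels):
        by_value[row[feature_idx]].append(y)
    rule = {v: Counter(ys).most_common(1)[0][0] for v, ys in by_value.items()}
    errors = sum(1 for row, y in zip(rows, labels) if rule[row[feature_idx]] != y)
    return rule, errors

def best_one_r(rows, labels):
    """Pick the single attribute whose rule makes the fewest errors."""
    return min(
        ((i,) + one_r(rows, labels, i) for i in range(len(rows[0]))),
        key=lambda t: t[2],
    )

# Toy data: attribute 0 (age group) perfectly predicts the class here.
rows = [("old", "smoker"), ("old", "non"), ("young", "non"), ("young", "smoker")]
labels = ["malignant", "malignant", "benign", "benign"]
print(best_one_r(rows, labels))  # attribute 0 wins with 0 training errors
```

Because the rule uses one attribute at a time, its error rate doubles as a cheap measure of how "worthy" each attribute is with respect to the class, which is how it is used in the study above.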
Traditionally, path selection within routing is formulated as a shortest-path optimization problem. The objective function for optimization could be any of a variety of parameters, such as number of hops, delay, or cost. The problem of least-cost, delay-constrained routing is studied in this paper, since a delay constraint is a very common requirement of many multimedia applications and cost minimization captures the need to distribute the network. An iterative algorithm is therefore proposed in this paper to solve this problem. The results of applying this algorithm show that it finds the optimal path (optimal solution) from among multiple feasible paths (feasible solutions).
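The feasible/optimal distinction above can be made concrete with a small sketch. This is not the paper's iterative algorithm: it simply enumerates loop-free paths on a tiny invented topology, keeps those that satisfy the delay bound (the feasible solutions), and returns the cheapest (the optimal solution).

```python
def feasible_paths(graph, src, dst, delay_bound, path=None, cost=0, delay=0):
    """Yield (cost, path) for every loop-free path meeting the delay bound.
    graph[u][v] = (cost, delay). Exhaustive search, fine for small examples."""
    path = (path or []) + [src]
    if src == dst:
        if delay <= delay_bound:
            yield cost, path
        return
    for v, (c, d) in graph.get(src, {}).items():
        if v not in path:  # avoid loops
            yield from feasible_paths(graph, v, dst, delay_bound,
                                      path, cost + c, delay + d)

graph = {
    "A": {"B": (1, 5), "C": (4, 1)},
    "B": {"D": (1, 5)},
    "C": {"D": (4, 1)},
}
# The globally cheapest path A-B-D (cost 2, delay 10) violates a delay bound
# of 6, so the least-cost *feasible* path is A-C-D.
print(min(feasible_paths(graph, "A", "D", 6)))  # (8, ['A', 'C', 'D'])
```

The constrained problem is NP-hard in general, which is why practical schemes, such as the iterative algorithm proposed in the paper, avoid exhaustive enumeration.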
Big data analysis has important applications in many areas, such as sensor networks and connected healthcare. The high volume and velocity of big data bring many challenges to data analysis. One possible solution is to summarize the data and provide a manageable data structure that holds a scalable summarization of the data for efficient and effective analysis. This research extends our previous work on developing an effective technique to create, organize, access, and maintain summarizations of big data, and develops algorithms for Bayes classification and entropy discretization of large data sets using the multi-resolution data summarization structure. Bayes classification and data discretization play essential roles in many learning algorithms such a
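Entropy discretization, one of the two algorithms named above, can be sketched in its simplest binary form: choose the cut point that minimises the weighted entropy of the two resulting intervals. The toy values below are invented for illustration; the research applies this over a summarization structure rather than raw rows.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def best_split(values, labels):
    """Entropy-based binary discretization: return (weighted_entropy, cut)
    for the cut point that best separates the classes."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = None
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # cannot cut between equal values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        score = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if best is None or score < best[0]:
            best = (score, cut)
    return best

values = [1.0, 2.0, 8.0, 9.0]
labels = ["low", "low", "high", "high"]
print(best_split(values, labels))  # (0.0, 5.0): a perfect cut at 5.0
```

Applied recursively with a stopping criterion (e.g. MDL), this yields a multi-interval discretization; running it over summary counts instead of raw records is what makes it scale to large data sets.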
Although the number of stomach tumor patients has decreased markedly in western countries during recent decades, this illness is still one of the main causes of death in developing countries. The aim of this research is to detect the area of a tumor in stomach images based on fuzzy clustering. The proposed methodology consists of three stages. In the first stage, the stomach images are divided into four quarters and features are extracted from each quarter using the seven invariant moments. In the second stage, Fuzzy C-Means clustering (FCM) is employed for each quarter to group the features of each quarter into clusters. In the third stage, the Manhattan distance is calculated among all clusters' centers in all quarters to disclosure of t
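The third stage's distance computation is straightforward to sketch. The centers below are invented 2-D stand-ins; in the paper each center would be an FCM cluster center in the moment-feature space of one image quarter.

```python
def manhattan(a, b):
    """Manhattan (L1) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def pairwise_center_distances(centers):
    """Manhattan distance between every pair of cluster centers."""
    dists = {}
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            dists[(i, j)] = manhattan(centers[i], centers[j])
    return dists

# Toy centers standing in for per-quarter FCM outputs
centers = [(0.0, 0.0), (1.0, 2.0), (4.0, 4.0)]
print(pairwise_center_distances(centers))
# {(0, 1): 3.0, (0, 2): 8.0, (1, 2): 5.0}
```

A quarter whose cluster centers sit unusually far from those of the other quarters is then a candidate for containing the tumor region.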