A Decision Tree-Aware Genetic Algorithm for Botnet Detection

In this paper, the botnet detection problem is defined as a feature selection problem, and the genetic algorithm (GA) is used to search for the most significant combination of features in the entire search space of feature subsets. Furthermore, the Decision Tree (DT) classifier is used as an objective function to direct the proposed GA toward the combination of features that can correctly classify activities into normal traffic and botnet attacks. Two datasets, namely UNSW-NB15 and the Canadian Institute for Cybersecurity Intrusion Detection System 2017 (CICIDS2017), are used for evaluation. The results reveal that the proposed DT-aware GA can effectively find the relevant features within the whole feature set. Thus, it obtains efficient botnet detection results in terms of F-score, precision, detection rate, and number of relevant features, when compared with DT alone.
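The wrapper idea in this abstract can be sketched as a small, self-contained toy: a GA evolves binary feature masks, and an objective function scores each mask. Here a synthetic fitness over a hypothetical set of "relevant" features stands in for the decision tree's F-score, and all parameters (population size, mutation rate, generations) are illustrative assumptions, not the paper's settings.

```python
import random

random.seed(0)

N_FEATURES = 12
RELEVANT = {0, 3, 5, 8}  # toy ground truth: only these features carry signal

def fitness(mask):
    # Stand-in for the DT classifier's F-score: reward selecting the
    # informative features, penalize extra irrelevant ones.
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & RELEVANT) - 0.2 * len(chosen - RELEVANT)

def crossover(a, b):
    # single-point crossover of two binary masks
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    # flip each bit independently with small probability
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(30)]
for _ in range(60):                       # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # truncation selection keeps the best
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
selected = {i for i, bit in enumerate(best) if bit}
print(selected)
```

In the paper's setup, `fitness` would instead train a DT on the selected feature columns and return its F-score on held-out traffic.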

Publication Date
Mon Mar 11 2019
Journal Name
Baghdad Science Journal
Solving Mixed Volterra - Fredholm Integral Equation (MVFIE) by Designing Neural Network

In this paper, we focus on designing a feed-forward neural network (FFNN) for solving Mixed Volterra–Fredholm Integral Equations (MVFIEs) of the second kind in two dimensions. In our method, we present a multi-layer model consisting of a hidden layer with five hidden units (neurons) and one linear output unit. The log-sigmoid transfer function is used as the activation of each hidden unit, and the Levenberg–Marquardt algorithm is used for training. A comparison between the results of numerical experiments and the analytic solutions of some examples has been carried out in order to justify the efficiency and accuracy of our method.
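A minimal sketch of the architecture described above: five log-sigmoid hidden units and one linear output unit over a 2-D input. Two deliberate simplifications are assumed here: plain stochastic gradient descent stands in for Levenberg–Marquardt, and the toy target u(x, y) = xy stands in for an actual MVFIE solution.

```python
import math, random

random.seed(1)

def logsig(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 5  # five hidden units, as in the abstract
w = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # input->hidden
b = [random.uniform(-1, 1) for _ in range(H)]                      # hidden biases
v = [random.uniform(-1, 1) for _ in range(H)]                      # hidden->output

def net(x, y):
    # linear output unit over the five log-sigmoid hidden activations
    return sum(v[j] * logsig(w[j][0]*x + w[j][1]*y + b[j]) for j in range(H))

pts = [(i/4, j/4) for i in range(5) for j in range(5)]  # training grid on [0,1]^2

def mse():
    return sum((net(x, y) - x*y) ** 2 for x, y in pts) / len(pts)

mse_before = mse()
lr = 0.2
for _ in range(2000):
    for x, y in pts:
        a = [logsig(w[j][0]*x + w[j][1]*y + b[j]) for j in range(H)]
        err = sum(v[j] * a[j] for j in range(H)) - x * y
        for j in range(H):
            d = err * v[j] * a[j] * (1 - a[j])  # backprop through log-sigmoid
            v[j] -= lr * err * a[j]
            b[j] -= lr * d
            w[j][0] -= lr * d * x
            w[j][1] -= lr * d * y
mse_after = mse()
print(round(mse_before, 4), round(mse_after, 4))
```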

         

Publication Date
Thu Mar 30 2023
Journal Name
Iraqi Journal Of Science
Low Energy Consumption Scheme Based on PEGASIS Protocol in WSNs

Wireless Sensor Networks (WSNs) are composed of a collection of rechargeable sensor nodes. Typically, sensor nodes collect and deliver the necessary data in response to a user's specific request in many application areas such as health, military, and domestic purposes. Applying routing protocols to sensor nodes can prolong the lifetime of the network. The Power Efficient GAthering in Sensor Information System (PEGASIS) protocol is a chain-based protocol that uses a greedy algorithm to select one of the nodes as a head node to transmit the data to the base station. The proposed scheme, Multi-cluster Power Efficient GAthering in Sensor Information System (MPEGASIS), is developed based on the PEGASIS routing protocol in WSNs. The aim …
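PEGASIS's greedy chain construction can be sketched as follows; the node coordinates and base-station position are hypothetical, and the real protocol additionally rotates the head-node role among chain members each round.

```python
import math

BASE = (50, 100)  # hypothetical base-station position

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_chain(nodes):
    # Start from the node farthest from the base station and repeatedly
    # append the nearest unvisited node, as in PEGASIS chain construction.
    start = max(range(len(nodes)), key=lambda i: dist(nodes[i], BASE))
    chain, left = [start], set(range(len(nodes))) - {start}
    while left:
        nxt = min(left, key=lambda i: dist(nodes[chain[-1]], nodes[i]))
        chain.append(nxt)
        left.remove(nxt)
    return chain

nodes = [(0, 0), (10, 2), (20, 0), (32, 5), (40, 0)]  # toy sensor layout
chain = greedy_chain(nodes)
print(chain)
```

Each node then fuses its data with its chain neighbour's, so only the head transmits to the distant base station.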

Publication Date
Mon Dec 31 2012
Journal Name
Al-khwarizmi Engineering Journal
Field Programmable Gate Array (FPGA) Model of Intelligent Traffic Light System with Saving Power

In this paper, an FPGA model of an intelligent traffic light system with power saving was built. The intelligent traffic light system consists of sensors placed at the ends of the intersection's sides to sense the presence or absence of vehicles. The system reduces the waiting time at a red light by switching from the current traffic light state to the next one when the current state has been active for a long time and no more vehicles are present. The proposed system is built using VHDL, simulated using the Xilinx ISE 9.2i package, and implemented on a Spartan-3A XC3S700A FPGA kit. Implementation and behavioral simulation results show that the proposed intelligent traffic light system model satisfies the specified operational req …
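The state-switching idea (leave a state early when its approach is empty, or when a maximum time expires) can be sketched as a plain Python transition function; the actual model is written in VHDL, and `MAX_GREEN` and the two-state NS/EW layout are illustrative assumptions.

```python
MAX_GREEN = 30  # hypothetical maximum green time, in seconds

def next_state(state, elapsed, vehicles_on_green):
    # Hand the green light to the other direction when the green approach
    # is empty or its maximum time has expired; otherwise stay put.
    if not vehicles_on_green or elapsed >= MAX_GREEN:
        return ("NS" if state == "EW" else "EW"), 0
    return state, elapsed

print(next_state("NS", 5, False))   # empty approach: switch immediately
print(next_state("NS", 5, True))    # vehicles present, time left: hold
print(next_state("NS", 30, True))   # time expired: switch anyway
```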

Publication Date
Sun Jun 05 2011
Journal Name
Baghdad Science Journal
Applying Quran Security and Hamming Codes for Preventing of Text Modification

The widespread use of the internet all over the world, in addition to the increasing number of users exchanging important information over it, highlights the need for new methods to protect this information from corruption or modification by intruders. This paper suggests a new method that ensures that the texts of a given document cannot be modified without detection. This method mainly consists of a mixture of three steps. The first step borrows some concepts of the "Quran" security system to detect certain types of change(s) occurring in a given text, where a key for each paragraph in the text is extracted from the group of letters in that paragraph whose positions are multiples of a given prime number. This step cannot detect the ch …
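A hypothetical simplification of the prime-position key idea: the key of a paragraph is the sequence of characters whose 1-based positions are multiples of a chosen prime, so any change at one of those positions alters the key. (The paper's actual scheme also involves Hamming codes and further steps not shown here.)

```python
def paragraph_key(text, prime=7):
    # Collect the characters at positions 7, 14, 21, ... (1-based)
    return "".join(ch for i, ch in enumerate(text, 1) if i % prime == 0)

original = "the quick brown fox jumps over the lazy dog"
key = paragraph_key(original)
tampered = original.replace("brown", "broWn")  # changes position 14
detected = paragraph_key(tampered) != key
print(key, detected)
```

Changes that miss every key position slip through this step, which is why the method combines it with further checks.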

Publication Date
Wed Jun 01 2011
Journal Name
Journal Of Economics And Administrative Sciences
Selection of the Initial Value of the Time Series Generating the First-Order Autoregressive Model in Simulation Mode and Its Impact on the Accuracy of the Model

In this paper, we compare eight methods for generating the initial value and the impact of these methods on estimating the parameter of an autoregressive model. Three of the most popular estimation methods used by researchers were applied: the maximum likelihood method, the Burg method, and the least squares method. A first-order autoregressive model was simulated through the design of a number of simulation experiments with different sample sizes.
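The experimental setup can be sketched for one of the three estimators: simulate an AR(1) series from a chosen initial value, then estimate the parameter by least squares. The true parameter, sample size, and the two initial-value choices below are illustrative assumptions.

```python
import random

random.seed(2)
PHI, N = 0.6, 500  # hypothetical true parameter and sample size

def simulate(y0, n=N, phi=PHI):
    # AR(1): y_t = phi * y_{t-1} + e_t, started from the chosen initial value
    y = [y0]
    for _ in range(n - 1):
        y.append(phi * y[-1] + random.gauss(0, 1))
    return y

def ols_phi(y):
    # least-squares estimate of phi from consecutive pairs (y_{t-1}, y_t)
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(v * v for v in y[:-1])
    return num / den

ests = [ols_phi(simulate(y0)) for y0 in (0.0, 5.0)]  # two initial-value choices
print([round(e, 3) for e in ests])
```

With a long series the effect of the initial value washes out; the paper's interest is precisely in how much it matters at the sample sizes studied.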

                  

Publication Date
Tue Aug 10 2021
Journal Name
Design Engineering
Lossy Image Compression Using Hybrid Deep Learning Autoencoder Based on K-means Clustering

Image compression plays an important role in reducing the size and storage of data while significantly increasing the speed of its transmission over the Internet. Image compression has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, its use in the field of image compression has been increasing gradually. Deep neural networks have also achieved great success in processing and compressing various images of different sizes. In this paper, we present a structure for image compression based on the use of a Convolutional AutoEncoder (CAE) for deep learning, inspired by the diversity of the human eye …
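As a stand-in for the convolutional autoencoder, here is a toy linear autoencoder that compresses 4-dimensional "images" to a 2-dimensional code and reconstructs them. The dimensions, sample data, and learning rate are illustrative assumptions; a real CAE would use convolutional layers plus quantization and entropy coding.

```python
import random

random.seed(3)

# Toy linear autoencoder: 4-dim input -> 2-dim code -> 4-dim reconstruction.
D, C = 4, 2
enc = [[random.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(C)]
dec = [[random.uniform(-0.5, 0.5) for _ in range(C)] for _ in range(D)]

def encode(x):
    return [sum(w * v for w, v in zip(row, x)) for row in enc]

def decode(z):
    return [sum(w * v for w, v in zip(row, z)) for row in dec]

data = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 1, 1, 1], [0.5, 0.5, 0, 0]]

def loss():
    # mean squared reconstruction error over the toy dataset
    return sum(sum((a - b) ** 2 for a, b in zip(decode(encode(x)), x))
               for x in data) / len(data)

loss_before = loss()
lr = 0.05
for _ in range(3000):
    for x in data:
        z = encode(x)
        err = [r - t for r, t in zip(decode(z), x)]
        # backprop: gradient w.r.t. the code, then update both weight sets
        gz = [sum(err[i] * dec[i][j] for i in range(D)) for j in range(C)]
        for i in range(D):
            for j in range(C):
                dec[i][j] -= lr * err[i] * z[j]
        for j in range(C):
            for k in range(D):
                enc[j][k] -= lr * gz[j] * x[k]
loss_after = loss()
print(round(loss_before, 4), round(loss_after, 4))
```

The 2-dim code plays the role of the compressed bitstream; the reconstruction error is the "lossy" part of the scheme.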

Publication Date
Thu Nov 29 2018
Journal Name
Iraqi Journal Of Science
Improving Extractive Multi-Document Text Summarization Through Multi-Objective Optimization

Multi-document summarization is an optimization problem demanding the simultaneous optimization of more than one objective function. The proposed work addresses the balancing of two significant objectives, content coverage and diversity, when generating summaries from a collection of text documents.

Any automatic text summarization system faces the challenge of producing a high-quality summary. Despite the existing efforts on designing and evaluating the performance of many text summarization techniques, their formulations lack any model that gives an explicit representation of coverage and diversity, the two contradictory semantics of any summary. In this work, the design of …
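The coverage/diversity trade-off can be made concrete with a weighted-sum score over candidate summaries; the term-overlap measures, equal weights, and toy sentences below are assumptions for illustration, not the paper's model.

```python
import itertools

def coverage(summary, doc_terms):
    # fraction of the document collection's distinct terms the summary mentions
    terms = set(" ".join(summary).split())
    return len(terms & doc_terms) / len(doc_terms)

def diversity(summary):
    # 1 minus the average pairwise Jaccard overlap between chosen sentences
    sets_ = [set(s.split()) for s in summary]
    pairs = [(a, b) for i, a in enumerate(sets_) for b in sets_[i + 1:]]
    if not pairs:
        return 1.0
    return 1 - sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

doc = ["the cat sat on the mat",
       "a dog barked at the cat",
       "the dog sat on the mat",
       "stocks rallied in early trading"]
doc_terms = set(" ".join(doc).split())

# score = equal-weight sum of the two contradictory objectives
best = max(itertools.combinations(doc, 2),
           key=lambda s: 0.5 * coverage(s, doc_terms) + 0.5 * diversity(s))
print(best)
```

A summary that repeats similar sentences scores high on coverage of those terms but low on diversity, so the combined score pushes the selection toward complementary sentences.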

Publication Date
Sat Jan 01 2022
Journal Name
Turkish Journal Of Physiotherapy And Rehabilitation
Classification of COCO Dataset Using Machine Learning Algorithms

In this paper, we use four classification methods to classify objects and compare among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the COCO dataset for classifying and detecting objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, then enhanced using the histogram equalization method and resized to 20 x 20. Principal Component Analysis (PCA) was used for feature extraction, and finally the four classification metho …
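The preprocessing pipeline's histogram-equalization step can be sketched on a flat list of grey values; the sample values below are arbitrary.

```python
def equalize(gray, levels=256):
    # Histogram equalization: map each grey value through the normalized
    # cumulative histogram so the output spreads over the full range.
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    cdf, acc, total = [], 0, len(gray)
    for h in hist:
        acc += h
        cdf.append(acc / total)
    return [round(cdf[v] * (levels - 1)) for v in gray]

img = [52, 55, 61, 59, 79, 61, 76, 61]  # arbitrary sample grey values
out = equalize(img)
print(out)
```

PCA and the four classifiers would then operate on the flattened, equalized 20 x 20 images.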
