A botnet is one of many attacks that can execute malicious tasks and that evolves continuously. This research therefore introduces a comparison framework, called BotDetectorFW, with classification and complexity improvements for the detection of botnet attacks using the CICIDS2017 dataset, a free online dataset consisting of several attacks with high-dimensional features. Feature selection is a significant step toward obtaining the fewest features by eliminating irrelevant ones, consequently reducing detection time. This process is implemented inside BotDetectorFW in two steps: data clustering and five distance measures (cosine, Dice, Driver & Kroeber, overlap, and Pearson correlation) implemented in C#, followed by selection of the best N features, which are used as input to four classifier algorithms evaluated with the machine learning toolkit WEKA: MultilayerPerceptron, JRip, IBk, and random forest. In BotDetectorFW, the thorough cleaning of the dataset in the preprocessing stage, together with normalization, binary clustering of its features, feature selection based on suitable feature-distance techniques, and testing of the selected classification algorithms, all contributed to satisfying high performance metrics with fewer features (as few as 8), outperforming other methods in the literature that use the same dataset and adopt 10 features or more. Furthermore, the results and performance evaluation of BotDetectorFW show a competitive impact in terms of classification accuracy (ACC), precision (Pr), recall (Rc), and F-measure (F1) metrics.
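The abstract does not spell out the selection procedure, so the following is only a minimal sketch of distance-based feature ranking under stated assumptions: each binarized feature column is scored against the class label with the cosine measure (the other four measures would slot into the same loop), and the top N columns are kept. The function names and the use of Python/NumPy are illustrative, not the authors' C# implementation.

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine measure between two binary (0/1) vectors.
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0

    def rank_features(X_bin, y, n_best=8):
        # Score each binarized feature column against the class label
        # and return the indices of the n_best highest-scoring columns.
        scores = [cosine_similarity(X_bin[:, j], y) for j in range(X_bin.shape[1])]
        return np.argsort(scores)[::-1][:n_best]

    # Toy usage: 6 samples, 4 binary features, binary class label.
    X = np.array([[1,0,1,0],[1,0,1,1],[0,1,0,0],[1,0,1,0],[0,1,0,1],[1,0,1,0]])
    y = np.array([1,1,0,1,0,1])
    print(rank_features(X, y, n_best=2))

The selected columns would then be exported to the WEKA classifiers; swapping in Dice, Driver & Kroeber, overlap, or Pearson correlation changes only the scoring function.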
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have only small or inadequate datasets for training DL frameworks. Manual labeling is usually needed to provide labeled data, and it typically requires human annotators with extensive background knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework must usually be fed a significant amount of labeled data to learn representations automatically. Ultimately, a larger amount of data generally yields a better DL model, although performance also depends on the application. This issue is the main barrier for
Images require large storage space. With the continued evolution of computer storage technology, there is a pressing need to reduce the storage space consumed by pictures by compressing images effectively; the wavelet transform method
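The abstract breaks off at the method, so as a minimal sketch only, and assuming the method is coefficient-thresholding compression with a discrete wavelet transform, the idea can be illustrated with PyWavelets: decompose the image, zero all but the largest few percent of coefficients, and reconstruct. The wavelet, level, and keep fraction below are illustrative assumptions.

    import numpy as np
    import pywt  # PyWavelets

    def compress_image(img, wavelet="haar", level=2, keep=0.05):
        # Decompose, zero out all but the largest `keep` fraction of
        # coefficients, then reconstruct the (lossy) image.
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1 - keep)
        arr[np.abs(arr) < thresh] = 0.0
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)

    img = np.random.rand(64, 64)  # stand-in for a real grayscale image
    print(np.abs(compress_image(img) - img).mean())  # reconstruction error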
Image segmentation can be defined as the process of partitioning a digital image into meaningful regions, called segments, whose pixels share certain attributes that distinguish them from the pixels constituting other parts of the image. The researcher followed two phases of image processing in this paper. In the first phase, the images were pre-processed before segmentation using the statistical confidence intervals for estimating unknown observations suggested by Acho & Buenestado in 2018. In the second phase, the images were segmented using Bernsen's thresholding technique. The researcher drew the conclusion that in the case of utilizing
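Bernsen's technique itself is standard: each pixel is compared against the midpoint of the local gray-level extremes, with a global fallback in low-contrast windows. A minimal sketch follows; the window size, contrast limit, and global threshold are illustrative parameters, not the paper's settings.

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def bernsen_threshold(img, window=15, contrast_limit=15, global_t=128):
        # Local max/min over a square window centered on each pixel.
        zmax = maximum_filter(img, size=window).astype(float)
        zmin = minimum_filter(img, size=window).astype(float)
        local_t = (zmax + zmin) / 2.0
        # Where local contrast is too low, fall back to a global threshold.
        low_contrast = (zmax - zmin) < contrast_limit
        thresh = np.where(low_contrast, global_t, local_t)
        return (img > thresh).astype(np.uint8)

    img = (np.random.rand(32, 32) * 255).astype(np.uint8)  # stand-in image
    print(bernsen_threshold(img).sum())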
Gypseous soils are common in several regions of the world, including Iraq, where more than 28.6% of the surface is covered with this type of soil. This soil, with its high gypsum content, causes various problems for construction and strategic projects. As water flows through the soil mass, the permeability and chemical composition of these soils vary with time due to the solubility and leaching of gypsum. In this study, a soil with 36% gypsum content was taken from one location about 100 km southwest of Baghdad; the samples were taken from depths of (0.5 - 1) m below the natural ground and mixed with (3%, 6%, 9%) of copolymer and Novolac polymer to improve the engineering properties that include collapsibility, permeability
Large quantities of contaminated carwash wastewater are produced every day at carwash facilities. This wastewater contains large quantities of chemicals from detergents, oil, grease, heavy metals, suspended solids, various hydrocarbons, and biological matter. A novel electrocoagulation treatment with foil electrodes was conducted to remove COD, turbidity, and total dissolved solids (TDS) from contaminated carwash wastewater and to decrease its electrical conductivity (EC). A thin layer of aluminum foil is used as the electrode in this treatment process. The effects of different voltages and treatment times were studied. The best result was found at a voltage of 30 volts and a treatment time of 90 minutes, where the removal efficiency of COD
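The abstract breaks off before the figure, but removal efficiency in such treatment studies is conventionally computed as

    R(\%) = \frac{C_0 - C_t}{C_0} \times 100

where C_0 is the initial concentration (of COD, turbidity, or TDS) and C_t is the concentration after treatment time t.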
In this paper, the generation of a chaotic carrier by the Lorenz model is studied theoretically. The encoding technique used is chaos masking of a sinusoidal signal (the message), and an optical chaotic communication system is evaluated for different receiver configurations. It is shown that chaotic carriers allow the successful encoding and decoding of messages, with a focus on the effect of changing the initial conditions of the states of the dynamical system, i.e., changing the values (x, y, z, x1, y1, and z1).
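A minimal sketch of chaos masking with the Lorenz system follows. The parameters sigma = 10, rho = 28, beta = 8/3 and the small message amplitude are conventional illustrative choices, not necessarily the paper's; the transmitted signal is the chaotic x-state plus a weak sinusoidal message, and an ideally synchronized receiver subtracts its own copy of the carrier to recover it.

    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t = np.linspace(0, 20, 4000)
    sol = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t)

    message = 0.05 * np.sin(2 * np.pi * t)      # weak sinusoidal message
    transmitted = sol.y[0] + message            # chaos masking: carrier + message

    # An ideal receiver regenerates the same chaotic carrier and subtracts it;
    # in practice this requires chaos synchronization at the receiver.
    recovered = transmitted - sol.y[0]
    print(np.allclose(recovered, message))      # True

Starting the receiver from different initial conditions (x1, y1, z1) would degrade this subtraction, which is presumably the sensitivity the abstract examines.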
Despite the widespread use of dummy variables as explanatory variables, their use as dependent variables is still limited, perhaps because of the problems that arise when a dummy variable serves as the dependent variable. The study aimed to use qualitative response models to measure the efficiency of cow farms based on a random sample of (19) farms from the Abi Gherak district. The study estimated the transcendental logarithmic (translog) production function using stochastic frontier analysis (SFA) to interpret the relation between the return achieved from the cow farms as the dependent variable and labor and capital as the independent variables. The function indicates that increasing labor by (100%) will
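The abstract does not reproduce the estimated equation, but a two-input translog stochastic frontier of the kind typically estimated in such studies has the form

    \ln Y_i = \beta_0 + \beta_1 \ln L_i + \beta_2 \ln K_i
            + \tfrac{1}{2}\beta_{11} (\ln L_i)^2 + \tfrac{1}{2}\beta_{22} (\ln K_i)^2
            + \beta_{12} \ln L_i \ln K_i + v_i - u_i

where Y_i is the return of farm i, L_i and K_i are labor and capital, v_i is symmetric statistical noise, and u_i \ge 0 captures technical inefficiency, so that efficiency can be scored as \exp(-u_i).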
Survival analysis is the analysis of data in the form of times from a time origin until the occurrence of an end event. In medical research, the time origin is the date of registration of the individual or patient in a study, such as a clinical trial comparing two or more types of medicine, where the endpoint is the death of the patient or the disappearance of the individual. The data resulting from this process are called survival times; if the end event is not death, the resulting data are called time-to-event data. That is, survival analysis comprises the statistical steps and procedures for analyzing data when the adopted variable is the time until the event. It could be d
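For concreteness, the basic quantity of survival analysis is the survival function of the event time T,

    S(t) = P(T > t) = 1 - F(t),

the probability that the event has not yet occurred by time t, where F is the distribution function of T.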