Distributed Denial of Service (DDoS) attacks on Web-based services have grown in both number and sophistication with the rise of advanced wireless technology and modern computing paradigms, so detecting these attacks in the sea of communication packets is very important. Early DDoS attacks were directed mainly at the network and transport layers; in recent years, attackers have shifted their strategies towards the application layer. Application-layer attacks can be more harmful and stealthier because the attack traffic and normal traffic flows cannot easily be told apart. Distributed attacks are hard to fight because they can exhaust real computing resources as well as network bandwidth, and they can also be launched from Internet-connected smart devices that have been infected and recruited into botnets. In this work, Deep Learning (DL) techniques, namely the Convolutional Neural Network (CNN) and variants of the Recurrent Neural Network (RNN) such as the Long Short-Term Memory (LSTM), Bidirectional LSTM, Stacked LSTM, and the Gated Recurrent Unit (GRU), are used to detect DDoS attacks. The DL approaches are tested on the Portmap.csv file from the most recent DDoS dataset, CICDDoS2019. Before being fed to the DL models, the data is cleaned, and the pre-processed dataset is used to train and test them. The paper shows how the DL approach works with multiple models and how they compare to each other.
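As a minimal sketch of the kind of pipeline this abstract describes, the snippet below trains a single-layer LSTM classifier on the CICDDoS2019 Portmap.csv split. The cleaning steps, the "Label" column name, and the layer sizes are assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch: LSTM-based binary DDoS detection on CICDDoS2019 Portmap.csv.
# Column names ("Label") and preprocessing choices are assumptions.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

df = pd.read_csv("Portmap.csv")
df = df.replace([np.inf, -np.inf], np.nan).dropna()      # basic cleaning
y = (df["Label"] != "BENIGN").astype(int).values          # 1 = attack, 0 = normal
X = df.select_dtypes(include=[np.number]).values

X = StandardScaler().fit_transform(X)
X = X.reshape(X.shape[0], 1, X.shape[1])                  # (samples, timesteps=1, features)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = Sequential([
    LSTM(64, input_shape=(1, X.shape[2])),                # swap in GRU/Bidirectional here
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=5, batch_size=256, validation_data=(X_te, y_te))
```

Replacing the `LSTM` layer with `GRU`, `Bidirectional(LSTM(...))`, or stacked recurrent layers gives the other model variants the abstract compares.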
The study aims to apply the European Foundation for Quality Management (EFQM) Excellence Model in assessing the institutional performance of the National Center for Administrative Development and Information Technology, in order to determine the gap between the actual performance of the Center and the standards adopted in the model, and to establish the extent to which the Center seeks to achieve excellence in performance, improve the level of services provided, and adopt modern and contemporary management methods in evaluating its institutional performance.
The problem of the study was the absence of an institutional performance evaluation system at the Center whereby weaknesses (areas of improvement) and strengths …
This paper includes an experimental study of the effect of hydrogen mass flow rate and inlet hydrogen pressure on fuel cell performance. Based on the experimental results, a fuel cell model built on artificial neural networks is proposed. A back-propagation learning rule with the log-sigmoid activation function is adopted to construct the neural network model, and experimental data from 36 fuel cell tests are used as learning data. The hydrogen mass flow rate, applied load, and inlet hydrogen pressure are inputs to the fuel cell model, while the current and voltage are outputs. The proposed model could successfully predict the fuel cell performance in good agreement with the actual data. This work is extended to develop a fuel cell feedback …
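A minimal sketch of such a back-propagation network with log-sigmoid hidden units is shown below, mapping (hydrogen mass flow rate, applied load, inlet hydrogen pressure) to (current, voltage). The hidden-layer size and the placeholder data are assumptions; the paper's 36 experimental records are not reproduced here.

```python
# Hedged sketch: back-propagation ANN with log-sigmoid activation for a
# 3-input (flow rate, load, pressure) / 2-output (current, voltage) fuel cell
# model. The data below is a synthetic placeholder, not the experimental set.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.uniform([0.1, 0.0, 1.0], [1.0, 50.0, 3.0], size=(36, 3))   # placeholder inputs
Y = np.column_stack([0.1 * X[:, 1], 0.9 - 0.01 * X[:, 1]])          # placeholder I, V

X_s = MinMaxScaler().fit_transform(X)                               # scale inputs to [0, 1]
model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X_s, Y)
print(model.predict(X_s[:3]))   # predicted (current, voltage) for the first 3 tests
```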
Proposing nonlinear models is one of the most important methods in time series analysis; such models have wide potential for predicting various phenomena, including physical, engineering, and economic ones, by studying the characteristics of the random disturbances in order to arrive at accurate predictions.
In this study, a threshold autoregressive model with an exogenous variable was built as the first method, using two proposed approaches to determine the best cutting point, covering both forward predictability (forecasting) and within-series predictability (prediction) through the threshold point indicator. Box-Jenkins (B-J) seasonal models are used as the second method, based on the principle of the same two proposed approaches, in determining …
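The sketch below illustrates the generic idea of choosing a threshold (cutting point) for a two-regime threshold autoregressive model by grid search over candidate values, minimising the in-sample squared error. It is a standard SETAR-style illustration under assumed settings, not the paper's exact two-approach procedure.

```python
# Hedged sketch: grid search for the cutting point of a two-regime threshold
# AR(1) model; the candidate grid and the placeholder series are assumptions.
import numpy as np

def ar1_sse(y_lag, y):
    """OLS fit of y = a + b*y_lag; returns the residual sum of squares."""
    A = np.column_stack([np.ones_like(y_lag), y_lag])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2)

def best_threshold(series):
    y, y_lag = series[1:], series[:-1]
    candidates = np.quantile(y_lag, np.linspace(0.15, 0.85, 50))
    sse = [ar1_sse(y_lag[y_lag <= r], y[y_lag <= r]) +
           ar1_sse(y_lag[y_lag > r], y[y_lag > r]) for r in candidates]
    return candidates[int(np.argmin(sse))]

rng = np.random.default_rng(1)
x = rng.standard_normal(300).cumsum()        # placeholder time series
print("estimated threshold:", best_threshold(x))
```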
Administrative procedures in various organizations produce numerous crucial records and data. These records and data are also used in other processes such as customer relationship management and accounting operations. It is incredibly challenging to use and extract valuable and meaningful information from these data and records because they are frequently enormous and continuously growing in size and complexity. Data mining is the act of sorting through large data sets to find patterns and relationships that might aid in resolving business issues through data analysis. Using data mining techniques, enterprises can forecast future trends and make better business decisions. The Apriori algorithm has been …
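For reference, the snippet below mines frequent itemsets and association rules with the Apriori algorithm via the mlxtend library. The support and confidence thresholds and the toy transactions are assumptions for illustration, not the paper's records.

```python
# Hedged sketch: Apriori frequent-itemset mining and rule generation with
# mlxtend; transactions and thresholds are illustrative placeholders.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["invoice", "receipt", "contract"],
    ["invoice", "receipt"],
    ["invoice", "contract"],
    ["receipt", "contract"],
    ["invoice", "receipt", "contract"],
]
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

itemsets = apriori(df, min_support=0.4, use_colnames=True)   # frequent itemsets
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```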
Electrocardiogram (ECG) is an important physiological signal for cardiac disease diagnosis. Modern ECG monitoring devices generate vast amounts of data requiring huge storage capacity; to decrease storage costs, or to make ECG signals suitable and ready for transmission through common communication channels, the ECG data volume must be reduced, so an effective data compression method is required. This paper presents an efficient technique for the compression of ECG signals in which different transforms are used. First, the 1-D ECG data was segmented and aligned into a 2-D data array, and then a 2-D mixed transform was implemented to compress the …
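The sketch below shows the general scheme: reshape a 1-D ECG record into a 2-D array of aligned beats, apply a 2-D transform, and discard small coefficients. A plain 2-D DCT stands in here as an assumption; the paper's mixed transform is not reproduced.

```python
# Hedged sketch: 1-D ECG -> 2-D beat array -> 2-D DCT -> coefficient
# thresholding. The signal, beat length, and keep-ratio are placeholders.
import numpy as np
from scipy.fft import dctn, idctn

beat_len, n_beats = 128, 64
ecg = np.sin(np.linspace(0, 40 * np.pi, beat_len * n_beats))   # placeholder signal
ecg2d = ecg.reshape(n_beats, beat_len)                          # one beat per row

coeffs = dctn(ecg2d, norm="ortho")
thresh = np.quantile(np.abs(coeffs), 0.95)                      # keep largest 5%
coeffs[np.abs(coeffs) < thresh] = 0.0                           # zero small coefficients

recon = idctn(coeffs, norm="ortho")
prd = 100 * np.linalg.norm(ecg2d - recon) / np.linalg.norm(ecg2d)
print(f"kept {np.count_nonzero(coeffs)} of {coeffs.size} coefficients, PRD = {prd:.2f}%")
```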
Global Navigation Satellite Systems (GNSS) have become an integral part of a wide range of applications. One of these applications is in cellular phones to locate the positions of users, a technology that has been employed in social media applications. Moreover, GNSS have been effectively employed in transportation, GIS, mobile satellite communications, and so on. On the other hand, the geomatics sciences use GNSS for many practical and scientific applications such as surveying, mapping, and monitoring.
In this study, the GNSS raw data of the ISER CORS, which is located in the north of Iraq, are processed and analyzed to build up coordinate time series for the purpose of detecting the …
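A common way to analyse such CORS coordinate time series is to fit a linear velocity plus annual and semi-annual harmonics by least squares; the sketch below does exactly that on a synthetic series standing in for the ISER solutions (the model form and the data are assumptions, not the study's results).

```python
# Hedged sketch: trend + seasonal fit of a station coordinate time series.
# The synthetic "north" component is a placeholder for the ISER CORS data.
import numpy as np

t = np.arange(0, 5, 1 / 365.25)                                 # 5 years, daily, in years
rng = np.random.default_rng(2)
north = 12.0 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 1.5, t.size)  # mm

A = np.column_stack([np.ones_like(t), t,                        # offset + velocity
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),   # annual
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])  # semi-annual
coef, *_ = np.linalg.lstsq(A, north, rcond=None)
print(f"estimated velocity: {coef[1]:.2f} mm/yr")
```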
Recommender systems are tools for understanding the huge amount of data available in the Internet world. Collaborative filtering (CF) is one of the knowledge discovery methods used most successfully in recommender systems. Memory-based collaborative filtering focuses on using facts about present users to predict new items for the target user. Similarity measures are the core operations in collaborative filtering, and prediction accuracy mostly depends on the similarity calculations. In this study, a combination of weighted parameters and traditional similarity measures is used to calculate the relationships among users over the MovieLens dataset rating matrix. The advantages and disadvantages of each measure are highlighted. From the study, a n…
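As one plausible instance of combining a weighting parameter with a traditional similarity measure, the sketch below applies significance weighting to the Pearson correlation between two users' ratings, shrinking pairs with few co-rated items. The weighting scheme is a common assumption here, not taken verbatim from the study.

```python
# Hedged sketch: weighted Pearson similarity for user-based CF on a ratings
# matrix (NaN = unrated). The shrinkage rule and toy matrix are assumptions.
import numpy as np

def weighted_pearson(u, v, min_common=5):
    mask = ~np.isnan(u) & ~np.isnan(v)                # co-rated items only
    n = mask.sum()
    if n < 2:
        return 0.0
    du, dv = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    sim = (du * dv).sum() / denom if denom else 0.0
    return sim * min(n, min_common) / min_common      # discount low-overlap pairs

ratings = np.array([[5, 3, np.nan, 4, 2],
                    [4, 3, 4, np.nan, 1],
                    [1, np.nan, 5, 2, 5]], dtype=float)
print(weighted_pearson(ratings[0], ratings[1]))
```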
The adsorption of zirconium on manganese dioxide from nitric acid solutions has been studied as a function of shaking time, electrolyte concentration, adsorbate concentration, and temperature (25-90°C). Four hours of shaking was appropriate to ensure that the adsorption plateau was reached, and the adsorption of zirconium decreases with an increase in nitric acid concentration. The limiting adsorption capacity at 3 molar nitric acid was 0.2 mole of Zr per mole of MnO2. Working at elevated temperature was in favour …