Software-defined networking (SDN) introduces novel security and privacy risks, including distributed denial-of-service (DDoS) attacks. In response to these threats, machine learning (ML) and deep learning (DL) have emerged as effective approaches for quickly identifying and mitigating anomalies. To this end, this research employs several classification methods, including support vector machines (SVMs), K-nearest neighbors (KNNs), decision trees (DTs), multilayer perceptrons (MLPs), and convolutional neural networks (CNNs), and compares their performance. The CNN exhibits the highest training accuracy at 97.808% yet the lowest prediction accuracy at 90.08%. In contrast, the SVM demonstrates the highest prediction accuracy of 95.5%. As such, an SVM-based DDoS detection model shows superior performance. This comparative analysis offers valuable insight into the development of efficient and accurate techniques for detecting DDoS attacks in SDN environments with reduced complexity and time.
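A minimal sketch of the kind of classifier comparison described above, not the authors' pipeline: the feature matrix and labels below are placeholders standing in for a labeled SDN flow dataset, and the CNN is omitted since it would require a deep-learning framework.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # placeholder flow features (assumption)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder labels: 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(max_depth=10),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    train_acc = accuracy_score(y_tr, model.predict(X_tr))
    test_acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: train accuracy={train_acc:.3f}, prediction accuracy={test_acc:.3f}")
```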
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data encountered in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters, and observation membership was inferred and assessed for each method. The results showed that the flexible mixture model outperformed the others.
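An illustrative EM sketch for a two-component mixture of linear regressions, showing how component parameters and observation membership can be estimated jointly; the simulated data, the number of components, and the initialization are all assumptions, and this is not the specific flexible-mixture estimator evaluated in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0, 10, n)
z = rng.integers(0, 2, n)                      # true (latent) memberships
y = np.where(z == 0, 1.0 + 2.0 * x, 8.0 - 1.5 * x) + rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x])           # design matrix with intercept

K = 2
beta = rng.normal(size=(K, 2))                 # regression coefficients per component
sigma = np.ones(K)
pi = np.full(K, 1.0 / K)                       # mixing proportions

for _ in range(200):
    # E-step: responsibility of each component for each observation
    dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k]) for k in range(K)], axis=1)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component
    for k in range(K):
        w = resp[:, k]
        beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
    pi = resp.mean(axis=0)

membership = resp.argmax(axis=1)               # hard classification of observations
print("estimated coefficients:\n", beta)
print("mixing proportions:", pi)
```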
The aim of this research is to compare the autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) models and to select the better one for predicting monthly relative humidity values, based on the standard errors between the estimated and observed values. Both models can be used for estimation, but the ANN is the better of the two: the values of (MAE, RMSE, R2) are (0.036816, 0.0466, 0.91), respectively, for the best ARIMA formula (6,0,2)(6,0,1), whereas the corresponding values for the best ANN formula (5,5,1) are (0.0109, 0.0139, 0.991). The ANN model is therefore superior to ARIMA in this evaluation.
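A minimal sketch (on simulated data, not the humidity series used in the paper) of comparing an ARIMA fit with a small neural network on the same series using MAE, RMSE, and R2. The model orders below are illustrative only; the paper identifies (6,0,2)(6,0,1) for ARIMA and a (5,5,1) architecture for the ANN.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(2)
t = np.arange(240)
series = 0.6 + 0.2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.05, t.size)
train, test = series[:200], series[200:]

# ARIMA: fit on the training part, forecast the test horizon
arima = ARIMA(train, order=(2, 0, 1)).fit()
arima_pred = arima.forecast(steps=test.size)

# ANN: predict y_t from the previous 5 lags (a (5, 5, 1)-style network)
def lagged(x, p=5):
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

X_tr, y_tr = lagged(train)
ann = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
history = list(train[-5:])
ann_pred = []
for _ in range(test.size):                      # recursive one-step forecasts
    yhat = ann.predict(np.array(history[-5:]).reshape(1, -1))[0]
    ann_pred.append(yhat)
    history.append(yhat)

for name, pred in [("ARIMA", arima_pred), ("ANN", np.array(ann_pred))]:
    print(name,
          "MAE=%.4f" % mean_absolute_error(test, pred),
          "RMSE=%.4f" % np.sqrt(mean_squared_error(test, pred)),
          "R2=%.3f" % r2_score(test, pred))
```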
Many dynamic processes in different sciences are described by differential equation models. These models explain the change in the behavior of the studied process over time by linking the process to its derivatives, and they often contain constant and time-varying parameters that differ according to the nature of the process under study. In this paper, the constant and time-varying parameters are estimated sequentially in several stages. In the first stage, the state variables and their derivatives are estimated by penalized splines (P-splines). In the second stage, pseudo least squares is used to estimate the constant parameters. For the third stage, the rem
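A minimal sketch of the first two stages described above: smoothing noisy state observations, recovering the derivative, and regressing the derivative on the state to estimate a constant parameter. A SciPy smoothing spline is used here as a stand-in for the P-spline (B-splines with a difference penalty) of the paper, and the ODE dx/dt = -theta*x and its parameter value are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 100)
theta = 0.8                                   # assumed constant parameter
x_true = np.exp(-theta * t)                   # state of dx/dt = -theta * x
x_obs = x_true + rng.normal(0, 0.02, t.size)  # noisy observations

# Stage 1: smooth the state and differentiate the fitted spline
spline = UnivariateSpline(t, x_obs, k=3, s=0.05)
x_hat = spline(t)
dx_hat = spline.derivative()(t)

# Stage 2 (pseudo least squares idea): regress dx_hat on -x_hat to recover theta
theta_hat = np.sum(-x_hat * dx_hat) / np.sum(x_hat ** 2)
print("estimated theta:", theta_hat)
```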
This paper deals with the estimation of the reliability function and one shape parameter (?) of the two-parameter Burr-XII distribution, when the other shape parameter (?) is known (?=0.5, 1, 1.5) and the initial value is (?=1), using different sample sizes (n=10, 20, 30, 50). The results depend on an empirical study in which simulation experiments are applied to compare four methods of estimation, as well as to compute the reliability function. The mean square error results indicate that the Jackknife estimator is better than the other three estimators for all sample sizes and parameter values.
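For reference, the reliability (survival) function of the two-parameter Burr-XII distribution, written here with generic shape parameters $c$ and $k$ since the parameter symbols in the abstract above did not survive encoding, is:

$$R(t) = 1 - F(t) = \left(1 + t^{c}\right)^{-k}, \qquad t > 0,\; c, k > 0.$$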
Researchers have used methods such as image processing and machine learning techniques, in addition to medical instruments such as the Placido disc, keratoscopy, and the Pentacam, to help diagnose a variety of diseases that affect the eye. This paper aims to detect one of the diseases that affect the cornea, namely Keratoconus, using image processing techniques and pattern classification methods. The Pentacam is the device used to assess the cornea's health; it provides four maps that can distinguish changes on the surface of the cornea, which can be used for Keratoconus detection. In this study, sixteen features were extracted from the four refractive maps along with five readings from the Pentacam software. The
Presented herein are the results of a comparison between the theoretical equation proposed by Huang and Menq and laboratory model tests used to study the bearing capacity of a square footing on geogrid-reinforced loose sand. The effects of several parameters were studied in order to characterize the general behavior of improving the soil with geogrid. These parameters include the depth of the first reinforcement layer, the vertical spacing of the reinforcement layers, the number of reinforcement layers, and the type of reinforcement layers. The results show that the theoretical equation can be used to estimate the bearing capacity of loose sand.
This paper deals with defining the Burr-XII distribution and with obtaining its p.d.f. and CDF, since this distribution is a failure distribution that is compounded from two failure models, namely the Gamma model and the Weibull model. Some equipment may have many important parts, and the probability distributions representing them may be of different types, so the Burr distribution, through its different compound formulas, is found to be the best model to study, and its parameters are estimated to compute the mean time to failure. The Burr-XII model, rather than other models, is considered here because it is used to model a wide variety of phenomena, including crop prices, household income, option market price distributions, risk, and travel time. It has two shape parameters.
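For reference, the standard forms of the Burr-XII p.d.f. and CDF mentioned above, written with generic shape parameters $c$ and $k$ (the abstract's own symbols are not shown), are:

$$f(x; c, k) = c\,k\,x^{c-1}\left(1 + x^{c}\right)^{-(k+1)}, \qquad F(x; c, k) = 1 - \left(1 + x^{c}\right)^{-k}, \qquad x > 0,\; c, k > 0.$$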
This paper compares the performance of the traditional methods for estimating the parameter of the exponential distribution (the maximum likelihood estimator and the uniformly minimum variance unbiased estimator) with the Bayes estimator, both when the data meet the requirements of the exponential distribution and when they depart from it due to the presence of outliers (contaminated values). Simulation (the Monte Carlo method) is employed, with the mean square error (MSE) adopted as the criterion for statistical comparison of the three estimators, for sample sizes ranging from small to medium to large (n=5, 10, 25, 50, 100) and for different cases (wit
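A minimal Monte Carlo sketch (not the paper's exact design, and without the contaminated-data cases) comparing the MLE, the UMVUE, and a Bayes estimator of the exponential rate parameter by mean square error. The rate parametrization, the true rate, and the Gamma(a, b) prior are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
lam_true = 2.0            # true rate parameter of Exp(lambda) (assumption)
a, b = 2.0, 1.0           # assumed Gamma prior (shape a, rate b) for lambda
reps = 5000

for n in (5, 10, 25, 50, 100):
    mle, umvue, bayes = [], [], []
    for _ in range(reps):
        x = rng.exponential(scale=1.0 / lam_true, size=n)
        s = x.sum()
        mle.append(n / s)                # maximum likelihood estimator
        umvue.append((n - 1) / s)        # uniformly minimum variance unbiased estimator
        bayes.append((n + a) / (s + b))  # posterior mean under the Gamma prior
    mse = lambda est: np.mean((np.array(est) - lam_true) ** 2)
    print(f"n={n}: MSE(MLE)={mse(mle):.4f}  MSE(UMVUE)={mse(umvue):.4f}  MSE(Bayes)={mse(bayes):.4f}")
```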
In this study, the estimator of the reliability of the exponential distribution is derived using the Bayesian approach. In the Bayesian approach, the parameter of the exponential distribution is treated as a random variable. The posterior distribution of the parameter is derived under four prior distributions for the scale parameter of the exponential distribution: the inverse chi-square distribution, the inverted Gamma distribution, an improper distribution, and a non-informative distribution. The estimators of the reliability are obtained using the two loss functions proposed in this study, which are based on the natural logarithm of the reliability function. A simulation technique is used to compare the
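As an illustration of the setting described above for one of the four priors, assume the scale parametrization $f(x \mid \theta) = \theta^{-1} e^{-x/\theta}$ and an inverted Gamma prior with assumed hyperparameters $a$ and $b$; conjugacy then gives the reliability function and the posterior directly:

$$R(t \mid \theta) = e^{-t/\theta}, \qquad
\pi(\theta) \propto \theta^{-(a+1)} e^{-b/\theta}
\;\Longrightarrow\;
\pi(\theta \mid x_1,\dots,x_n) \propto \theta^{-(n+a+1)} \exp\!\left(-\frac{b + \sum_{i=1}^{n} x_i}{\theta}\right),$$

that is, the posterior is again an inverted Gamma with shape $a+n$ and scale $b+\sum_i x_i$.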