Traffic classification refers to the task of categorizing traffic flows into application-aware classes such as chat, streaming, VoIP, etc. Most network traffic identification systems are feature-based; the features may be static signatures, port numbers, statistical characteristics, and so on. Although current data flow classification methods are effective, they still lack inventive approaches that address vital requirements such as real-time traffic classification, low power consumption, low Central Processing Unit (CPU) utilization, etc. Our novel Fast Deep Packet Header Inspection (FDPHI) traffic classification proposal employs a one-dimensional Convolutional Neural Network (1D-CNN) to automatically learn representative characteristics of traffic flow types, considering only the positions of selected bits from the packet header. The proposal is a learning approach based on deep packet inspection that integrates the feature extraction and classification phases into one system. The results show that the FDPHI performs very well at feature learning. It also delivers strong traffic classification results in terms of energy consumption (around 70% less power and 48% less CPU utilization) and processing time (improvements of 310% for IPv4 and 595% for IPv6).
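As a rough illustration of the kind of model the FDPHI abstract describes, the sketch below feeds a fixed-length vector of selected packet-header bits to a small 1D-CNN that outputs a traffic class. The header length, class count, and layer sizes are assumptions made for the example, not the architecture from the paper.

```python
# Minimal sketch, assuming TensorFlow/Keras. The input is a fixed-length
# vector of selected packet-header bits (320 is an assumed length, not the
# paper's figure); the output is one of N traffic classes.
import numpy as np
from tensorflow.keras import layers, models

N_BITS = 320      # assumed number of selected header bits
N_CLASSES = 5     # assumed number of traffic classes (chat, streaming, VoIP, ...)

model = models.Sequential([
    layers.Input(shape=(N_BITS, 1)),              # each bit is one "time step"
    layers.Conv1D(32, kernel_size=8, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    layers.Conv1D(64, kernel_size=8, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; real usage would parse bit vectors from IPv4/IPv6
# headers together with ground-truth application labels.
X = np.random.randint(0, 2, size=(1024, N_BITS, 1)).astype("float32")
y = np.random.randint(0, N_CLASSES, size=(1024,))
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```

Because the convolution runs over bit positions, the network can learn which header regions discriminate between flow types without hand-crafted features, which is the point of folding feature extraction and classification into one system.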
Most medical datasets suffer from missing data, due to the expense of some tests or to human error while recording them. This issue degrades the performance of machine learning models because the values of some features will be missing. Therefore, a dedicated method for imputing these missing data is needed. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, namely support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes (NB)…
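A minimal sketch of the general idea, assuming NumPy and scikit-learn: each salp encodes candidate values for the missing entries, and a salp's fitness is the cross-validated accuracy of a KNN classifier on the completed dataset. The swarm update follows the standard SSA equations; this illustrates the concept only and is not the paper's ISSA variant.

```python
# Minimal sketch of salp-swarm-based imputation, assuming NumPy and
# scikit-learn. Missing cells are NaNs; each salp is a candidate vector of
# imputed values, scored by 3-fold KNN accuracy on the completed data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def impute_ssa(X, y, n_salps=10, n_iter=30):
    rows, cols = np.where(np.isnan(X))
    lb, ub = np.nanmin(X, axis=0)[cols], np.nanmax(X, axis=0)[cols]
    dim = rows.size
    salps = rng.uniform(lb, ub, size=(n_salps, dim))

    def fitness(v):
        Xc = X.copy()
        Xc[rows, cols] = v
        return cross_val_score(KNeighborsClassifier(), Xc, y, cv=3).mean()

    fits = np.array([fitness(s) for s in salps])
    best, best_fit = salps[fits.argmax()].copy(), fits.max()

    for t in range(1, n_iter + 1):
        c1 = 2 * np.exp(-(4 * t / n_iter) ** 2)   # exploration -> exploitation
        for i in range(n_salps):
            if i == 0:                            # leader moves around the food source
                step = c1 * ((ub - lb) * rng.random(dim) + lb)
                salps[i] = np.where(rng.random(dim) < 0.5, best + step, best - step)
            else:                                 # followers trail their predecessor
                salps[i] = (salps[i] + salps[i - 1]) / 2
            salps[i] = np.clip(salps[i], lb, ub)
            f = fitness(salps[i])
            if f > best_fit:
                best, best_fit = salps[i].copy(), f

    X_out = X.copy()
    X_out[rows, cols] = best                      # impute with the best salp found
    return X_out
```

Calling `impute_ssa(X, y)` on a feature matrix whose missing cells are NaN returns a completed copy imputed with the best salp found.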
The penalized regression model has received considerable attention for variable selection and plays an essential role in dealing with high-dimensional data. The arctangent penalty, denoted Atan, has recently been used as an efficient method for both estimation and variable selection. However, the Atan penalty is very sensitive to outliers in the response variable or to heavy-tailed error distributions, whereas least absolute deviation (LAD) is a good way to obtain robustness in regression estimation. The specific objective of this research is to propose a robust Atan estimator that combines these two ideas. Simulation experiments and real data applications show that the p…
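A minimal sketch of such an estimator is given below, assuming the Atan penalty form p(|b|) = lam * (gam + 2/pi) * arctan(|b|/gam) from the Atan literature, combined with an LAD loss; the tuning values and the derivative-free optimizer are illustrative choices, not the paper's algorithm.

```python
# Minimal sketch, assuming NumPy/SciPy. The objective couples the robust
# LAD loss with the Atan penalty; lam (penalty strength) and gam (penalty
# shape) are illustrative tuning parameters.
import numpy as np
from scipy.optimize import minimize

def robust_atan(X, y, lam=0.1, gam=0.01):
    n = X.shape[0]

    def objective(beta):
        lad = np.abs(y - X @ beta).sum() / n                 # robust to outliers
        pen = lam * (gam + 2 / np.pi) * np.arctan(np.abs(beta) / gam).sum()
        return lad + pen

    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]             # OLS warm start
    res = minimize(objective, beta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x
```

Nelder-Mead keeps the sketch short but scales poorly with dimension; a practical implementation would use coordinate descent or a local linear approximation of the penalty.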
Longitudinal data are becoming increasingly common, especially in the medical and economic fields, and various methods have been developed for analyzing this type of data.
In this research, the focus was on grouping and analyzing these data: cluster analysis plays an important role in identifying and grouping co-expressed profiles over time, and the grouped profiles are fitted with a nonparametric smoothing cubic B-spline model. This model provides continuous first and second derivatives, resulting in a smoother curve with fewer abrupt changes in slope; it is also flexible and can capture complex patterns and fluctuations in the data.
The balanced longitudinal data profiles were compiled into subgroups…
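To make the smoothing step concrete, here is a minimal sketch of fitting a cubic smoothing spline (built on B-splines) to a single longitudinal profile with SciPy; the measurement times, values, and smoothing factor are invented for illustration.

```python
# Minimal sketch, assuming SciPy. A cubic (k=3) smoothing spline is fitted
# to one subject's longitudinal profile; being cubic, the fitted curve has
# continuous first and second derivatives.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 10, 25)                            # measurement times
y = np.sin(t) + 0.2 * np.random.default_rng(1).normal(size=t.size)

spline = UnivariateSpline(t, y, k=3, s=1.0)           # s controls smoothness
y_hat = spline(t)                                     # smoothed profile
slope = spline.derivative(1)(t)                       # continuous first derivative
curvature = spline.derivative(2)(t)                   # continuous second derivative
```

In a clustering pipeline, each subject's profile would be smoothed this way and the fitted curves (or their coefficients) grouped into subgroups.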
Cloud-based Electronic Health Records (EHRs) have seen a substantial increase in usage in recent years, especially for remote patient monitoring. Researchers are interested in investigating the use of Healthcare 4.0 in smart cities, which involves using Internet of Things (IoT) devices and cloud computing to access medical processes remotely. Healthcare 4.0 focuses on the systematic gathering, merging, transmission, sharing, and retention of medical information at regular intervals. Protecting the confidential and private information of patients presents several challenges in terms of thwarting illegal intrusion by hackers. Therefore, it is essential to prioritize the protection of patient medical data that is stored, accessed, and shared on the cloud…
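The abstract stops before describing a concrete protection scheme, but as a generic baseline illustration, the sketch below encrypts a patient record client-side before it is stored in the cloud, using symmetric encryption from the Python `cryptography` package; this is not the scheme proposed in the paper.

```python
# Minimal sketch, assuming the Python `cryptography` package. A patient
# record is encrypted client-side before upload so the cloud store only
# holds ciphertext; this is a generic baseline, not the paper's scheme.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # in practice held in a key-management service,
cipher = Fernet(key)           # never stored alongside the data

record = {"patient_id": "P-001", "heart_rate": 72, "spo2": 98}
ciphertext = cipher.encrypt(json.dumps(record).encode())   # what the cloud stores

# An authorized reader holding the key recovers the record on retrieval.
assert json.loads(cipher.decrypt(ciphertext)) == record
```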
Red, green, and blue (RGB) codes extracted from a lab-fabricated colorimeter device were used to build a proposed classifier for assigning object colors to defined categories of fundamental colors. Primary, secondary, and tertiary colors, namely red, green, orange, yellow, pink, purple, blue, brown, grey, white, and black, were employed in machine learning (ML) by applying an artificial neural network (ANN) algorithm using Python. The classifier, based on the ANN algorithm, required a definition of the eleven colors mentioned in the form of RGB codes in order to acquire the capability of classification. The software's capacity to forecast the color of the code that belongs to an object under de…
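A minimal sketch of this kind of classifier, assuming scikit-learn's MLP as the ANN: the model is trained on labeled RGB triples and then predicts the color class of a new reading. The training codes below are nominal placeholders, not the device's calibration data.

```python
# Minimal sketch, assuming scikit-learn. A small ANN (multilayer perceptron)
# maps RGB triples scaled to [0, 1] onto the eleven color classes; the
# training codes are nominal placeholders, not the device's data.
import numpy as np
from sklearn.neural_network import MLPClassifier

COLORS = ["red", "green", "orange", "yellow", "pink", "purple",
          "blue", "brown", "grey", "white", "black"]

# One nominal RGB code per class; a real dataset would hold many readings each.
X = np.array([[255, 0, 0], [0, 128, 0], [255, 165, 0], [255, 255, 0],
              [255, 192, 203], [128, 0, 128], [0, 0, 255], [139, 69, 19],
              [128, 128, 128], [255, 255, 255], [0, 0, 0]]) / 255.0
y = COLORS

clf = MLPClassifier(hidden_layer_sizes=(32,), solver="lbfgs",
                    max_iter=5000, random_state=0)
clf.fit(X, y)

reading = np.array([[250, 10, 10]]) / 255.0   # a colorimeter reading to classify
print(clf.predict(reading))                   # expected: ['red']
```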
Realizing and understanding semantic segmentation is a demanding task, not just in computer vision but also in the earth sciences. Semantic segmentation decomposes compound architectures into single elements: the most common objects in outdoor or indoor civil scenes must be classified and then enriched with semantic information about each object. It is, in effect, a method for labeling and clustering point clouds automatically. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data representation, and working with 3D data raises many challenges, such as the small number, resolution, and accuracy of available 3D datasets. Deep learning is now the po…
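As a toy illustration of automatic point labeling, far simpler than the deep models the abstract points toward, the sketch below classifies each point of a synthetic cloud from simple per-point features with scikit-learn; the features, classes, and data are invented.

```python
# Minimal sketch, assuming scikit-learn. Each 3D point carries simple
# per-point features (x, y, z, intensity) and a semantic label; a random
# forest stands in for the deep models the abstract discusses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic cloud: "ground" points near z = 0, "building" points higher up.
ground = np.column_stack([rng.uniform(0, 10, (500, 2)),     # x, y
                          rng.normal(0.0, 0.1, 500),        # z near ground level
                          rng.uniform(0.2, 0.5, 500)])      # intensity
building = np.column_stack([rng.uniform(0, 10, (500, 2)),
                            rng.uniform(2.0, 8.0, 500),
                            rng.uniform(0.5, 0.9, 500)])
X = np.vstack([ground, building])
y = np.array(["ground"] * 500 + ["building"] * 500)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
labels = clf.predict(X)    # per-point semantic labels for the whole cloud
```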
This investigation presents an experimental and analytical study of the behavior of reinforced concrete deep beams before and after repair. The original beams were first loaded under two-point loading up to failure, then repaired with epoxy resin and tested again. Three of the test beams contain shear reinforcement and the other two have none. The main variable was the percentage of longitudinal steel reinforcement (0, 0.707, 1.061, and 1.414%). The main objective of this research is to investigate the possibility of restoring the full load-carrying capacity of reinforced concrete deep beams, with and without shear reinforcement, by using epoxy resin as the repair material. All be…