With the development of mobile communication technologies and electronic services, the world has moved to e-government, e-commerce, and e-banking. It has become necessary to protect these activities from intrusion and misuse, so designing powerful and efficient systems for this purpose is important. This paper applies several variants of the negative selection algorithm from artificial immune systems: real-valued negative selection, negative selection with fixed-radius detectors, and negative selection with variable-sized detectors, within a misuse-based network intrusion detection system in which the algorithm generates a set of detectors that distinguish self from non-self samples. Practical experiments showed that the designed system achieves a high detection rate on the NSL-KDD dataset with 12 fields, without sensitivity to changes in the detector radius or the number of detectors: detection rates of (0.984, 0.998, 0.999) and false alarm rates of (0.003, 0.002, 0.001) were obtained. In contrast, experiments on the NSL-KDD dataset with 41 fields showed that the detection rate was affected by changing the detector radius and the number of detectors, with detection rates between (0.44, 0.824, 0.992) and false alarm rates between (0.5, 0.175, 0.003).
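The detector-generation idea described above can be sketched in a few lines. This is a minimal illustration of real-valued negative selection with fixed-radius detectors, not the paper's implementation: the radius `r`, the detector count, the 2-D feature space, and the synthetic "self" region are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_detectors(self_samples, n_detectors=50, r=0.1, max_tries=10000):
    """Keep random candidate detectors that do NOT cover any self sample."""
    detectors = []
    for _ in range(max_tries):
        cand = rng.random(self_samples.shape[1])
        # reject candidates lying within radius r of the self set
        if np.min(np.linalg.norm(self_samples - cand, axis=1)) > r:
            detectors.append(cand)
            if len(detectors) == n_detectors:
                break
    return np.array(detectors)

def is_anomalous(x, detectors, r=0.1):
    """A sample is flagged as non-self when any detector covers it."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= r)

# Hypothetical "self" data clustered near the origin of the unit square.
self_samples = rng.random((100, 2)) * 0.3
detectors = train_detectors(self_samples)
flag = is_anomalous(np.array([0.9, 0.9]), detectors)
```

By construction, every retained detector lies farther than `r` from all self samples, so no self sample is ever flagged; how much of the non-self space the detectors cover depends on `r` and the detector count, which is exactly the sensitivity the abstract reports for the 41-field data.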
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional data mining and machine learning algorithms do not scale well with data size; mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and each resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining an
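A structure of the kind described above can be sketched as follows. This is a hedged, simplified illustration, not the authors' design: the resolution levels, the fixed value range, and the statistics kept per bin (count and sum) are all assumptions made for the example.

```python
import numpy as np

class MultiResolutionSummary:
    """Histogram-style bins at several resolutions over a fixed range."""

    def __init__(self, lo=0.0, hi=1.0, levels=(4, 16, 64)):
        self.lo, self.hi = lo, hi
        self.counts = {k: np.zeros(k) for k in levels}
        self.sums = {k: np.zeros(k) for k in levels}

    def update(self, x):
        """Incrementally fold one value into every resolution level."""
        for k in self.counts:
            i = min(int((x - self.lo) / (self.hi - self.lo) * k), k - 1)
            self.counts[k][i] += 1
            self.sums[k][i] += x

    def mean_per_bin(self, k):
        """Coarser levels are cheaper to query; finer ones more accurate."""
        c = self.counts[k]
        return np.divide(self.sums[k], c, out=np.zeros_like(c), where=c > 0)

summary = MultiResolutionSummary()
for v in np.random.default_rng(1).random(1000):
    summary.update(v)
```

Querying `mean_per_bin(4)` versus `mean_per_bin(64)` exposes the efficiency/accuracy trade-off the abstract mentions: each incoming value updates all levels, so the structure is built once and maintained incrementally.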
This research studies fuzzy sets, one of the most modern concepts applied in various theoretical and practical areas and in many fields of life. It addresses the fuzzy random variable, whose values are not real numbers but fuzzy numbers, because it expresses vague or uncertain phenomena whose measurements are not exact. Fuzzy data were presented for a two-sample test and for the analysis of variance of fuzzy random variables, where this method depends on a number of assumptions, which is a problem that prevents its use when those assumptions are not satisfied.
Great scientific progress has led to a widespread accumulation of information in large databases, making it important to revise and compile this vast amount of data in order to extract hidden information, or to classify the data according to their relations with each other, so that it can be exploited for technical purposes.
Data mining (DM) is well suited to this area, hence the importance of studying the K-Means clustering algorithm in an applied setting, where the effect on the results can be observed by changing the sample size (n) and the number of clusters (K).
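The kind of experiment described above can be illustrated with a minimal, plain-NumPy implementation of K-Means (Lloyd's algorithm). The synthetic two-blob data, the choice of n and K, and the random initialization are illustrative assumptions, not the study's actual data or settings.

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each centroid to the mean of its assigned points
        centers = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(K)
        ])
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    return centers, labels

# Hypothetical data: two Gaussian blobs; vary n and K to observe the effect.
rng = np.random.default_rng(2)
n, K = 300, 2
X = np.vstack([rng.normal(0.0, 0.1, (n // 2, 2)),
               rng.normal(1.0, 0.1, (n // 2, 2))])
centers, labels = kmeans(X, K)
```

Rerunning with different values of n and K, as the abstract describes, changes both the fitted centroids and the cluster assignments, which is the effect the study observes.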
In the petroleum industry, multiphase flow dynamics within the tubing string have gained significant attention due to the associated challenges. Accurately predicting pressure drops and wellbore pressures is crucial for effective modeling of vertical lift performance (VLP). This study predicts the multiphase flow behavior in four wells located in the Faihaa oil field in southern Iraq, utilizing PIPESIM software. The most appropriate multiphase correlation was selected by utilizing production test data to construct a comprehensive survey data catalog. Subsequently, the results were compared with the correlations available within the PIPESIM software. The outcomes reveal that the Hagedorn and Brown (H
Degradation of soil quality is an inevitable consequence of modifications to the characteristics of the soil that contribute to a decrease in ecosystem services. Soil is subject to numerous stressors, including chemical, biological, and physical ones, originating from both natural and artificial sources. The most prevalent kind of soil contamination affecting soil biota is agrochemicals. Soil is the most common place for xenobiotic dumping, which, based on the results of several studies, makes it the most probable source of pollution of other natural resources, such as surface and ground waters. The danger to the environment posed by polluted soils is influenced by a variety of biological and physicochemical mechanisms that regulate the