Active worms have posed a major security threat to the Internet, and many research efforts have focused on them. This paper is concerned with Internet worms that spread via TCP, which carries the majority of Internet traffic. It presents an approach that combines two detection algorithms, behavior-based detection and signature-based detection, in a hybrid solution that retains the strengths of each. The aim of this study is to detect both ordinary and stealthy worms quickly. The proposal is designed as a distributed collaborative scheme based on the small-world network model to improve system performance.
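As a minimal illustration of how such a hybrid might be wired together (the function names, thresholds, and byte patterns below are hypothetical placeholders, not the paper's actual implementation):

```python
# Sketch of a hybrid worm detector: a flow is flagged if its payload matches a
# known signature OR if the host's behavior (outbound TCP connection fan-out)
# exceeds an anomaly threshold. All values here are illustrative assumptions.

KNOWN_SIGNATURES = {b"\x90\x90\x90\x90", b"GET /default.ida?"}  # placeholder payload patterns
FANOUT_THRESHOLD = 50  # distinct destinations per time window (assumed value)

def signature_match(payload: bytes) -> bool:
    """Signature-based layer: substring match against known worm payloads."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

def behavior_anomalous(dst_ips_in_window: set) -> bool:
    """Behavior-based layer: a host contacting many distinct peers looks like scanning."""
    return len(dst_ips_in_window) > FANOUT_THRESHOLD

def is_worm_suspect(payload: bytes, dst_ips_in_window: set) -> bool:
    # The signature layer catches known worms fast; the behavior layer catches
    # unknown or stealthy worms that evade signatures.
    return signature_match(payload) or behavior_anomalous(dst_ips_in_window)

if __name__ == "__main__":
    print(is_worm_suspect(b"GET /default.ida?NNNN", {"10.0.0.1"}))                  # True (signature)
    print(is_worm_suspect(b"hello", {"10.0.0.%d" % i for i in range(120)}))         # True (behavior)
    print(is_worm_suspect(b"hello", {"10.0.0.1"}))                                  # False
```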
The research aims to clarify the role of the main variable, represented by the four dimensions of entrepreneurial behavior (creativity, risk-taking, seizing opportunities, proactivity), in reducing the dependent variable of organizational anomie with its dimensions (organizational normlessness, organizational cynicism, organizational valuelessness).
The experimental analytical method was adopted in completing the research, and a purposive sample of (162) individuals from the higher and middle administrative levels in the factory was taken. The questionnaire was also adopted as the main tool, which …
The black paint laser peening (bPLP) technique is currently applied to many engineering materials, especially aluminum alloys, due to the large improvement it gives in fatigue life and strength. Constant and variable bending fatigue tests were performed at RT and a stress ratio of R = -1. The results of the present work show the significance of the surface work hardening, which generated high negative (compressive) residual stresses in the bPLP specimens. The fatigue life improvement factor (FLIF) for bPLP constant fatigue behavior ranged from 2.543 to 3.3 compared with the untreated condition, and the increase in fatigue strength at 10⁷ cycles was 21%. The bPLP cumulative fatigue life behavior …
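Assuming the conventional definition (the abstract does not state it explicitly), the fatigue life improvement factor compares cycles to failure with and without the treatment:

```latex
\mathrm{FLIF} = \frac{N_{f,\,\text{bPLP}}}{N_{f,\,\text{untreated}}}
```

On this reading, FLIF values between 2.543 and 3.3 mean the peened specimens endured roughly 2.5 to 3.3 times as many cycles as the untreated ones at the same stress level.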
This work introduces some concepts in bitopological spaces: nm-j-ω-convergence to a subset, nm-j-ω-directedness toward a set, nm-j-ω-closed mappings, nm-j-ω-rigid sets, and nm-j-ω-continuous mappings. The main idea of this paper is nm-j-ω-perfect mappings in bitopological spaces, where n, m = 1, 2 and n ≠ m. Characterizations of these concepts and several theorems are studied, where j = q, δ, a, pre, b, b.
The study uses video clips of all cars parked in the selected area. The camera height studied is 1.5 m, and there are 18 video clips. Images are extracted from the video clips to be used as training data for the cascade method. Cascade classification is used to detect license plates after the training step. The Viola-Jones algorithm was applied to the output of the cascade data for the 1.5 m camera height. The accuracy was calculated for all data, covering different weather conditions and local recording times, in two ways. The first used detection of the car plate based on the video clip, and the accuracy was 100%. The second used the clipped images stored in the positive file, based on the training file (XML file), where the accuracy …
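A minimal sketch of the detection step with OpenCV, assuming a cascade XML produced by the training stage; the file names and detection parameters below are assumptions, not the study's actual configuration:

```python
import cv2

# Load the trained cascade and run it on a frame grabbed from one of the
# 1.5 m camera clips (paths are placeholders).
cascade = cv2.CascadeClassifier("plate_cascade.xml")

frame = cv2.imread("frame_from_clip.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns bounding boxes (x, y, w, h) of candidate plates.
plates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4, minSize=(40, 15))

for (x, y, w, h) in plates:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", frame)
```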
Malware represents one of the most dangerous threats to computer security, and dynamic analysis has difficulty detecting unknown malware. This paper developed an integrated multi-layer detection approach to provide more accuracy in detecting malware. A user interface integrated with VirusTotal was designed as the first layer, serving as a warning system for malware infection; a malware database of malware samples as the second layer; Cuckoo as the third layer; BullGuard as the fourth layer; and IDA Pro as the fifth layer. The results showed that using the five layers together was better than using a single detector alone. For example, the efficiency of the proposed approach is 100% compared with 18% and 63% for VirusTotal and Bel…
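One way such a layered verdict could be composed is sketched below; the layer functions are stubs standing in for the real tools (VirusTotal lookup, local sample database, Cuckoo sandbox, and so on), so this is an assumption about the structure, not the paper's code:

```python
from typing import Callable, List

def virustotal_layer(path: str) -> bool:
    # Stub: would query the VirusTotal API with the file hash.
    return False

def sample_db_layer(path: str) -> bool:
    # Stub: would compare the file hash against a local malware sample database.
    return False

def sandbox_layer(path: str) -> bool:
    # Stub: would submit the file to a Cuckoo sandbox and inspect the behavior report.
    return False

LAYERS: List[Callable[[str], bool]] = [virustotal_layer, sample_db_layer, sandbox_layer]

def is_malicious(path: str) -> bool:
    # A sample is flagged as soon as any layer flags it; later (more expensive)
    # layers only run for samples the earlier layers let through.
    return any(layer(path) for layer in LAYERS)

print(is_malicious("suspect.exe"))
```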
Evolutionary algorithms are better than heuristic algorithms at finding protein complexes in protein-protein interaction networks (PPINs). Many of these algorithms depend on their standard frameworks, which are based on topology. Further, many of these algorithms have been examined exclusively on networks with only reliable interaction data. The main objective of this paper is to extend the design of the canonical, topology-based evolutionary algorithms suggested in the literature to cope with noisy PPINs. The design of the evolutionary algorithm is extended based on the functional domain of the proteins rather than on the topological domain of the PPIN. The gene ontology annotation in each molecular function, biological process …
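A minimal sketch of what a functional (GO-based) fitness could look like, assuming pairwise semantic similarity over GO annotations; the Jaccard overlap used here is only a stand-in for whatever similarity measure the paper actually adopts:

```python
from itertools import combinations

def go_similarity(annotations_a: set, annotations_b: set) -> float:
    # Placeholder similarity: Jaccard overlap of the two proteins' GO term sets.
    if not annotations_a or not annotations_b:
        return 0.0
    return len(annotations_a & annotations_b) / len(annotations_a | annotations_b)

def functional_fitness(cluster: list, go_annotations: dict) -> float:
    # Average pairwise functional similarity inside a candidate complex,
    # instead of counting edges in the (possibly noisy) PPIN.
    pairs = list(combinations(cluster, 2))
    if not pairs:
        return 0.0
    return sum(go_similarity(go_annotations[a], go_annotations[b]) for a, b in pairs) / len(pairs)

# Example: proteins sharing most GO terms score higher than unrelated ones.
annotations = {"P1": {"GO:1", "GO:2"}, "P2": {"GO:1", "GO:2", "GO:3"}, "P3": {"GO:9"}}
print(functional_fitness(["P1", "P2"], annotations))   # high
print(functional_fitness(["P1", "P3"], annotations))   # low
```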
This paper extends the literature on the elements and effects of financial literacy by investigating the elements of financial literacy and the impact of financial literacy on financial inclusion and savings. The research confirms the results of studies of other economies but exposes some dissimilarities as well. The principal factors of financial literacy are found to be government efficiency, educational level, income, economic performance, and infrastructure. Both education level and financial literacy are found to be significantly and positively linked to financial inclusion and savings in G20 economies.
In this research a proposed technique is used to enhance the performance of the frame difference technique for extracting moving objects from video files. One of the main causes of performance degradation is the presence of noise, which may cause incorrect identification of moving objects. It was therefore necessary to find a way to diminish this noise effect. Traditional average and median spatial filters can be used to handle such situations, but in this work the focus is on the spectral domain, using Fourier and wavelet transforms to decrease the noise effect. Experiments and statistical features (entropy, standard deviation) showed that these transforms can overcome such problems in an elegant way.
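A minimal sketch of the Fourier-domain variant of this idea, assuming grayscale frames as NumPy arrays; the cutoff fraction and difference threshold are illustrative values, not those used in the research:

```python
import numpy as np

def fft_lowpass(frame: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
    """Suppress high-frequency noise by keeping only a central block of the 2D spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    rows, cols = frame.shape
    mask = np.zeros_like(spectrum)
    r, c = int(rows * keep_fraction), int(cols * keep_fraction)
    mask[rows // 2 - r: rows // 2 + r, cols // 2 - c: cols // 2 + c] = 1
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.abs(filtered)

def moving_object_mask(prev_frame: np.ndarray, curr_frame: np.ndarray, threshold: float = 25.0) -> np.ndarray:
    # Denoise both frames in the spectral domain, then threshold their absolute
    # difference to obtain the moving-object mask.
    diff = np.abs(fft_lowpass(curr_frame) - fft_lowpass(prev_frame))
    return (diff > threshold).astype(np.uint8)
```

A wavelet-based variant would follow the same pattern, with the low-pass step replaced by thresholding of the wavelet detail coefficients before reconstruction.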
Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes, and it concerns celebrities and everyone else because such fakes are easy to manufacture. Deepfakes are hard to recognize by people and by current approaches, especially high-quality ones. As a defense against deepfake techniques, various methods to detect deepfakes in images have been suggested. Most of them have limitations, such as only working with one face in an image, with the face facing forward and both eyes and the mouth open, depending on which part of the face they work on. Beyond that, few focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect …
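A minimal sketch of one possible pre-processing step of the kind studied here (face cropping, resizing, and normalization before the detector model); the Haar cascade, the 224x224 input size, and the scaling are assumptions, not the paper's actual configuration:

```python
from typing import Optional
import cv2
import numpy as np

# Assumed face detector shipped with OpenCV; the framework's own detector may differ.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(image_path: str, size: int = 224) -> Optional[np.ndarray]:
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                 # no face found: skip this image
    x, y, w, h = faces[0]                           # crop the first detected face
    face = cv2.resize(img[y:y + h, x:x + w], (size, size))
    return face.astype(np.float32) / 255.0          # scale pixels to [0, 1] for the model
```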
A strong sign language recognition system can break down the barriers that separate hearing and speaking members of society from those who cannot speak. A novel, fast recognition system with low computational cost for the digits of American Sign Language (ASL) is introduced in this research. Different image processing techniques are used to optimize and extract the shape of the hand fingers in each sign. The feature extraction stage includes determining the optimal threshold on a statistical basis, then recognizing the gap area in the zero sign and calculating the height of each finger in the other digits. The classification stage depends on the gap area in the zero sign and the number of open fingers in the other signs, as well as …
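A small sketch in the spirit of that pipeline, assuming a grayscale hand image: Otsu's method stands in for the statistically chosen threshold, and a contour-hierarchy check stands in for the zero-sign gap-area rule; the paper's actual gap-area and finger-height computations are not reproduced here.

```python
import cv2
import numpy as np

def segment_hand(gray: np.ndarray) -> np.ndarray:
    # Binarize the hand region; Otsu picks the threshold from the image statistics.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def has_interior_gap(binary: np.ndarray) -> bool:
    # The zero sign encloses a background region, so look for a hole inside the
    # segmented hand using the contour hierarchy (holes have a parent contour).
    _, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return False
    return any(h[3] != -1 for h in hierarchy[0])
```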