As computer systems and networks are used in almost every aspect of daily life, security threats to them have increased significantly. Password-based authentication is usually used to verify a legitimate user; however, this method has many weaknesses, such as password sharing, brute-force attacks, dictionary attacks, and guessing. Keystroke dynamics is a well-known and inexpensive behavioral biometric technology that authenticates a user by analyzing his or her typing rhythm. Intrusion thereby becomes more difficult, because both the password and the typing rhythm must match the stored keystroke pattern. This thesis considers static keystroke dynamics as a transparent layer for user authentication. A Back-Propagation Neural Network (BPNN) and a Probabilistic Neural Network (PNN) are used as classifiers to discriminate between authentic and impostor users. Furthermore, four keystroke dynamics features, namely Dwell Time (DT), Flight Time (FT), Up-Up Time (UUT), and a combination of DT and FT, are extracted to verify whether users can be properly authenticated. Two datasets (keystroke-1 and keystroke-2) are used to show the applicability of the proposed keystroke dynamics user authentication system. The lowest false rates and highest accuracy were obtained with UUT, compared with the DT and FT features individually, with results comparable to the combination of DT and FT. This is because UUT is a single direct feature that implicitly contains the other two features, DT and FT, giving it greater capability to discriminate authentic users from impostors. In addition, authenticating with UUT alone instead of the DT and FT combination reduces the complexity and computational time of the neural network.
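The three timing features can be illustrated with a short sketch. This is not the thesis code; the function and the example timestamps (in milliseconds) are hypothetical, assuming each keystroke is recorded as a (press, release) pair.

```python
# Illustrative sketch (not the thesis implementation): computing DT, FT,
# and UUT from hypothetical key press/release timestamps in milliseconds.
# Each keystroke is a (press_time, release_time) pair.

def extract_features(keystrokes):
    """Return dwell, flight, and up-up times for a keystroke sequence."""
    dwell = [r - p for p, r in keystrokes]                # DT: hold duration of each key
    flight = [keystrokes[i + 1][0] - keystrokes[i][1]     # FT: release of key i to press of key i+1
              for i in range(len(keystrokes) - 1)]
    up_up = [keystrokes[i + 1][1] - keystrokes[i][1]      # UUT: release-to-release interval
             for i in range(len(keystrokes) - 1)]
    return dwell, flight, up_up

strokes = [(0, 90), (150, 230), (300, 410)]
dt, ft, uut = extract_features(strokes)
print(dt)   # [90, 80, 110]
print(ft)   # [60, 70]
print(uut)  # [140, 180]
```

Note how UUT implicitly combines the other two features: each UUT value equals the flight time plus the dwell time of the following key (e.g. 140 = 60 + 80), which matches the abstract's argument for using UUT alone.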
This article deals with an approximate algorithm for the two-dimensional multi-space fractional bioheat equation (M-SFBHE). The collocation method is extended to present a numerical technique for solving the M-SFBHE based on "shifted Jacobi-Gauss-Lobatto polynomials" (SJ-GL-Ps) in matrix form. The Caputo formula is utilized to approximate the fractional derivative. To demonstrate its usefulness and accuracy, the proposed methodology was applied to two examples. The numerical results revealed that the approach is very effective and gives high accuracy and good convergence.
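The abstract does not reproduce the formula; for reference, the Caputo fractional derivative is commonly defined (in its standard form, which may differ in notation from the article) as:

```latex
{}^{C}D^{\alpha}_{t} f(t)
  = \frac{1}{\Gamma(n-\alpha)}
    \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau,
  \qquad n-1 < \alpha < n,\; n \in \mathbb{N}.
```

For 0 < α < 1 this reduces to a weighted integral of the first derivative, which is what collocation schemes typically discretize.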
A simple random number generator setup is proposed. Random number generation is based on the shot-noise fluctuations in a p-i-n photodiode. These fluctuations, defined as shot noise, constitute a stationary random process whose statistical properties reflect the Poisson statistics associated with photon streams. Shot noise originates in the quantum nature of light and is related to vacuum fluctuations. Two photodiodes were used and their shot-noise fluctuations were subtracted; the difference was applied to a comparator to obtain the random sequence.
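The subtract-and-compare scheme can be sketched in simulation. This is purely illustrative, not the proposed hardware: the shot noise of each photodiode is approximated here as Gaussian (a common high-flux approximation of Poisson statistics), and the signal levels and noise amplitude are made-up values.

```python
import random

# Illustrative simulation (not the proposed hardware): two independent
# shot-noise signals, modeled as Gaussian fluctuations around a common
# DC level, are subtracted to cancel the common-mode component, and the
# difference is thresholded by a comparator at zero to yield random bits.

def comparator_bits(n, mean=1.0, sigma=0.1, seed=None):
    rng = random.Random(seed)
    bits = []
    for _ in range(n):
        pd1 = rng.gauss(mean, sigma)       # photodiode 1: signal + shot noise
        pd2 = rng.gauss(mean, sigma)       # photodiode 2: signal + shot noise
        diff = pd1 - pd2                   # subtraction removes the DC level
        bits.append(1 if diff > 0 else 0)  # comparator: sign of the difference
    return bits

seq = comparator_bits(1000, seed=42)
print(sum(seq) / len(seq))  # fraction of ones should be near 0.5
```

Because the two noise sources are independent and identically distributed, the sign of their difference is unbiased, which is why the comparator output approximates a fair bit stream.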
Immunization is one of the most cost-effective and successful public health interventions. The results of immunization are difficult to see, as the incidence of disease is low, while adverse effects following immunization are noticeable, particularly if the vaccine was given to an apparently healthy person. The population holds high safety expectations regarding vaccines, so people are more prone to hesitancy in the presence of even a small risk of adverse events, which may lead to loss of pub
Finding a path solution in a dynamic environment represents a challenge for robotics researchers; furthermore, it is the main issue for autonomous robots and manipulators, and the field continues to look toward this challenge. A collision-free path for a robot in an environment with moving obstacles, such as different objects, humans, animals, or other robots, is an actual problem that needs to be solved. In addition, local minima and sharp edges are the most common problems in all path-planning algorithms. The main objective of this work is to overcome these problems by demonstrating robot path planning and obstacle avoidance using the D star (D*) algorithm based on Particle Swarm Optimization (PSO).
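The PSO component can be sketched on its own. The following is a minimal illustrative PSO, not the D*-PSO hybrid of the abstract: the cost function (squared distance to a made-up goal at (3, 4)), swarm size, and coefficients are all assumptions chosen for demonstration.

```python
import random

# Minimal PSO sketch (illustrative only, not the paper's D*-PSO hybrid):
# particles search a 2-D plane for the point minimizing a toy cost
# function, here the squared distance to a goal at (3, 4).

def pso(cost, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-known position
    gbest = min(pbest, key=cost)[:]      # swarm-wide best-known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]                      # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

goal = (3.0, 4.0)
best = pso(lambda p: (p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2)
print(best)  # converges near [3.0, 4.0]
```

In a D*-PSO hybrid, a cost of this general shape would typically also penalize proximity to obstacles, which is how PSO can help the planner escape local minima.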
Despite its wide utilization in microbial cultures, the one-factor-at-a-time method fails to find the true optimum because the interactions between optimized parameters are not taken into account. Therefore, in order to find the true optimum conditions, it is necessary to repeat the one-factor-at-a-time method in many sequential experimental runs, which is extremely time-consuming and expensive for many variables. This work is an attempt to enhance bioactive yellow pigment production by Streptomyces thinghirensis based on a statistical design. The yellow pigment demonstrated inhibitory effects against Escherichia coli and Staphylococcus aureus and was characterized by UV-vis spectroscopy, which showed a lambda maximum of
The aim of the research is to identify the effectiveness of an educational pillars strategy based on Vygotsky's theory on the mathematical achievement and information processing of first-grade intermediate students. In pursuit of the research objectives, the experimental method was used, with a quasi-experimental design for two equivalent groups: a control group taught traditionally and an experimental group taught according to the educational pillars strategy. The research sample consisted of (66) female students from the first intermediate grade, who were intentionally chosen after ensuring their equivalence, taking into account several factors, most notably chronological age and their level of mathematics, and they we
Within the framework of big data, energy issues are highly significant. Despite this significance, theoretical studies focusing primarily on the issue of energy within big data analytics in relation to computational intelligent algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligent algorithms, since this is critical in exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligent algorithms in big data analytics. This work highlights that big data analytics using computational intelligent algorithms generates a very high amo
Abstract
In the present study, composites were prepared by hand lay-up molding. The composite constituents were epoxy resin as the matrix, a 6% volume fraction of glass fibers (G.F) as reinforcement, and 3% and 6% volume fractions of prepared natural materials (rice husk ash, carrot powder, and sawdust) as filler. The erosion wear behavior was studied, along with coating by natural waste (rice husk ash) with epoxy resin after erosion. The results showed that non-reinforced epoxy has lower erosion resistance than the natural-material composites, and that the specimen (epoxy + 6% glass fiber + 6% RHA) has higher erosion resistance than the composites reinforced with carrot powder and sawdust at 30 cm, angle 60
There has been growing interest in the use of chaotic techniques for enabling secure communication in recent years. This need has been motivated by the emergence of a number of wireless services which require the channel to provide low bit error rates (BER) along with information security, as adversaries aim to steal or distort the information being conveyed. Optical wireless systems (basically free-space optic systems, FSO) are no exception to this trend. Thus, there is an urgent necessity to design techniques that can secure privileged information against unauthorized eavesdroppers while simultaneously protecting information against channel-induced perturbations and errors. Conventional cryptographic techniques are not designed
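A common way chaotic techniques are used for secrecy can be sketched with a toy example. This is not the system of the abstract: the logistic map, the parameter r = 3.99, and the XOR masking are stand-ins chosen purely to illustrate the idea of a chaotic keystream, and this toy construction is not cryptographically secure.

```python
# Illustrative sketch (not the system proposed in the abstract): a chaotic
# logistic map used as a keystream generator for XOR masking of a byte
# stream. The initial condition x0 plays the role of a shared secret key.

def logistic_keystream(x0, n, r=3.99):
    """Generate n keystream bytes from the chaotic logistic map."""
    x = x0
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)           # chaotic iteration, state stays in (0, 1)
        out.append(int(x * 256) % 256)  # quantize the state to one byte
    return bytes(out)

def xor_mask(data, x0):
    """Encrypt or decrypt: XOR with the same keystream is its own inverse."""
    ks = logistic_keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

key = 0.3141592653589793
msg = b"secret message"
ct = xor_mask(msg, key)   # encrypt
pt = xor_mask(ct, key)    # decrypt with the same initial condition
print(pt)  # b'secret message'
```

The sensitivity of the map to its initial condition is what makes the keystream hard to reproduce without the exact key, which is the basic appeal of chaos-based schemes.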
In this article, we develop a new loss function as a generalization of the linear exponential (LINEX) loss function, obtained by weighting the LINEX function. We derive the scale parameter, reliability, and hazard functions based on upper record values of the Lomax distribution (LD). To study the small-sample performance of the proposed loss function using Monte Carlo simulation, we compare the maximum likelihood estimator, the Bayesian estimator under the LINEX loss function, and the Bayesian estimator under the squared error (SE) loss function. The results show that the modified method is the best for estimating the scale parameter, reliability, and hazard functions.
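For reference, the standard (unweighted) LINEX loss being generalized here is:

```latex
L(\Delta) = e^{a\Delta} - a\Delta - 1,
  \qquad \Delta = \hat{\theta} - \theta,\; a \neq 0,
```

where a > 0 penalizes overestimation more heavily than underestimation and a < 0 does the reverse; the article's weighted variant is not reproduced in this abstract.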