In this paper, we derive and prove stability bounds on the momentum coefficient µ and the learning rate η of the back-propagation update rule in Artificial Neural Networks. The theoretical upper bound on the learning rate η is derived and a practical approximation of it is obtained.
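As a point of reference, a minimal numerical sketch of the momentum update is given below; it checks the classical heavy-ball stability condition on a toy quadratic loss. The diagonal Hessian, the momentum value, and the 0.95 safety factor are illustrative assumptions, not the bound derived in this paper.

```python
# A minimal sketch (not the paper's derivation): for a quadratic loss
# 0.5 * w' A w, the classical heavy-ball stability condition is
#   0 <= mu < 1   and   0 < eta < 2 * (1 + mu) / lambda_max(A).
# The toy Hessian A, the step count, and the 0.95 factor are assumptions.
import numpy as np

A = np.diag([0.5, 1.0, 2.0, 5.0, 10.0])   # toy Hessian, lambda_max = 10
lam_max = 10.0

mu = 0.9
eta = 0.95 * 2 * (1 + mu) / lam_max       # just inside the assumed bound

w = np.ones(5)
dw = np.zeros(5)
for _ in range(200):
    grad = A @ w                          # gradient of the quadratic loss
    dw = mu * dw - eta * grad             # momentum (heavy-ball) update
    w = w + dw

print(np.linalg.norm(w))                  # stays bounded and decays here;
                                          # above the bound it diverges
```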
The BP algorithm is the most widely used supervised training algorithm for multi-layered feedforward neural networks. However, BP takes a long time to converge and is quite sensitive to the initial weights of a network. In this paper, a modified cuckoo search algorithm is used to obtain the optimal set of initial weights for the BP algorithm, and the BP learning rate is adjusted to improve error convergence. The performance of the proposed hybrid algorithm is compared with standard BP on simple data sets. The simulation results show that the proposed algorithm improves BP training, with quicker convergence of the solution depending on the slope of the error graph.
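A hedged sketch of the hybrid idea follows: a heavily simplified cuckoo-style random search selects low-error initial weights, which plain gradient-descent BP then refines. The toy dataset, network size, perturbation scale, and nest count are illustrative assumptions and do not reproduce the paper's modified cuckoo search or its adaptive learning-rate rule.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy XOR-like task

def forward(w, X):
    W1, b1, W2, b2 = w
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return out, h

def loss(w):
    out, _ = forward(w, X)
    return float(np.mean((out - y) ** 2))

def random_weights():
    return [rng.standard_normal((2, 8)), np.zeros(8),
            rng.standard_normal((8, 1)), np.zeros(1)]

# Cuckoo-style global search for initial weights (heavily simplified):
# perturb a random nest and replace a random nest if the candidate is better.
nests = [random_weights() for _ in range(15)]
for _ in range(50):
    i = int(rng.integers(len(nests)))
    cand = [p + 0.5 * rng.standard_normal(p.shape) for p in nests[i]]
    j = int(rng.integers(len(nests)))
    if loss(cand) < loss(nests[j]):
        nests[j] = cand
w = min(nests, key=loss)

# Standard BP (batch gradient descent) starting from the searched weights.
eta = 0.5
for _ in range(500):
    W1, b1, W2, b2 = w
    out, h = forward(w, X)
    d_out = (out - y) * out * (1 - out)          # error term at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)          # back-propagated to hidden
    w = [W1 - eta * X.T @ d_h / len(X),  b1 - eta * d_h.mean(axis=0),
         W2 - eta * h.T @ d_out / len(X), b2 - eta * d_out.mean(axis=0)]

print("final training MSE:", loss(w))
```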
In this research, the results of applying Artificial Neural Networks with a modified activation function to perform online and offline identification of a four-Degrees-of-Freedom (4-DOF) Selective Compliance Assembly Robot Arm (SCARA) manipulator robot are described. The proposed identification strategy consists of a feed-forward neural network with a modified activation function that operates in parallel with the SCARA robot model. Feed-Forward Neural Networks (FFNN) trained both online and offline have been used, without requiring any previous knowledge about the system to be identified. The activation function used in the hidden layer of the FFNN is a modified version of the wavelet function.
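A hedged sketch of the identification network's structure is shown below, using the Ricker ("Mexican hat") wavelet as a stand-in hidden activation; the paper's specific modified wavelet function, layer sizes, and training scheme are not reproduced.

```python
# Sketch of a one-hidden-layer FFNN with a wavelet hidden activation.
# The Ricker wavelet and all shapes below are illustrative assumptions.
import numpy as np

def ricker(z):
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

def ffnn_forward(x, W1, b1, W2, b2):
    h = ricker(x @ W1 + b1)      # wavelet activation in the hidden layer
    return h @ W2 + b2           # linear output, e.g. predicted joint response

# Illustrative shapes: 4 inputs (e.g. SCARA joint commands) -> 16 hidden -> 4 outputs.
rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((4, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.standard_normal((16, 4)) * 0.5, np.zeros(4)
print(ffnn_forward(rng.standard_normal((1, 4)), W1, b1, W2, b2).shape)  # (1, 4)
```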
Back-Propagation (BP) is the best-known and most widely used learning algorithm for training multilayer neural networks. A vast variety of improvements to the BP algorithm have been proposed since the nineties. In this paper, the effects of changing the number of hidden neurons and the activation function are investigated. According to the simulation results, the convergence speed is improved and becomes much faster with these two modifications to the BP algorithm.
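The kind of experiment described can be sketched as a small grid over hidden-layer sizes and activation functions; the dataset, the grid values, and the use of scikit-learn's MLPClassifier are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: vary hidden-layer size and activation, compare iterations
# to convergence and training accuracy on a toy dataset.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
for n_hidden in (2, 8, 32):
    for act in ("logistic", "tanh", "relu"):
        clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), activation=act,
                            max_iter=2000, random_state=0).fit(X, y)
        print(f"hidden={n_hidden:3d} activation={act:8s} "
              f"iters={clf.n_iter_:4d} train acc={clf.score(X, y):.3f}")
```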
The proposed neural network design in this article is based on a new, accurate approach to training by unconstrained optimization; among such methods, quasi-Newton updates are perhaps the most popular general-purpose algorithms. A limited-memory BFGS algorithm is presented for solving large-scale symmetric nonlinear equations, where a line search technique without derivative information is used. On each iteration, the updated approximations of the Hessian matrix satisfy the quasi-Newton equation, which has traditionally served as the basis for quasi-Newton methods. On the basis of the quadratic model used in this article, we add a new quasi-Newton update. One innovative feature of this form is its ability to estimate the energy function.
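For context, a minimal sketch of the limited-memory BFGS search direction via the standard two-loop recursion is shown below; the derivative-free line search and the modified quasi-Newton update proposed in the article are not reproduced, and the function and variable names are assumptions.

```python
# Standard L-BFGS two-loop recursion: builds the search direction -H*grad
# from the m most recent (s_k, y_k) pairs without forming H explicitly.
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return -H*grad, with H the inverse-Hessian approximation implied by
    the stored pairs s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):     # newest first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((rho, a))
    if s_list:                                   # initial scaling H0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), (rho, a) in zip(zip(s_list, y_list), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q
```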
Artificial Neural Networks (ANN) are among the important statistical methods widely used in a range of applications across various fields; they simulate the work of the human brain in terms of receiving a signal, processing data in a cell, and sending it on to the next cell. An ANN is a system consisting of a number of modules (layers) linked together (input, hidden, output). A comparison was made between three types of neural networks: Feed Forward Neural Network (FFNN), Back-propagation network (BPL), and Recurrent Neural Network (RNN). The study found that the lowest false-prediction rate was obtained with the recurrent network architecture, using data on graduate students at the College of Administration and Economics.
Software-defined networking (SDN) is an innovative network paradigm, offering substantial control of network operation through a network's architecture. SDN is an ideal platform for implementing projects involving distributed applications, security solutions, and decentralized network administration in a multitenant data center environment due to its programmability. As its usage rapidly expands, network security threats are becoming more frequent, making SDN security a significant concern. Machine-learning (ML) techniques for intrusion detection of DDoS attacks in SDN networks utilize standard datasets and fail to cover all classification aspects, resulting in under-coverage of attack diversity. This paper proposes a hybrid
Energy efficiency is a significant aspect in designing robust routing protocols for wireless sensor networks (WSNs). A reliable routing protocol has to be energy efficient and adaptive to the network size. To achieve high energy conservation and data aggregation, there are two major techniques: clusters and chains. In the clustering technique, sensor networks are often divided into non-overlapping subsets called clusters. In the chain technique, each sensor node is connected to its two closest neighbors, starting with the node farthest from the base station and ending with the node closest to the base station. Each technique has its own advantages and disadvantages, which motivates some researchers to come up with a hybrid routing algorithm.
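A hedged sketch of the chain technique as described: starting from the node farthest from the base station, the nearest remaining neighbor is linked greedily until every node is on the chain. The node coordinates and base-station position below are illustrative assumptions.

```python
# Greedy nearest-neighbor chain construction (PEGASIS-style) over toy
# sensor positions; not the hybrid algorithm discussed in the paper.
import numpy as np

rng = np.random.default_rng(3)
nodes = rng.uniform(0, 100, size=(10, 2))     # sensor positions (assumed)
base = np.array([50.0, 150.0])                # base-station position (assumed)

start = int(np.argmax(np.linalg.norm(nodes - base, axis=1)))   # farthest node
chain, remaining = [start], set(range(len(nodes))) - {start}
while remaining:
    last = nodes[chain[-1]]
    nxt = min(remaining, key=lambda i: np.linalg.norm(nodes[i] - last))
    chain.append(nxt)
    remaining.remove(nxt)
print("chain order:", chain)
```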
The aim of this paper is to approximate multidimensional functions f ∈ C(R^s) by developing a new type of feedforward neural network (FFNNs), which we call Greedy ridge function neural networks (GRGFNNS). We also introduce a modification to the greedy algorithm that is used to train the greedy ridge function neural networks. An error bound is introduced in Sobolev space. Finally, a comparison is made between the three algorithms (the modified greedy algorithm, the backpropagation algorithm, and the result in [1]).
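A hedged, heavily simplified sketch of greedy ridge-function approximation is given below: at each step a ridge term g(a·x) is fitted to the current residual, using random candidate directions and a low-degree polynomial for g. This is a projection-pursuit-style stand-in, not the modified greedy algorithm or the GRGFNN architecture of the paper.

```python
# Greedy ridge approximation sketch: repeatedly add the best ridge term
# g(a.x) (polynomial g, random candidate directions a) fitted to the residual.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(400, 3))
f = np.sin(X @ np.array([1.0, -2.0, 0.5])) + 0.3 * (X[:, 0] * X[:, 2])  # toy target
residual, terms = f.copy(), []

for _ in range(5):                              # add 5 ridge terms greedily
    best = None
    for _ in range(50):                         # random candidate directions
        a = rng.standard_normal(3)
        a /= np.linalg.norm(a)
        t = X @ a
        coeffs = np.polyfit(t, residual, deg=4) # 1-D fit of g to the residual
        err = np.mean((residual - np.polyval(coeffs, t)) ** 2)
        if best is None or err < best[0]:
            best = (err, a, coeffs)
    _, a, coeffs = best
    terms.append((a, coeffs))
    residual -= np.polyval(coeffs, X @ a)
    print("residual MSE:", np.mean(residual ** 2))
```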
This paper adapts a neural network for estimating the direction of arrival (DOA). It uses an unsupervised adaptive neural network with the Generalized Hebbian Algorithm (GHA) to extract the principal components, which are in turn used by the Capon method to estimate the DOA; the PCA neural network retains only the signal subspace for use in the Capon estimator (i.e., the noise subspace is ignored and only the signal subspace is taken).
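A minimal sketch of the GHA (Sanger's rule) component-extraction step is shown below on synthetic snapshots; the array signal model and the Capon spectrum computation are not reproduced, and the dimensions, learning rate, and pass count are assumptions.

```python
# Online GHA (Sanger's rule): extract the leading principal directions,
# which span the signal subspace used downstream by the Capon method.
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((2000, 8)) @ rng.standard_normal((8, 8))  # toy snapshots
X -= X.mean(axis=0)

n_components, eta = 3, 1e-3
W = rng.standard_normal((n_components, X.shape[1])) * 0.01        # PC weights

for _ in range(5):                 # a few passes over the snapshots
    for x in X:
        y = W @ x
        # Sanger's rule: dW = eta * (y x^T - lower_triangular(y y^T) W)
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Rows of W approximate the top principal directions (signal subspace);
# W W^T approaches the identity as training converges.
print(np.round(W @ W.T, 2))
```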