ON THE GREEDY RADIAL BASIS FUNCTION NEURAL NETWORKS FOR APPROXIMATION MULTIDIMENSIONAL FUNCTIONS

The aim of this paper is to approximate multidimensional functions using a type of feedforward neural network (FFNN) called the greedy radial basis function neural network (GRBFNN). We also introduce a modification to the greedy algorithm used to train these networks. An error bound is derived in Sobolev space. Finally, a comparison is made between three algorithms: the modified greedy algorithm, the backpropagation algorithm, and the method published in [16].
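The abstract does not reproduce the algorithm's details; a minimal greedy-RBF sketch, under common assumptions (Gaussian basis functions, candidate centers drawn from the training points, one unit added per iteration, output weights refit by least squares), might look like:

```python
import numpy as np

def gaussian_rbf(x, center, width=1.0):
    """Gaussian radial basis function phi(||x - c||)."""
    return np.exp(-np.sum((x - center) ** 2, axis=-1) / (2.0 * width ** 2))

def greedy_rbf_fit(X, y, n_basis=10, width=1.0):
    """Greedily add the candidate center most correlated with the residual.
    Candidate centers are the training points themselves (an assumption;
    the paper's modified greedy algorithm may choose them differently)."""
    centers, residual = [], y.astype(float).copy()
    for _ in range(n_basis):
        # score every candidate center by its correlation with the residual
        scores = [abs(gaussian_rbf(X, c, width) @ residual) for c in X]
        centers.append(X[int(np.argmax(scores))])
        # refit all output weights by least squares on the chosen basis
        Phi = np.stack([gaussian_rbf(X, c, width) for c in centers], axis=1)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ w
    return np.array(centers), w

# usage: approximate a 2-D function from scattered samples
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
centers, w = greedy_rbf_fit(X, y, n_basis=25, width=0.5)
```

Each iteration only has to pick one new unit, which is what makes greedy training cheap compared with optimizing all centers jointly.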

Publication Date
Mon Jun 30 2008
Journal Name
Iraqi Journal Of Science
On the Greedy Ridge Function Neural Networks for Approximation Multidimensional Functions

The aim of this paper is to approximate multidimensional functions f ∈ C(R^s) by developing a new type of feedforward neural network (FFNN) which we call the greedy ridge function neural network (GRGFNN). We also introduce a modification to the greedy algorithm used to train these networks. An error bound is derived in Sobolev space. Finally, a comparison is made between three algorithms: the modified greedy algorithm, the backpropagation algorithm, and the method in [1].
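A ridge function is constant along hyperplanes: it has the form g(a·x + b) for a direction vector a. A hedged sketch of one greedy selection step, assuming sigmoidal ridge units and randomly sampled candidate directions (the paper's own selection rule is not reproduced here):

```python
import numpy as np

def ridge_unit(X, a, b):
    """A sigmoidal ridge function sigma(a . x + b)."""
    return np.tanh(X @ a + b)

def greedy_ridge_step(X, residual, n_candidates=200, rng=None):
    """Pick, from random candidate directions, the ridge unit most
    correlated with the current residual (a simplified selection rule)."""
    rng = rng or np.random.default_rng(0)
    best, best_score = None, -1.0
    for _ in range(n_candidates):
        a = rng.normal(size=X.shape[1])
        a /= np.linalg.norm(a)          # unit-norm direction
        b = rng.uniform(-1, 1)
        u = ridge_unit(X, a, b)
        score = abs(u @ residual) / (np.linalg.norm(u) + 1e-12)
        if score > best_score:
            best, best_score = (a, b), score
    return best

# usage: select one ridge direction against a synthetic residual
X = np.random.default_rng(1).uniform(-1, 1, size=(100, 3))
residual = np.sin(X @ np.array([1.0, -1.0, 0.5]))
a, b = greedy_ridge_step(X, residual)
```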

Publication Date
Wed May 24 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
On Comparison between Radial Basis Function and Wavelet Basis Functions Neural Networks

In this paper we study and design two feedforward neural networks. The first approach uses a radial basis function network and the second uses a wavelet basis function network to approximate the mapping from the input space to the output space. The trained networks are then used in a conjugate gradient algorithm to estimate the output. These neural networks are then applied to solve differential equations. Results of applying these algorithms to several examples are presented.
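The two basis families can be compared directly on a toy 1-D target. A hedged sketch, assuming a Gaussian radial basis and a Mexican-hat (Ricker) wavelet basis with fixed, evenly spaced centers and least-squares output weights (the paper's training setup may differ):

```python
import numpy as np

def gaussian(r):
    """Radial basis: Gaussian."""
    return np.exp(-r ** 2)

def mexican_hat(r):
    """Wavelet basis: Mexican-hat (Ricker) wavelet."""
    return (1.0 - r ** 2) * np.exp(-r ** 2 / 2.0)

def fit_basis_net(x, y, centers, width, basis):
    """Least-squares fit of output weights for a fixed-basis single layer."""
    Phi = basis(np.abs(x[:, None] - centers[None, :]) / width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w, Phi @ w

x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) * np.exp(-0.1 * x ** 2)
centers = np.linspace(-3, 3, 15)
_, y_rbf = fit_basis_net(x, y, centers, 0.5, gaussian)
_, y_wav = fit_basis_net(x, y, centers, 0.5, mexican_hat)
```

Swapping only the `basis` argument isolates the effect of the basis family, which is the comparison the paper is after.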

Publication Date
Fri Dec 23 2011
Journal Name
International Journal Of The Physical Sciences
Fast prediction of power transfer stability index based on radial basis function neural network

Publication Date
Wed Sep 12 2018
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
A Comparison between Multi-Layer Perceptron and Radial Basis Function Networks in Detecting Humans Based on Object Shape

Human detection is a central problem in video-based monitoring. In this paper, artificial neural networks, namely the multilayer perceptron (MLP) and the radial basis function (RBF) network, are used to detect humans among different objects in a sequence of frames (images) using a classification approach. The classification is based on the shape of the object rather than on the contents of the frame. Initially, background subtraction is used to extract the objects of interest from the frame; then statistical and geometric information is obtained from the vertical and horizontal projections of the detected objects to represent the object's shape. Following this step, two ty…
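The pipeline described above (background subtraction, then projection-based shape features) can be sketched as follows; the thresholds and the exact feature set are assumptions, not the paper's values:

```python
import numpy as np

def shape_features(frame, background, threshold=30):
    """Extract simple projection-based shape features from one frame:
    background subtraction, then statistics of the vertical and
    horizontal projections of the foreground mask."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
    v_proj = mask.sum(axis=0)            # foreground pixels per column
    h_proj = mask.sum(axis=1)            # foreground pixels per row
    area = mask.sum()
    height = int((h_proj > 0).sum())     # rows touched by the object
    width = int((v_proj > 0).sum())      # columns touched by the object
    aspect = height / width if width else 0.0
    return np.array([area, height, width, aspect])

# usage: a tall, narrow blob stands in for a person-shaped object
bg = np.zeros((120, 160), dtype=np.uint8)
fr = bg.copy()
fr[20:100, 70:90] = 200
feats = shape_features(fr, bg)
```

A feature vector like this (rather than raw pixels) is what would be fed to the MLP or RBF classifier.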
Publication Date
Sun Apr 23 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Influence Activation Function in Approximate Periodic Functions Using Neural Networks

The aim of this paper is to design fast neural networks to approximate periodic functions; that is, to design fully connected networks, with links between all nodes in adjacent layers, which can speed up approximation, reduce approximation failures, and increase the chance of obtaining the globally optimal approximation. We train the suggested network with the Levenberg-Marquardt training algorithm and then speed it up by choosing the activation function (transfer function) with the fastest convergence rate for reasonably sized networks. In all the algorithms, the gradient of the performance function (energy function) is used to determine how to…
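The Levenberg-Marquardt update mentioned above blends gradient descent with the Gauss-Newton step: dw = (JᵀJ + μI)⁻¹ Jᵀe, where J is the Jacobian of the model outputs with respect to the weights and e is the error vector. A minimal sketch on a linear toy model (not the paper's network):

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt weight update: (J^T J + mu I)^(-1) J^T e.
    Large mu -> small, gradient-descent-like step; small mu -> Gauss-Newton."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)

# usage: recover the weights of a linear model y = X w*
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(20):
    e = y - X @ w                     # error vector
    w = w + lm_step(X, e, mu=1e-3)    # J = X for this linear model
```

In full LM training, μ is adapted per iteration (increased when a step fails to reduce the error, decreased when it succeeds); that schedule is omitted here.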
Publication Date
Sun Apr 26 2020
Journal Name
Iraqi Journal Of Science
Monotone Approximation by Quadratic Neural Network of Functions in Lp Spaces for p<1

Some researchers are interested in using the flexible and applicable properties of quadratic functions as activation functions for FNNs. We study the essential approximation rate of any Lebesgue-integrable monotone function by a neural network of quadratic activation functions. The simultaneous degree of essential approximation is also studied. Both estimates are proved to be within the second order of modulus of smoothness.
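As a concrete toy illustration (not the paper's construction), a single layer of quadratic units σ(t) = t² with fixed random inner weights can be fitted to a monotone target by least squares over the output weights:

```python
import numpy as np

def quadratic_layer(x, a, b):
    """Hidden layer of quadratic activations: column i is (a_i * x + b_i)^2."""
    return (x[:, None] * a[None, :] + b[None, :]) ** 2

x = np.linspace(0.0, 1.0, 100)
y = np.sqrt(x)                    # a monotone target on [0, 1]
rng = np.random.default_rng(0)
a = rng.uniform(-2, 2, size=20)   # fixed random inner weights (assumption)
b = rng.uniform(-2, 2, size=20)
H = quadratic_layer(x, a, b)
c, *_ = np.linalg.lstsq(H, y, rcond=None)
approx = H @ c
```

Note that each unit (a·x + b)² expands to a degree-2 polynomial in x, so this one-layer toy reduces to a quadratic polynomial fit; the paper's networks and its second-order modulus-of-smoothness estimates concern a richer construction.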

Publication Date
Tue Sep 19 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Density and Approximation by Using Feed Forward Artificial Neural Networks

In this paper, we will consider the density questions associated with the single hidden layer feedforward model. We proved that an FFNN with one hidden layer can uniformly approximate any continuous function in C(K) (where K is a compact set in R^n) to any required accuracy.

However, if the set of basis functions is dense, then the ANN needs at most one hidden layer. But if the set of basis functions is not dense, then we need more hidden layers. Also, we have shown that there exist localized functions and that there is no t…
Publication Date
Sun Apr 30 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Fast Training Algorithms for Feed Forward Neural Networks

The aim of this paper is to discuss several high-performance training algorithms that fall into two main categories. The first category uses heuristic techniques developed from an analysis of the performance of the standard gradient descent algorithm. The second category uses standard numerical optimization techniques such as quasi-Newton methods. A further aim is to address the drawbacks of these training algorithms and to propose an efficient training algorithm for FFNNs.
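A toy illustration of why the second category converges faster: on an ill-conditioned quadratic error surface, plain gradient descent crawls along the shallow direction, while a curvature-aware (Newton-type) step lands at the minimum immediately. Quasi-Newton methods approximate this curvature information from successive gradients rather than computing it exactly; the sketch below uses the exact Hessian for clarity:

```python
import numpy as np

# ill-conditioned quadratic "error surface" E(w) = 0.5 w^T A w, minimum at 0
A = np.diag([1.0, 100.0])
grad = lambda w: A @ w

# first-order: 100 gradient-descent steps, step size limited by the
# largest curvature (100), so progress along the flat direction is slow
w_gd = np.array([1.0, 1.0])
lr = 1.0 / 100.0
for _ in range(100):
    w_gd = w_gd - lr * grad(w_gd)

# second-order: one Newton step solves the quadratic exactly
w_newton = np.array([1.0, 1.0])
w_newton = w_newton - np.linalg.solve(A, grad(w_newton))
```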

Publication Date
Sat Mar 11 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
On the Degree of Best Approximation of Unbounded Functions by Algebraic Polynomial

In this paper we introduce a new class of the degree of best algebraic approximation for unbounded functions in the weighted space L_{p,α}(X), 1 ≤ p < ∞. We prove direct and converse theorems for best algebraic approximation in terms of the modulus of smoothness in the weighted space.
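For reference, the standard objects behind these theorems, stated in common notation (the paper's exact weighted definitions may differ):

```latex
% degree of best algebraic approximation by polynomials of degree <= n
E_n(f)_{p,\alpha} = \inf_{P \in \Pi_n} \, \| f - P \|_{L_{p,\alpha}(X)}

% k-th modulus of smoothness, built from k-th forward differences
\omega_k(f, \delta)_{p} = \sup_{0 < h \le \delta} \big\| \Delta_h^{k} f \big\|_{p},
\qquad
\Delta_h^{k} f(x) = \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} f(x + i h)
```

A direct theorem bounds E_n(f) above by ω_k(f, 1/n); a converse theorem recovers smoothness of f from the decay rate of E_n(f).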

Publication Date
Wed Sep 20 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Modified Radial Based Neural Network for Clustering and Routing Optimal Path in Wireless Network

Several methods have been developed for the routing problem in MANET wireless networks, since it is a very important problem in such networks. We propose a method based on a modified radial basis function network (RBFN) and the K-means++ algorithm. The RBFN is modified for the routing operation in order to find the optimal path between source and destination in MANET clusters. The modified radial basis neural network is a simple, adaptable, and efficient method to increase the lifetime of nodes; the packet delivery ratio and the throughput of the network increase, and connections become more useful because the optimal path has the best parameters among the candidate paths, including the best bitrate and best link lifetime with minimum delays. The re…
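The K-means++ step mentioned above spreads initial cluster centers across the data; applied to node positions, it spreads candidate cluster heads across the network area. A minimal seeding sketch (the paper's integration with the RBFN is not reproduced):

```python
import numpy as np

def kmeans_pp_centers(points, k, rng=None):
    """K-means++ seeding: each new center is sampled with probability
    proportional to the squared distance to the nearest chosen center."""
    rng = rng or np.random.default_rng(0)
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers],
                    axis=0)                 # squared distance to nearest center
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centers)

# usage: pick 4 cluster heads from 100 random node positions (toy MANET layout)
nodes = np.random.default_rng(42).uniform(0, 100, size=(100, 2))
heads = kmeans_pp_centers(nodes, k=4)
```

The distance-weighted sampling is what distinguishes K-means++ from plain random initialization and makes the subsequent clustering far less sensitive to an unlucky seed.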