On the Greedy Ridge Function Neural Networks for Approximation Multidimensional Functions

The aim of this paper is to approximate multidimensional functions f ∈ C(R^s) by developing a new type of feedforward neural network (FFNN), which we call the greedy ridge function neural network (GRGFNN). We also introduce a modification of the greedy algorithm used to train these networks. An error bound is derived in Sobolev space. Finally, a comparison is made between three algorithms: the modified greedy algorithm, the backpropagation algorithm, and the result in [1].
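The greedy idea behind such networks can be sketched in a few lines: units are added one at a time, each chosen to best fit the current residual. The target function, tanh activation, and random candidate search below are illustrative assumptions, not the paper's modified algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target in C(R^s) with s = 2 (not from the paper).
def f(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

X = rng.uniform(-1, 1, size=(400, 2))
y = f(X)

def ridge_unit(X, a, b):
    # A ridge function: an activation applied to the one-dimensional
    # projection a . x + b (tanh is an illustrative choice).
    return np.tanh(X @ a + b)

def greedy_fit(X, y, n_units=20, n_candidates=200):
    """Toy greedy training: at each step add the single ridge unit
    (picked from random candidates) that best fits the residual."""
    residual = y.copy()
    units, coefs = [], []
    for _ in range(n_units):
        best = None
        for _ in range(n_candidates):
            a = rng.normal(size=X.shape[1])
            b = rng.normal()
            h = ridge_unit(X, a, b)
            c = (h @ residual) / (h @ h)      # least-squares coefficient
            err = np.sum((residual - c * h) ** 2)
            if best is None or err < best[0]:
                best = (err, a, b, c)
        _, a, b, c = best
        units.append((a, b))
        coefs.append(c)
        residual = residual - c * ridge_unit(X, a, b)
    return units, coefs, residual

units, coefs, residual = greedy_fit(X, y)
print("final RMS residual:", np.sqrt(np.mean(residual ** 2)))
```

Because each coefficient is chosen by least squares against the residual, the residual norm never increases as units are added.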

Publication Date
Fri Jun 01 2007
Journal Name
Journal Of Al-nahrain University Science
ON THE GREEDY RADIAL BASIS FUNCTION NEURAL NETWORKS FOR APPROXIMATION MULTIDIMENSIONAL FUNCTIONS

The aim of this paper is to approximate multidimensional functions using the type of feedforward neural network (FFNN) called the greedy radial basis function neural network (GRBFNN). We also introduce a modification of the greedy algorithm used to train these networks. An error bound is derived in Sobolev space. Finally, a comparison is made between three algorithms: the modified greedy algorithm, the backpropagation algorithm, and the result published in [16].
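In the radial variant, each hidden unit responds to the distance from a center rather than to a projection. A minimal sketch, with an assumed Gaussian unit, fixed random centers, and a plain least-squares output layer instead of the paper's greedy unit-by-unit construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target on R^2 (not from the paper).
def f(X):
    return np.exp(-np.sum(X ** 2, axis=1))

X = rng.uniform(-1, 1, size=(300, 2))
y = f(X)

# Gaussian radial units phi(x) = exp(-||x - c||^2 / w^2) on random
# centers; the width w = 0.5 is an assumed hyperparameter.
centers = rng.uniform(-1, 1, size=(25, 2))
w = 0.5
Phi = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / w ** 2)

# Output weights by linear least squares over the fixed hidden layer.
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rms = np.sqrt(np.mean((Phi @ coef - y) ** 2))
print("RMS error:", rms)
```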

Publication Date
Sun Apr 23 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Influence Activation Function in Approximate Periodic Functions Using Neural Networks

The aim of this paper is to design fast neural networks to approximate periodic functions, that is, to design fully connected networks, with links between all nodes in adjacent layers, that speed up approximation, reduce approximation failures, and increase the chance of obtaining the globally optimal approximation. We train the suggested network with the Levenberg-Marquardt algorithm and then speed it up by choosing the activation function (transfer function) with the fastest convergence rate for networks of reasonable size. In all algorithms, the gradient of the performance function (energy function) is used to determine how to …
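The influence of the activation function can be seen even in a tiny experiment: fit the periodic target sin(x) with a one-hidden-layer network under two different activations. The paper trains with Levenberg-Marquardt; plain gradient descent is substituted here only to keep the sketch short, and the network size and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(-np.pi, np.pi, 64)[:, None]
y = np.sin(x).ravel()

def train(act, dact, epochs=3000, lr=0.05, hidden=8):
    """Gradient descent on a 1-hidden-layer net; returns (initial, final) MSE."""
    W1 = rng.normal(scale=0.5, size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden)
    mse = lambda: np.mean((act(x @ W1 + b1) @ W2 - y) ** 2)
    start = mse()
    for _ in range(epochs):
        pre = x @ W1 + b1
        h = act(pre)
        err = h @ W2 - y                      # d(0.5*sum(err^2))/d(pred)
        W2_new = W2 - lr * (h.T @ err) / len(x)
        gh = np.outer(err, W2) * dact(pre)    # backprop through activation
        W1 -= lr * (x.T @ gh) / len(x)
        b1 -= lr * gh.sum(0) / len(x)
        W2 = W2_new
    return start, mse()

tanh0, tanh1 = train(np.tanh, lambda z: 1 - np.tanh(z) ** 2)
sig = lambda z: 1 / (1 + np.exp(-z))
sig0, sig1 = train(sig, lambda z: sig(z) * (1 - sig(z)))
print("tanh MSE:", tanh0, "->", tanh1)
print("logistic MSE:", sig0, "->", sig1)
```

Comparing the final losses for the same budget of epochs gives a rough, empirical view of which activation converges faster on this target.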

Publication Date
Wed May 24 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
On Comparison between Radial Basis Function and Wavelet Basis Functions Neural Networks

In this paper we study and design two feedforward neural networks. The first approach uses a radial basis function network and the second a wavelet basis function network to approximate the mapping from the input space to the output space. The trained networks are then used in a conjugate gradient algorithm to estimate the output. These neural networks are then applied to solve differential equations. Results of applying these algorithms to several examples are presented.
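The application to differential equations can be illustrated with a toy radial-basis collocation scheme. The equation y' = -y, y(0) = 1 below is an assumed example, and the linear least-squares solve stands in for the paper's conjugate gradient training.

```python
import numpy as np

# Approximant: y(x) = sum_i c_i * exp(-(x - t_i)^2 / w^2); the linear
# system enforces the ODE y' + y = 0 at collocation points plus the
# initial condition y(0) = 1 on [0, 1].
n = 15
t = np.linspace(0, 1, n)          # Gaussian RBF centers
xs = np.linspace(0, 1, n)         # collocation points
w = 0.2                           # assumed basis width

phi = lambda x, t: np.exp(-((x[:, None] - t[None, :]) ** 2) / w ** 2)
dphi = lambda x, t: phi(x, t) * (-2 * (x[:, None] - t[None, :]) / w ** 2)

# Rows: y'(x_j) + y(x_j) = 0 for each x_j, then y(0) = 1.
A = np.vstack([dphi(xs, t) + phi(xs, t), phi(np.array([0.0]), t)])
b = np.concatenate([np.zeros(n), [1.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

y_hat = phi(xs, t) @ c
err = np.max(np.abs(y_hat - np.exp(-xs)))
print("max error vs exp(-x):", err)
```

The exact solution exp(-x) is available here, so the collocation answer can be checked directly against it.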

Publication Date
Sun Apr 26 2020
Journal Name
Iraqi Journal Of Science
Monotone Approximation by Quadratic Neural Network of Functions in Lp Spaces for p<1

Some researchers are interested in using the flexible and applicable properties of quadratic functions as activation functions for FFNNs. We study the essential approximation rate of any Lebesgue-integrable monotone function by a neural network with quadratic activation functions. The simultaneous degree of essential approximation is also studied. Both estimates are proved to be within the second-order modulus of smoothness.
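The basic monotone-approximation idea can be sketched with the quadratic activation sigma(t) = max(t, 0)^2: with nonnegative outer weights every unit is nondecreasing, so the whole sum is monotone (and convex, so a convex increasing target is chosen). The target, knots, and projected-gradient solver are illustrative assumptions, not the paper's Lp construction.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.exp(x) - 1.0                    # convex, increasing target

knots = np.linspace(0.0, 1.0, 15)
# Columns: intercept (unconstrained), then slope and quadratic units,
# whose coefficients are kept nonnegative to preserve monotonicity.
Phi = np.column_stack([np.ones_like(x), x,
                       np.maximum(x[:, None] - knots[None, :], 0.0) ** 2])

c = np.zeros(Phi.shape[1])
lr = 1.0 / np.linalg.norm(Phi, 2) ** 2
for _ in range(20000):
    c -= lr * (Phi.T @ (Phi @ c - y))   # gradient step on ||Phi c - y||^2
    c[1:] = np.maximum(c[1:], 0.0)      # projection: keep weights >= 0

y_hat = Phi @ c
print("max error:", np.max(np.abs(y_hat - y)))
print("nondecreasing:", bool(np.all(np.diff(y_hat) >= -1e-9)))
```

The projection step is what guarantees the approximant stays monotone throughout training.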

Publication Date
Tue Sep 19 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Density and Approximation by Using Feed Forward Artificial Neural Networks

In this paper, we consider the density questions associated with the single hidden layer feedforward model. We prove that an FFNN with one hidden layer can uniformly approximate any continuous function in C(K) (where K is a compact set in R^n) to any required accuracy.

However, if the set of basis functions is dense, then the ANN can have at most one hidden layer; if the set of basis functions is not dense, then more hidden layers are needed. We have also shown that there exist localized functions and that there is no …

Publication Date
Sun Apr 01 2018
Journal Name
Journal Of Economics And Administrative Sciences
Estimate Kernel Ridge Regression Function in Multiple Regression

In general, researchers, and statisticians in particular, have usually turned to non-parametric regression models when parametric methods fail to analyze the models precisely. In such cases the parametric methods are of no use, so non-parametric methods are preferred for their ease of programming. Non-parametric methods can also be used to suggest a parametric regression model for subsequent use. Moreover, an advantage of non-parametric methods is that they address the problem of multicollinearity between explanatory variables combined with nonlinear data. This problem can be solved by using kernel ridge regression, which depends on …
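Kernel ridge regression has a closed-form solution: with kernel matrix K and penalty lambda, the dual coefficients are alpha = (K + lambda*I)^{-1} y. A minimal sketch with a Gaussian kernel; the data, bandwidth, and penalty are illustrative choices, not the paper's application.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy one-dimensional regression data (illustrative).
X = rng.uniform(-2, 2, size=(100, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=100)

def gaussian_kernel(A, B, bw=0.5):
    # k(a, b) = exp(-||a - b||^2 / (2 * bw^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

lam = 0.1                                     # ridge penalty
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predictions at new points: k(x, X_train) @ alpha.
X_test = np.linspace(-2, 2, 50)[:, None]
y_hat = gaussian_kernel(X_test, X) @ alpha
err = np.max(np.abs(y_hat - np.sin(2 * X_test[:, 0])))
print("max error on grid:", err)
```

The ridge penalty lambda is what stabilizes the solve when the kernel matrix is near-singular, which is exactly the multicollinearity issue the abstract mentions.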

Publication Date
Sun Apr 30 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Fast Training Algorithms for Feed Forward Neural Networks

The aim of this paper is to discuss several high-performance training algorithms that fall into two main categories. The first category uses heuristic techniques developed from an analysis of the performance of the standard gradient descent algorithm. The second category of fast algorithms uses standard numerical optimization techniques such as quasi-Newton methods. A further aim is to address the drawbacks of these training algorithms and to propose an efficient training algorithm for FFNNs.

Publication Date
Sat Mar 11 2017
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
On the Degree of Best Approximation of Unbounded Functions by Algebraic Polynomial

In this paper we introduce a new class of the degree of best algebraic approximation by polynomials for unbounded functions in the weighted space L_{p,α}(X), 1 ≤ p < ∞. We prove direct and converse theorems for best algebraic approximation in terms of the modulus of smoothness in weighted space.

Publication Date
Wed Mar 10 2021
Journal Name
Baghdad Science Journal
On Training Of Feed Forward Neural Networks

In this paper we describe several different training algorithms for feedforward neural networks (FFNNs). In all of these algorithms we use the gradient of the performance function (energy function) to determine how to adjust the weights so that the performance function is minimized, with the backpropagation algorithm used to increase the speed of training. These algorithms involve a variety of computations, and thus different forms of search direction and storage requirements; however, none of them has global properties suited to all problems.
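All of these methods consume the gradient that backpropagation computes, and a quick way to see that backpropagation really produces the gradient of the energy function is a finite-difference check. The network, data, and checked entry below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random data and a one-hidden-layer tanh network (illustrative).
x = rng.normal(size=(10, 3))
y = rng.normal(size=10)
W1 = rng.normal(size=(3, 5))
b1 = np.zeros(5)
W2 = rng.normal(size=5)

def energy(W):
    # E(W) = 0.5 * sum (pred - y)^2, the performance (energy) function.
    pred = np.tanh(x @ W + b1) @ W2
    return 0.5 * np.sum((pred - y) ** 2)

# Backpropagation: chain rule through the two layers.
h = np.tanh(x @ W1 + b1)
err = h @ W2 - y
grad_W1 = x.T @ (np.outer(err, W2) * (1 - h ** 2))

# Central finite difference on one weight entry.
eps = 1e-6
Wp = W1.copy(); Wp[0, 0] += eps
Wm = W1.copy(); Wm[0, 0] -= eps
fd = (energy(Wp) - energy(Wm)) / (2 * eps)
print("backprop:", grad_W1[0, 0], " finite diff:", fd)
```

The two numbers agree to several digits, which is the standard sanity check before handing the gradient to any of the training algorithms above.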

Publication Date
Sun Dec 02 2012
Journal Name
Baghdad Science Journal
Stability of Back Propagation Training Algorithm for Neural Networks

In this paper, we derive and prove the stability bounds of the momentum coefficient µ and the learning rate η of the backpropagation updating rule in artificial neural networks. The theoretical upper bound of the learning rate η is derived and a practical approximation of it is obtained.
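The kind of bound involved can be illustrated on the simplest quadratic energy E(w) = 0.5·λ·w², where the classical stability conditions are known in closed form: plain gradient descent is stable iff η < 2/λ, and with momentum coefficient µ the heavy-ball bound relaxes to η < 2(1+µ)/λ. This numeric check of those textbook conditions is illustrative, not the paper's derivation.

```python
import numpy as np

def run(eta, mu, lam=1.0, steps=200):
    """Heavy-ball iteration on E(w) = 0.5*lam*w^2; returns final |w|."""
    w, v = 1.0, 0.0
    for _ in range(steps):
        v = mu * v - eta * lam * w    # momentum accumulates the gradient
        w = w + v
    return abs(w)

print(run(1.9, 0.0))   # eta below 2/lam: converges
print(run(2.1, 0.0))   # eta above 2/lam: diverges
print(run(2.5, 0.5))   # above 2/lam but below 2*(1+mu)/lam: still stable
```

The third run shows the practical point of the momentum bound: a learning rate that is unstable for plain gradient descent can still converge once momentum enlarges the stability region.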
