Abstract: In this paper, some probability characteristic functions (moments, variances, covariances, and spectral density functions) are found, depending upon the smallest variance of the solution of a stochastic Fredholm integral equation that contains, as a known function, the sine wave function.
The aim of this paper is to approximate multidimensional functions f ∈ C(R^s) by developing a new type of feedforward neural network (FFNN), which we call greedy ridge function neural networks (GRGFNNs). We also introduce a modification of the greedy algorithm that is used to train the greedy ridge function neural networks. An error bound is derived in Sobolev space. Finally, a comparison is made between the three algorithms (the modified greedy algorithm, the backpropagation algorithm, and the result in [1]).
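A network of the kind described above is a sum of ridge functions, f(x) ≈ Σᵢ cᵢ σ(aᵢ·x + bᵢ), where each term depends on the input only through an inner product. The following minimal numpy sketch shows this form with a sigmoid activation and random weights; it is an illustration of the ridge-function architecture, not the paper's GRGFNN construction or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_net(X, A, b, c):
    """Sum of ridge functions: f(x) = sum_i c[i] * sigma(a_i . x + b[i]),
    here with a sigmoid activation sigma."""
    Z = X @ A.T + b                      # (n_samples, n_units) inner products
    return (1.0 / (1.0 + np.exp(-Z))) @ c

# illustrative network on 3-dimensional inputs with 5 hidden units
A = rng.standard_normal((5, 3))          # directions a_i
b = rng.standard_normal(5)               # biases b_i
c = rng.standard_normal(5)               # outer coefficients c_i
X = rng.standard_normal((4, 3))          # 4 sample inputs
y = ridge_net(X, A, b, c)                # one scalar output per input
```

Each hidden unit sees the input only through the scalar aᵢ·x, which is what makes the summands ridge functions.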
In this paper, min-max composition fuzzy relation equations are studied. This study generalizes the work of Ohsato and Sekiguchi. The conditions for the existence of solutions are studied, and then the resolution of the equations is discussed.
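For concreteness, when the fuzzy relations are finite they can be represented as matrices over [0, 1], and the min-max composition is (R ∘ S)(i, k) = minⱼ max(R(i, j), S(j, k)). The following generic numpy sketch computes this composition; it illustrates the operation the equations are built from, not the paper's resolution method:

```python
import numpy as np

def min_max_compose(R, S):
    """Min-max composition of fuzzy relations R (m x n) and S (n x p):
    T[i, k] = min over j of max(R[i, j], S[j, k])."""
    # broadcast rows of R against columns of S, then reduce over the middle index
    return np.minimum.reduce(np.maximum(R[:, :, None], S[None, :, :]), axis=1)

R = np.array([[0.2, 0.8],
              [0.6, 0.4]])
S = np.array([[0.5, 0.1],
              [0.3, 0.9]])
T = min_max_compose(R, S)   # [[0.5, 0.2], [0.4, 0.6]]
```

Solving a min-max fuzzy relation equation then means recovering one of the operands (say R) given the other operand and the composite T.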
The aim of this paper is to approximate multidimensional functions by using the type of feedforward neural network (FFNN) called greedy radial basis function neural networks (GRBFNNs). We also introduce a modification of the greedy algorithm that is used to train the greedy radial basis function neural networks. An error bound is derived in Sobolev space. Finally, a comparison is made between the three algorithms (the modified greedy algorithm, the backpropagation algorithm, and the result published in [16]).
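The greedy idea behind such training is to build the approximant one basis function at a time, at each step adding the dictionary element that best fits the current residual. The sketch below is a generic one-dimensional greedy fit with Gaussian RBFs over an illustrative dictionary of centers (the width, target function, and center grid are all assumptions for the example), not the paper's modified greedy algorithm:

```python
import numpy as np

def greedy_rbf_fit(x, y, centers, width=0.5, n_terms=8):
    """Greedy approximation: at each step add the Gaussian RBF (from a
    candidate dictionary of centers) that best reduces the residual."""
    approx = np.zeros_like(y)
    chosen = []
    for _ in range(n_terms):
        residual = y - approx
        best = None
        for c in centers:
            phi = np.exp(-((x - c) / width) ** 2)        # candidate basis function
            coef = (phi @ residual) / (phi @ phi)        # least-squares coefficient
            err = np.sum((residual - coef * phi) ** 2)   # residual after adding it
            if best is None or err < best[0]:
                best = (err, c, coef, phi)
        _, c, coef, phi = best
        approx = approx + coef * phi
        chosen.append((c, coef))
    return approx, chosen

x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x)                                    # illustrative target
approx, terms = greedy_rbf_fit(x, y, centers=np.linspace(-1, 1, 21))
```

Each iteration touches only one new term, which is what makes greedy training cheap compared with refitting all coefficients jointly.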
This paper considers approximate solutions of the one-dimensional hyperbolic wave equation with nonlocal mixed boundary conditions, obtained by improved methods based on the assumption that the solution is a double power series in orthogonal polynomials such as the Bernstein, Legendre, and Chebyshev polynomials. The solution is ultimately compared with that of the original method based on standard polynomials by calculating the absolute error, to verify the validity and accuracy of the performance.
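The advantage of expanding in orthogonal polynomials rather than standard monomials can be seen already in one dimension: a Chebyshev series fit of a smooth function converges rapidly and is numerically well conditioned. The sketch below uses numpy's Chebyshev utilities on an arbitrary smooth test function; it illustrates the polynomial basis, not the paper's double-series method for the wave equation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# sample a smooth function on [-1, 1] and fit a degree-12 Chebyshev series
x = np.linspace(-1, 1, 400)
f = np.cos(2 * x) * np.exp(-x ** 2)      # illustrative smooth target
coeffs = C.chebfit(x, f, deg=12)         # least-squares Chebyshev coefficients
approx = C.chebval(x, coeffs)            # evaluate the series
max_abs_err = np.max(np.abs(f - approx))
```

For an analytic function the coefficients decay geometrically, so even a modest degree drives the absolute error far below what a comparable monomial fit achieves stably.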
The maximum likelihood, uniformly minimum variance unbiased, and minimum mean square error estimation methods, as classical estimation procedures, are frequently used for parameter estimation in statistics; they assume that the parameter is a constant. The Bayes method, by contrast, assumes that the parameter is a random variable, and the Bayes estimator is the estimator that minimizes the Bayes risk for each value of the random observable; for the squared error loss function, the Bayes estimator is the posterior mean. It is well known that Bayesian estimation is rarely used as a parameter estimation technique because of the difficulty of finding a prior distribution.
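The statement that the Bayes estimator under squared error loss is the posterior mean is easy to make concrete with a conjugate pair. The sketch below uses a Beta prior on a Bernoulli success probability (the prior parameters and data are illustrative, not from the paper) and contrasts the posterior mean with the classical MLE, which treats the parameter as a fixed constant:

```python
# Beta(a, b) prior on a Bernoulli success probability p;
# with s successes in n trials the posterior is Beta(a + s, b + n - s),
# and under squared-error loss the Bayes estimator is the posterior mean.
def bayes_estimate(s, n, a=1.0, b=1.0):
    return (a + s) / (a + b + n)

# illustrative data: 7 successes in 10 trials
s, n = 7, 10
mle = s / n                     # classical estimate: parameter is a constant
bayes = bayes_estimate(s, n)    # posterior mean under a uniform Beta(1, 1) prior
```

The Bayes estimate is pulled from the MLE 0.7 toward the prior mean 0.5, with the amount of shrinkage controlled by the prior strength a + b relative to n.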
The interest of this paper is that
In this paper we define and study new generalizations of continuous functions, namely w-weakly (resp., w-closure, w-strongly) continuous functions, and study their main properties: (a) If f : X → Y is w-weakly (resp., w-closure, w-strongly) continuous, then for any A ⊂ X and any B ⊂ Y the restrictions f|A : A → Y and fB : f⁻¹(B) → B are w-weakly (resp., w-closure, w-strongly) continuous. (b) Comparison between different forms of generalizations of continuous functions. (c) Relationships between compositions of different forms of generalizations of continuous functions. Moreover, we extend the above generalizations to almost w-weakly (resp., w-closure, w-strongly) continuous functions, and we state and prove several results concerning them.