This paper introduces and studies the first and second new approximation operators based on a mixed degree system, which are the core concept of the paper. The approximations of graphs obtained using the second lower and second upper operators are more accurate than those obtained using the first lower and first upper operators, since the first accuracy is less than the second accuracy. For this reason, we study the properties of the second lower and second upper operators in detail. Furthermore, we summarize the properties of the second lower and second upper approximation operators in a table for the cases in which the graph G is arbitrary, serial 1, serial 2, reflexive, symmetric, transitive, tolerance, dominance, or equivalence.
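For orientation, the following is a minimal, generic sketch of neighborhood-based lower and upper approximations on a directed graph and the resulting accuracy measure; the specific first and second operators built from the mixed degree system are not reproduced here, and the graph `G`, the subset `X`, and the successor neighborhood are illustrative assumptions.

```python
# Generic sketch of neighborhood-based lower/upper approximations on a directed graph.
# This only illustrates the pattern such operators follow, not the paper's mixed-degree
# operators themselves.

def successors(graph, v):
    """Neighborhood of v: vertices reachable by one edge (an assumed choice of neighborhood)."""
    return set(graph.get(v, ()))

def lower_approx(graph, subset):
    """Vertices whose whole neighborhood lies inside the subset."""
    return {v for v in graph if successors(graph, v) <= subset}

def upper_approx(graph, subset):
    """Vertices whose neighborhood meets the subset."""
    return {v for v in graph if successors(graph, v) & subset}

def accuracy(graph, subset):
    """|lower| / |upper|; larger values mean a sharper approximation."""
    up = upper_approx(graph, subset)
    return len(lower_approx(graph, subset)) / len(up) if up else 1.0

# Example: a small directed graph and a target vertex set.
G = {1: [2], 2: [1, 3], 3: [3], 4: [2, 3]}
X = {1, 2}
print(lower_approx(G, X), upper_approx(G, X), accuracy(G, X))
```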
This paper introduces the use of a neural network as a type of associative memory through the problem of mobile position estimation, in which the mobile station estimates its location from the signal strengths it receives from several surrounding base stations; the neural network can be implemented inside the mobile. The traditional methods of time of arrival (TOA) and received signal strength (RSS) are used and compared with two analytical methods, the optimal positioning method and the average positioning method. The training data are ideal, since they can be obtained from the geometry of the CDMA cell topology. The TOA and RSS methods are tested over many cases along a nonlinear path that the mobile station (MS) may follow through the region.
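As a hedged illustration of this setting, the sketch below trains a small feedforward regressor (scikit-learn's MLPRegressor, standing in for the paper's associative-memory network) on ideal RSS data synthesized from an assumed log-distance path-loss model; the base-station layout, all constants, and the test path are hypothetical.

```python
# Hedged sketch of RSS-based position estimation with a small feedforward network.
# Training data are synthesized from an idealised path-loss model (all constants assumed).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
base_stations = np.array([[0.0, 0.0], [1000.0, 0.0], [500.0, 866.0]])  # metres (assumed layout)

def rss_from_position(pos, tx_power_dbm=30.0, path_loss_exp=3.5):
    """Idealised received signal strength (dBm) from each base station."""
    d = np.linalg.norm(base_stations - pos, axis=1) + 1.0
    return tx_power_dbm - 10.0 * path_loss_exp * np.log10(d)

# Ideal training data: positions sampled over the cell area and their RSS vectors.
positions = rng.uniform([0, 0], [1000, 900], size=(2000, 2))
rss = np.array([rss_from_position(p) for p in positions])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(rss, positions)

# Estimate the mobile's position at a few points of a (hypothetical) test path.
test_path = np.array([[200.0, 300.0], [400.0, 500.0], [700.0, 400.0]])
estimates = net.predict(np.array([rss_from_position(p) for p in test_path]))
print(np.linalg.norm(estimates - test_path, axis=1))  # positioning error per point
```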
In this work, a weighted Hölder function that approximates a Jacobi polynomial solving the second-order singular Sturm-Liouville equation is discussed; this is generally equivalent to the Jacobi translations and the moduli of smoothness. The paper focuses on improving methods of approximation and on finding upper and lower estimates for the degree of approximation in weighted Hölder spaces by modifying the moduli of continuity and smoothness. Moreover, some properties of the moduli of smoothness, together with direct and inverse results, are considered.
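For reference, the standard (unweighted) k-th modulus of smoothness is recalled below; the paper works with weighted Hölder norms and a Jacobi-translation analogue of this quantity, which are not reproduced here.

```latex
% Standard (unweighted) modulus of smoothness, given only for orientation.
\[
  \Delta_h^k f(x) = \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} f(x + jh), \qquad
  \omega_k(f,\delta)_p = \sup_{0 < h \le \delta} \bigl\| \Delta_h^k f \bigr\|_p .
\]
```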
In this paper, we approximate the Grünwald-Letnikov derivative of a function having m continuous derivatives by Bernstein-Chlodowsky polynomials and prove its best approximation. We also solve the Bagley-Torvik equation and the Fokker-Planck equation in which the derivative is taken in the Grünwald-Letnikov sense.
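The following is a minimal sketch of the Grünwald-Letnikov approximation itself, checked against a known fractional derivative; the Bernstein-Chlodowsky construction and the solvers for the Bagley-Torvik and Fokker-Planck equations are not reproduced here.

```python
# Minimal sketch of the Grünwald-Letnikov approximation of a fractional derivative.
import math

def gl_derivative(f, t, alpha, n=2000):
    """Approximate the order-alpha Grünwald-Letnikov derivative of f at t using n steps."""
    h = t / n
    w, acc = 1.0, f(t)            # w_0 = 1
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k  # w_k = (-1)^k * binom(alpha, k), built recursively
        acc += w * f(t - k * h)
    return acc / h**alpha

# Check against the known value D^{1/2} t = t^{1/2} / Gamma(3/2) at t = 1.
approx = gl_derivative(lambda x: x, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
print(approx, exact)
```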
Regressor-based adaptive control is useful for controlling robotic systems with uncertain parameters but a known structure of the robot dynamics. Unmodeled dynamics can lead to instability unless the control law is modified. In addition, exact calculation of the regressor is difficult for robots with more than six degrees of freedom, and the task can become even more complex. Adaptive approximation control, in contrast, is a powerful tool for controlling robotic systems with unmodeled dynamics. Local (partitioned) approximation-based adaptive control represents the uncertain matrices and vectors in the robot model as finite combinations of basis functions, together with update laws for the weighting matrices; a minimal sketch of this idea is given below.
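The sketch uses a scalar system standing in for the full robot dynamics; the basis functions, gains, desired trajectory, and "unknown" term are all illustrative assumptions rather than the scheme developed in the paper.

```python
# Hedged sketch of approximation-based adaptive control on a scalar system x' = g(x) + u
# with unknown g: the uncertain term is written as a weighted sum of Gaussian basis
# functions and the weights are updated online.
import numpy as np

centers = np.linspace(-2.0, 2.0, 11)          # basis-function centres (assumed)
phi = lambda x: np.exp(-(x - centers) ** 2)   # Gaussian basis vector

def simulate(k=5.0, gamma=20.0, dt=1e-3, T=10.0):
    g = lambda x: 0.5 * np.sin(2 * x) + 0.3 * x      # "unknown" dynamics (for the demo only)
    x, w_hat, t = 0.0, np.zeros_like(centers), 0.0
    errors = []
    while t < T:
        xd, xd_dot = np.sin(t), np.cos(t)            # desired trajectory (assumed)
        e = x - xd
        u = xd_dot - k * e - w_hat @ phi(x)          # tracking term + learned compensation
        w_hat = w_hat + dt * gamma * phi(x) * e      # update law for the weights
        x = x + dt * (g(x) + u)                      # Euler step of the plant
        t += dt
        errors.append(abs(e))
    return max(errors[-1000:])                       # tracking error near the end of the run

print(simulate())   # should be small once the weights have adapted
```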
The aim of this paper is to approximate multidimensional functions f ∈ C(R^s) by developing a new type of feedforward neural network (FFNN), which we call the greedy ridge function neural network (GRGFNN). We also introduce a modification of the greedy algorithm used to train the greedy ridge function neural networks. An error bound is derived in Sobolev space. Finally, a comparison is made between three algorithms: the modified greedy algorithm, the backpropagation algorithm, and the result in [1].
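The sketch below illustrates a generic greedy ridge-function fitting loop; the paper's modified greedy algorithm, the backpropagation comparison, and the Sobolev-space error bound are not reproduced, and the ridge profile and candidate-sampling scheme are assumptions.

```python
# Generic greedy ridge-function approximation: at each iteration a candidate direction is
# chosen to best fit the current residual and one ridge term c * sigma(a.x + b) is added.
import numpy as np

rng = np.random.default_rng(1)
sigma = np.tanh                                   # ridge profile (assumed)

def greedy_ridge_fit(X, y, n_terms=20, n_candidates=200):
    """Approximate y ~ f(X) by a sum of ridge functions c * sigma(X @ a + b)."""
    residual, terms = y.copy(), []
    for _ in range(n_terms):
        best = None
        for _ in range(n_candidates):
            a = rng.normal(size=X.shape[1]); a /= np.linalg.norm(a)
            b = rng.normal()
            h = sigma(X @ a + b)
            c = (h @ residual) / (h @ h)          # least-squares outer coefficient
            err = np.linalg.norm(residual - c * h)
            if best is None or err < best[0]:
                best = (err, a, b, c)
        _, a, b, c = best
        residual = residual - c * sigma(X @ a + b)
        terms.append((a, b, c))
    return terms, np.linalg.norm(residual) / np.linalg.norm(y)

# Example: approximate a smooth bivariate target and report the relative residual.
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]
_, rel_err = greedy_ridge_fit(X, y)
print(rel_err)
```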
The aim of this paper is to study different two-step iteration algorithms, namely the modified SP, Ishikawa, Picard-S, and M-iteration schemes, and to determine which of them converges fastest for contraction-like mappings. It is shown that the M-iteration is faster than the modified SP, Ishikawa, and Picard-S iterations. We also support the analytic proof with a numerical example.
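As a hedged numerical illustration, the sketch below compares three of the schemes for a sample contraction; the formulations are the ones commonly given in the literature, the paper's modified variants may differ, and the modified SP iteration is omitted.

```python
# Compare convergence speed of Ishikawa, Picard-S and M-iteration for the contraction
# T(x) = (x + 6) / 3, whose fixed point is 3. Parameters alpha, beta are assumed.
def ishikawa(T, x, alpha=0.5, beta=0.5):
    y = (1 - beta) * x + beta * T(x)
    return (1 - alpha) * x + alpha * T(y)

def picard_s(T, x, alpha=0.5, beta=0.5):
    z = (1 - beta) * x + beta * T(x)
    y = (1 - alpha) * T(x) + alpha * T(z)
    return T(y)

def m_iteration(T, x, alpha=0.5):
    z = (1 - alpha) * x + alpha * T(x)
    return T(T(z))                     # y = T(z), then x_{n+1} = T(y)

def steps_to_converge(step, T, x0=10.0, fixed_point=3.0, tol=1e-10):
    x, n = x0, 0
    while abs(x - fixed_point) > tol:
        x, n = step(T, x), n + 1
    return n

T = lambda x: (x + 6.0) / 3.0
for name, step in [("Ishikawa", ishikawa), ("Picard-S", picard_s), ("M-iteration", m_iteration)]:
    print(name, steps_to_converge(step, T))
```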