The goal of this study is to provide a new explicit iterative method for solving maximal monotone (M.M.) operators in Hilbert spaces, utilizing a finite family of different types of mappings (nonexpansive mappings, resolvent mappings, and projection mappings). Then, utilizing various structural conditions in Hilbert space and variational inequality problems, we examine the strong convergence of these explicit iterative methods to the nearest point projection, under two important conditions for convergence, namely closure and convexity. The findings reported in this research strengthen and extend key previous findings from the literature.
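For orientation, one classical explicit scheme assembled from exactly these ingredients is the Halpern-type resolvent (proximal point) iteration; it is quoted here only as a minimal illustration, not as the scheme proposed in this study:
$$J_{r}^{A}=(I+rA)^{-1},\qquad x_{n+1}=\alpha_{n}\,u+(1-\alpha_{n})\,J_{r_{n}}^{A}x_{n},\qquad \alpha_{n}\in(0,1),$$
where $A$ is a maximal monotone operator on a Hilbert space $H$, $J_{r}^{A}$ is its (nonexpansive) resolvent, and $u\in H$ is a fixed anchor. Under standard conditions on $(\alpha_{n})$ and $(r_{n})$, the iterates converge strongly to the point of $A^{-1}(0)$ nearest to $u$, that is, to the metric projection of $u$ onto the zero set.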
In 2010, Long and Zeng introduced a new generalization of the Bernstein polynomials that depends on a parameter λ, called λ-Bernstein polynomials. Later, in 2018, Lain and Zhou studied the uniform convergence of these λ-polynomials and obtained a Voronovskaja-type asymptotic formula in ordinary approximation. This paper studies the convergence theorem and gives two Voronovskaja-type asymptotic formulas for the sequence of λ-Bernstein polynomials in both ordinary and simultaneous approximation. For this purpose, we discuss the possibility of finding the recurrence relations of the moments of these polynomials and evaluate the λ-Bernstein polynomials of the monomial test functions.
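For context, the classical (parameter-free) Bernstein polynomial of $f\in C[0,1]$ and Voronovskaja's classical asymptotic formula, of which the results above are parameter-dependent analogues, read
$$B_{n}(f;x)=\sum_{k=0}^{n}f\!\left(\tfrac{k}{n}\right)\binom{n}{k}x^{k}(1-x)^{n-k},\qquad
\lim_{n\to\infty}n\bigl(B_{n}(f;x)-f(x)\bigr)=\frac{x(1-x)}{2}\,f''(x),$$
the latter holding for functions twice differentiable at $x$.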
In this paper the definition of a fuzzy normed space and its basic properties are recalled. Then the definition of a fuzzy compact operator from one fuzzy normed space into another is introduced, after which it is proved that an operator is fuzzy compact if and only if the image of any fuzzy bounded sequence contains a convergent subsequence. The basic properties of the vector space FC(V,U) of all fuzzy compact linear operators are then investigated; for example, when U is complete and a sequence of fuzzy compact operators converges to an operator T, then T must be fuzzy compact. Furthermore, when T is a fuzzy compact operator and S is a fuzzy bounded operator, the compositions TS and ST are fuzzy compact.
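For orientation, the classical (crisp) analogue of this characterization, which the fuzzy version parallels, states that a linear operator $T:V\to U$ between normed spaces is compact if and only if every bounded sequence $(x_{n})\subset V$ has a subsequence $(x_{n_{k}})$ such that $(Tx_{n_{k}})$ converges in $U$; whether the fuzzy definitions are stated in exactly this form is a matter for the paper itself.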
In this work, an analytical approximation solution is presented, together with a comparison of the Variational Iteration Adomian Decomposition Method (VIADM) and the Modified Sumudu Transform Adomian Decomposition Method (MSTADM), both of which are capable of solving nonlinear partial differential equations (NPDEs) such as nonhomogeneous Korteweg-de Vries (KdV) problems and the nonlinear Klein-Gordon equation. The results demonstrate the solution's dependability and excellent accuracy.
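As a reminder of one of the building blocks, the variational iteration correction functional for an equation $Lu+Nu=g(t)$ is recalled here in its generic textbook form, not as the specific scheme of this work:
$$u_{n+1}(t)=u_{n}(t)+\int_{0}^{t}\lambda(\tau)\bigl(Lu_{n}(\tau)+N\tilde{u}_{n}(\tau)-g(\tau)\bigr)\,d\tau,$$
where $\lambda$ is a general Lagrange multiplier identified via variational theory and $\tilde{u}_{n}$ denotes a restricted variation.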
Because of the spread of the Coronavirus epidemic in Iraq, the COVID-19 model of people quarantined due to infection is the application considered in this work. The numerical simulation methods used in this research are more suitable than other analytical and numerical methods because they can handle random systems; they are employed here since the coefficients of the COVID-19 epidemic system are random variables. Suitable numerical simulation methods have been applied to solve the COVID-19 epidemic model in Iraq. The analytical results of the variational iteration method (VIM) are computed for comparison, and one numerical method, the finite difference method (FD), has also been used to solve the Coronavirus model for comparison purposes. The numerical simulat
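The sketch below is only illustrative: it advances a simple SIQR-type quarantine model with an explicit Euler (finite-difference) step while sampling the coefficients at random over assumed ranges. The compartments, equations, and parameter ranges are placeholders and are not taken from the paper's model.

```python
# Illustrative Monte Carlo simulation of a quarantine model with random
# coefficients; all parameter ranges below are assumed, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
T, dt = 100.0, 0.1
steps = int(T / dt)
runs = 500  # number of Monte Carlo samples of the random coefficients

means = np.zeros((steps + 1, 4))
for _ in range(runs):
    beta = rng.uniform(0.2, 0.4)     # random transmission rate (assumed range)
    delta = rng.uniform(0.05, 0.15)  # random quarantine rate (assumed range)
    gamma = rng.uniform(0.05, 0.10)  # random recovery rate (assumed range)
    S, I, Q, R = 0.99, 0.01, 0.0, 0.0  # initial fractions of the population
    traj = [(S, I, Q, R)]
    for _ in range(steps):
        dS = -beta * S * I
        dI = beta * S * I - (delta + gamma) * I
        dQ = delta * I - gamma * Q
        dR = gamma * (I + Q)
        # explicit Euler (finite-difference) update
        S, I, Q, R = S + dt * dS, I + dt * dI, Q + dt * dQ, R + dt * dR
        traj.append((S, I, Q, R))
    means += np.array(traj)
means /= runs
print("mean quarantined fraction at final time:", means[-1, 2])
```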
In this paper, we introduce a new concept of operators in b-Hilbert space related to self-adjoint operators and positive operators. Moreover, we show some of their properties as well as the relation between them.
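For reference, the classical Hilbert-space notions that such operators generalize are: on a Hilbert space $(H,\langle\cdot,\cdot\rangle)$, an operator $T$ is self-adjoint if $\langle Tx,y\rangle=\langle x,Ty\rangle$ for all $x,y\in H$, and $T$ is positive if it is self-adjoint and $\langle Tx,x\rangle\ge 0$ for all $x\in H$. How these definitions are adapted to the b-Hilbert-space setting is specified in the paper itself; the classical forms are given here only for orientation.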
The aim of this paper is to design fast neural networks to approximate periodic functions, that is, to design fully connected networks, containing links between all nodes in adjacent layers, which can speed up approximation times, reduce approximation failures, and increase the possibility of obtaining the globally optimal approximation. We train the suggested networks with the Levenberg-Marquardt training algorithm and then speed them up by choosing the most suitable activation function (transfer function), namely one having a very fast convergence rate for reasonably sized networks. In all algorithms, the gradient of the performance function (energy function) is used to determine how to
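A minimal sketch of this kind of setup is given below, assuming a one-hidden-layer fully connected network with a tanh transfer function, a sine target, and SciPy's Levenberg-Marquardt solver; the network size, target function, and training grid are illustrative choices, not those of the paper.

```python
# Fit a small fully connected network to a periodic target with
# Levenberg-Marquardt (via scipy.optimize.least_squares, method="lm").
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0.0, 2.0 * np.pi, 200)  # training grid over one period
y = np.sin(x)                            # periodic target function

H = 10  # number of hidden units (assumed size, for illustration only)

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    w1 = p[:H]            # input -> hidden weights
    b1 = p[H:2 * H]       # hidden biases
    w2 = p[2 * H:3 * H]   # hidden -> output weights
    b2 = p[3 * H]         # output bias
    return w1, b1, w2, b2

def net(p, t):
    """One-hidden-layer fully connected network with tanh transfer function."""
    w1, b1, w2, b2 = unpack(p)
    h = np.tanh(np.outer(w1, t) + b1[:, None])  # hidden activations, (H, len(t))
    return w2 @ h + b2

def residuals(p):
    return net(p, x) - y

p0 = 0.5 * np.random.default_rng(0).standard_normal(3 * H + 1)
sol = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt training
print("max abs error on the training grid:", np.max(np.abs(residuals(sol.x))))
```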