This study presents a practical method for solving fractional-order delay variational problems, with the fractional derivative given in the Caputo sense. The suggested approach is based on the Laplace transform and the shifted Legendre polynomials: the candidate function is approximated by a shifted Legendre series with unknown coefficients yet to be determined. The proposed method converts the fractional-order delay variational problem into a set of (n + 1) algebraic equations, whose solution yields the unknown coefficients of the truncated series used to approximate the solution of the variational problem under consideration. Illustrative examples show that the recommended approach is applicable and accurate for solving such problems.
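The core approximation step above can be sketched numerically. The snippet below is a minimal illustration, not the paper's method: the candidate function, the truncation order n, and the use of a plain least-squares fit are all assumptions; the paper's Laplace-transform treatment of the delay term is omitted. Fitting on the interval [0, 1] with `domain=[0, 1]` makes NumPy use the shifted Legendre polynomials P_k(2t - 1) as the basis.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Toy stand-in for the candidate function; the actual functional and
# delay term from the paper are not reproduced here.
f = lambda t: np.exp(-t) * np.sin(2 * t)

n = 8                       # truncation order: n + 1 unknown coefficients
t = np.linspace(0.0, 1.0, 200)

# domain=[0, 1] means the fit uses shifted Legendre polynomials
# P_k(2t - 1), mirroring the series used in the method.
series = Legendre.fit(t, f(t), deg=n, domain=[0.0, 1.0])

err = np.max(np.abs(series(t) - f(t)))
print(f"max approximation error with {n + 1} terms: {err:.2e}")
```

For a smooth candidate function the truncated series converges rapidly, which is what makes the (n + 1)-equation reduction practical at small n.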
Market share is a major indicator of business success. Understanding the impact of various economic factors on market share is critical to a company’s success. In this study, we examine the market shares of two manufacturers in a duopoly economy and present an optimal pricing approach for increasing a company’s market share. We construct two models based on ordinary differential equations to investigate market success. The first model accounts for quantity demanded and investment in R&D, whereas the second model investigates a more realistic relationship between quantity demanded and pricing.
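A generic version of such a two-firm ODE system can be sketched as follows. The dynamics here are hypothetical (logistic-style share growth plus a price-advantage term); the paper's actual equations, coefficients, and the R&D term are not reproduced, and forward Euler stands in for whatever integrator the study uses.

```python
import numpy as np

# Hypothetical duopoly dynamics: each firm's share grows while unclaimed
# share remains, and the lower-priced firm attracts additional share.
def step(s1, s2, p1, p2, dt=0.01, k=0.5, a=0.1):
    room = 1.0 - (s1 + s2)                    # unclaimed market share
    ds1 = k * s1 * room + a * (p2 - p1) * s1  # price advantage of firm 1
    ds2 = k * s2 * room + a * (p1 - p2) * s2
    return s1 + dt * ds1, s2 + dt * ds2

s1, s2 = 0.2, 0.2
for _ in range(2000):                         # forward-Euler integration
    s1, s2 = step(s1, s2, p1=1.0, p2=1.2)     # firm 1 prices lower

print(f"firm 1 share: {s1:.3f}, firm 2 share: {s2:.3f}")
```

With equal initial shares, the cheaper firm ends with the larger share, which is the qualitative behavior an optimal pricing strategy would exploit.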
In the present paper, three reliable iterative methods are given and implemented to solve the 1D, 2D and 3D Fisher’s equation. The Daftardar-Jafari method (DJM), the Temimi-Ansari method (TAM) and the Banach contraction method (BCM) are applied to obtain exact and numerical solutions of Fisher's equation. These reliable iterative methods have many advantages: they are free of derivatives, they avoid the difficulty of calculating the Adomian polynomials needed to handle nonlinear terms in the Adomian decomposition method (ADM), they do not require calculating a Lagrange multiplier as in the variational iteration method (VIM), and there is no need to construct a homotopy as in the homotopy perturbation method (HPM).
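The fixed-point flavor shared by these methods can be illustrated on the space-independent reduction of Fisher's equation, u' = u(1 - u). This is a simplified numerical sketch in the spirit of the Banach contraction (successive-approximation) scheme, u_{k+1}(t) = u(0) + ∫₀ᵗ u_k(1 - u_k) ds; the initial condition and grid are assumptions, and the papers' analytic iterations are not reproduced.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 401)
dt = t[1] - t[0]
u0 = 0.5                        # assumed initial condition
u = np.full_like(t, u0)         # initial guess u_0(t) = u(0)

for _ in range(20):             # fixed-point (successive-approximation) sweeps
    integrand = u * (1.0 - u)
    # cumulative trapezoid rule for the integral from 0 to t
    integral = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) * dt / 2.0)))
    u = u0 + integral

exact = 1.0 / (1.0 + np.exp(-t))   # logistic solution for u(0) = 1/2
err = np.max(np.abs(u - exact))
print(f"max error after 20 iterations: {err:.2e}")
```

No derivatives of the nonlinearity, Adomian polynomials, Lagrange multipliers, or homotopies are needed: each sweep is just an integration of the previous iterate.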
For sparse system identification, recently suggested algorithms are the l0-norm Least Mean Square (l0-LMS), Zero-Attracting LMS (ZA-LMS), Reweighted Zero-Attracting LMS (RZA-LMS), and p-norm LMS (p-LMS) algorithms, which modify the cost function of the conventional LMS algorithm by adding a constraint enforcing coefficient sparsity. Accordingly, the proposed algorithms include the l0-ZA-LMS.
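The zero-attracting idea can be shown compactly. Below is a minimal ZA-LMS identification loop (the classic update, not the abstract's proposed variant): the standard LMS correction plus a zero-attractor term −ρ·sign(w) that shrinks inactive taps toward zero. The filter length, step sizes, and sparse system are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
w_true = np.zeros(N)
w_true[[2, 9]] = [1.0, -0.5]      # sparse unknown system: 2 active taps

w = np.zeros(N)
mu, rho = 0.05, 1e-4              # LMS step size and zero-attractor weight
x_buf = np.zeros(N)               # tapped delay line

for _ in range(5000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()
    d = w_true @ x_buf + 1e-3 * rng.standard_normal()  # noisy desired signal
    e = d - w @ x_buf
    w += mu * e * x_buf - rho * np.sign(w)             # ZA-LMS update

mse = np.mean((w - w_true) ** 2)
print(f"steady-state coefficient MSE: {mse:.2e}")
```

The sign term is the gradient of an l1 penalty on the coefficients; the l0- and reweighted variants replace it with attractors that penalize small taps more selectively.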
Monaural source separation is a challenging problem because only a single channel is available, yet there is an unlimited range of possible solutions. In this paper, a monaural source separation model based on a hybrid deep learning architecture, consisting of a convolutional neural network (CNN), a dense neural network (DNN) and a recurrent neural network (RNN), is presented. A trial-and-error method is used to optimize the number of layers in the proposed model. Moreover, the effects of the learning rate, the optimization algorithm, and the number of epochs on the separation performance are explored. The model was evaluated using the MIR-1K dataset for singing voice separation.
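The hybrid layout can be sketched as a toy forward pass. This is purely illustrative: the layer sizes, single-channel spectrogram input, and untrained random weights are all assumptions, and the paper's trained architecture is not reproduced. The point is the data flow: convolution, recurrence over time frames, then a dense layer producing a soft separation mask.

```python
import numpy as np

rng = np.random.default_rng(1)
spec = rng.standard_normal((20, 32))          # frames x frequency bins

# 1) "CNN" stage: 1-D convolution along frequency for each frame
kernel = rng.standard_normal(5) * 0.1
conv = np.array([np.convolve(f, kernel, mode="same") for f in spec])

# 2) "RNN" stage: simple recurrence over the time frames
Wh = rng.standard_normal((16, 16)) * 0.1
Wx = rng.standard_normal((16, 32)) * 0.1
h = np.zeros(16)
for frame in conv:
    h = np.tanh(Wh @ h + Wx @ frame)

# 3) "DNN" stage: dense layer producing a sigmoid separation mask
Wd = rng.standard_normal((32, 16)) * 0.1
mask = 1.0 / (1.0 + np.exp(-(Wd @ h)))        # values in (0, 1)

source = mask * spec[-1]                      # mask applied to a mixture frame
print(mask.min(), mask.max())
```

In a real separator the mask is learned so that, multiplied with the mixture spectrogram, it isolates the singing voice; here it only demonstrates the shapes and stages.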
This paper introduces a non-conventional approach with multi-dimensional random sampling to solve a cocaine abuse model with statistical probability. The mean Latin hypercube finite difference (MLHFD) method is proposed for the first time via hybrid integration of the classical numerical finite difference (FD) formula with the Latin hypercube sampling (LHS) technique, creating a random distribution for the model parameters, which depend on time t. The LHS technique enables the MLHFD method to produce fast variation of the parameter values via a number of multidimensional simulations (100, 1000 and 5000).
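The LHS building block itself is simple to sketch. The sampler below is a minimal generic implementation, not the paper's MLHFD code: each of the n samples falls in a distinct stratum of [0, 1) in every parameter dimension, which is what gives the fast, well-spread parameter variation; the finite-difference coupling is omitted.

```python
import numpy as np

def latin_hypercube(n, dims, rng):
    """n stratified samples in [0, 1)^dims, one per stratum per dimension."""
    u = rng.random((n, dims))                   # jitter within each stratum
    strata = (np.arange(n)[:, None] + u) / n    # point i lies in [i/n, (i+1)/n)
    for j in range(dims):                       # independent shuffle per dimension
        strata[:, j] = rng.permutation(strata[:, j])
    return strata

rng = np.random.default_rng(0)
sample = latin_hypercube(100, 3, rng)

# Verify the Latin property: exactly one sample per stratum in each dimension.
counts = [np.bincount((sample[:, j] * 100).astype(int), minlength=100)
          for j in range(3)]
print(all((c == 1).all() for c in counts))
```

Drawing 100, 1000, or 5000 such samples and running the deterministic FD scheme once per sample, then averaging, is the "mean" part of an MLHFD-style workflow.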
Feature selection (FS) constitutes a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude for predictive modeling. It is a crucial task that helps machine learning classifiers reduce error rates, computation time, and overfitting, and improve classification accuracy. It has demonstrated its efficacy in a myriad of domains, including text classification (TC), text mining, and image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning TC. Therefore, a comprehensive overview was systematically conducted.
... Show MoreThe virtual decomposition control (VDC) is an efficient tool suitable to deal with the full-dynamics-based control problem of complex robots. However, the regressor-based adaptive control used by VDC to control every subsystem and to estimate the unknown parameters demands specific knowledge about the system physics. Therefore, in this paper, we focus on reorganizing the equation of the VDC for a serial chain manipulator using the adaptive function approximation technique (FAT) without needing specific system physics. The dynamic matrices of the dynamic equation of every subsystem (e.g. link and joint) are approximated by orthogonal functions due to the minimum approximation errors produced. The contr