Virtual decomposition control (VDC) is an efficient tool for dealing with the full-dynamics-based control problem of complex robots. However, the regressor-based adaptive control that VDC uses to control every subsystem and to estimate the unknown parameters demands specific knowledge of the system physics. Therefore, in this paper we focus on reorganizing the VDC equations for a serial-chain manipulator using the adaptive function approximation technique (FAT), without requiring specific knowledge of the system physics. The dynamic matrices of the dynamic equation of every subsystem (e.g. link and joint) are approximated by orthogonal functions, chosen because they produce minimal approximation errors. The control law, the virtual stability of every subsystem, and the stability of the entire robotic system are proved in this work. The computational complexity of the FAT is then compared with that of the regressor-based approach. Despite the apparent advantage of the FAT in avoiding the regressor matrix, its computational complexity can make implementation difficult because the dynamic matrices of the link subsystem are represented by two large sparse matrices. In effect, the FAT-based adaptive VDC requires further work to improve the representation of the dynamic matrices of the target subsystem. Two case studies are simulated in Matlab/Simulink for verification purposes: a 2-R manipulator and a 6-DOF planar biped robot.
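To make the FAT idea concrete, a minimal sketch is given below, assuming a single unknown time-varying parameter approximated by a truncated Chebyshev basis and a gradient-type adaptive law driven by a sliding-type error signal; the reduction to a scalar case, the function names, and the gains are illustrative and not taken from the paper.

```python
# Minimal FAT sketch: approximate one unknown time-varying parameter m(t)
# as w_hat . z(t) with an orthogonal (Chebyshev) basis z(t), and adapt the
# weights with a gradient-type law. The paper's full link/joint matrices and
# sparse-matrix bookkeeping are intentionally omitted.
import numpy as np

def chebyshev_basis(tau, n_terms):
    """Chebyshev polynomials T_0 .. T_{n-1} evaluated at tau in [-1, 1]."""
    z = np.zeros(n_terms)
    z[0] = 1.0
    if n_terms > 1:
        z[1] = tau
    for k in range(2, n_terms):
        z[k] = 2.0 * tau * z[k - 1] - z[k - 2]
    return z

def fat_update(w_hat, z, s, gamma, dt):
    """One Euler step of a gradient-type adaptive law, w_hat_dot = -gamma * z * s."""
    w_hat = w_hat - gamma * z * s * dt
    return w_hat, float(w_hat @ z)   # updated weights and current estimate m_hat
```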
Sliding mode control (SMC) has increasingly become one of the most powerful control techniques. Much attention has been paid to both its theoretical and practical aspects because of its distinctive characteristics, such as insensitivity to bounded matched uncertainties, reduction of the order of the sliding equations of motion, and decoupling in the design of mechanical systems. In the current study, the performance of a two-link robot under classical SMC is enhanced via an Adaptive Sliding Mode Controller (ASMC) despite uncertainty, external disturbance, and Coulomb friction. The key idea is as follows: the switching gains are reduced to the lowest allowable values, resulting in decreased chattering motion and control effort of the two-link robot.
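As an illustration of the gain-adaptation idea, the following is a minimal sketch for a single joint, assuming a sliding surface s = de + lam*e, an upward gain adaptation proportional to |s|, and a tanh boundary layer to soften chattering; all symbols and numerical values are illustrative and not taken from the paper.

```python
import numpy as np

def asmc_step(e, de, K, lam=5.0, gamma=20.0, phi=0.05, dt=1e-3):
    """One discrete step of a simple adaptive sliding-mode law for one joint."""
    s = de + lam * e                 # sliding variable
    K = K + gamma * abs(s) * dt      # adapt the switching gain toward the needed level
    u = -K * np.tanh(s / phi)        # smoothed switching control to reduce chattering
    return u, K
```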
The denoising of a natural image corrupted by Gaussian noise is a classic problem in signal and image processing. Much work has been done in the field of wavelet thresholding, but most of it has focused on statistical modeling of wavelet coefficients and the optimal choice of thresholds. This paper describes a new method for the suppression of noise in images by fusing the stationary wavelet denoising technique with an adaptive Wiener filter. The Wiener filter is applied to the reconstructed image for the approximation coefficients only, while the thresholding technique is applied to the detail coefficients of the transform; the final denoised image is obtained by combining the two results. The proposed method was applied using
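A rough sketch of the described fusion is given below, assuming a single-level stationary wavelet transform from PyWavelets, scipy's adaptive Wiener filter applied to the approximation subband, and universal soft thresholding of the detail subbands; the paper's exact wavelet, decomposition level, and threshold choices are not reproduced here.

```python
import numpy as np
import pywt
from scipy.signal import wiener

def fused_denoise(img, wavelet="db4"):
    """Wiener-filter the approximation subband, soft-threshold the details."""
    # img dimensions must be even for a level-1 stationary wavelet transform
    (cA, (cH, cV, cD)), = pywt.swt2(img, wavelet, level=1)
    sigma = np.median(np.abs(cD)) / 0.6745           # noise estimate from the HH band
    thr = sigma * np.sqrt(2.0 * np.log(img.size))    # universal threshold
    cA = wiener(cA)                                  # adaptive Wiener on the approximation
    cH, cV, cD = (pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD))
    return pywt.iswt2([(cA, (cH, cV, cD))], wavelet) # combine both results
```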
More than 95% of the industrial controllers in use today are PID or modified PID controllers. However, a PID controller must be manually tuned to be responsive, so that the process variable is moved rapidly and steadily to track the set point with minimal overshoot and a stable output. This paper presents a generic real-time PID controller architecture. The developed architecture is based on adapting each of the three controller parameters (P, I, and D) to be self-learning, using an individual least-mean-squares (LMS) algorithm for each. The adaptive PID is verified and compared with the classical PID. The rapid realization of the adaptive PID architecture allows it to be readily fabricated in hardware, either as an ASIC or on a reconfigurable device.
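A minimal sketch of the self-learning idea is shown below, assuming each of the three gains is updated by its own LMS-style rule driven by the error, its integral, and its derivative; the class name, step sizes, and discrete-time form are illustrative assumptions rather than the paper's architecture.

```python
class AdaptivePID:
    """PID whose three gains are each adapted online by an LMS-style update."""

    def __init__(self, kp=1.0, ki=0.0, kd=0.0, mu=(1e-4, 1e-5, 1e-4), dt=1e-3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.mu_p, self.mu_i, self.mu_d = mu        # individual LMS step sizes
        self.dt, self.integral, self.prev_e = dt, 0.0, 0.0

    def step(self, e):
        self.integral += e * self.dt
        deriv = (e - self.prev_e) / self.dt
        # Each gain moves along its own error-correlation direction
        self.kp += self.mu_p * e * e
        self.ki += self.mu_i * e * self.integral
        self.kd += self.mu_d * e * deriv
        self.prev_e = e
        return self.kp * e + self.ki * self.integral + self.kd * deriv
```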
Although Wiener filtering is the optimal tradeoff between inverse filtering and noise smoothing, when the blurring filter is singular the Wiener filter actually amplifies the noise. This suggests that a denoising step is needed to remove the amplified noise. Wavelet-based denoising provides a natural technique for this purpose.
In this paper a new image restoration scheme is proposed. The scheme contains two separate steps: Fourier-domain inverse filtering and wavelet-domain image denoising. The first stage is Wiener filtering of the input image; the filtered image is then passed to an adaptive-threshold wavelet denoising stage.
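The two-step scheme might be sketched as follows, assuming a known blur kernel, Fourier-domain Wiener deconvolution with a scalar noise-to-signal ratio, and a simple wavelet soft-thresholding step via PyWavelets standing in for the adaptive-threshold stage; parameter values are illustrative.

```python
import numpy as np
import pywt

def wiener_then_wavelet(blurred, h, K=0.01, wavelet="db4", thr=0.05):
    """Step 1: Fourier-domain Wiener deconvolution. Step 2: wavelet denoising."""
    H = np.fft.fft2(h, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G    # Wiener inverse filter
    restored = np.real(np.fft.ifft2(F_hat))
    coeffs = pywt.wavedec2(restored, wavelet, level=2)
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]                     # soft-threshold detail subbands only
    ]
    return pywt.waverec2(coeffs, wavelet)            # denoised restored image
```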
In this paper, a new variable selection method is presented for selecting essential variables from large datasets. The new model is a modified version of the Elastic Net model. The modified Elastic Net variable selection model is summarized in an algorithm. It is applied to the Leukemia dataset, which has 3051 variables (genes) and 72 samples. In practice, working with a dataset of this size is difficult because of its dimensionality. The modified model is compared with some standard variable selection methods; the modified Elastic Net model achieves the best performance, attaining perfect classification. All the calculations that have been done for this paper are in
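For illustration, a hedged sketch of Elastic-Net-based gene selection is given below, using the standard scikit-learn ElasticNet rather than the paper's modified variant; X is assumed to be the 72 x 3051 expression matrix and y the class labels.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def select_genes(X, y, alpha=0.1, l1_ratio=0.5):
    """Fit an Elastic Net and keep the genes with non-zero coefficients."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000)
    model.fit(X, y)
    selected = np.flatnonzero(model.coef_)   # indices of the retained variables
    return selected, model
```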
Objective: This study aimed to assess a newly suggested technique of Physical Growth Curve (PGC) charts in children under two years old, using a non-probability sample.
Methodology: A non-probability sample of 420 children under two years of age was selected from 12 Primary Health Care Centers in Diyala governorate during the period from 15th Nov. 2010 to 13th Mar. 2011. The suggested technique combines different measurements together in one growth curve chart, including at least weight, height, and head circumference.
Results: The results showed that different measurements can be combined together in one growth curve chart, including at least weight, height, and head circumference, and to overcome the problem of the norm
A three-stage learning algorithm for a deep multilayer perceptron (DMLP) with effective weight initialisation based on a sparse auto-encoder is proposed in this paper, which aims to overcome the difficulty of training deep neural networks with limited training data in a high-dimensional feature space. At the first stage, unsupervised learning is adopted using a sparse auto-encoder to obtain the initial weights of the feature extraction layers of the DMLP. At the second stage, error back-propagation is used to train the DMLP while keeping the weights obtained at the first stage fixed for its feature extraction layers. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Network structures an
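The three-stage procedure might look roughly as follows, assuming PyTorch, a single sparse auto-encoder layer used to initialise one feature-extraction layer, and an L1 activation penalty as the sparsity term; layer sizes, optimisers, and epoch counts are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

def three_stage_train(X, y, in_dim, hid_dim, n_classes, epochs=50, lam=1e-3):
    """X: float tensor (N, in_dim); y: long tensor (N,) of class indices."""
    # Stage 1: unsupervised sparse auto-encoder gives the feature-layer weights.
    enc, dec = nn.Linear(in_dim, hid_dim), nn.Linear(hid_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), 1e-3)
    for _ in range(epochs):
        h = torch.sigmoid(enc(X))
        loss = nn.functional.mse_loss(dec(h), X) + lam * h.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: train only the classifier, feature layer frozen at stage-1 weights.
    feat, clf = nn.Sequential(enc, nn.Sigmoid()), nn.Linear(hid_dim, n_classes)
    for p in feat.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(clf.parameters(), 1e-3)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(clf(feat(X)), y)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 3: unfreeze everything and fine-tune the whole DMLP end to end.
    for p in feat.parameters():
        p.requires_grad_(True)
    model = nn.Sequential(feat, clf)
    opt = torch.optim.Adam(model.parameters(), 1e-4)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(X), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```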