Virtual decomposition control (VDC) is an efficient tool for the full-dynamics-based control of complex robots. However, the regressor-based adaptive control that VDC uses to control every subsystem and to estimate the unknown parameters demands specific knowledge of the system physics. In this paper, we therefore reformulate the VDC equations for a serial-chain manipulator using the adaptive function approximation technique (FAT), which does not require such knowledge. The dynamic matrices of each subsystem's dynamic equation (e.g. link and joint) are approximated by orthogonal functions because of the small approximation errors they produce. The control law, the virtual stability of every subsystem, and the stability of the entire robotic system are proved in this work. The computational complexity of the FAT is then compared with that of the regressor-based approach. Despite the apparent advantage of the FAT in avoiding the regressor matrix, its computational complexity can hinder implementation because the dynamic matrices of the link subsystem are represented by two large sparse matrices. The FAT-based adaptive VDC therefore requires further work on the representation of the dynamic matrices of the target subsystem. Two case studies, a 2-R manipulator and a 6-DOF planar biped robot, are simulated in Matlab/Simulink for verification.
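As an illustration of the kind of approximation the FAT performs, the minimal sketch below (not the paper's code: the target function, basis size, and offline least-squares fit are illustrative assumptions) represents one unknown entry of a link's dynamic matrix as a weighted sum of Chebyshev orthogonal basis functions:

```python
# Hedged sketch: approximating one entry of a link's inertia matrix, treated
# here as an unknown scalar function f(q), with a truncated Chebyshev basis,
# in the spirit of the FAT representation f ≈ Wᵀ Z(q).
import numpy as np

def chebyshev_basis(x, n_terms):
    """Return [T_0(x), ..., T_{n-1}(x)] evaluated at x in [-1, 1]."""
    T = np.zeros(n_terms)
    T[0] = 1.0
    if n_terms > 1:
        T[1] = x
    for k in range(2, n_terms):
        T[k] = 2.0 * x * T[k - 1] - T[k - 2]   # recurrence T_k = 2x T_{k-1} - T_{k-2}
    return T

# Placeholder unknown matrix entry (illustrative, not from the paper).
f = lambda q: 2.0 + 0.5 * np.cos(q)

q_grid = np.linspace(-1.0, 1.0, 200)                      # normalized joint variable
Z = np.array([chebyshev_basis(q, 6) for q in q_grid])     # basis terms per sample
w_hat, *_ = np.linalg.lstsq(Z, f(q_grid), rcond=None)     # weight vector W

print("max approximation error:", np.max(np.abs(Z @ w_hat - f(q_grid))))
```

In the adaptive VDC itself the weights would be updated online by an adaptation law rather than fitted offline as in this sketch.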
Stereolithography (SLA) has become an essential photocuring 3D printing process for producing parts of complex shape from photosensitive resin exposed to UV light. Selecting the printing parameters that give good accuracy and surface quality is further complicated by the geometric complexity of the models. This work introduces a multiobjective optimization of SLA printing of 3D dental bridges based on simple CAD objects. The effect of the best combination of a low-cost resin 3D printer's machine parameter settings, namely normal exposure time, bottom exposure time and bottom layers, on reducing dimensional deviation and surface roughness was studied. A multiobjective optimization method was utilized, combining the Taguchi method
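For reference, the minimal sketch below computes the smaller-the-better signal-to-noise ratio used in a Taguchi analysis of a response such as dimensional deviation; the parameter settings and deviation values are hypothetical, not the study's data:

```python
# Hedged illustration: Taguchi "smaller-the-better" S/N ratios for hypothetical
# SLA runs described by (normal exposure time, bottom exposure time, bottom layers).
import numpy as np

runs = {
    (8, 60, 4):  [0.12, 0.10],   # measured dimensional deviations (mm), made up
    (8, 70, 6):  [0.09, 0.11],
    (10, 60, 6): [0.07, 0.08],
    (10, 70, 4): [0.10, 0.09],
}

def sn_smaller_is_better(y):
    """S/N = -10 log10(mean(y^2)); larger S/N means smaller deviation."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

for setting, deviations in runs.items():
    print(setting, round(sn_smaller_is_better(deviations), 2))
```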
Polarization manipulation elements operating at visible wavelengths are a critical component of quantum communication sub-systems, equivalent to their telecom-wavelength counterparts. The proposed method rotates the optic axis of the polarized input light by 45 degrees, thereby converting the fundamental transverse electric (TE0) mode into the fundamental transverse magnetic (TM0) mode. This paper outlines an integrated gallium phosphide waveguide polarization rotator that relies on a horizontal slot rotated by 45 degrees at a wavelength of 700 nm; this ultimately gives rise to mode hybridization in the waveguide. The simulation results demonstrate a polarization conversion
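The truncated final sentence presumably reports a conversion figure; a common metric for such rotators (an assumption here, not stated explicitly in the abstract) is the polarization conversion efficiency,
$$\mathrm{PCE} = \frac{P_{\mathrm{TM_0,\,out}}}{P_{\mathrm{TM_0,\,out}} + P_{\mathrm{TE_0,\,out}}},$$
i.e. the fraction of the output power carried by the TM0 mode.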
A simple setup for a random number generator is proposed. Random number generation is based on the shot-noise fluctuations of a p-i-n photodiode. These shot-noise fluctuations form a stationary random process whose statistical properties reflect the Poisson statistics associated with photon streams; shot noise originates in the quantum nature of light and is related to vacuum fluctuations. Two photodiodes were used and their shot-noise fluctuations were subtracted. The difference was applied to a comparator to obtain the random sequence.
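A minimal simulation sketch of this scheme is given below (the photon counts, sampling, and Poisson model are assumptions for illustration, not the experimental parameters):

```python
# Hedged simulation sketch: two photodiode photocurrents modeled as Poisson
# photon counts, balanced subtraction to cancel the common mean, and a
# comparator (sign test) that turns the residual shot noise into random bits.
import numpy as np

rng = np.random.default_rng()
mean_photons = 1e4                         # assumed mean photons per sampling window
n_samples = 100_000

pd1 = rng.poisson(mean_photons, n_samples)     # shot noise of photodiode 1
pd2 = rng.poisson(mean_photons, n_samples)     # shot noise of photodiode 2

difference = pd1.astype(np.int64) - pd2        # balanced detection: mean cancels
nonzero = difference != 0                      # discard ties to avoid bias
bits = (difference[nonzero] > 0).astype(np.uint8)

print("fraction of ones:", bits.mean())        # should be close to 0.5
```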
Within the framework of big data, energy issues are highly significant. Despite their significance, theoretical studies focusing primarily on the issue of energy in big data analytics with computational intelligent algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics performed with computational intelligent algorithms, since this is critical for exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligent algorithms in big data analytics. This work highlights that big data analytics using computational intelligent algorithms generates a very high amount
The futuristic age requires progress beyond handwork and even sub-machine dependency, and the Brain-Computer Interface (BCI) provides one such advance. As the article suggests, a BCI is a pathway between the signals created by a thinking human brain and a computer, which can translate the transmitted signal into action. BCI-processed brain activity is typically measured using EEG. Throughout this article, we intend to provide an accessible and up-to-date review of EEG-based BCI, concentrating on its technical aspects. In particular, we present the essential neuroscience background that describes how to build an EEG-based BCI, including how to evaluate which signal processing, software, and hardware techniques to use. Individu
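As a concrete, heavily simplified illustration of such a signal chain, the sketch below builds a toy EEG classifier on synthetic data; the sampling rate, frequency band, and feature choice are assumptions for illustration, not recommendations from the article:

```python
# Hedged sketch of a minimal EEG-based BCI pipeline on synthetic data:
# band-pass filter, log band-power features, and a linear classifier.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250                                    # assumed sampling rate, Hz
rng = np.random.default_rng(0)

def bandpower_features(trials, low=8.0, high=30.0):
    """Band-pass each trial (trials x channels x samples), return log band power."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(np.var(filtered, axis=-1))    # one feature per channel

# Synthetic two-class data: 40 trials, 8 channels, 2 s each.
X_raw = rng.standard_normal((40, 8, 2 * fs))
y = np.repeat([0, 1], 20)
X_raw[y == 1, 0] *= 1.5                     # class 1 has more power on channel 0

X = bandpower_features(X_raw)
clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])    # train on even trials
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))  # test on odd trials
```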
A Multiple System Biometric System Based on ECG Data
This work aims to develop a secure lightweight cipher algorithm for constrained devices. Secure communication among constrained devices is a critical issue during data transmission from client to server devices. Lightweight cipher algorithms are a security solution for constrained devices because they require only low-cost computational functions and small memory. However, most lightweight algorithms suffer from a trade-off between complexity and speed when producing a robust cipher. The PRESENT cipher has been successfully adopted as a lightweight cryptography algorithm; it surpasses other ciphers in that its computational processing requires only low-complexity operations. The mathematical model of
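For context, the sketch below illustrates the two data-path layers of one PRESENT round (the S-box substitution and the bit permutation); the key schedule and round-key addition are omitted, so this is an outline of the standard round structure rather than a usable cipher, and it is not the variant developed in this work:

```python
# Hedged sketch of PRESENT's sBoxLayer and pLayer on a 64-bit state.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    """Apply the 4-bit S-box to each of the 16 nibbles of the 64-bit state."""
    out = 0
    for i in range(16):
        nibble = (state >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

def p_layer(state):
    """Bit permutation: bit i moves to position (16 * i) mod 63; bit 63 stays."""
    out = 0
    for i in range(64):
        bit = (state >> i) & 1
        j = 63 if i == 63 else (16 * i) % 63
        out |= bit << j
    return out

state = 0x0123456789ABCDEF
print(hex(p_layer(sbox_layer(state))))   # one round without the key addition
```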