We consider the problem of calibrating the range measurements of a Light Detection and Ranging (lidar) sensor, accounting for sensor nonlinearity and heteroskedastic, range-dependent measurement error. We solve the calibration problem without additional hardware, instead exploiting assumptions on the environment surrounding the sensor during the calibration procedure. More specifically, we assume the sensor is placed in an environment so that its measurements lie in a 2D plane parallel to the ground and originate from fixed objects that extend orthogonally to the ground, so that they may be treated as fixed points in an inertial reference frame. Moreover, we exploit the intuition that moving the distance sensor within this environment should leave the relative distances and angles among these fixed points unchanged. We thus cast the sensor calibration problem as making the measurements comply with the assumption that "fixed features shall have fixed relative distances and angles". The resulting calibration procedure therefore requires neither additional (typically expensive) equipment nor special hardware. As for the proposed estimation strategies, from a mathematical perspective we consider models that lead to analytically solvable equations, so as to enable deployment in embedded systems. Besides proposing the estimators, we analyze their statistical performance both in simulation and in field tests. We report the dependency of the MSE of the calibration procedure on the sensor noise levels, and observe that in field tests the approach can yield a tenfold improvement in the accuracy of the raw measurements.
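The "fixed features shall have fixed relative distances" principle can be illustrated numerically. The sketch below is not the paper's estimator: it assumes a purely additive range offset (a simplification of the nonlinear, heteroskedastic model above), invented landmark and pose coordinates, and a plain grid search. The offset is recovered by minimizing, across sensor poses, the spread of the pairwise distances between the reconstructed landmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed landmarks (vertical features seen in the 2D scan plane)
# and hypothetical sensor poses used during calibration.
landmarks = np.array([[4.0, 0.0], [0.0, 3.0], [-2.0, -1.0], [3.0, 2.5]])
poses = np.array([[0.0, 0.0], [1.0, -0.5], [-0.5, 1.0]])

TRUE_BIAS = 0.30  # additive range offset the procedure should estimate


def scan(pose):
    """Range/bearing of each landmark from a pose, with biased, noisy ranges."""
    d = landmarks - pose
    r = np.linalg.norm(d, axis=1) + TRUE_BIAS + rng.normal(0, 0.01, len(landmarks))
    theta = np.arctan2(d[:, 1], d[:, 0])
    return r, theta


scans = [scan(p) for p in poses]


def pairwise(r, theta):
    """Pairwise landmark distances implied by (range, bearing) measurements."""
    pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    i, j = np.triu_indices(len(pts), k=1)
    return np.linalg.norm(pts[i] - pts[j], axis=1)


def cost(b):
    """Spread of each pairwise distance across poses after removing offset b."""
    d = np.array([pairwise(r - b, th) for r, th in scans])
    return d.var(axis=0).sum()


grid = np.linspace(-1.0, 1.0, 2001)
b_hat = grid[np.argmin([cost(b) for b in grid])]
print(b_hat)  # close to TRUE_BIAS
```

With the correct offset removed, the reconstructed landmark geometry is the same from every pose, so the cross-pose variance of the pairwise distances collapses to the noise floor; a wrong offset shifts each landmark radially by a pose-dependent direction, which the cost penalizes.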
Rapid worldwide urbanization and drastic population growth have increased the demand for new road construction, which consumes a substantial amount of natural resources such as aggregates. The use of recycled concrete aggregate could be one possible way to offset the aggregate shortage and reduce environmental pollution. This paper reports an experimental study of unbound granular material using recycled concrete aggregate for pavement subbase construction. Five percentages of recycled concrete aggregate, obtained from two different sources with originally designed compressive strengths of 20–30 MPa and 31–40 MPa, at three particle size levels, i.e., coarse, fine, and extra fine, were tested
This study focuses in detail on the effect of steel reinforcement (tensile ratio, compression ratio, bar size, and joint angle shape) on the strength of reinforced concrete, namely the compressive strength Fc', and searches for the most accurate detailing of concrete corner joints, their behavior, and the corner resistance of reinforced concrete joints. The paper is compared with previous studies, especially with respect to the studied properties. The conclusions are that these parameters have a clear and specific effect on the behavior and resistance of reinforced concrete corner joints under negative moments and the resulting stress conditions. The types of defects that can
This work studied facilitating the transportation of Sharqi Baghdad heavy crude oil, characterized by high viscosity (51.6 cSt at 40 °C), low API gravity (18.8), and high asphaltene content (7.1 wt.%), by reducing its viscosity through breaking down asphaltene agglomerates using different types of hydrocarbon and oxygenated polar solvents such as toluene, methanol, mixed xylenes, and reformate. The best results were obtained with methanol, which has a high efficiency in reducing the viscosity of the crude oil, to 21.1 cSt at 40 °C. Toluene, xylenes, and reformate decreased the viscosity to 25.3, 27.5, and 28.4 cSt at 40 °C, respectively. The asphaltene content decreased to 4.2 wt.% using toluene at 110 °C. The best improvement in the API gravity of the heavy crude oil
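The reported viscosities can be summarized as a degree of viscosity reduction (DVR). The snippet below applies the standard percentage-reduction formula to the values quoted above; this is an illustrative recomputation, not necessarily the exact metric used by the authors.

```python
# Degree of viscosity reduction: DVR% = (mu_0 - mu) / mu_0 * 100,
# using the kinematic viscosities (cSt at 40 degrees C) reported above.
mu_0 = 51.6  # untreated Sharqi Baghdad heavy crude oil
solvents = {"methanol": 21.1, "toluene": 25.3, "xylenes": 27.5, "reformate": 28.4}

for name, mu in solvents.items():
    dvr = (mu_0 - mu) / mu_0 * 100
    print(f"{name}: DVR = {dvr:.1f}%")  # methanol gives the largest reduction
```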
Electrical Discharge Machining (EDM) is a widespread nontraditional machining (NTM) process for manufacturing parts with complicated geometry, or from very hard metals, that are difficult to machine by traditional operations. EDM is a material removal (MR) process based on electrical discharge erosion. This paper discusses the optimal EDM parameters for high-speed steel (HSS) AISI M2 as a workpiece, using copper and brass as electrodes. The input parameters used in the experimental work are current (10, 24 and 42 A), pulse-on time (100, 150 and 200 µs), and pulse-off time (4, 12 and 25 µs), which affect the material removal rate (MRR), electrode wear rate (EWR) and wear ratio (WR).
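The response variables MRR, EWR, and WR are commonly computed from weight loss. A minimal sketch of that convention follows; the mass losses, machining time, and densities (roughly 8.16 g/cm³ for HSS and 8.96 g/cm³ for copper) are illustrative assumptions, not values from this study.

```python
def material_removal_rate(mass_loss_g, density_g_mm3, time_min):
    """MRR (mm^3/min) = mass removed / (density * machining time)."""
    return mass_loss_g / (density_g_mm3 * time_min)

# Hypothetical weighing results for one EDM run (not measured values):
mrr = material_removal_rate(0.50, 8.16e-3, 10.0)  # HSS AISI M2 workpiece
ewr = material_removal_rate(0.05, 8.96e-3, 10.0)  # copper electrode
wr = ewr / mrr                                     # wear ratio (dimensionless)
print(mrr, ewr, wr)
```

A low wear ratio is desirable: it means the electrode erodes slowly relative to the workpiece.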
In this work, Whittaker wave functions were used to study the nuclear density distributions and elastic electron scattering charge form factors of proton-rich nuclei and their corresponding stable nuclei (10,8B, 13,9C, 14,12N and 19,17F). The parameters of the Whittaker basis were fixed so as to reproduce the experimental values of the available size radii. The Whittaker basis was connected to the harmonic-oscillator basis through a boundary condition at the match point. The nuclear shell model with pure configurations was adopted for all studied nuclei to compute the aforementioned quantities, except 10
This study uses green (biosynthesis-based) technology, which is attractive for its low cost and low time and energy requirements, to prepare V2O5 nanoparticles (V2O5NPs) from vanadium sulfate (VSO4.H2O) using an aqueous extract of Punica granatum at a concentration of 0.1 M in a basic medium (pH 8–12). The V2O5NPs were characterized by several techniques: FT-IR; UV-visible spectroscopy, giving an energy gap Eg = 3.734 eV; X-ray diffraction (XRD), from which the crystallite size was calculated using the Debye-Scherrer equation and found to be 34.39 nm; Scanning Electron Microscopy (SEM); and Transmission Electron Microscopy (TEM). The size, structure, and composition of the synthetic V2O5
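The Debye-Scherrer estimate mentioned above follows D = Kλ/(β cos θ), where β is the peak's full width at half maximum in radians and θ the Bragg angle. A minimal sketch, assuming Cu Kα radiation (λ = 0.15406 nm), shape factor K = 0.9, and an illustrative peak; the actual FWHM and peak position of the measured pattern are not given in the text above.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), beta in radians."""
    beta = math.radians(fwhm_deg)            # FWHM of the diffraction peak
    theta = math.radians(two_theta_deg / 2)  # Bragg angle (half of 2-theta)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers only (not the measured values):
print(scherrer_size(fwhm_deg=0.25, two_theta_deg=26.0))  # ~32.6 nm
```

Note the inverse dependence on peak width: sharper peaks indicate larger crystallites.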
The consumption of dried bananas has increased because they contain essential nutrients. To preserve bananas for a longer period, a drying process is carried out, which turns them into a light snack that does not spoil quickly. Machine learning algorithms, in turn, can be used to predict the sweetness of dried bananas. This article aimed to study the effect of different drying times (6, 8, and 10 hours) in an air dryer on some physical and chemical characteristics of bananas, including CIE-L*a*b* values, water content, carbohydrates, and sweetness, and also to predict the sweetness of dried bananas from the CIE-L*a*b* values using the machine learning algorithms RF, SVM, LDA, KNN, and CART. The results showed that increasing the drying
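The prediction step can be illustrated with the simplest of the listed algorithms, k-nearest neighbours, in a minimal NumPy form. The CIE-L*a*b* readings and sweetness scores below are invented placeholders, not the study's measurements.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """k-NN regression: average the targets of the k closest training points."""
    d = np.linalg.norm(X_train[None, :, :] - X_query[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]  # indices of k nearest neighbours
    return y_train[nearest].mean(axis=1)

# Hypothetical (L*, a*, b*) readings and associated sweetness scores:
X = np.array([[62.0, 8.5, 21.0],
              [55.0, 10.2, 24.5],
              [48.0, 12.8, 27.0],
              [41.0, 14.5, 29.5]])
y = np.array([14.0, 18.5, 22.0, 25.5])

print(knn_predict(X, y, np.array([[50.0, 12.0, 26.0]]), k=2))  # -> [20.25]
```

Darker, redder samples (lower L*, higher a*) map to higher sweetness in this toy data, mimicking the colour changes expected with longer drying.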
The successful implementation of deep learning networks opens up possibilities for various applications in viticulture, including disease detection, plant health monitoring, and grapevine variety identification. With progressive advancements in deep learning, further refinements of the models and datasets can be expected, potentially leading to even more accurate and efficient classification systems for grapevine leaves and beyond. Overall, this research provides valuable insights into the potential of deep learning for agricultural applications and paves the way for future studies in this domain. This work employs a convolutional neural network (CNN)-based architecture to perform grapevine leaf image classification
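A CNN pipeline rests on a small set of building blocks. The NumPy sketch below shows one forward step (valid convolution, ReLU, 2x2 max-pooling) for a single-channel image; it illustrates the mechanics only and is independent of the actual architecture used in this work.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max-pooling, discarding any odd trailing row/column."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# One forward step on a toy 4x4 "leaf image" with a 3x3 averaging-like kernel:
feature_map = relu(conv2d(np.ones((4, 4)), np.ones((3, 3))))
print(feature_map)  # every valid window of ones sums to 9
```

A full classifier stacks several such layers, flattens the final feature maps, and feeds them to a dense softmax head.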