We consider the problem of calibrating the range measurements of a Light Detection and Ranging (lidar) sensor affected by sensor nonlinearity and heteroskedastic, range-dependent measurement error. We solve the calibration problem without additional hardware, relying instead on assumptions about the environment surrounding the sensor during the calibration procedure. More specifically, we assume that the sensor is placed in an environment where its measurements lie in a 2D plane parallel to the ground, and that the measurements come from fixed objects that extend orthogonally to the ground, so that they may be treated as fixed points in an inertial reference frame. Moreover, we exploit the intuition that, as the distance sensor moves within this environment, its measurements should be such that the relative distances and angles among these fixed points remain the same. We thus cast the sensor calibration problem as making its measurements comply with the requirement that "fixed features shall have fixed relative distances and angles". The resulting calibration procedure therefore needs neither additional (typically expensive) equipment nor special hardware. As for the proposed estimation strategies, from a mathematical perspective we consider models that lead to analytically solvable equations, so as to enable deployment on embedded systems. Besides proposing the estimators, we analyze their statistical performance both in simulation and in field tests. We report how the MSE performance of the calibration procedure depends on the sensor noise level, and observe that in field tests the approach can yield a tenfold improvement in the accuracy of the raw measurements.
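As a hedged formalization (the notation is ours, not taken from the abstract), the "fixed relative distances" idea can be written as a least-squares criterion: with \(\theta\) the parameters of the range-correction model and \(p_i^{(k)}(\theta)\) the corrected Cartesian position of fixed feature \(i\) reconstructed from the scan taken at sensor pose \(k\), the calibration may be posed as

\[
\hat{\theta} \;=\; \arg\min_{\theta}\; \sum_{i<j}\sum_{k=1}^{K}\Bigl(\bigl\lVert p_i^{(k)}(\theta)-p_j^{(k)}(\theta)\bigr\rVert-\bar d_{ij}(\theta)\Bigr)^{2},
\qquad
\bar d_{ij}(\theta)=\frac{1}{K}\sum_{k=1}^{K}\bigl\lVert p_i^{(k)}(\theta)-p_j^{(k)}(\theta)\bigr\rVert,
\]

so that the corrected measurements keep every pair of fixed features at a constant relative distance regardless of where the sensor is placed; an analogous term can be added for the relative angles.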
In this paper, two main stages for image classification are presented. The training stage consists of collecting images of interest and applying the bag of visual words (BOVW) model to them (feature extraction and description using SIFT, followed by vocabulary generation), while the testing stage classifies a new unlabeled image by nearest-neighbor classification on its feature descriptor. The supervised bag of visual words gives good results, shown clearly in the experimental part, where unlabeled images are classified correctly even though only a small number of images is used in the training process.
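A minimal sketch of such a pipeline is given below, assuming OpenCV's SIFT, a k-means vocabulary, and a 1-nearest-neighbor classifier from scikit-learn; the file names, labels, and vocabulary size are illustrative placeholders, not values from the paper.

```python
# Sketch of a BOVW classifier: SIFT features -> k-means vocabulary -> histograms -> 1-NN.
# Paths, labels and the vocabulary size K are placeholders, not details from the paper.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

K = 100                                  # assumed vocabulary size
sift = cv2.SIFT_create()

def sift_descriptors(path):
    """Return the SIFT descriptors of a grayscale image as an (N x 128) array."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bovw_histogram(desc, vocabulary):
    """Quantize descriptors against the vocabulary and build a normalized histogram."""
    if len(desc) == 0:
        return np.zeros(K)
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / hist.sum()

# training stage: extract descriptors, build the vocabulary, encode the training images
train_paths, train_labels = ["img0.jpg", "img1.jpg"], [0, 1]    # replace with real data
train_desc = [sift_descriptors(p) for p in train_paths]
vocabulary = KMeans(n_clusters=K, n_init=10).fit(np.vstack(train_desc))
X_train = np.array([bovw_histogram(d, vocabulary) for d in train_desc])
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)

# testing stage: encode the unlabeled image and classify it by nearest neighbor
X_test = bovw_histogram(sift_descriptors("query.jpg"), vocabulary).reshape(1, -1)
print(clf.predict(X_test))
```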
This study presents a phytotoxicity experiment with kerosene as a model total petroleum hydrocarbon (TPH) pollutant at different concentrations (1% and 6%), aeration rates (0 and 1 L/min), and retention times (7, 14, 21, 28 and 42 days), carried out in a subsurface flow (SSF) system in a barley wetland. The greatest elimination, 95.7%, was recorded at the 1% kerosene level and an aeration rate of 1 L/min after 42 days of exposure, whereas it was 47% in the control test without plants. Furthermore, the hydrocarbon elimination efficiencies from the soil ranged between 34.155% and 95.7% for all TPH (kerosene) concentrations at aeration rates of 0 and 1 L/min. The Barley c
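For reference, the elimination (removal) efficiencies quoted above are conventionally computed from the initial and residual TPH concentrations; the definition below is the standard one and is not stated explicitly in the abstract:

\[
\text{Removal}(\%) \;=\; \frac{C_0 - C_t}{C_0}\times 100,
\]

where \(C_0\) is the initial kerosene (TPH) concentration in the soil and \(C_t\) the concentration remaining after retention time \(t\).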
The segmentation of aerial images using different clustering techniques offers valuable insights for interpreting and analyzing such images. By partitioning the images into meaningful regions, clustering techniques help identify and differentiate objects and areas of interest, facilitating applications that include urban planning, environmental monitoring, and disaster management. This paper aims to segment color aerial images in order to organize and understand the visual information they contain for various applications and research purposes. It also examines and compares the basic workings of three popular clustering algorithms: K-Medoids, Fuzzy C-Means (FCM), and Gaussian Mixture Model (GMM).
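As a minimal sketch of how such a clustering-based segmentation can be run, the snippet below segments an RGB aerial image with a Gaussian Mixture Model from scikit-learn; the image path and the number of clusters are assumptions, and the K-Medoids and FCM variants would require extra packages (e.g. scikit-learn-extra, scikit-fuzzy) plugged into the same structure.

```python
# Hedged sketch: color-image segmentation by clustering pixel values with a GMM.
# The image file and cluster count are assumptions, not values from the paper.
import numpy as np
from PIL import Image
from sklearn.mixture import GaussianMixture

N_CLUSTERS = 5                                   # assumed number of regions

img = np.asarray(Image.open("aerial.png").convert("RGB"), dtype=float)
h, w, _ = img.shape
pixels = img.reshape(-1, 3) / 255.0              # each pixel becomes one 3-D sample

gmm = GaussianMixture(n_components=N_CLUSTERS, covariance_type="full", random_state=0)
labels = gmm.fit_predict(pixels)                 # fit the mixture, hard-assign each pixel

segmented = labels.reshape(h, w)                 # label map: one region id per pixel
mean_colors = (gmm.means_ * 255).astype(np.uint8)
Image.fromarray(mean_colors[segmented]).save("aerial_segmented.png")  # paint regions with cluster means
```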
Several problems must be solved in image compression to make the process practical and more efficient. Much work has been done in the field of lossy image compression based on the wavelet transform and the Discrete Cosine Transform (DCT). In this paper, an efficient image compression scheme is proposed, based on a combined transform coding scheme. It consists of the following steps: 1) a bi-orthogonal (tap 9/7) wavelet transform to split the image data into sub-bands, 2) DCT to de-correlate the data, 3) scalar quantization of the combined transform stage's output, followed by mapping to positive values, and 4) LZW encoding to produce the compressed data. The peak signal-to-noise ratio (PSNR), compression ratio (CR), and compression gain (CG) measures were used to evaluate the performance of the proposed scheme.
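A hedged sketch of the four-step pipeline is shown below, assuming PyWavelets and SciPy; the 'bior4.4' filter is used as a 9/7-tap biorthogonal stand-in, the quantization step is arbitrary, and zlib replaces LZW (the standard library has no LZW codec), so it illustrates the structure rather than reproduces the paper's coder.

```python
# Sketch of the hybrid pipeline: wavelet -> DCT -> scalar quantization -> entropy coding.
# 'bior4.4' stands in for the 9/7-tap wavelet and zlib stands in for LZW (assumptions).
import numpy as np
import pywt
import zlib
from scipy.fft import dctn

def compress(image, q_step=8.0, wavelet="bior4.4"):
    """Return compressed bytes plus the side information needed for decoding."""
    # 1) one-level 2-D wavelet decomposition into sub-bands
    ll, (lh, hl, hh) = pywt.dwt2(image.astype(float), wavelet)
    subbands = [ll, lh, hl, hh]
    # 2) DCT on each sub-band to further de-correlate the coefficients
    coeffs = [dctn(b, norm="ortho") for b in subbands]
    # 3) scalar quantization, then shift to non-negative integers
    quantized = [np.round(c / q_step).astype(np.int32) for c in coeffs]
    offset = min(int(q.min()) for q in quantized)
    mapped = [q - offset for q in quantized]
    # 4) entropy coding (zlib here as a stand-in for LZW)
    payload = zlib.compress(np.concatenate([m.ravel() for m in mapped]).tobytes())
    shapes = [m.shape for m in mapped]
    return payload, (offset, shapes, q_step, wavelet)

# usage with a synthetic 8-bit test image
image = np.random.randint(0, 256, (128, 128))
payload, side_info = compress(image)
print(len(payload), "bytes for", image.size, "pixels")
```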
Different solvents (light naphtha, n-heptane, and n-hexane) are used to treat Iraqi atmospheric oil residue by the deasphalting process. Oil residue from the Al-Dura refinery with a specific gravity of 0.9705, API gravity of 14.9, and 0.5 wt.% sulfur content was used. Deasphalted oil (DAO) was examined on a laboratory scale using these solvents under different operating conditions (temperature, solvent concentration, solvent-to-oil ratio, and duration time). This study investigates the effects of these parameters on asphaltene yield. The results show that an increase in temperature for all solvents increases the extracted asphaltene yield. The highest reduction in asphaltene content is obtained with the hexane solvent at operating conditions of (90 °C, 4/1
Simulation experiments are a means of problem solving in many fields; simulation is the process of designing a model of a real system in order to follow it and identify its behavior through models and formulas implemented in software and repeated over a number of iterations. The aim of this study is to build a model that deals with behavior suffering from heteroskedasticity by studying the APGARCH and NAGARCH models using Gaussian and non-Gaussian distributions for different sample sizes (500, 1000, 1500, 2000) through the stages of time series analysis (identification, estimation, diagnostic checking, and prediction). The data was generated using the estimations of the parameters resulting f
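For orientation, the conditional-variance equations below are the forms commonly used for APGARCH (Ding, Granger and Engle) and NAGARCH (Engle and Ng); they are provided as background and are not copied from the paper:

\[
\text{APGARCH:}\quad \sigma_t^{\delta} \;=\; \omega + \sum_{i=1}^{p}\alpha_i\bigl(|\varepsilon_{t-i}| - \gamma_i\,\varepsilon_{t-i}\bigr)^{\delta} + \sum_{j=1}^{q}\beta_j\,\sigma_{t-j}^{\delta},
\]
\[
\text{NAGARCH:}\quad \sigma_t^{2} \;=\; \omega + \alpha\bigl(\varepsilon_{t-1} - \theta\,\sigma_{t-1}\bigr)^{2} + \beta\,\sigma_{t-1}^{2},
\]

where \(\varepsilon_t = \sigma_t z_t\) and \(z_t\) is drawn from the chosen (Gaussian or non-Gaussian) innovation distribution.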
Gypsum plaster is an important building material, in part because of the availability of its raw materials. In this research, the effect of various additives on the properties of plaster was studied: polyvinyl acetate, furfural, and fumed silica at different rates of addition, and two types of fibers, carbon fiber and polypropylene fiber, added to the plaster at different volumetric rates. Analysis of the results showed that adding furfural to the plaster at 2.5% is the optimum addition ratio, as it improved the flexural strength by 3.18%.
When using polyvinyl acetate, an additive ratio of 2% was found to be the optimum addition ratio to the plaster, because it improved the value of the flexural strength
The electrocardiogram (ECG) is an important physiological signal for cardiac disease diagnosis. Modern electrocardiogram monitoring devices are increasingly used and generate vast amounts of data requiring huge storage capacity. In order to decrease storage costs and make ECG signals suitable for transmission through common communication channels, the ECG data volume must be reduced, so an effective data compression method is required. This paper presents an efficient technique for the compression of ECG signals, in which different transforms are combined. At first, the 1-D ECG data is segmented and aligned into a 2-D data array, then a 2-D mixed transform is applied to compress the resulting 2-D data.
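As a hedged sketch of the 1-D to 2-D rearrangement step, the snippet below cuts an ECG record into equal-length segments, stacks them into a 2-D array, and applies a thresholded 2-D DCT; the segment length, the threshold, and the use of a plain 2-D DCT in place of the paper's mixed transform are all assumptions.

```python
# Sketch: reshape a 1-D ECG signal into a 2-D array of segments and compress it
# with a thresholded 2-D DCT (a stand-in for the mixed transform in the paper).
import numpy as np
from scipy.fft import dctn, idctn

def ecg_to_2d(signal, seg_len=256):
    """Segment and align a 1-D ECG record into a (num_segments x seg_len) array."""
    n_seg = len(signal) // seg_len
    return np.reshape(signal[:n_seg * seg_len], (n_seg, seg_len))

def compress_2d(arr, keep_ratio=0.1):
    """Apply a 2-D DCT and keep only the largest coefficients (sparse representation)."""
    c = dctn(arr, norm="ortho")
    threshold = np.quantile(np.abs(c), 1.0 - keep_ratio)
    c[np.abs(c) < threshold] = 0.0               # discard small coefficients
    return c

# usage with a synthetic signal; a real record (e.g. MIT-BIH) would replace it
ecg = np.sin(np.linspace(0, 200 * np.pi, 10_000)) + 0.05 * np.random.randn(10_000)
arr = ecg_to_2d(ecg)
coeffs = compress_2d(arr)
reconstructed = idctn(coeffs, norm="ortho")
prd = np.linalg.norm(arr - reconstructed) / np.linalg.norm(arr) * 100   # distortion in %
print(f"nonzero coefficients: {np.count_nonzero(coeffs)}, PRD: {prd:.2f}%")
```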
In this paper, a decoder for a binary BCH code is implemented on a PIC microcontroller for code length n = 127 bits with multiple-error-correction capability; results are presented for correcting up to 13 errors. The Berlekamp-Massey decoding algorithm was chosen for its efficiency. The PIC18F45K22 microcontroller was chosen for the implementation and was programmed in assembly language to achieve the highest performance. This makes the BCH decoder implementable as a low-cost module that can be used as part of larger systems. The performance evaluation is presented in terms of the total number of instructions and the bit rate.
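For orientation, the sketch below is a compact Python reference of the decoding flow the abstract names (syndrome computation, Berlekamp-Massey, Chien search) over GF(2^7); it is not the assembly implementation from the paper, and the primitive polynomial and the value t = 13 are assumptions chosen to match the stated n = 127 and error count.

```python
# Hedged reference sketch of binary BCH(127) decoding: syndromes -> Berlekamp-Massey
# -> Chien search. Primitive polynomial and t = 13 are assumptions, not from the paper.
T = 13                                   # assumed error-correcting capability
PRIM = 0b10000011                        # x^7 + x + 1, a primitive polynomial of GF(2^7)
EXP, LOG = [0] * 127, [0] * 128
x = 1
for i in range(127):                     # build antilog / log tables for GF(2^7)
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x80:
        x ^= PRIM

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 127]

def gf_inv(a):
    return EXP[(127 - LOG[a]) % 127]

def syndromes(rbits):
    """S_i = r(alpha^i) for i = 1..2t; rbits is the received word as 127 bits."""
    syn = []
    for i in range(1, 2 * T + 1):
        s = 0
        for j, bit in enumerate(rbits):
            if bit:
                s ^= EXP[(i * j) % 127]
        syn.append(s)
    return syn

def berlekamp_massey(syn):
    """Massey's iteration; returns the error-locator polynomial, constant term first."""
    C, B = [1] + [0] * (2 * T), [1] + [0] * (2 * T)
    L, m, b = 0, 1, 1
    for n in range(2 * T):
        d = syn[n]                       # discrepancy at step n
        for i in range(1, L + 1):
            d ^= gf_mul(C[i], syn[n - i])
        if d == 0:
            m += 1
            continue
        prev, coef = C[:], gf_mul(d, gf_inv(b))
        for i in range(2 * T + 1 - m):   # C(x) <- C(x) - (d/b) * x^m * B(x)
            C[i + m] ^= gf_mul(coef, B[i])
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, prev, d, 1
        else:
            m += 1
    return C[:L + 1]

def chien_search(locator):
    """Return error positions i such that locator(alpha^{-i}) = 0."""
    positions = []
    for i in range(127):
        v = 0
        for k, c in enumerate(locator):
            if c:
                v ^= gf_mul(c, EXP[((127 - i) % 127) * k % 127])
        if v == 0:
            positions.append(i)
    return positions

def decode(rbits):
    """Flip the bits flagged as erroneous (a full decoder would also detect failure)."""
    syn = syndromes(rbits)
    if any(syn):
        for pos in chien_search(berlekamp_massey(syn)):
            rbits[pos] ^= 1
    return rbits

# self-check: the all-zero word is a codeword, so flipping a few bits and decoding
# should recover it
word = [0] * 127
for pos in (3, 40, 97, 110, 126):
    word[pos] ^= 1
assert decode(word) == [0] * 127
```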