We consider the problem of calibrating the range measurements of a Light Detection and Ranging (lidar) sensor, accounting for the sensor nonlinearity and for heteroskedastic, range-dependent measurement error. We solve the calibration problem without additional hardware, exploiting instead assumptions on the environment surrounding the sensor during the calibration procedure. More specifically, we assume that the sensor is placed so that its measurements lie in a 2D plane parallel to the ground and come from fixed objects that extend orthogonally to the ground, so that these objects may be treated as fixed points in an inertial reference frame. We then exploit the intuition that, as the distance sensor moves within this environment, the relative distances and angles among these fixed points should remain the same, and we cast the calibration problem as making the measurements comply with the requirement that “fixed features shall have fixed relative distances and angles”. The resulting calibration procedure therefore requires neither additional (typically expensive) equipment nor special hardware. As for the proposed estimation strategies, from a mathematical perspective we consider models that lead to analytically solvable equations, so as to enable deployment on embedded systems. Besides proposing the estimators, we analyze their statistical performance both in simulation and in field tests. We report how the MSE performance of the calibration procedure depends on the sensor noise levels, and observe that in field tests the approach can lead to a tenfold improvement in the accuracy of the raw measurements.
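As a rough illustration of this “fixed relative distances” principle (not the estimators derived in the paper, which also handle the gain nonlinearity and the heteroskedastic noise model analytically), the following Python sketch calibrates a single range bias by minimizing the spread, across sensor poses, of the pairwise distances between fixed landmarks; all names and values are illustrative.

import numpy as np
from scipy.optimize import minimize_scalar

def pair_dists(ranges, bearings):
    """Pairwise Euclidean distances between the detected fixed points (sensor frame)."""
    pts = np.column_stack([ranges * np.cos(bearings), ranges * np.sin(bearings)])
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def consistency_cost(offset, scans):
    """Spread, across scans, of each landmark-pair distance after bias correction.
    Fixed features should keep fixed relative distances once the bias is removed."""
    d = np.stack([pair_dists(r + offset, th) for r, th in scans])
    return np.var(d, axis=0).sum()

# Synthetic data: fixed landmarks observed from several sensor positions,
# with an (assumed) constant range bias and range-dependent noise.
rng = np.random.default_rng(0)
landmarks = rng.uniform(-5.0, 5.0, size=(6, 2))
true_bias = -0.35
scans = []
for _ in range(12):
    pose = rng.uniform(-1.0, 1.0, size=2)
    vec = landmarks - pose
    r = np.linalg.norm(vec, axis=1)
    th = np.arctan2(vec[:, 1], vec[:, 0])
    r_meas = r + true_bias + rng.normal(0.0, 0.01 * r)   # heteroskedastic noise
    scans.append((r_meas, th))

res = minimize_scalar(consistency_cost, bounds=(-1.0, 1.0), args=(scans,), method="bounded")
print(f"estimated bias correction: {res.x:+.3f} (true: {-true_bias:+.3f})")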
In this research, several estimators of the hazard function are introduced. They are based on a nonparametric method, namely kernel estimation for censored data with varying bandwidths and boundary kernels. Two types of bandwidth are used: local and global. Moreover, four boundary kernels are used, namely the Rectangle, Epanechnikov, Biquadratic and Triquadratic kernels, and the proposed function is employed with all of them. Two different simulation techniques are used in two experiments to compare these estimators. In most cases, the results show that the local bandwidth is the best for all the
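As a point of reference for the estimators discussed above, here is a minimal Python sketch of a kernel hazard estimator for right-censored data (Ramlau-Hansen / Watson-Leadbetter type) with a single global bandwidth and a plain Epanechnikov kernel; the boundary corrections and local bandwidths that the paper compares are not implemented, and the data are synthetic.

import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def kernel_hazard(t_grid, times, events, bandwidth):
    """Smoothed hazard estimate for right-censored data: kernel weights at the
    observed event times divided by the number of subjects still at risk.
    Global bandwidth, interior kernel only; the boundary-corrected and locally
    adaptive variants would modify the kernel near t = 0 and let the bandwidth
    depend on t."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)            # subjects with observed time >= t_(i)
    h = np.zeros_like(t_grid, dtype=float)
    for ti, di, yi in zip(times, events, at_risk):
        if di == 1:
            h += epanechnikov((t_grid - ti) / bandwidth) / (bandwidth * yi)
    return h

# Toy example: exponential lifetimes (constant true hazard 0.5) with random censoring.
rng = np.random.default_rng(1)
life = rng.exponential(scale=2.0, size=300)
cens = rng.exponential(scale=4.0, size=300)
obs = np.minimum(life, cens)
delta = (life <= cens).astype(int)

grid = np.linspace(0.5, 4.0, 8)
print(np.round(kernel_hazard(grid, obs, delta, bandwidth=0.8), 3))  # roughly 0.5 away from the boundary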
The railway network is one of the largest infrastructure projects. Analyzing and developing such projects should therefore be done with appropriate tools, i.e. GIS tools, because traditional methods consume resources, time and money, and their results may not be accurate. In this research, the train stations in all of Iraq’s provinces were studied and analyzed using network analysis, one of the most powerful techniques within GIS. A free trial copy of ArcGIS® 10.2 software was used to achieve the aim of this study. The analysis of the current train stations was based on the road network, since people use roads to reach those stations. The data layers for this st
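The study itself relies on ArcGIS 10.2 Network Analyst; purely as an illustration of the underlying closest-facility idea, the sketch below runs the same kind of query on a toy road graph with networkx (the graph, edge weights and station set are invented placeholders).

import networkx as nx

# Toy road network: nodes are junctions, edge weights are road lengths in km.
# This stands in for the road layer used with ArcGIS Network Analyst.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 4.0), ("B", "C", 3.5), ("C", "D", 6.0),
    ("B", "E", 2.0), ("E", "D", 5.5), ("A", "E", 7.0),
])
stations = {"C", "D"}          # junctions with an existing train station

def nearest_station(graph, origin, stations):
    """Closest-facility query: road-network distance from `origin`
    to each station, keeping the nearest one."""
    dist = nx.single_source_dijkstra_path_length(graph, origin, weight="weight")
    reachable = {s: dist[s] for s in stations if s in dist}
    return min(reachable.items(), key=lambda kv: kv[1]) if reachable else None

for origin in ["A", "B", "E"]:
    print(origin, "->", nearest_station(G, origin, stations))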
Logistic regression is considered one of the statistical methods used to describe and estimate the relationship between a response variable (Y) and explanatory variables (X); one of its assumptions is the homogeneity of the variance. The dependent variable is a binary response taking two values (one when a specific event occurred and zero when it did not), such as injured/uninjured or married/unmarried. A large number of explanatory variables leads to the problem of multicollinearity, which makes the estimates inaccurate. The maximum likelihood method and the ridge regression method were used to estimate a binary-response logistic regression model by adopting the Jackna
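As a hedged illustration of the estimation problem described above (not the authors' estimator), the sketch below contrasts plain maximum likelihood logistic regression with a ridge-penalized fit on synthetic collinear data using scikit-learn (a recent version, ≥ 1.2, is assumed for penalty=None); the jackknife resampling mentioned in the abstract is not shown.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary-response data with strongly collinear explanatory variables,
# the setting in which maximum likelihood becomes unstable and a ridge penalty helps.
rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)          # nearly identical to x1 -> multicollinearity
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
logit = 1.0 * x1 + 1.0 * x2 - 0.5 * x3
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

ml = LogisticRegression(penalty=None, max_iter=5000).fit(X, y)             # plain maximum likelihood
ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=5000).fit(X, y)   # ridge-penalized fit

print("ML coefficients:   ", np.round(ml.coef_.ravel(), 2))
print("Ridge coefficients:", np.round(ridge.coef_.ravel(), 2))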
The four-parameter expanded exponentiated power function (EEPF) distribution was presented by applying the exponentiated expansion method to the expanded power function distribution. This method yields a new distribution belonging to the exponential family. We obtained the survival function and the failure-rate function for this distribution and derived some of its mathematical properties. We then used the developed least squares method to estimate the parameters by means of a genetic algorithm, and a Monte Carlo simulation study was conducted to evaluate the performance of the estimates obtained with the genetic algorithm (GA).
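Since the EEPF density itself is not reproduced here, the following sketch only illustrates the general recipe of the abstract: fitting the ordinary two-parameter power-function distribution by a least-squares criterion with an evolutionary optimizer (scipy's differential_evolution standing in for the genetic algorithm). The distribution, bounds and sample are placeholders, not the paper's model.

import numpy as np
from scipy.optimize import differential_evolution

def power_cdf(x, alpha, theta):
    """CDF of the ordinary power-function distribution, F(x) = (x/theta)^alpha on (0, theta).
    Used here only as a stand-in for the four-parameter EEPF."""
    return np.clip(x / theta, 0.0, 1.0) ** alpha

def ls_criterion(params, x_sorted):
    """Least-squares fit criterion: squared gaps between the model CDF
    at the order statistics and the plotting positions i/(n+1)."""
    alpha, theta = params
    n = len(x_sorted)
    plot_pos = np.arange(1, n + 1) / (n + 1)
    return np.sum((power_cdf(x_sorted, alpha, theta) - plot_pos) ** 2)

# Monte Carlo style check: simulate one sample from a known power-function law
# and recover its parameters with an evolutionary optimizer (GA stand-in).
rng = np.random.default_rng(3)
alpha_true, theta_true = 2.5, 4.0
sample = theta_true * rng.uniform(size=500) ** (1.0 / alpha_true)   # inverse-CDF sampling
sample.sort()

result = differential_evolution(ls_criterion,
                                bounds=[(0.1, 10.0), (np.max(sample), 10.0)],
                                args=(sample,), seed=3)
print("estimated (alpha, theta):", np.round(result.x, 2))   # near (2.5, 4.0)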
The lethality of inorganic arsenic (As) and the threat it poses have made the development of efficient As detection systems a vital necessity. This work demonstrates a sensing layer made of hydrous ferric oxide (Fe2H2O4) to detect As(III) and As(V) ions in a surface plasmon resonance system. The sensor relies on the ability of Fe2H2O4 to adsorb As ions and on the sensitivity of the plasmon resonance to the changes occurring on the sensing layer. The detection sensitivities for As(III) and As(V) were 1.083 °·ppb−1 and 0.922 °·ppb−1
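Assuming the sensor response stays in the linear range, these sensitivities translate an observed resonance-angle shift directly into a concentration estimate, as in the short illustrative snippet below (the measured shift is an invented value).

# Converting an SPR resonance-angle shift into an As concentration, assuming a
# linear response with the sensitivities reported above (degrees per ppb).
SENS_AS3 = 1.083   # °/ppb for As(III), from the abstract
SENS_AS5 = 0.922   # °/ppb for As(V), from the abstract

delta_theta = 5.4  # measured resonance-angle shift in degrees (illustrative value)
print(f"As(III): {delta_theta / SENS_AS3:.2f} ppb, As(V): {delta_theta / SENS_AS5:.2f} ppb")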
Students’ feedback is crucial for educational institutions to assess the performance of their teachers. Most opinions are expressed in the students’ native language, especially in South Asian regions. In Pakistan, people use Roman Urdu to express their reviews, and this extends to the education domain, where students use Roman Urdu to express their feedback. Handling qualitative opinions manually is a very time-consuming and labor-intensive process. Additionally, it can be difficult to determine sentence semantics in text written in a colloquial style like Roman Urdu. This study proposes an enhanced word embedding technique and investigates neural word embeddings (Word2Vec and GloVe) to determine which perfo
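For concreteness, a minimal gensim Word2Vec sketch is shown below; the tiny tokenised "reviews" and the hyperparameters are invented placeholders, not the study's corpus or its enhanced embedding technique.

from gensim.models import Word2Vec

# Tiny illustrative corpus of tokenised Roman Urdu feedback (invented examples);
# a real pipeline would train on thousands of student reviews.
reviews = [
    ["teacher", "bohat", "acha", "parhata", "hai"],
    ["lecture", "bilkul", "samajh", "nahi", "aya"],
    ["course", "ka", "material", "acha", "tha"],
    ["teacher", "ka", "rawaiya", "acha", "nahi", "tha"],
]

# Skip-gram Word2Vec; the resulting vectors would feed a downstream sentiment classifier.
model = Word2Vec(sentences=reviews, vector_size=50, window=3, min_count=1, sg=1, epochs=50, seed=4)

print(model.wv["acha"].shape)                 # (50,) dense embedding for one token
print(model.wv.most_similar("acha", topn=3))  # nearest neighbours in embedding space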
The support vector machine (SVM) is a type of supervised learning model that can be used for classification or regression depending on the dataset. SVM classifies data points by determining the best hyperplane separating two or more groups. Working with enormous datasets, however, can cause a variety of issues, including poor accuracy and long computation times. In this research, SVM was extended by applying several kernel transformations, namely the linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model was illustrated and summarized in an algorithm using the kernel trick. The proposed method was examined on three simulated datasets with different sample
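A minimal scikit-learn sketch of this kind of kernel comparison is given below; the concentric-circles data are synthetic, and the sigmoid kernel is used as a stand-in for the "multi-layer" kernel mentioned in the abstract.

import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Non-linearly separable toy data (concentric circles) to contrast kernel choices.
X, y = make_circles(n_samples=400, factor=0.4, noise=0.1, random_state=5)

kernels = {"linear": SVC(kernel="linear"),
           "polynomial": SVC(kernel="poly", degree=3),
           "radial basis": SVC(kernel="rbf", gamma="scale"),
           "sigmoid (multi-layer stand-in)": SVC(kernel="sigmoid")}

for name, clf in kernels.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold cross-validated accuracy
    print(f"{name:30s} accuracy: {acc:.3f}")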