We consider the problem of calibrating the range measurements of a Light Detection and Ranging (lidar) sensor, accounting for sensor nonlinearity and heteroskedastic, range-dependent measurement error. We solve the calibration problem without additional hardware, instead exploiting assumptions on the environment surrounding the sensor during the calibration procedure. More specifically, we assume the sensor is placed so that its measurements lie in a 2D plane parallel to the ground and come from fixed objects that extend orthogonally to the ground, so that they may be treated as fixed points in an inertial reference frame. Moreover, we use the intuition that moving the distance sensor within this environment should leave the relative distances and angles among these fixed points unchanged. We thus cast the sensor calibration problem as making the measurements comply with the assumption that "fixed features shall have fixed relative distances and angles". The resulting calibration procedure therefore requires neither additional (typically expensive) equipment nor special hardware. As for the proposed estimation strategies, from a mathematical perspective we consider models that lead to analytically solvable equations, so as to enable deployment in embedded systems. Besides proposing the estimators, we analyze their statistical performance both in simulation and in field tests. We report the dependency of the MSE of the calibration procedure on the sensor noise levels, and observe that in field tests the approach can lead to a tenfold improvement in the accuracy of the raw measurements.
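The paper's analytic estimators are not reproduced in the abstract; the following is only a minimal numerical sketch of the "fixed features keep fixed relative distances" idea, assuming a hypothetical constant range bias and synthetic data. A pure multiplicative gain would be unidentifiable from this criterion alone, since it rescales all pairwise distances equally in every pose, so the sketch estimates just an offset.

```python
# Minimal sketch (NOT the paper's estimator): assume r_meas = r_true + b + noise
# and recover the offset b by forcing feature-to-feature distances to be the
# same in every pose.  Landmark/pose values are made up for illustration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
landmarks = np.array([[4.0, 0.0], [0.0, 5.0], [-3.0, -2.0], [2.5, 3.5]])
poses = np.array([[0.0, 0.0], [1.0, -0.5], [-0.8, 1.2], [0.5, 0.9]])
b_true = 0.25                                    # unknown range offset

def scan(pose):
    d = landmarks - pose
    r = np.linalg.norm(d, axis=1)                # true ranges
    th = np.arctan2(d[:, 1], d[:, 0])            # bearings, assumed accurate
    return r + b_true + 0.01 * rng.standard_normal(r.shape), th

scans = [scan(p) for p in poses]

def pairwise(r, th):
    # Feature-to-feature distances from ranges/bearings (law of cosines).
    i, j = np.triu_indices(len(r), k=1)
    return np.sqrt(r[i]**2 + r[j]**2 - 2 * r[i] * r[j] * np.cos(th[i] - th[j]))

def residuals(b):
    d = np.array([pairwise(r - b, th) for r, th in scans])
    return (d - d.mean(axis=0)).ravel()          # distances should not vary across poses

b_hat = least_squares(residuals, x0=[0.0]).x[0]
print(f"estimated offset {b_hat:.3f}  (true {b_true})")
```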
This dissertation is based on a study of topological structure in graph theory; it introduces some related concepts and generalizes them to new topological spaces constructed from the elements of a graph. It therefore presents theorems, propositions, and corollaries that are available in the literature, together with proofs that are not. Moreover, it studies relationships between several concepts and examines their equivalence properties, such as local connectedness, convexity, intervals, and compactness. In addition, it introduces separation axioms in α-topological spaces that are weaker than the standard ones, namely α-feebly Hausdorff, α-feebly regular, and α-feebly normal, and studies their properties. Furthermore …
Face identification is an important research topic in the field of computer vision and pattern recognition and has become a very active research area in recent decades. Recently, multiwavelet-based neural networks (multiwavenets) have been used for function approximation and recognition, but to the best of our knowledge they have not been used for face identification. This paper presents a novel approach to the identification of human faces using a Back-Propagation Adaptive Multiwavenet. The proposed multiwavenet has a structure similar to a three-layer multilayer perceptron (MLP) neural network, but the activation functions of the hidden layer are replaced with multiscaling functions. In experiments performed on the ORL face database it achieved a …
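As a rough illustration of the architectural idea only (not the authors' Back-Propagation Adaptive Multiwavenet or its ORL experiments), the sketch below trains a small MLP whose hidden activations are replaced by a wavelet-like scaling function, using plain backpropagation on made-up toy data.

```python
# Simplified sketch: MLP with Mexican-hat hidden activations trained by backprop.
# Data, layer sizes, and learning rate are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)

def psi(x):                        # Mexican-hat activation
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def dpsi(x):                       # its derivative, used in backprop
    return (x**3 - 3.0 * x) * np.exp(-x**2 / 2.0)

# Toy "identification" data: two classes of noisy 8-dimensional templates.
templates = rng.standard_normal((2, 8))
X = np.vstack([t + 0.3 * rng.standard_normal((50, 8)) for t in templates])
y = np.repeat([0.0, 1.0], 50)[:, None]

W1, b1 = 0.5 * rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = 0.5 * rng.standard_normal((16, 1)), np.zeros(1)
lr = 0.05
for epoch in range(500):
    z1 = X @ W1 + b1
    h = psi(z1)                                     # wavelet-like hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output layer
    err = out - y                                   # cross-entropy gradient
    gW2, gb2 = h.T @ err / len(X), err.mean(0)
    dh = (err @ W2.T) * dpsi(z1)
    gW1, gb1 = X.T @ dh / len(X), dh.mean(0)
    W1, b1, W2, b2 = W1 - lr * gW1, b1 - lr * gb1, W2 - lr * gW2, b2 - lr * gb2

print(f"training accuracy: {((out > 0.5) == (y > 0.5)).mean():.2f}")
```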
Document analysis of images snapped by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image processing techniques used to perform automatic identification of text. Existing image processing techniques need to manage many parameters in order to recognize the text in such pictures reliably, and segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images are first filtered using the Wiener filter, then the active contour algorithm could be …
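A hedged sketch of that kind of pipeline (Wiener filtering followed by an active-contour segmentation) is given below on a sample document image. The parameter values and the use of the morphological Chan-Vese variant are illustrative choices, not the paper's exact settings.

```python
# Wiener denoising + region-based active contour on a sample page image.
import numpy as np
from scipy.signal import wiener
from skimage import data, img_as_float
from skimage.segmentation import morphological_chan_vese

page = img_as_float(data.page())        # sample grayscale document photo
denoised = wiener(page, mysize=5)       # adaptive Wiener noise filtering

# The Chan-Vese level set evolves until it separates dark text strokes
# from the lighter background; the result is a binary segmentation mask.
mask = morphological_chan_vese(denoised, 100,
                               init_level_set="checkerboard", smoothing=2)

# The mask could then feed an OCR stage; here we just report its coverage.
print(f"segmented foreground fraction: {mask.mean():.2%}")
```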
In this paper we find the exact solution of Burgers' equation after reducing it to a Bernoulli equation. We compare this solution with that given by Kaya, who used the Adomian decomposition method, the solution given by Chakrone, who used the variational iteration method (VIM), and the solution given by Eq. (5) in the paper of M. Javidi. We notice that our solution is better than their solutions.
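The abstract does not reproduce the reduction itself; one standard route from Burgers' equation to a Bernoulli ODE, shown here only as a plausible sketch (the paper's own reduction may differ), goes through a travelling-wave ansatz.

```latex
% Plausible sketch of a travelling-wave reduction of Burgers' equation.
\[
  u_t + u\,u_x = \nu\,u_{xx}, \qquad u(x,t) = f(\xi), \quad \xi = x - ct .
\]
Substituting and integrating once in $\xi$ (taking the integration constant to be zero) gives
\[
  \nu f' = \tfrac{1}{2}f^{2} - c f ,
\]
which is a Bernoulli equation in $f$. Separating variables yields the travelling-wave solution
\[
  f(\xi) = \frac{2c}{1 - B\,e^{c\xi/\nu}},
\]
with $B$ fixed by the boundary or initial data.
```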
The proliferation of cellular networks has enabled users to track locations through various positioning tools, and location information is continuously captured from mobile phones. We created a prototype that detects location based on the two network models, Global System for Mobile Communications (GSM) and the Universal Mobile Telecommunications System (UMTS). The smartphone application, on an Android platform, runs the location sensing as a background process, and the localization method is based on cell phones. The proposed application is associated with a remote server and is used to track a smartphone without permissions or internet access. The mobile stores the location information in a database (SQLite), then transfers it to the location AP…
In cognitive radio networks there are two important probabilities. The first is important to primary users and is called the probability of detection, as it indicates their level of protection from secondary users; the second is important to secondary users and is called the probability of false alarm, which determines their use of an unoccupied channel. Cooperative sensing can improve the probabilities of detection and false alarm. A new approach to determining optimal values for these probabilities is proposed for the case of multiple secondary users: an optimal threshold value is first found for each individual detection curve, and the optimal thresholds are then found jointly. To get the aggregated throughput over transmission …
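The joint per-user optimisation is not spelled out in the abstract; as a hedged illustration of the underlying threshold / detection / false-alarm trade-off, the sketch below uses the usual Gaussian (central-limit) approximation for an energy detector, with illustrative sample counts and secondary-user SNRs.

```python
# Energy-detector threshold and detection probability under a Gaussian
# approximation.  All numerical values are illustrative assumptions.
import numpy as np
from scipy.stats import norm

N = 1000            # samples per sensing interval
sigma2 = 1.0        # noise power (assumed known)

def threshold_for_pfa(pfa):
    # Invert Pfa = Q((lam - N*sigma2) / (sigma2*sqrt(2N))) for the threshold lam.
    return sigma2 * (np.sqrt(2 * N) * norm.isf(pfa) + N)

def prob_detection(lam, snr):
    # Pd under H1, approximating the test statistic as Gaussian as well.
    mean = N * sigma2 * (1 + snr)
    std = sigma2 * (1 + snr) * np.sqrt(2 * N)
    return norm.sf((lam - mean) / std)

lam = threshold_for_pfa(0.1)                    # target false-alarm probability 0.1
for snr_db in (-15, -12, -10):                  # hypothetical secondary-user SNRs
    pd = prob_detection(lam, 10 ** (snr_db / 10))
    print(f"SNR {snr_db:>4} dB: threshold {lam:.1f}, Pd = {pd:.3f}")
```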
Industrial effluents loaded with heavy metals are a source of hazards to humans and other forms of life. Conventional approaches, such as electroplating, ion exchange, and membrane processes, are used for the removal of copper, cadmium, and lead, and are often cost prohibitive, with low efficiency at low metal ion concentrations. Biosorption can be considered an option which has proven more efficient and economical for removing the mentioned metal ions. Biosorbents used include fungi, yeasts, oil palm shells, coir pith carbon, peanut husks, and olive pulp. Recently, low-cost natural products have also been researched as biosorbents. This paper presents an attempt at the potential use of Iraqi date pits and Al-Khriet (i.e., substances …
In recent years an approach has been adopted to distinguish one author or writer from another by analyzing his writings or essays. This is done by analyzing the syllables of an author's writings. A syllable here is composed of two letters; the words of the text are therefore fragmented into syllables and the most frequent syllables are extracted to become a trait of that author. This work analyzes the syllable frequencies in two cases: first, when the spaces between words are kept, and second, when these spaces are ignored. The results are obtained from a program which scans the syllables in the text file; the performance is best in the first case, since the frequency of the selected syllables is higher than that of the same syllables in …
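A rough sketch of this two-letter "syllable" profile is shown below: bigrams are counted once with word boundaries respected and once with the spaces removed, and the most frequent ones form the author's trait vector. The sample text and the top-10 cut-off are illustrative only.

```python
# Two-letter syllable (bigram) frequency profile, with and without spaces.
from collections import Counter

def syllable_profile(text, keep_spaces=True, top=10):
    text = text.lower()
    if keep_spaces:
        # Case 1: split on spaces first, so bigrams never span two words.
        pairs = [w[i:i + 2] for w in text.split() for i in range(len(w) - 1)]
    else:
        # Case 2: ignore spaces, so bigrams may cross word boundaries.
        joined = "".join(text.split())
        pairs = [joined[i:i + 2] for i in range(len(joined) - 1)]
    return Counter(pairs).most_common(top)

sample = "an approach has been adopted to distinguish one author from another"
print(syllable_profile(sample, keep_spaces=True))
print(syllable_profile(sample, keep_spaces=False))
```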
In this research an Artificial Neural Network (ANN) technique was applied to study the filtration process in water treatment. Eight models were developed and tested using data from a pilot filtration plant working under different process design criteria: influent turbidity, bed depth, grain size, filtration rate, and running time (length of the filtration run), with effluent turbidity and head losses recorded. The ANN models were constructed for the prediction of different performance criteria of the filtration process: effluent turbidity, head losses, and running time. The results indicate that it is quite possible to use artificial neural networks to predict effluent turbidity, head losses, and running time in the filtration process, with …
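As a hedged sketch of one such model, the snippet below trains a small multilayer perceptron mapping the listed process-design inputs to effluent turbidity. The data are synthetic stand-ins for the pilot-plant records, and the network size is an arbitrary illustrative choice.

```python
# Small MLP regressor from filtration design inputs to effluent turbidity,
# fitted on made-up data standing in for the pilot-plant measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Inputs: influent turbidity, bed depth, grain size, filtration rate, run time.
X = np.column_stack([
    rng.uniform(5, 100, n),      # influent turbidity (NTU)
    rng.uniform(0.5, 1.5, n),    # bed depth (m)
    rng.uniform(0.4, 1.2, n),    # grain size (mm)
    rng.uniform(5, 15, n),       # filtration rate (m/h)
    rng.uniform(1, 48, n),       # running time (h)
])
# A made-up nonlinear relation standing in for the measured effluent turbidity.
y = 0.05 * X[:, 0] * X[:, 2] / X[:, 1] + 0.1 * X[:, 3] + 0.02 * X[:, 4] \
    + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.2f}")
```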