We consider the problem of calibrating the range measurements of a Light Detection and Ranging (lidar) sensor, accounting for the sensor's nonlinearity and for heteroskedastic, range-dependent measurement errors. We solve the calibration problem without using additional hardware, instead exploiting assumptions on the environment surrounding the sensor during the calibration procedure. More specifically, we assume that the sensor is calibrated in an environment where its measurements lie in a 2D plane parallel to the ground. Its measurements then come from fixed objects that extend orthogonally from the ground, so that they may be treated as fixed points in an inertial reference frame. Moreover, we exploit the intuition that moving the distance sensor within this environment should produce measurements in which the relative distances and angles among these fixed points remain the same. We thus cast the sensor calibration problem as making the measurements comply with the assumption that "fixed features shall have fixed relative distances and angles". The resulting calibration procedure therefore requires neither additional (typically expensive) equipment nor the deployment of special hardware. As for the proposed estimation strategies, from a mathematical perspective we consider models that lead to analytically solvable equations, so as to enable deployment on embedded systems. Besides proposing the estimators, we analyze their statistical performance both in simulation and in field tests. We report how the MSE of the calibration procedure depends on the sensor noise levels, and observe that in field tests the approach can yield a tenfold improvement in the accuracy of the raw measurements.
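The "fixed features have fixed relative distances" principle can be sketched in a few lines. Everything below is a hypothetical toy, not the paper's estimator: the distortion is reduced to a pure range offset `b_true` (a uniform scale error is not observable from relative distances alone, since it scales every pairwise distance equally), the landmark and pose coordinates are invented, and the estimator is a plain grid search.

```python
import numpy as np

# Toy setup: the sensor adds a constant (unknown) offset to every range.
b_true = 0.25
poses = [(0.0, 0.0), (1.0, 0.5), (-0.5, 1.0)]   # sensor positions
landmarks = [(4.0, 3.0), (-2.0, 5.0)]           # fixed points in the plane

def observe(pose):
    """Return distorted (range, bearing) to each landmark from a pose."""
    out = []
    for lx, ly in landmarks:
        dx, dy = lx - pose[0], ly - pose[1]
        out.append((np.hypot(dx, dy) + b_true, np.arctan2(dy, dx)))
    return out

views = [observe(p) for p in poses]

def landmark_distance(view, b):
    """Landmark-to-landmark distance after removing offset b (law of cosines)."""
    (r1, a1), (r2, a2) = view
    c1, c2 = r1 - b, r2 - b
    return np.sqrt(c1**2 + c2**2 - 2*c1*c2*np.cos(a1 - a2))

# Calibration: pick the offset that makes the landmark-to-landmark
# distance constant across all sensor poses (variance criterion).
cands = np.linspace(-0.5, 0.5, 101)
b_hat = min(cands, key=lambda b: np.var([landmark_distance(v, b) for v in views]))
print(round(float(b_hat), 2))
```

With noiseless measurements the variance criterion is exactly zero at the true offset, so the grid search recovers `b_true`; with noise one would minimize the same criterion by (weighted) least squares instead.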
Accurate emotion categorization is an important and challenging task in the computer vision and image processing fields. A facial emotion recognition system involves three main stages: pre-processing and face area allocation, feature extraction, and classification. In this study, a new system is proposed based on a set of geometric features (distances and angles) derived from the basic facial components, such as the eyes, eyebrows, and mouth, using analytic geometry calculations. For the classification stage, a feed-forward neural network classifier is used. For evaluation purposes the standard "JAFFE" database has been used as test material; it holds face samples for the seven basic emotions. The results of the conducted tests indicate that the use of the suggested distances and angles
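The geometric feature extraction can be illustrated with elementary analytic geometry. The landmark coordinates and the particular feature set below are hypothetical placeholders, not the paper's actual measurements; they only show how distance and angle features would be computed before being fed to the classifier.

```python
import numpy as np

# Hypothetical 2-D landmark coordinates for a few facial components.
landmarks = {
    "left_eye":  np.array([30.0, 40.0]),
    "right_eye": np.array([70.0, 40.0]),
    "brow":      np.array([30.0, 28.0]),
    "mouth_l":   np.array([38.0, 75.0]),
    "mouth_r":   np.array([62.0, 75.0]),
}

def dist(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

def angle(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c."""
    u = landmarks[a] - landmarks[b]
    v = landmarks[c] - landmarks[b]
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

features = [
    dist("left_eye", "right_eye"),            # inter-ocular distance
    dist("mouth_l", "mouth_r"),               # mouth width
    dist("left_eye", "brow"),                 # brow-to-eye distance
    angle("mouth_l", "left_eye", "right_eye"),  # mouth-corner angle at the eye
]
print([round(f, 1) for f in features])
```

A vector like `features` would then be normalized and passed as the input layer of the feed-forward network.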
Digital audio requires transmitting large amounts of audio information through the most common communication systems, which in turn leads to challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It relies on a combined transform coding scheme consisting of: i) a biorthogonal (9/7-tap) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) a DCT applied to the produced sub-bands to decorrelate the signal; iii) progressive hierarchical quantization of the combined-transform output, followed by traditional run-length encoding (RLE); iv) and finally LZW coding to generate the output bitstream.
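The stages of the pipeline can be sketched end to end on a toy frame. This is a simplified stand-in, not the paper's codec: a one-level Haar split replaces the biorthogonal 9/7 wavelet, a plain uniform quantizer replaces the progressive hierarchical one, and the final LZW stage is omitted.

```python
import numpy as np

def haar_split(x):
    """One-level Haar analysis: (low-pass, high-pass) sub-bands."""
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def dct2(x):
    """Orthonormal DCT-II via an explicit transform matrix."""
    n = len(x)
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    M = np.cos(np.pi * (2*i + 1) * k / (2*n)) * np.sqrt(2/n)
    M[0] /= np.sqrt(2)
    return M @ x

def rle(symbols):
    """Run-length encode a 1-D integer sequence as (value, count) pairs."""
    out, run, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == run:
            count += 1
        else:
            out.append((run, count)); run, count = s, 1
    out.append((run, count))
    return out

signal = np.linspace(0.0, 1.0, 64)               # stand-in smooth audio frame
low, high = haar_split(signal)                   # i) wavelet decomposition
coeffs = np.concatenate([dct2(low), dct2(high)]) # ii) DCT decorrelation
q = np.round(coeffs / 0.1).astype(int)           # iii) uniform quantizer, step 0.1
code = rle(q.tolist())                           # iii) run-length encoding
print(len(code), "runs for", len(q), "coefficients")
```

Because the transforms concentrate the energy in a few coefficients, most quantized values are zero and the RLE output is much shorter than the coefficient vector; a real codec would then entropy-code the runs with LZW.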
Many images require large storage space. Despite the continued evolution of computer storage technology, there is a pressing need to reduce the storage space occupied by images, and image compression is an effective way to do so, using the wavelet transform method
The major objective of this study is to establish a network of Ground Control Points (GCPs) that can be used as a reference for any engineering project. A Total Station (Nikon Nivo 5.C), an optical level, and a Garmin navigator GPS were used to perform traversing. The traversing measurement was achieved using nine points covering the selected area irregularly. Near the Civil Engineering Department at the University of Baghdad, Al-Jadiriya, an attempt has been made to assess the accuracy of the GPS by comparing its data with that obtained from the Total Station. The average error of this method is 3.326 m, with the highest coefficient of determination (R²) of 0.077 observed in Northing. While in
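The accuracy comparison in the abstract amounts to computing an average error and a coefficient of determination between paired coordinates. The sketch below uses invented Northing values, not the study's data, purely to show the two statistics.

```python
import numpy as np

# Hypothetical paired Northing coordinates (metres) for the same points,
# one set from the handheld GPS and one from the Total Station reference.
gps   = np.array([3451.2, 3460.8, 3472.1, 3485.5, 3490.3])
total = np.array([3448.0, 3463.9, 3469.0, 3488.2, 3487.1])

mae = np.mean(np.abs(gps - total))          # mean absolute error (m)
ss_res = np.sum((total - gps)**2)           # residual sum of squares
ss_tot = np.sum((total - total.mean())**2)  # total sum of squares
r2 = 1.0 - ss_res / ss_tot                  # coefficient of determination
print(round(float(mae), 3), round(float(r2), 3))
```

Note that R² is dimensionless; reporting it in metres (as some survey write-ups do) conflates it with the standard error of the fit.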
Optical Mark Recognition (OMR) is an important technology for applications that require speedy, high-accuracy processing of a huge volume of hand-filled forms. The aims of this technology are to reduce manual work and human effort, achieve high accuracy in assessment, and minimize the time needed to evaluate answer sheets. This paper proposes OMR using a Modified Bidirectional Associative Memory (MBAM). MBAM has two phases (a learning phase and an analysis phase): it learns on answer sheets containing the correct answers, assigning each its own code representing the number of correct answers, and then detects the marks on answer sheets using the analysis phase. This proposal is able to detect no selection or the selection of more than one choice; in addition, using M
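For orientation, the two phases of a textbook (unmodified) BAM can be written in a few lines: the learning phase forms a Hebbian weight matrix from bipolar pattern/code pairs, and the analysis phase recalls a code from an input pattern. The patterns below are invented, and the paper's MBAM modification is not reproduced here.

```python
import numpy as np

# Stored bipolar "answer sheet" patterns and their associated codes.
X = np.array([[ 1, -1,  1, -1],
              [-1,  1, -1,  1]])
Y = np.array([[ 1,  1],
              [-1, -1]])

# Learning phase: Hebbian outer-product rule, W = sum over pairs of x y^T.
W = X.T @ Y

def recall(x):
    """Analysis phase: map an input pattern to its associated code."""
    return np.sign(x @ W)

print(recall(np.array([1, -1, 1, -1])))   # recovers the first stored code
```

Recall works because stored patterns reinforce their own code through the outer-product sum while (near-)orthogonal patterns cancel.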
In this paper, a computational method for solving optimal control problems is presented, using an indirect method (a spectral-method technique) based on Boubaker polynomials. In this method the state and the adjoint variables are approximated by Boubaker polynomials with unknown coefficients; the optimal control problem is thus transformed into algebraic equations that can be solved easily, after which the numerical value of the performance index is obtained. The operational matrices of differentiation and integration have also been derived for the same polynomials to make solving the problems easier. A numerical example is given to show the applicability and efficiency of the method. Some characteristics of this polynomial which can be used for solving
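The core spectral idea, expanding an unknown trajectory in the Boubaker basis with coefficients found from algebraic conditions, can be sketched as follows. The recurrence B₀=1, B₁=x, B₂=x²+2, Bₘ = x·Bₘ₋₁ − Bₘ₋₂ (m>2) is the standard Boubaker one; the target function exp(t) is a toy stand-in for a state trajectory, and least squares replaces the paper's collocation conditions.

```python
import numpy as np

def boubaker_basis(n_terms, x):
    """Evaluate Boubaker polynomials B_0..B_{n_terms-1} at points x.
    B_0 = 1, B_1 = x, B_2 = x^2 + 2, B_m = x*B_{m-1} - B_{m-2} for m > 2."""
    B = [np.ones_like(x), x, x**2 + 2]
    for _ in range(3, n_terms):
        B.append(x * B[-1] - B[-2])
    return np.vstack(B[:n_terms])

# Expand a smooth "state" trajectory x(t) = exp(t) on [0, 1] with
# unknown coefficients, determined here by least squares.
t = np.linspace(0.0, 1.0, 200)
A = boubaker_basis(8, t).T                 # design matrix, columns = B_m(t)
coef, *_ = np.linalg.lstsq(A, np.exp(t), rcond=None)
err = np.max(np.abs(A @ coef - np.exp(t)))
print(err < 1e-6)
```

Eight basis terms already reproduce exp(t) to well below 1e-6, which is the accuracy behaviour that makes low-order spectral discretizations attractive for embedded-friendly optimal control.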
The study aims at building a mathematical model for aggregate production planning for the Baghdad Soft Drinks Company. The study is based on a set of aggregate planning strategies (control of working hours, and a storage-level control strategy) for the purpose of exploiting the available resources and productive capacities optimally and minimizing production costs, using the Matlab program. The most important finding of the research is the importance of exploiting the production capacity available in the months when demand is less than capacity, as an investment against the subsequent months when demand exceeds the available capacity, so as to minimize the use of overtime
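The two strategies can be contrasted on a toy instance. All numbers below are hypothetical: a chase plan pays overtime for demand above regular capacity, while a level plan produces at capacity every month and carries the surplus as inventory for the peak months, which is exactly the trade-off the study optimizes.

```python
# Hypothetical monthly data for a six-month horizon.
demand  = [80, 90, 110, 115, 100, 95]   # units per month
regular = 100                            # regular-time capacity per month
c_over  = 12.0                           # extra cost per unit made on overtime
c_hold  = 2.0                            # cost per unit held in stock per month

# Strategy 1 (working-hours control): produce exactly the demand,
# paying overtime for anything above regular capacity.
overtime_cost = sum(max(d - regular, 0) * c_over for d in demand)

# Strategy 2 (storage-level control): produce at capacity every month,
# carrying surplus inventory forward to cover later peaks.
inv, holding_cost = 0, 0.0
for d in demand:
    inv = inv + regular - d              # end-of-month inventory
    assert inv >= 0, "level plan infeasible for this demand profile"
    holding_cost += inv * c_hold

print(overtime_cost, holding_cost)
```

Here the level plan's holding cost (180.0) undercuts the chase plan's overtime cost (300.0), illustrating the finding that pre-building in slack months can be cheaper than overtime in peak months; the full model would mix both strategies optimally.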
The integration of decision-making leads to more robust decisions, and hence to determining the optimum inventory level of the materials required for production and reducing the total cost, through cooperation of the purchasing department with the inventory department and with the company's other departments. Two models are suggested to determine the Optimum Inventory Level (OIL): the first model (OIL-model 1) assumes that the inventory level for material quantities equals the required materials, while the second model (OIL-model 2) assumes that the inventory level for material quantities exceeds the materials required for the next period.