OpenStreetMap (OSM), recognised for its current and readily accessible spatial database, frequently serves regions that lack precise data at the necessary granularity. Global collaboration among OSM contributors poses challenges to data quality and uniformity, exacerbated by the sheer volume of input and indistinct data-annotation protocols. This study presents a methodological improvement in the spatial accuracy of OSM datasets centred on Baghdad, Iraq, using data derived from OSM services and satellite imagery. The analysis focuses on two geometric correction methods: a two-dimensional polynomial affine transformation, which involves twelve adjustment coefficients, and a two-dimensional polynomial conformal transformation, which involves six. Analysis within the selected region revealed variances in positional accuracy, with distinctions evident between the Easting (E) and Northing (N) coordinates. Empirical results indicated that the conformal transformation method reduced the Root Mean Square Error (RMSE) of the corrected OSM data to 4.434 metres, while the affine transformation method reduced the total RMSE further, to 4.053 metres. The deployment of these techniques demonstrates a marked enhancement in the geometric fidelity of OSM data. The refined datasets have significant applications, extending to the representation of road maps, the analysis of traffic flow, and the facilitation of urban planning initiatives.
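The geometric correction described above can be illustrated with a least-squares fit of control points. The sketch below uses a first-order affine model (six coefficients) rather than the paper's twelve-coefficient polynomial version, and the control points are hypothetical, invented purely for illustration:

```python
import numpy as np

def fit_affine(src, dst):
    """First-order affine fit: E' = a0 + a1*E + a2*N (and similarly for N')."""
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coef_e, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_n, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_e, coef_n

def apply_affine(coef_e, coef_n, src):
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    return np.column_stack([A @ coef_e, A @ coef_n])

def rmse(pred, dst):
    """Root mean square of the 2D positional error, in the same units as the input."""
    return float(np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1))))

# Hypothetical control points: OSM coordinates vs. reference (satellite) coordinates.
osm = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0], [50.0, 25.0]])
ref = 1.001 * osm + np.array([5.0, -3.0])    # a shift plus a slight scale error
ce, cn = fit_affine(osm, ref)
print(rmse(apply_affine(ce, cn, osm), ref))  # ~0: the affine model absorbs shift and scale
```

Higher-order polynomial terms (E^2, N^2, EN) extend the design matrix in the same way, which is how the twelve-coefficient variant arises.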
These days it is crucial to distinguish between different types of human behavior, and artificial-intelligence techniques play a major part in that task. The characteristics of the feedforward artificial neural network (FANN) algorithm and the genetic algorithm have been combined into a working mechanism that supports this field. The proposed system can be used for essential tasks such as analysis, automation, control, and recognition. In the proposed system, crossover and mutation, the two primary mechanisms of the genetic algorithm, replace the backpropagation process in the ANN. While the feedforward artificial neural network technique is focused on input processing, this should be based on the proce…
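The core idea, evolving network weights with crossover and mutation instead of backpropagation, can be sketched in a few lines. The network size, GA settings, and XOR task below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-4-1 feedforward net whose weights are evolved by a genetic
# algorithm (crossover + mutation) instead of backpropagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])   # XOR as a stand-in recognition task
N_W = 2 * 4 + 4 + 4 * 1 + 1          # 17 genes: all weights and biases

def forward(genes, X):
    w1, b1 = genes[:8].reshape(2, 4), genes[8:12]
    w2, b2 = genes[12:16].reshape(4, 1), genes[16]
    return (np.tanh(X @ w1 + b1) @ w2).ravel() + b2

def fitness(genes):
    return -np.mean((forward(genes, X) - y) ** 2)   # higher is better

pop = rng.normal(0.0, 1.0, size=(40, N_W))
init_best = max(fitness(g) for g in pop)
for _ in range(200):
    order = np.argsort([fitness(g) for g in pop])[::-1]
    parents = pop[order[:10]]                        # elitist selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, N_W)                   # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 0.1, N_W) * (rng.random(N_W) < 0.2)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(-fitness(best))   # final mean squared error; never worse than the start
```

Because the elitist step always carries the top individuals forward, the best fitness is monotonically non-decreasing across generations.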
The physical and elastic characteristics of rocks largely determine rock strength. Rock strength is frequently assessed using porosity well logs such as neutron and sonic logs. Uniaxial compressive strength and the elastic modulus are the essential criteria for estimating rock-mechanics parameters in petroleum-engineering research, and indirect estimation from well-log data is necessary to measure these variables. This study attempts to create a single regression model that can accurately forecast rock-mechanics characteristics for the Harth Carbonate Formation in the Fauqi oil field. According to the findings of this study, petrophysical parameters are reliable indexes for determining rock mechanical properties, having good performance p…
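The kind of single regression model described can be sketched as an ordinary least-squares fit of a rock-strength proxy against a log-derived predictor. The sonic-vs-UCS pairs below are invented, illustrative values, not data from the Fauqi field:

```python
import numpy as np

def linear_fit(x, y):
    """Ordinary least squares for y = b0 + b1*x, returning coefficients and R^2."""
    A = np.column_stack([np.ones_like(x), x])
    (b0, b1), *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = b0 + b1 * x
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return b0, b1, 1.0 - ss_res / ss_tot

# Hypothetical sonic transit time (us/ft) vs. uniaxial compressive strength (MPa).
dt = np.array([60.0, 70.0, 80.0, 90.0, 100.0])
ucs = np.array([95.0, 80.0, 68.0, 55.0, 41.0])
b0, b1, r2 = linear_fit(dt, ucs)
print(b0, b1, r2)  # slope is negative: slower transit (higher porosity) -> weaker rock
```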
In recent years, Global Navigation Satellite System (GNSS) technology has frequently been employed for monitoring Earth-crust deformation and movement. Such applications necessitate high positional accuracy, which can be achieved by processing GPS/GNSS data with scientific software such as BERNESE, GAMIT, and GIPSY-OASIS. Nevertheless, these scientific packages are sophisticated and have not been published as free open-source software. Therefore, this study was conducted to evaluate an alternative solution, GNSS online processing services, which offer this privilege freely. In this study, eight years of GNSS raw data for the TEHN station, which is located in Iran, were downloaded from the UNAVCO website…
Ethnographic research is perhaps the most commonly applied type of qualitative research method in psychology and medicine. In ethnographic studies, the researcher immerses himself in the participants' environment to understand the cultures, challenges, motivations, and topics that arise among them by investigating the environment directly. This type of research can last from a few days to a few years because it involves in-depth monitoring and data collection built on these foundations. For this reason, the findings of the current study encourage researchers in psychology and medicine to conduct studies applying the ethnographic research method to investigate common cultural patterns of language, thinking, beliefs, and behavior…
Phenol is one of the most damaging organic pollutants, and it produces a variety of highly poisonous organic intermediates, so finding efficient ways to eliminate it is important. One promising technique is sonoelectrochemical processing; however, the choice of electrodes, the removal efficiency, and the process cost are the biggest challenges. The main goal of the present study is to investigate the removal of phenol by a sonoelectrochemical process with different anodes, such as graphite, stainless steel, and titanium. The best-performing anode was identified using the Taguchi approach with an L16 orthogonal array. The sonoelectrochemical degradation of phenol was investigated with three process parameters: current de…
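The Taguchi analysis can be sketched as computing a signal-to-noise (S/N) ratio per factor level and picking the level with the highest mean S/N. The runs and removal efficiencies below are invented for illustration, and S/N is computed directly over each level's observations as a simplification of the usual per-run, per-replicate procedure:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio when a larger response (e.g. % phenol removal) is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical fragment of an orthogonal-array experiment: anode material
# vs. measured phenol-removal efficiency (%).
runs = [
    ("graphite", 72.0), ("graphite", 75.0),
    ("stainless", 81.0), ("stainless", 84.0),
    ("titanium", 66.0), ("titanium", 69.0),
]

# Main effect of the anode factor: S/N ratio per level.
levels = {}
for level, y in runs:
    levels.setdefault(level, []).append(y)
effects = {level: sn_larger_is_better(ys) for level, ys in levels.items()}
best = max(effects, key=effects.get)
print(best)  # the level with the highest S/N ratio
```

In a full L16 design the same per-level averaging is repeated for every factor column, which is how the best combination of settings is read off the array.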
The research analysed the relationships between the GDP of the agricultural sector in Iraq, oil prices, the exchange rate, and total GDP in both the short term and the long term. The analysis covered data for the period 1980-2019 using the ARDL model. The results indicate the existence of long-term relationships between oil prices and the prices of each agricultural commodity at a 5% significance level. Oil prices also have a negative effect on agricultural production in Iraq, and the Iraqi economy is a rentier economy that depends mainly on oil as a source of income and budget financing.
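An ARDL relationship of the kind used here can be sketched as an OLS regression on lagged levels, from which a long-run multiplier is derived. The ARDL(1,1) specification and simulated series below are illustrative assumptions, not the study's estimated model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated series following y_t = c + a*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t.
T = 400
x = rng.normal(0, 1, T)
y = np.zeros(T)
c, a, b0, b1 = 1.0, 0.5, 0.8, 0.3
for t in range(1, T):
    y[t] = c + a * y[t - 1] + b0 * x[t] + b1 * x[t - 1] + rng.normal(0, 0.05)

# ARDL(1,1) by ordinary least squares on the lagged regressors.
Y = y[1:]
Z = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
c_hat, a_hat, b0_hat, b1_hat = coef

# Long-run multiplier of x on y implied by the ARDL coefficients.
long_run = (b0_hat + b1_hat) / (1.0 - a_hat)
print(long_run)  # close to the true (0.8 + 0.3) / (1 - 0.5) = 2.2
```

The long-run multiplier is what the bounds-testing step of an ARDL analysis ultimately interprets as the long-term relationship.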
We use Bayes estimators for the unknown scale parameter of the Erlang distribution when the shape parameter is known, assuming different informative priors for the unknown scale parameter. We derive the posterior density, together with the posterior mean and posterior variance, under different informative priors for the unknown scale parameter: the inverse exponential distribution, the inverse chi-square distribution, the inverse gamma distribution, and the standard Levy distribution. We also derive Bayes estimators based on the general entropy loss function (GELF) and use simulation to obtain the results, generating different cases of the Erlang model parameters for different sample sizes. The estimates have been comp…
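One of the cases above, the inverse gamma prior, admits a closed-form GELF estimator that is easy to verify by simulation. The sketch assumes the scale parameterisation Erlang(shape k, scale theta) with prior theta ~ InvGamma(a, b), and the prior hyperparameters and sample sizes are illustrative choices:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def gelf_estimator(x, k, a, b, c=1.0):
    """Bayes estimator of the Erlang scale under the general entropy loss function.

    Model: x_i ~ Erlang(shape k, scale theta); prior theta ~ InvGamma(a, b),
    so the posterior is InvGamma(a + n*k, b + sum(x)).
    GELF estimator: [E(theta^{-c})]^{-1/c}, with a closed form via gamma functions.
    """
    n = len(x)
    alpha = a + n * k
    beta = b + float(np.sum(x))
    # E(theta^{-c}) = Gamma(alpha + c) / (Gamma(alpha) * beta**c); work in logs.
    log_moment = math.lgamma(alpha + c) - math.lgamma(alpha) - c * math.log(beta)
    return math.exp(-log_moment / c)

# Simulated Erlang(k=2, scale=3) sample: each draw is a sum of k exponentials.
theta_true, k, n = 3.0, 2, 200
x = rng.exponential(theta_true, size=(n, k)).sum(axis=1)
est = gelf_estimator(x, k, a=2.0, b=1.0, c=1.0)
print(est)  # close to theta_true = 3
```

For c = 1 the formula reduces to beta/alpha, i.e. the reciprocal of the posterior mean of 1/theta, which matches the general GELF result.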
The idea of conducting research on incomplete data arose from the circumstances of our dear country and the horrors of war, which caused the loss of much important data in all aspects of economic, natural, health, and scientific life. The reasons for missingness differ: some lie outside the will of those concerned, while others are planned, because of cost, risk, or a lack of inspection capability. The missing data in this study were processed using Principal Component Analysis and self-organizing map methods in a simulation. The variables considered were child health and the variables affecting children's health: breastfeed…
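The PCA-based treatment of missing values can be sketched as iterative low-rank imputation: fill the gaps with column means, then repeatedly replace them with values from a truncated SVD reconstruction. The rank-1 "health indicator" matrix below is a toy stand-in for the study's child-health variables:

```python
import numpy as np

def pca_impute(X, rank=1, n_iter=100):
    """Iterative PCA imputation: start from column means, then repeatedly
    project the matrix onto a low-rank SVD reconstruction."""
    X = X.copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = col_means[np.where(missing)[1]]
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[missing] = low_rank[missing]     # only the missing cells are updated
    return X

# Hypothetical rank-1 data matrix with one missing value.
u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([2.0, 1.0, 3.0])
X_true = np.outer(u, v)
X_missing = X_true.copy()
X_missing[2, 1] = np.nan                   # true value is 3.0
X_hat = pca_impute(X_missing, rank=1)
print(X_hat[2, 1])                         # converges toward the true 3.0
```

Because the observed cells are exactly rank 1, the iteration's fixed point is the value that restores the low-rank structure.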
This study aimed to investigate the role of Big Data in forecasting corporate bankruptcy through a field analysis in the Saudi business environment designed to test that relationship. The study found that Big Data is a recently adopted variable in the business context with multiple accounting effects and benefits, among them forecasting and disclosing corporate financial failures and bankruptcies. This rests on three main elements for reporting and disclosure: the firms' internal control system, external auditing, and financial analysts' forecasts. The study recommends that, since the greatest risk of Big Data is the slow adaptation of accountants and auditors to these technologies, wh…
This research sought to present the concept of panel-data models: doubly indexed data that capture the impact of change over time, obtained by repeatedly observing the measured phenomenon in different time periods. The panel-data models considered were of the fixed, random, and mixed types, compared by studying and analysing the mathematical relationship between the influence of time and a set of basic variables. These variables are the main axes of the research: the monthly revenue of the working individual and the profits it generates, which represent the response variable, and its relationship to a set of explanatory variables represented by the…
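The fixed-effects case mentioned above can be sketched with the standard within estimator: demean each entity's series to remove its individual effect, then run OLS on the demeaned data. The simulated panel below is illustrative and does not represent the study's revenue data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated panel: n entities observed over t periods, each with its own
# intercept (fixed effect) and a common slope beta on one regressor.
n, t, beta = 30, 12, 1.5
alpha = rng.normal(0, 2, n)                  # hypothetical entity effects
x = rng.normal(0, 1, (n, t))
y = alpha[:, None] + beta * x + rng.normal(0, 0.1, (n, t))

# Within (fixed-effects) estimator: demean per entity, then OLS on the result.
x_w = x - x.mean(axis=1, keepdims=True)
y_w = y - y.mean(axis=1, keepdims=True)
beta_hat = float((x_w * y_w).sum() / (x_w ** 2).sum())
print(beta_hat)  # close to the true slope 1.5
```

A random-effects model would instead treat the entity intercepts as draws from a distribution and estimate by GLS; the Hausman test is the usual way to choose between the two.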