One wide-ranging category of open source data is the geospatial information published on web mapping sites. Despite the advantages of such open source data, including ease of access and zero cost, its quality is a potential concern. This article tests the horizontal positional accuracy, and the possible integration, of four web-derived geospatial datasets: OpenStreetMap (OSM), Google Maps, Google Earth and Wikimapia. The evaluation was carried out by comparing the tested data against reference field survey data for fifty road intersections in Baghdad, Iraq. The results indicate that free geospatial data can be used to enhance authoritative maps, especially small scale maps.
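For context, the kind of horizontal-accuracy check described above can be sketched as follows. The snippet assumes the fifty intersection coordinates have been exported to a CSV of projected easting/northing pairs for the field survey and for each web source; the file name and column names are hypothetical, not taken from the article.

```python
# Minimal sketch of a horizontal positional accuracy check, assuming the
# intersection coordinates are in a CSV with projected easting/northing columns
# for the field survey and each web source (file and column names are hypothetical).
import csv
import math

def horizontal_rmse(pairs):
    """RMSE of horizontal distances between tested and reference points."""
    sq = [(xt - xr) ** 2 + (yt - yr) ** 2 for (xt, yt), (xr, yr) in pairs]
    return math.sqrt(sum(sq) / len(sq))

sources = ["osm", "google_maps", "google_earth", "wikimapia"]
with open("intersections.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for src in sources:
    pairs = [((float(r[f"{src}_e"]), float(r[f"{src}_n"])),
              (float(r["survey_e"]), float(r["survey_n"]))) for r in rows]
    rmse = horizontal_rmse(pairs)
    # NSSDA horizontal accuracy at 95% confidence, for comparison with map standards
    print(f"{src}: RMSE = {rmse:.2f} m, accuracy_95 = {1.7308 * rmse:.2f} m")
```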
OpenStreetMap (OSM) is the most prominent example of an online volunteered mapping platform. Most such platforms host open source spatial data collected by non-expert volunteers using a variety of data collection methods. The OSM project aims to provide a free digital map of the whole world. The heterogeneity of the data collection methods makes the accuracy of OSM databases unreliable, so they must be treated with caution in any engineering application. This study assesses the horizontal positional accuracy of three spatial data sources, the OSM road network database, a high-resolution satellite image (SI), and a high-resolution aerial photo (AP) of Baghdad city, with respect to an analogue formal road network dataset obtained
This study estimates the accuracy of digital elevation models (DEMs) created from open source Google Earth data and compares them with the widely available DEM datasets: the Shuttle Radar Topography Mission (SRTM), version 3, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), version 2. The GPS technique is used in this study to produce a high-accuracy digital elevation raster that serves as the reference against which the DEM datasets are compared. The Al Jadriya campus of Baghdad University is selected as the study area, and 151 reference points were established within it to evaluate the results based on the RMS values. Furthermore, th
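A minimal sketch of the DEM-versus-reference comparison might look like the following, assuming each DEM (the Google Earth-derived raster, SRTM v3 and ASTER GDEM v2) is available as a GeoTIFF in the same coordinate system as the 151 GPS reference points; file names and column names are illustrative.

```python
# Sketch of comparing DEM rasters against GPS reference points, assuming each DEM
# is a GeoTIFF in the same CRS as the reference points (file/column names hypothetical).
import csv
import math
import rasterio

with open("reference_points.csv", newline="") as f:
    pts = [(float(r["x"]), float(r["y"]), float(r["elev"])) for r in csv.DictReader(f)]

for dem_path in ["google_earth_dem.tif", "srtm_v3.tif", "aster_gdem_v2.tif"]:
    with rasterio.open(dem_path) as dem:
        samples = dem.sample([(x, y) for x, y, _ in pts])        # band values at each point
        diffs = [float(val[0]) - z for val, (_, _, z) in zip(samples, pts)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    print(f"{dem_path}: RMSE = {rmse:.2f} m")
```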
The aim of this paper is to present a weak form of -light functions by using -open sets, namely -light functions, and to offer new concepts of disconnected spaces and totally disconnected spaces. The relations between them have been studied. Also, new forms of -totally disconnected and inversely -totally disconnected functions have been defined, and some examples and facts are presented.
A Digital Elevation Model (DEM) is one of the established techniques for relief representation; constructing a DEM means modelling the earth's surface from existing data. DEMs are among the fundamental information requirements widely used in GIS data structures. The main aim of this research is to present a methodology for assessing DEM generation methods. The DEM data are extracted from open source data, e.g. Google Earth, and the tested data are compared with data produced by formal institutions such as the General Directorate of Surveying. The study area is in southern Iraq (Al-Gharraf, Dhi Qar governorate). The DEM creation methods include kriging and IDW (inverse distance weighting)
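As an illustration of the interpolation step, the sketch below grids scattered elevation points with IDW using plain numpy; the variable names and the synthetic points are illustrative only, and a kriging run would follow the same pattern with a different weighting model.

```python
# Sketch of IDW gridding, one of the DEM interpolation methods named above,
# using illustrative synthetic elevation points rather than the study's data.
import numpy as np

def idw_grid(x, y, z, xi, yi, power=2.0):
    """Inverse-distance-weighted interpolation of scattered (x, y, z) onto a grid."""
    gx, gy = np.meshgrid(xi, yi)
    dist = np.hypot(gx[..., None] - x, gy[..., None] - y)   # distance to every data point
    dist = np.maximum(dist, 1e-12)                          # avoid division by zero
    w = 1.0 / dist ** power
    return (w * z).sum(axis=-1) / w.sum(axis=-1)

# Illustrative scattered elevations and a 101 x 101 target grid
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1000, 200), rng.uniform(0, 1000, 200)
z = 10 + 0.01 * x + rng.normal(0, 0.5, 200)
dem = idw_grid(x, y, z, np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
print(dem.shape)  # (101, 101)
```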
A database is an organized, distributed collection of data arranged so that users can access the stored information easily and conveniently. In the era of big data, however, traditional data analytics methods may not be able to manage and process such large volumes of data. In order to develop an efficient way of handling big data, this work studies the use of the MapReduce technique to process big data distributed on the cloud. The approach was evaluated on a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear improvement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r
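A minimal MapReduce sketch in the spirit of this approach is shown below, written with the mrjob library so the same job can run locally or on a Hadoop cluster. The assumed input format (CSV lines of channel,sample_value) and the per-channel mean statistic are illustrative, not the study's actual pipeline.

```python
# Minimal MapReduce sketch for EEG-style data, assuming input lines of the form
# "channel,sample_value" (a hypothetical format, not the study's dataset).
from mrjob.job import MRJob

class ChannelMean(MRJob):
    def mapper(self, _, line):
        # Emit one (channel, value) pair per sample
        channel, value = line.split(",")
        yield channel, float(value)

    def reducer(self, channel, values):
        # Aggregate all samples of a channel into a mean amplitude
        vals = list(values)
        yield channel, sum(vals) / len(vals)

if __name__ == "__main__":
    ChannelMean.run()
    # Local run:  python channel_mean.py eeg_samples.csv
    # Hadoop run: python channel_mean.py -r hadoop hdfs:///eeg/*.csv
```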
This research develops a new method based on spectral indices and a random forest classifier to detect paddy rice areas and then assess their distribution with respect to urban areas. The classification is conducted on Landsat OLI images and on combined Landsat OLI/Sentinel-1 SAR data. A new spectral index is developed by using the random forest to calculate the relative importance of the Landsat bands; the index is built from the three most important bands, and two additional indices, the normalized difference vegetation index (NDVI) and the normalized difference built-up index (NDBI), are then used to extract paddy rice fields from the data. Several experiments being
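The band-ranking and index steps can be sketched as follows with scikit-learn; the band ordering, the synthetic training data and the variable names are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of ranking band importance with a random forest and computing NDVI/NDBI,
# using synthetic reflectance data in place of the actual Landsat OLI samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def norm_diff(a, b):
    """Normalized difference index, e.g. NDVI = (NIR - Red) / (NIR + Red)."""
    return (a - b) / (a + b + 1e-10)

# Illustrative data: 7 OLI reflectance bands, binary paddy / non-paddy labels
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (5000, 7))
y = rng.integers(0, 2, 5000)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]
print("three most important bands (0-based):", ranked[:3])

# Supplementary indices (assumed band positions: red=3, NIR=4, SWIR1=5)
nir, red, swir1 = X[:, 4], X[:, 3], X[:, 5]
ndvi = norm_diff(nir, red)
ndbi = norm_diff(swir1, nir)   # Normalized Difference Built-up Index
```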
This research describes a new model inspired by MobileNetV2 and trained on a very diverse dataset. The goal is to enable fire detection in open areas, replacing physical sensor-based fire detectors, reducing false fire alarms, and minimizing losses in open areas by means of deep learning. A diverse fire dataset was created by combining images and videos from several sources; in addition, a further self-collected dataset was captured at the farms of the holy shrine of Al-Hussainiya in the city of Karbala. The model was then trained on the collected dataset, and its test accuracy on the fire dataset reached 98.87%.
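A transfer-learning sketch of a MobileNetV2-based fire classifier is given below using Keras; the directory layout, image size and hyperparameters are assumptions for illustration and are not claimed to match the trained model described above.

```python
# Sketch of a MobileNetV2-based binary fire classifier, assuming images are
# arranged in fire/ and no_fire/ subfolders (layout and settings are illustrative).
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                               # freeze the pretrained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # fire / no fire
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "fire_dataset/train", image_size=(224, 224), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "fire_dataset/val", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, validation_data=val_ds, epochs=10)
```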
This article describes how to predict different types of multiple reflections in pre-stack seismic data. The characteristics of multiple reflections can be expressed as combinations of the characteristics of primary reflections. Multiples always have lower velocities than the primaries, which is the basis for separating them during Normal Move-Out (NMO) correction; the muting procedure is applied in the time-velocity analysis domain. A semblance plot is used to diagnose the presence of multiples and to decide the muting limits. This processing procedure is used to eliminate internal multiples from real 2D seismic data from southern Iraq in two stages. The first is conventional Normal Move-Out correction and automatic velocity picking and
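For illustration, the Normal Move-Out step can be sketched on a synthetic common-midpoint gather as follows; the sampling interval, offsets and velocity are arbitrary example values, not taken from the surveyed data.

```python
# Sketch of NMO correction on a synthetic CMP gather (example parameters only).
import numpy as np

def nmo_correct(gather, offsets, velocity, dt):
    """Flatten hyperbolic events using t(x) = sqrt(t0^2 + x^2 / v^2)."""
    nt, _ = gather.shape
    t0 = np.arange(nt) * dt
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        t = np.sqrt(t0 ** 2 + (x / velocity) ** 2)   # reflection travel time per t0
        idx = np.round(t / dt).astype(int)           # nearest-sample lookup
        valid = idx < nt
        corrected[valid, j] = gather[idx[valid], j]
    return corrected

# Synthetic gather with one hyperbolic primary at t0 = 0.8 s
dt, offsets, v = 0.004, np.arange(0, 2000, 100), 2200.0
nt = 500
gather = np.zeros((nt, offsets.size))
for j, x in enumerate(offsets):
    gather[int(round(np.sqrt(0.8 ** 2 + (x / v) ** 2) / dt)), j] = 1.0

flat = nmo_correct(gather, offsets, v, dt)
print(np.argmax(flat, axis=0))   # all traces peak near sample 200 (t0 = 0.8 s)
```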