Today, problems of spatial data integration have been further complicated by rapid developments in communication technologies and the growing number of data sources available on the World Wide Web. Web-based geospatial data sources may be managed by different communities, and the data themselves can vary with respect to quality, coverage, and purpose. Integrating such multiple geospatial datasets remains a challenge for geospatial data consumers. This paper concentrates on integrating the geometric and classification schemes of official data, such as Ordnance Survey (OS) national mapping data, with volunteered geographic information (VGI), such as data derived from the OpenStreetMap (OSM) project. Useful descriptions of geometric accuracy assessment (positional accuracy and shape fidelity) have been obtained. Semantic similarity testing covered feature classification, in effect comparing possible categories (legend classes) and the actual attributes attached to features. The model involves 'tokenization' to search for common word roots, and the feature classifications have been modelled as an XML-schema labelled rooted tree for hierarchical analysis. Semantic similarity was measured using the WordNet::Similarity package; among the several similarity methods it offers, the Lin approach was adopted to give normalised comparison scores. The results reveal poor correspondence in the geometric and semantic integration of OS and OSM data.
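The tokenization step described above can be illustrated with a minimal sketch. This is not the authors' code: the crude suffix-stripping stemmer and the Jaccard overlap score below are illustrative assumptions standing in for the WordNet::Similarity comparison.

```python
import re

def tokenize(label):
    """Split a classification label into lower-case word tokens."""
    return re.findall(r"[a-z]+", label.lower())

def stem(token):
    """Crude suffix stripping to expose common word roots."""
    for suffix in ("ings", "ing", "ies", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def class_similarity(label_a, label_b):
    """Normalised [0, 1] overlap of stemmed tokens (Jaccard index)."""
    a = {stem(t) for t in tokenize(label_a)}
    b = {stem(t) for t in tokenize(label_b)}
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

print(class_similarity("Buildings", "Building"))        # common root -> 1.0
print(class_similarity("Road Surface", "Surface Water"))
```

A normalised score in [0, 1] makes such token-level results directly comparable with the Lin scores produced by WordNet::Similarity.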
Many images require large storage space. With the continued evolution of computer storage technology, there is a pressing need to reduce the storage space required for images, and image compression offers an effective way to do this, for example via the wavelet transform method.
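The passage above appears to describe image compression. Assuming the method referenced is the wavelet transform, a one-level 1-D Haar transform (the simplest wavelet) can be sketched as follows; compression then amounts to discarding small detail coefficients.

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform:
    pairwise averages (low-pass) and differences (high-pass)."""
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    details = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, details

def inverse_haar_step(averages, details):
    """Reconstruct the original signal from averages and details."""
    out = []
    for a, d in zip(averages, details):
        out.extend([a + d, a - d])
    return out

row = [8, 6, 7, 3]                  # one image row, illustrative values
avg, det = haar_step(row)
print(avg, det)                     # [7.0, 5.0] [1.0, 2.0]
print(inverse_haar_step(avg, det))  # recovers [8.0, 6.0, 7.0, 3.0]
```

For 2-D images the same step is applied along rows and then columns, and zeroing near-zero detail coefficients before encoding is what yields the space saving.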
Due to the easy access to satellite images, Google Earth (GE) images have become more popular than other online virtual globes. However, the popularity of GE is not an indication of its accuracy. A considerable amount of literature has been published on evaluating the positional accuracy of GE data; however, few studies have investigated how to improve it. In this paper, a practical method for enhancing the horizontal positional accuracy of GE is suggested by establishing ten reference points, at the University of Baghdad main campus, using different Global Navigation Satellite System (GNSS) observation techniques: Rapid Static, Post-Processing Kinematic, and Network. Then, the GE image for the study
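A first step in such an accuracy assessment is quantifying the horizontal offsets between GE-derived coordinates and the GNSS reference points. The sketch below summarises those offsets by a root-mean-square error; all coordinate values are hypothetical, and the paper's actual correction procedure is not reproduced here.

```python
import math

def horizontal_errors(ge_points, gnss_points):
    """Planar offsets (metres) between GE-derived coordinates and
    GNSS reference points, both given as projected (E, N) pairs."""
    return [math.hypot(e1 - e2, n1 - n2)
            for (e1, n1), (e2, n2) in zip(ge_points, gnss_points)]

def rmse(errors):
    """Root-mean-square horizontal positional error."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical projected coordinates (metres), for illustration only.
ge = [(4450.0, 3320.0), (4480.0, 3360.0)]
gnss = [(4447.0, 3324.0), (4483.0, 3356.0)]
errs = horizontal_errors(ge, gnss)
print(errs)        # [5.0, 5.0]
print(rmse(errs))  # 5.0
```

The per-point offsets can also drive a simple shift or affine correction of the GE image once the RMSE before and after is reported.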
This article explores the process of VGI collection by assessing the relative usability and accuracy of a range of different data collection methods (smartphone GPS, tablet, and analogue maps) amongst different demographic and educational groups and in different geographical contexts. Assessments are made of positional accuracy, completeness, and data collectors' experiences with reference to the official cadastral data and the administration system in a case-study region of Iraq. Ownership data were validated by crowd agreement. The results show that successful VGI projects have access to varying data collection methods.
Energy saving is a central concern in IoT sensor networks because IoT sensor nodes operate on their own limited batteries. Data transmission in IoT sensor nodes is very costly and consumes much of the energy, while the energy used for data processing is considerably lower. There are several energy-saving strategies and principles, mainly dedicated to reducing the transmission of data; minimizing data transfers in IoT sensor networks can therefore conserve a considerable amount of energy. In this research, a Compression-Based Data Reduction (CBDR) technique is suggested which works at the level of the IoT sensor nodes. CBDR includes two stages of compression: a lossy SAX quantization stage, which reduces the dynamic range of the
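A minimal sketch of a lossy SAX quantization stage is given below, assuming an alphabet of size 4 with the standard Gaussian breakpoints; the actual CBDR parameters are not stated here.

```python
import bisect
import statistics

# Gaussian breakpoints for an alphabet of size 4 (standard SAX table).
BREAKPOINTS = [-0.6745, 0.0, 0.6745]
ALPHABET = "abcd"

def sax_quantize(readings):
    """Lossy SAX quantization: z-normalise the readings, then map each
    value to a symbol using equiprobable Gaussian breakpoints."""
    mean = statistics.fmean(readings)
    std = statistics.pstdev(readings) or 1.0  # avoid division by zero
    return "".join(ALPHABET[bisect.bisect(BREAKPOINTS, (x - mean) / std)]
                   for x in readings)

# Hypothetical temperature readings from one sensor node.
readings = [21.0, 21.1, 21.0, 25.0, 30.0, 29.8]
print(sax_quantize(readings))  # "aaacdd"
```

Each reading collapses to a 2-bit symbol instead of a multi-byte float, which is the kind of dynamic-range reduction that cuts transmission cost before any further (lossless) compression stage.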
The current study aims to compare the estimates of the Rasch model's parameters for missing and complete data under various methods of processing missing data. To achieve this aim, the researcher followed these steps: preparing the Philip Carter spatial ability test, which consists of (20) items, administered to a group of (250) sixth scientific stage students in the directorates of Baghdad Education at Al-Rusafa (1st, 2nd and 3rd) for the academic year (2018-2019). The researcher then relied on the one-parameter model to analyze the data, and used the Bilog-MG3 software to check the hypotheses and the fit of the data to the model. In addition
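The single-parameter model referred to above can be sketched by its item response function; the ability (theta) and difficulty (b) values below are illustrative, not estimates from the study.

```python
import math

def rasch_probability(theta, b):
    """One-parameter (Rasch) IRT model: probability that a person of
    ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the success probability is exactly 0.5.
print(rasch_probability(0.0, 0.0))  # 0.5
print(rasch_probability(1.0, 0.0))  # ~0.731
```

Estimation software such as Bilog-MG fits the item difficulties (and person abilities) so that these predicted probabilities best match the observed response matrix, which is where the treatment of missing responses matters.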
Twitter data analysis is an emerging field of research that utilizes data collected from Twitter to address many issues such as disaster response, sentiment analysis, and demographic studies. The success of data analysis relies on collecting accurate and representative data about the studied group or phenomenon to get the best results. Various Twitter analysis applications rely on collecting the locations of the users sending the tweets, but this information is not always available. There have been several attempts at estimating location-based aspects of a tweet; however, there is a lack of work investigating data collection methods that focus on location. In this paper, we investigate two methods for obtaining location-based data
Traffic classification is the task of categorizing traffic flows into application-aware classes such as chats, streaming, VoIP, etc. Most network traffic identification systems are based on features, which may be static signatures, port numbers, statistical characteristics, and so on. Although current methods of data-flow classification are effective, they still lack new, inventive approaches to meet the needs of vital points such as real-time traffic classification, low power consumption, Central Processing Unit (CPU) utilization, etc. Our novel Fast Deep Packet Header Inspection (FDPHI) traffic classification proposal employs a 1-Dimension Convolution Neural Network (1D-CNN) to automatically learn more representational c
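The core operation a 1D-CNN applies to packet header bytes can be sketched as a plain 1-D convolution; the header bytes and the filter below are illustrative values, not learned FDPHI parameters.

```python
def conv1d(seq, kernel, stride=1):
    """Valid-mode 1-D convolution (cross-correlation) of a byte sequence
    with a filter, as in a 1D-CNN's first layer."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(seq) - k + 1, stride)]

# Hypothetical first bytes of an IPv4 header, scaled to [0, 1].
header = [b / 255 for b in (0x45, 0x00, 0x00, 0x3C, 0x1C, 0x46)]
edge_filter = [1.0, -1.0]  # responds to changes between adjacent bytes
print(conv1d(header, edge_filter))
```

A real 1D-CNN stacks many such filters (with learned weights, nonlinearities, and pooling) so that discriminative header patterns are extracted automatically rather than hand-crafted as signatures.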
ABSTRACT
In this paper, some semi-parametric spatial models were estimated: the semi-parametric spatial error model (SPSEM), which suffers from the problem of spatial error dependence, and the semi-parametric spatial autoregressive model (SPSAR). The maximum-likelihood method was used to estimate the spatial error parameter (λ) in the SPSEM model and the spatial dependence parameter (ρ) in the SPSAR model, while non-parametric methods were used to estimate the smoothing function m(x) for these two models; among these non-parametric methods is the local linear estimator (LLE), which requires finding the smoo
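The local linear estimator (LLE) mentioned above can be sketched as a kernel-weighted least-squares fit around the evaluation point; the Gaussian kernel, bandwidth, and toy data below are illustrative assumptions.

```python
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u)

def local_linear(x0, xs, ys, h):
    """Local linear estimate of m(x0): fit a line to the data with
    Gaussian kernel weights centred at x0 and bandwidth h, then
    evaluate the fitted line at x0."""
    w = [gaussian_kernel((x - x0) / h) for x in xs]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, xs))
    sxy = sum(wi * (xi - xbar) * (yi - ybar)
              for wi, xi, yi in zip(w, xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return ybar + slope * (x0 - xbar)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.1]            # roughly linear toy data
print(local_linear(2.0, xs, ys, h=1.0))   # close to 2.0
```

The bandwidth h controls the bias-variance trade-off of m(x); choosing it (e.g. by cross-validation) is the smoothing-parameter selection step the truncated sentence above refers to.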