Today, problems of spatial data integration have been further complicated by rapid developments in communication technologies and the growing number of data sources available on the World Wide Web. Web-based geospatial data sources may be managed by different communities, and the data themselves can vary with respect to quality, coverage, and purpose. Integrating such multiple geospatial datasets remains a challenge for geospatial data consumers. This paper concentrates on integrating the geometric and classification schemes of official data, such as Ordnance Survey (OS) national mapping data, with volunteered geographic information (VGI), such as data derived from the OpenStreetMap (OSM) project. Useful measures of geometric accuracy (positional accuracy and shape fidelity) have been obtained. Semantic similarity testing covered feature classification, in effect comparing possible categories (legend classes) and the actual attributes attached to features. The model involves ‘tokenization’ to search for common word roots, and the feature classifications have been modelled as an XML-schema labelled rooted tree for hierarchical analysis. Semantic similarity was measured using the WordNet::Similarity package; among the similarity methods it offers, the Lin approach was adopted because it gives normalised comparison scores. The results reveal poor correspondence in the geometric and semantic integration of OS and OSM data.
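The Lin measure used above is a standard information-content similarity. A minimal sketch, using a tiny hypothetical concept hierarchy and illustrative corpus counts (not the paper's data or the WordNet::Similarity implementation):

```python
import math

# Hypothetical corpus frequencies for a tiny concept hierarchy
# (all names and counts are illustrative, not from the paper's data).
freq = {"entity": 1000, "way": 120, "road": 60, "path": 40}
total = freq["entity"]

def ic(concept):
    # Information content: IC(c) = -log P(c)
    return -math.log(freq[concept] / total)

def lin_similarity(c1, c2, lcs):
    # Lin: sim = 2 * IC(lcs) / (IC(c1) + IC(c2)),
    # where lcs is the least common subsumer of c1 and c2.
    return 2 * ic(lcs) / (ic(c1) + ic(c2))

score = lin_similarity("road", "path", lcs="way")
print(round(score, 3))  # -> 0.703
```

Because IC(lcs) can never exceed the ICs of the concepts it subsumes, the score is naturally normalised to [0, 1], which is the property the paper exploits for comparison scores.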
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional data mining and machine learning algorithms do not scale well with data size. Mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of some loss in accuracy. We have developed a data aggregation structure to summarise data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining an
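The abstract does not specify the structure's internals; one way to picture the idea is a set of histograms kept at several bin widths, updated incrementally, where coarser bins trade accuracy for memory and query speed. A minimal sketch under that assumption (1-D keys, sum/count aggregates):

```python
from collections import defaultdict

class MultiResolutionAggregate:
    """Sketch of a multi-resolution aggregation structure (assumption:
    values keyed by a 1-D coordinate; the paper's structure may differ)."""

    def __init__(self, resolutions=(1.0, 10.0, 100.0)):
        # One histogram of (count, sum) per resolution.
        self.levels = {r: defaultdict(lambda: [0, 0.0]) for r in resolutions}

    def add(self, x, value):
        # Incremental update: each new instance touches one bin per level.
        for r, bins in self.levels.items():
            b = bins[int(x // r)]
            b[0] += 1
            b[1] += value

    def mean(self, x, resolution):
        # Query at a chosen resolution: coarser = faster but less accurate.
        count, total = self.levels[resolution][int(x // resolution)]
        return total / count if count else None

agg = MultiResolutionAggregate()
for x, v in [(3, 1.0), (7, 3.0), (42, 10.0)]:
    agg.add(x, v)
print(agg.mean(5, 10.0))   # mean over the coarse bin [0, 10) -> 2.0
```

Since every update is an in-place increment, the structure can be built once over the stream and refreshed incrementally, matching the usage described above.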
In this study, a preliminary economic feasibility study of a wind power project at the Al-Shehabi site (Wasit, Iraq) was conducted using wind data measured at heights of 10, 30, 50, and 52 m at 10-minute intervals. For comparison, NASA data for the same location at a height of 50 m were used. The lowest unit cost of electricity from wind energy was found to be $0.028/kWh using the standard Levelized Cost of Energy (LCOE) equation and $0.0399/kWh using the Net Present Value (NPV) procedure. Furthermore, RETScreen software was used to perform an economic prefeasibility study of a proposed wind farm. The study concludes that this site is economically feasible if a wind fa
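The LCOE and NPV calculations named above follow standard textbook definitions: LCOE divides discounted lifetime costs by discounted lifetime energy, and NPV discounts the yearly net cash flows against the initial investment. A sketch with purely illustrative inputs (not the paper's figures):

```python
def lcoe(capex, annual_opex, annual_energy_kwh, rate, years):
    # Levelized cost of energy: discounted lifetime costs divided by
    # discounted lifetime energy output.
    disc_cost = capex + sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))
    disc_energy = sum(annual_energy_kwh / (1 + rate) ** t for t in range(1, years + 1))
    return disc_cost / disc_energy

def npv(capex, annual_cashflow, rate, years):
    # Net present value of the project's yearly net cash flows.
    return -capex + sum(annual_cashflow / (1 + rate) ** t for t in range(1, years + 1))

# Illustrative wind-farm inputs: $1.5M capital cost, $30k/yr O&M,
# 3 GWh/yr output, 8% discount rate, 20-year lifetime.
cost = lcoe(capex=1_500_000, annual_opex=30_000,
            annual_energy_kwh=3_000_000, rate=0.08, years=20)
print(round(cost, 4))   # -> 0.0609 ($/kWh for these inputs)
```

A positive NPV, or an LCOE below the local electricity tariff, is the usual feasibility criterion applied in studies of this kind.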
A SPOT panchromatic satellite image was employed to study the difference between ground and satellite data. The digital number (DN) values, which range from 0 to 255, must first be converted to absolute radiance values through calibration equations and then to spectral reflectance values. In this study, the environmental effect of discharging sewage pollutants (industrial and domestic) into the Tigris river at Mosul was monitored. These pollutants mostly affect physical characteristics, especially colour and turbidity, which alter the spectral reflectance of the river water and can be detected using remote sensing techniques. The contaminated areas within th
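The DN-to-radiance and radiance-to-reflectance steps are standard radiometric conversions. A sketch using the usual linear calibration and top-of-atmosphere reflectance formulas; the gain, offset, and solar irradiance (ESUN) values below are illustrative, not SPOT's published coefficients:

```python
import math

def dn_to_radiance(dn, gain, offset):
    # Linear sensor calibration: L = gain * DN + offset
    # (gain/offset come from the image metadata; values here are illustrative).
    return gain * dn + offset

def radiance_to_reflectance(radiance, esun, sun_elev_deg, d_au=1.0):
    # Top-of-atmosphere reflectance:
    #   rho = pi * L * d^2 / (ESUN * cos(theta_z)),
    # with theta_z = 90 - solar elevation and d the Earth-Sun distance in AU.
    theta_z = math.radians(90.0 - sun_elev_deg)
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta_z))

L = dn_to_radiance(128, gain=0.5, offset=1.0)   # W m^-2 sr^-1 um^-1
rho = radiance_to_reflectance(L, esun=1570.0, sun_elev_deg=60.0)
print(round(rho, 4))
```

Turbidity changes then show up as shifts in rho between clean and contaminated stretches of the river, which is what the monitoring described above detects.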
Video streaming is widely available nowadays. Moreover, since the pandemic hit across the globe, many people have stayed home and used streaming services for news, education, and entertainment. However, during a streaming session, user Quality of Experience (QoE) suffers from poor video content selection on smartphone devices, and users are often irritated by unpredictable video quality formats. In this paper, we propose a framework video selection scheme that aims to increase user QoE satisfaction. We used a video content selection algorithm to map the video selection that best satisfies the user in terms of streaming quality. Video Content Selection (VCS) are classified in
Different MLP architectures of artificial neural networks (ANNs) have been trained by backpropagation (BP) and used to analyse Landsat TM images. Two training approaches were applied: an ordinary approach (one hidden layer, M-H1-L, and two hidden layers, M-H1-H2-L) and a one-against-all strategy (one hidden layer, (M-H1-1)×L, and two hidden layers, (M-H1-H2-1)×L). Classification accuracy of up to 90% was achieved using the one-against-all strategy with the two-hidden-layer architecture. The performance of the one-against-all approach is slightly better than that of the ordinary approach.
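The one-against-all strategy trains L separate binary networks, one per class, and classifies a pixel by the highest-scoring model. A minimal sketch of that wiring; a perceptron stands in for each M-H1-1 BP-trained network to keep the example short, and the two-band "pixels" below are illustrative, not Landsat data:

```python
class Perceptron:
    # Stand-in for one binary M-H1-1 network (the paper trains MLPs
    # with BP; a perceptron keeps the one-against-all sketch short).
    def __init__(self, n_features, lr=0.1, epochs=200):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr, self.epochs = lr, epochs

    def score(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def fit(self, X, y):                       # y in {0, 1}
        for _ in range(self.epochs):
            for xi, yi in zip(X, y):
                err = yi - (1 if self.score(xi) > 0 else 0)
                if err:
                    self.w = [w + self.lr * err * v for w, v in zip(self.w, xi)]
                    self.b += self.lr * err

def one_against_all_fit(X, labels, classes):
    # One binary model per class: class k vs. everything else.
    models = {}
    for k in classes:
        m = Perceptron(len(X[0]))
        m.fit(X, [1 if lab == k else 0 for lab in labels])
        models[k] = m
    return models

def predict(models, x):
    # Assign the class whose binary model scores highest.
    return max(models, key=lambda k: models[k].score(x))

# Toy 2-band "pixels" with three land-cover classes (illustrative data).
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.1], [0.8, 0.2], [0.5, 0.9], [0.45, 0.8]]
y = ["water", "water", "urban", "urban", "veg", "veg"]
models = one_against_all_fit(X, y, ["water", "urban", "veg"])
print(predict(models, X[0]))   # prints "water"
```

The ordinary approach would replace the dictionary of binary models with a single M-H1-L network whose L outputs are compared directly.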
Iris research focuses on developing techniques for identifying and locating relevant biometric features, with accurate segmentation and efficient computation, while lending itself to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the suitability of the system for real-time use. This paper introduces a novel parameterised technique for iris segmentation. The method comprises a number of steps, starting with converting the grayscale eye image to a bit-plane representation, followed by selection of the most significant bit planes and a parameterisation of the iris location, resulting in an accurate segmentation of the iris from the origin
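The bit-plane conversion named as the first step is a cheap bitwise operation: plane k of an 8-bit image is `(pixel >> k) & 1`. A minimal sketch with a toy patch (illustrative values, not real iris data):

```python
def bit_planes(image, planes=(7, 6, 5)):
    # Decompose an 8-bit grayscale image into binary bit planes; the
    # most significant planes carry most of the structural information.
    return {k: [[(px >> k) & 1 for px in row] for row in image] for k in planes}

# Toy 2x3 grayscale patch (illustrative values, not real iris data).
img = [[200, 130, 64],
       [255, 30, 190]]
msb = bit_planes(img)[7]
print(msb)   # -> [[1, 1, 0], [1, 0, 1]]  (1 where pixel >= 128)
```

Working on a handful of binary planes instead of full grayscale intensities is what keeps the subsequent parameterisation fast enough for real-time use.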
Big data of different types, such as texts and images, are rapidly generated from the internet and other applications. Dealing with such data using traditional methods is impractical because they vary in size, type, and processing speed requirements. Data analytics has therefore become an important tool for big data applications, since only meaningful information is analysed and extracted. This paper presents several innovative methods that use data analytics techniques to improve the analysis process and data management. Furthermore, this paper discusses how the revolution of data analytics based on artificial intelligence algorithms might provide
In recent years, the performance of Spatial Data Infrastructures for governments and companies has gained ample attention. Different categories of geospatial data, such as digital maps, coordinates, web maps, and aerial and satellite images, are required to realise the geospatial data components of Spatial Data Infrastructures. In general, two distinct types of geospatial data sources exist over the Internet: formal and informal. Despite the growth of informal geospatial data sources, the integration of different free sources has not been achieved effectively. Addressing this task can be considered the main contribution of this research. This article addresses the research question of how the