Visual analytics has become an important approach for discovering patterns in big data. Visualization already struggles with the high dimensionality of data, and issues such as the concept hierarchy on each dimension add further difficulty, making visualization a prohibitive task. A data cube offers multi-perspective aggregated views of large data sets and has important applications in business and many other areas. It has high dimensionality, concept hierarchies, a vast number of cells, and special exploration operations such as roll-up, drill-down, slicing, and dicing. All of these issues make data cubes very difficult to explore visually. Most existing approaches visualize a data cube in 2D space and require preprocessing steps. In this paper, we propose a visualization technique for visual analytics of data cubes using parallel coordinates. The proposed technique extends parallel coordinates into a 3D space to reflect concept hierarchies …
Today, large amounts of geospatial data are available on the web, from services such as Google Maps (GM), OpenStreetMap (OSM), the Flickr service, Wikimapia, and others. All of these services are referred to as open-source geospatial data. Geospatial data from different sources often has variable accuracy due to different data-collection methods; therefore, data accuracy may not meet user requirements across organizations. This paper aims to develop a tool to assess the quality of GM data by comparing it with formal data, such as spatial data from the Mayoralty of Baghdad (MB). The tool was developed in the Visual Basic language and validated on two different study areas in Baghdad, Iraq (Al-Karada and Al-Kadhumiyah). The positional accuracy was asses…
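Positional accuracy of this kind is commonly summarized as a root-mean-square error over matched point pairs. A minimal sketch in Python (the coordinates and the choice of RMSE are illustrative assumptions, not the paper's exact procedure):

```python
import math

def positional_rmse(test_pts, ref_pts):
    """Root-mean-square error of planar offsets between matched point pairs."""
    assert len(test_pts) == len(ref_pts)
    sq = [(tx - rx) ** 2 + (ty - ry) ** 2
          for (tx, ty), (rx, ry) in zip(test_pts, ref_pts)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical matched coordinates in metres (projected CRS):
# GM points versus their reference MB counterparts.
gm = [(100.0, 200.0), (150.0, 250.0), (300.0, 120.0)]
mb = [(101.0, 199.0), (149.0, 252.0), (301.0, 121.0)]
print(round(positional_rmse(gm, mb), 3))  # → 1.732
```

In practice both point sets would first be transformed into the same projected coordinate system before the offsets are computed.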
Euphemisms are advantageous in people's social lives, turning sensitive expressions into more acceptable ones so that resentful feelings and embarrassment can be avoided. This study investigates the ability of Iraqi English learners to use euphemistic expressions, while also raising their awareness, and that of members of English-teaching faculties, regarding the relevance of discussing topics that demand euphemisation. The study comprised three stages: an initial test, explicit instruction with activities, and a final test of the students' development in this domain. The test was distributed among 50 respondents in the fourth year of their undergraduate study at the University of Babylon / College of Basic Education. The lo…
The purpose of this article is to introduce a reverse engineering procedure (REP). This can be achieved by reconstructing an industrial mechanical product that has no design schemes using 3D scanners. The aim of obtaining a geometric CAD model from a 3D scanner is to represent the physical model. Generally, this is used in specific applications, such as commercial planning and manufacturing tasks. The digital data is held in the stereolithography (STL) format. The point cloud is converted programmatically by producing triangles between points, a procedure known as triangulation. Parts with unknown documentation can then easily be manufactured by transferring the information to CNC machines. In this work, a modification was proposed and used in RE…
... Show MoreIn this paper, a discussion of the principles of stereoscopy is presented, and the phases
of 3D image production of which is based on the Waterfall model. Also, the results are based
on one of the 3D technology which is Anaglyph and it's known to be of two colors (red and
cyan).
A 3D anaglyph image and visualization technologies will appear as a threedimensional
by using a classes (red/cyan) as considered part of other technologies used and
implemented for production of 3D videos (movies). And by using model to produce a
software to process anaglyph video, comes very important; for that, our proposed work is
implemented an anaglyph in Waterfall model to produced a 3D image which extracted from a
video.
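The red/cyan compositing step described above can be sketched as follows, treating each stereo image as a flat list of RGB tuples (a simplified stand-in for real video frames):

```python
def anaglyph(left_px, right_px):
    """Compose a red/cyan anaglyph: the red channel comes from the
    left-eye image, the green and blue (cyan) channels from the right."""
    return [(l[0], r[1], r[2]) for l, r in zip(left_px, right_px)]

# Two hypothetical 2-pixel frames from a stereo pair.
left = [(200, 10, 10), (50, 60, 70)]
right = [(20, 180, 190), (80, 90, 100)]
print(anaglyph(left, right))  # → [(200, 180, 190), (50, 90, 100)]
```

Viewed through red/cyan glasses, each eye then receives only its own image, which produces the depth effect.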
The study includes building a 3D geological model, which involves obtaining petrophysical properties (porosity, permeability, and water saturation). Effective porosity and water saturation result from the log-interpretation process, and permeability from a special correlation using core data and log data. Clay volume can be calculated in six ways using IP software v3.5; the best way was by using the gamma ray. Water resistivity, flushed-zone saturation, and bulk-volume analysis were also determined through the geological study. Lithology was determined in several ways, using M-N matrix identification and Density-Neutron and Sonic-Neutron cross-plots. The cut-off values are determined by using EHC (Equivalent Hydra…
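The gamma-ray approach to clay volume mentioned above is commonly based on the linear gamma-ray index; a minimal sketch (the log readings here are hypothetical, and real workflows often apply a non-linear correction on top of this index):

```python
def gamma_ray_index(gr, gr_min, gr_max):
    """Linear shale/clay volume estimate from the gamma-ray log:
    0 at the clean-sand baseline (gr_min), 1 at the shale line (gr_max)."""
    return (gr - gr_min) / (gr_max - gr_min)

# Hypothetical readings in API units: clean sand 15, shale line 135.
print(round(gamma_ray_index(75.0, 15.0, 135.0), 2))  # → 0.5
```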
The issue of penalized regression models has received considerable attention for variable selection, where it plays an essential role in dealing with high-dimensional data. The arctangent penalty, denoted Atan, has recently been used as an efficient method for both estimation and variable selection. However, the Atan penalty is very sensitive to outliers in the response variable or to heavy-tailed error distributions, whereas least absolute deviation is a good method for achieving robustness in regression estimation. The specific objective of this research is to propose a robust Atan estimator that combines these two ideas. Simulation experiments and real-data applications show that the p…
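As a sketch of the combination described above, the robust estimator replaces the squared-error loss with an absolute-deviation loss while keeping the Atan penalty (the penalty form below follows the common Atan formulation in the literature; the exact tuning-parameter notation is an assumption, not taken from this abstract):

```latex
\hat{\beta} = \arg\min_{\beta}\; \sum_{i=1}^{n} \left| y_i - x_i^{\top}\beta \right|
  \;+\; \sum_{j=1}^{p} \lambda \left( \gamma + \tfrac{2}{\pi} \right)
  \arctan\!\left( \frac{|\beta_j|}{\gamma} \right),
  \qquad \lambda,\ \gamma > 0 .
```

The first term supplies the robustness of least absolute deviation; the second supplies the near-unbiased sparse selection behavior of the Atan penalty.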
Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue affects the performance of machine learning models because the values of some features will be missing. Therefore, there is a need for specific methods for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, which are support vector machine (SVM), K-nearest neighbour (KNN), and Naïve B…
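As a simplified illustration of the imputation step (using plain column-mean filling as a baseline stand-in, not the paper's SSA-based search for fill-in values):

```python
def mean_impute(rows, missing=None):
    """Replace each missing entry with the column mean of observed values.
    A simple baseline; the paper's ISSA instead searches for fill-ins that
    maximize downstream classifier performance."""
    cols = list(zip(*rows))
    means = []
    for col in cols:
        obs = [v for v in col if v is not missing]
        means.append(sum(obs) / len(obs))
    return [[means[j] if v is missing else v for j, v in enumerate(row)]
            for row in rows]

# Hypothetical 3x2 feature matrix with two missing entries.
data = [[1.0, None], [3.0, 4.0], [None, 8.0]]
print(mean_impute(data))  # → [[1.0, 6.0], [3.0, 4.0], [2.0, 8.0]]
```

The completed matrix can then be fed to the SVM, KNN, or Naïve Bayesian classifiers the abstract compares.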
In data mining, classification is a form of data analysis that can be used to extract models describing important data classes. Two of the well-known algorithms used in data mining classification are the Backpropagation Neural Network (BNN) and Naïve Bayesian (NB). This paper investigates the performance of these two classification methods using the Car Evaluation dataset. Models were built for both algorithms and the results were compared. Our experimental results indicated that the BNN classifier yields higher accuracy than the NB classifier, but it is less efficient because it is time-consuming and difficult to analyze due to its black-box implementation.
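A minimal categorical Naïve Bayesian classifier of the kind compared above can be sketched as follows (the two-feature toy data is hypothetical, not the Car Evaluation dataset, and the smoothing is the simplest Laplace variant):

```python
from collections import Counter, defaultdict

def nb_train(X, y):
    """Count class priors and per-feature value frequencies per class."""
    classes = Counter(y)
    cond = defaultdict(Counter)  # (feature index, class) -> value counts
    for row, label in zip(X, y):
        for j, v in enumerate(row):
            cond[(j, label)][v] += 1
    return classes, cond

def nb_predict(model, row):
    """Pick the class maximizing prior * product of smoothed likelihoods."""
    classes, cond = model
    total = sum(classes.values())
    best, best_p = None, -1.0
    for c, n in classes.items():
        p = n / total
        for j, v in enumerate(row):
            p *= (cond[(j, c)][v] + 1) / (n + 2)  # add-one smoothing
        if p > best_p:
            best, best_p = c, p
    return best

X = [("high", "2"), ("low", "4"), ("high", "2"), ("low", "4")]
y = ["unacc", "acc", "unacc", "acc"]
model = nb_train(X, y)
print(nb_predict(model, ("high", "2")))  # → unacc
```

Unlike the BNN, every quantity here (priors, conditional counts) is directly inspectable, which is the interpretability trade-off the abstract notes.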
Reliable data transfer and energy efficiency are essential considerations for network performance in resource-constrained underwater environments. One of the efficient approaches for data routing in underwater wireless sensor networks (UWSNs) is clustering, in which data packets are transferred from sensor nodes to a cluster head (CH). Data packets are then forwarded to a sink node in a single-hop or multi-hop manner, which can increase energy depletion at the CH compared to other nodes. While several mechanisms have been proposed for cluster formation and CH selection to ensure efficient delivery of data packets, less attention has been given to massive data co…
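A common CH-selection heuristic of the kind surveyed above picks the node with the highest residual energy; a minimal sketch (the node records are hypothetical, and real schemes typically add distance-to-sink and load terms to the score):

```python
def select_cluster_head(nodes):
    """Elect as cluster head the node with the most residual energy,
    spreading the CH forwarding burden as batteries drain unevenly."""
    return max(nodes, key=lambda n: n["energy"])["id"]

# Hypothetical cluster members with normalized residual energy.
nodes = [{"id": "n1", "energy": 0.42},
         {"id": "n2", "energy": 0.87},
         {"id": "n3", "energy": 0.55}]
print(select_cluster_head(nodes))  # → n2
```

Re-running the election each round rotates the CH role, which mitigates the premature depletion of any single node.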