Extracting knowledge from raw data has delivered valuable information in several domains. The prevalent use of social media has produced extraordinary quantities of social data: put simply, social media offers users an accessible platform for sharing information. Data mining can reveal meaningful patterns that are useful to users, businesses, and customers. Social media data are noisy, massive, unstructured, and dynamic by nature, so new challenges arise. The purpose of this study is to investigate the data mining methods applied to social networks, adopting a criteria-based survey plan and selecting a number of papers to serve as the foundation for this article. After a careful evaluation of these papers, it was discovered that numerous data extraction approaches have been applied to social media data to address a variety of research goals in several industrial and service fields. However, data mining implementations are still immature and require more work from industry and academia before they are sufficiently developed. To conclude this analysis: data mining is the principal means of uncovering hidden information in large datasets, especially in social network analysis, and it represents a key social media technology.
This study aims to estimate the accuracy of digital elevation models (DEMs) created from open-source Google Earth data and to compare them with the widely available DEM datasets: the Shuttle Radar Topography Mission (SRTM), version 3, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), version 2. The GPS technique is used in this study to produce a digital elevation raster with a high level of accuracy, serving as a reference raster against which the DEM datasets are compared. The Al Jadriya campus of Baghdad University is selected as the study area. In addition, 151 reference points were created within the study area to evaluate the results based on RMS error values. Furthermore, th
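The accuracy comparison described above rests on the RMS error between each DEM and the GPS reference points. A minimal sketch of that calculation, assuming both elevation sets have already been sampled at the same reference points (the values below are invented for illustration, not the study's data):

```python
import numpy as np

def dem_rmse(dem_values, reference_values):
    """Root-mean-square error of DEM elevations against GPS reference points."""
    dem = np.asarray(dem_values, dtype=float)
    ref = np.asarray(reference_values, dtype=float)
    return float(np.sqrt(np.mean((dem - ref) ** 2)))

# Hypothetical elevations (metres) at four shared reference points.
dem = [34.2, 35.1, 33.8, 36.0]
gps = [34.0, 35.5, 33.5, 36.2]
print(round(dem_rmse(dem, gps), 3))  # → 0.287
```

The same function would be applied once per DEM dataset (SRTM, ASTER GDEM, Google Earth-derived) over all 151 points, and the smallest RMSE indicates the closest agreement with the GPS survey.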
ABSTRACT
In this paper, some semi-parametric spatial models were estimated: the semi-parametric spatial error model (SPSEM), which suffers from the problem of spatially dependent errors, and the semi-parametric spatial autoregressive model (SPSAR). The method of maximum likelihood was used to estimate the spatial error parameter (λ) in the SPSEM model and the spatial dependence parameter (ρ) in the SPSAR model, while non-parametric methods were used to estimate the smoothing function m(x) for these two models. Among these non-parametric methods is the local linear estimator (LLE), which requires finding the smoo
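The local linear estimator named above can be sketched as a kernel-weighted least squares fit at each evaluation point: the intercept of the locally fitted line is the estimate of m(x). A minimal illustration with one covariate; the Gaussian kernel and the bandwidth are hypothetical choices, not necessarily those of the paper:

```python
import numpy as np

def local_linear(x_eval, x, y, h):
    """Local linear estimate of m(x): at each x0, fit a weighted line
    y ≈ b0 + b1*(x - x0) with Gaussian kernel weights; b0 is m_hat(x0)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    estimates = []
    for x0 in np.atleast_1d(x_eval):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])  # local design matrix
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
        estimates.append(beta[0])                       # intercept = m_hat(x0)
    return np.array(estimates)
```

A useful sanity check of the design choice: because the local design includes a linear term, the estimator reproduces exactly linear data without bias, whatever the bandwidth.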
The main objective of resource management is to supply and support site operations with the necessary resources so as to achieve the required hand-over schedule as well as cost realism within the estimated budget. The research aims to show the advantage of using GIS in resource management as one of the new tools keeping pace with developments in various countries around the world; GIS also collects the vast amount of spatial resource data in one environment that is easy to handle and quick to access, which helps in making the right decisions regarding resource management in various construction projects. The process of using GIS in the management and identification of resources is of extreme importance in t
Roaming data is an important source of information about the political and social activities of a country, and this is true for the situation in Iraq after 2003, when the mobile companies started their business. In this paper, data on subscribers roaming onto foreign networks (inbound roamers) is collected; it consists of the name of the Radio Control Point, the counter dealing with this type of information, and the Mobile Network Code/Mobile Country Code tuple. This data is processed. The results of this processing show the classification of the inbound roamers (according to their countries) during the occupation period (2008-2009). These results reflect the political situation of Iraq at that time. Information resulting from this proc
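Classifying inbound roamers by home country from the Mobile Network Code/Mobile Country Code tuple amounts to a lookup and a tally. A hedged sketch, assuming each record arrives as an "MNC/MCC" string (the field order is taken from the abstract, not verified); the small MCC table is illustrative only, as the authoritative list is maintained by the ITU:

```python
from collections import Counter

# Illustrative MCC-to-country table (incomplete by design).
MCC_COUNTRY = {"418": "Iraq", "310": "United States", "234": "United Kingdom",
               "420": "Saudi Arabia", "286": "Turkey"}

def classify_roamers(records):
    """Tally inbound roamers per home country from 'MNC/MCC' strings."""
    counts = Counter()
    for record in records:
        mcc = record.split("/")[1]                     # second field assumed MCC
        counts[MCC_COUNTRY.get(mcc, "unknown")] += 1
    return counts
```

Aggregating such tallies per month would yield the country-by-country roamer distribution the abstract interprets politically.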
Region-based association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, instead of individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. To tackle the problem of sparse distributions, a two-stage approach was proposed in the literature: in the first stage, haplotypes are computationally inferred from genotypes, followed by a haplotype coclassification; in the second stage, the association analysis is performed on the inferred haplotype groups. If a haplotype is unevenly distributed between the case and control samples, this haplotype is labeled
Carbon monoxide (CO) is an important indirect greenhouse gas owing to its influence on the budgets of hydroxyl radicals (OH) and ozone (O3). Atmospheric CO observations can only be made on global and continental scales by remote sensing instruments situated in space. One such instrument is the Measurements of Pollution in the Troposphere (MOPITT) instrument, which is designed to measure tropospheric CO and CH4 using a nadir-viewing geometry and was launched aboard the Earth Observing System (EOS) Terra spacecraft on 18 December 1999. Results from the analysis of the retrieved monthly data on a 1°×1° spatial grid from MOPITT were used to analyze the distribution of the CO surface mixing ratio in Iraq for th
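At its core, producing a monthly 1°×1° field from scattered retrievals is per-cell averaging. A minimal sketch of that step, assuming the retrievals are available as flat arrays of longitude, latitude, and CO mixing ratio; this is an illustration of the gridding idea, not MOPITT's actual processing chain:

```python
import numpy as np

def grid_mean(lons, lats, values, lon_edges, lat_edges):
    """Mean of scattered observations in each lon/lat grid cell (NaN if empty)."""
    total, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges],
                                 weights=values)
    count, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
    safe = np.where(count > 0, count, 1)        # avoid division by zero
    return np.where(count > 0, total / safe, np.nan)
```

With `lon_edges`/`lat_edges` at whole degrees spanning Iraq (roughly 38-48°E, 29-38°N), each output cell is the monthly mean CO mixing ratio for that 1°×1° box.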
With the development of communication technologies for mobile devices and electronic communications, the world has moved toward e-government, e-commerce, and e-banking. It has become necessary to guard these activities against intrusion or misuse and to provide protection for them, so it is important to design powerful and efficient systems for this purpose. In this paper, several variants of the negative selection algorithm have been used: negative selection with real values, negative selection with fixed-radius detectors, and negative selection with variable-sized detectors, applied to misuse-type network intrusion detection, where the algorithm generates a set of detectors to distinguish the self samples. Practica
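The real-valued, fixed-radius variant can be sketched as rejection sampling: candidate detectors are drawn at random and kept only if they do not cover any self sample; a test sample matched by any surviving detector is flagged as non-self. A minimal illustration with all parameter values hypothetical:

```python
import random

def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_detectors(self_samples, n_detectors, radius, dim=2, seed=1):
    """Negative selection: keep random candidates only if they lie
    farther than `radius` from every self sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.random() for _ in range(dim)]
        if all(dist(cand, s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, radius):
    """Flag a sample as non-self if any detector matches it."""
    return any(dist(sample, d) <= radius for d in detectors)
```

By construction, no self sample can ever be matched by a trained detector, so false positives on the training self set are zero; detection coverage of non-self space grows with the number of detectors.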
This paper presents the grey model GM(1,1), a first-order model in one variable that is the basis of grey system theory. The research deals with the properties of the grey model and a set of methods for estimating the parameters of GM(1,1): the least squares method (LS), the weighted least squares method (WLS), the total least squares method (TLS), and the gradient descent method (GD). These methods were compared on the basis of two criteria, the mean square error (MSE) and the mean absolute percentage error (MAPE). After comparison using simulation, the best method was applied to real data represented by the consumption rates of two types of oil, heavy fuel oil (HFO) and diesel fuel oil (D.O), and several tests have been applied to
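Least squares estimation of GM(1,1) proceeds by accumulating the series (1-AGO), forming background values, and solving a two-parameter linear system; forecasts come from the fitted exponential response. A minimal sketch under those standard definitions; the series in the usage line is invented, not the paper's fuel-consumption data:

```python
import numpy as np

def gm11_fit(x0):
    """Estimate the GM(1,1) parameters (a, b) by ordinary least squares."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                        # accumulated (1-AGO) series
    z = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b

def gm11_predict(x0, steps):
    """Forecast the next `steps` values of the original series."""
    a, b = gm11_fit(x0)
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO
    return x0_hat[n:]

# Invented consumption-like series growing roughly 10% per step:
print(gm11_predict([100, 110, 121, 133.1, 146.41], 1))
```

The WLS, TLS, and gradient-descent alternatives compared in the paper change only how (a, b) are obtained from the same design matrix B.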
Multilocus haplotype analysis of candidate variants with genome-wide association study (GWAS) data may provide evidence of association with disease even when the individual loci themselves do not. Unfortunately, when a large number of candidate variants are investigated, identifying risk haplotypes can be very difficult. To meet this challenge, a number of approaches have been put forward in recent years. However, most of them are not directly linked to the disease penetrances of haplotypes and thus may not be efficient. To fill this gap, we propose a mixture model-based approach for detecting risk haplotypes. Under the mixture model, haplotypes are clustered directly according to their estimated d
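One way to make the clustering-by-penetrance idea concrete is a two-component binomial mixture fitted by EM, grouping haplotypes by the proportion of case chromosomes carrying them. This is only an illustrative stand-in for the authors' model, with invented counts and starting values:

```python
def em_binomial_mixture(case_counts, totals, iters=200):
    """Two-component binomial-mixture EM: cluster haplotypes by their
    case proportion (a crude proxy for penetrance). Returns the fitted
    component proportions p and, per haplotype, the posterior probability
    of belonging to the high-proportion ('risk') component."""
    p = [0.3, 0.7]                    # invented starting proportions
    w = [0.5, 0.5]                    # mixing weights
    resp = []
    for _ in range(iters):
        # E-step: posterior responsibility of each component per haplotype
        resp = []
        for c, n in zip(case_counts, totals):
            lik = [w[k] * p[k] ** c * (1 - p[k]) ** (n - c) for k in range(2)]
            s = sum(lik)
            resp.append([l / s for l in lik])
        # M-step: re-estimate component proportions and mixing weights
        for k in range(2):
            p[k] = (sum(r[k] * c for r, c in zip(resp, case_counts))
                    / sum(r[k] * n for r, n in zip(resp, totals)))
            w[k] = sum(r[k] for r in resp) / len(resp)
    return p, [r[1] for r in resp]
```

Haplotypes with a high posterior for the high-proportion component would be reported as candidate risk haplotypes; the paper's actual model links the clusters directly to disease penetrance rather than to this raw case proportion.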