Twitter data analysis is an emerging field of research that uses data collected from Twitter to address issues such as disaster response, sentiment analysis, and demographic studies. The success of any such analysis depends on collecting data that is accurate and representative of the studied group or phenomenon. Many Twitter analysis applications rely on the locations of the users sending the tweets, but this information is not always available. Several studies have attempted to estimate location-based aspects of a tweet; however, little attention has been paid to the location-focused data collection methods themselves. In this paper, we investigate the two methods the Twitter API provides for obtaining location-based data: Twitter places and the geocode parameter. We studied these methods to determine their accuracy and their suitability for research. The study concludes that the places method is the more accurate but excludes much of the data, while the geocode method yields more data but requires special attention to outliers. Copyright © Research Institute for Intelligent Computer Systems, 2018. All rights reserved.
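The geocode method returns tweets inside a latitude/longitude/radius circle, so outlier handling typically amounts to a distance check against the query centre. A minimal sketch of that check, with a hypothetical tweet structure (the field names `lat`/`lon` are illustrative, not the API's schema):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km, using a mean Earth radius of 6371 km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_radius(tweets, center, radius_km):
    """Keep only tweets whose coordinates fall inside the query radius."""
    lat0, lon0 = center
    return [t for t in tweets
            if haversine_km(lat0, lon0, t["lat"], t["lon"]) <= radius_km]
```

A tweet far outside the requested radius (an outlier in the sense of the abstract) is simply dropped before analysis.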
This article describes how to predict different types of multiple reflections in pre-stack seismic data. The characteristics of multiple reflections can be expressed as combinations of the characteristics of primary reflections. Multiple velocities are always lower in magnitude than primary velocities; this is the basis for separating them during Normal Move Out (NMO) correction. The muting procedure is applied in the time-velocity analysis domain, and the semblance plot is used to diagnose the presence of multiples and to judge the muting dimensions. This processing procedure is used to eliminate internal multiples from real 2D seismic data from southern Iraq in two stages. The first is conventional Normal Move Out correction and velocity auto picking and
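As a reminder of the standard relation underlying this velocity-based separation (stated here from the general literature, not from the truncated abstract itself), the NMO travel time of a reflection at offset $x$ is

$$t(x) = \sqrt{t_0^2 + \frac{x^2}{v_{\mathrm{NMO}}^2}},$$

where $t_0$ is the zero-offset time. Because multiples have lower $v_{\mathrm{NMO}}$ than primaries at the same $t_0$, they retain residual moveout after correction with primary velocities, which is what makes them visible, and mutable, in the semblance panel.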
The Schmidt-Cassegrain telescope (SCT), with its spider-obscured aperture, is a type of observing instrument that operates with a concave mirror. It combines several lenses and mirrors working together as one optical system. Light rays enter the tube, fall on the main mirror, and converge on another, smaller mirror called the secondary mirror. Unlike the arrangement of Newton's telescope, the light is not sent from the secondary mirror out of the side of the tube but is directed back toward the middle of the main mirror. An opening in the middle of the main mirror lets the light beam pass out into the eyepiece lens system. The secondary mirror sits in the middle of a glass plate and is held by thin supports. The function of this plate is to correct the portr
The research aims to demonstrate the impact of tax techniques on the quality of services provided to income taxpayers by studying the correlation and influence relationships between the independent variable (tax techniques) and the dependent variable (the quality of services provided to income taxpayers). In line with the research objectives, the main hypothesis was formulated: there is a significant relationship between tax techniques and the quality of services provided to income taxpayers. A number of sub-hypotheses derived from this hypothesis are stated in the research methodology. A number of conclusions were reached, the most important of which were (through the use of the correlation coeff
In the current Windows version (Vista), as in all previous versions, it is possible to create a user account without setting a password. For a personal PC this may carry little risk, although it is not recommended, even by Microsoft itself. For business computers, however, access must be restricted, starting with a different password for every user account. For earlier versions of Windows, many resources can be found that give advice on how to construct passwords for user accounts. To some extent they contain remarks on the suitability of their solutions for Windows Vista, but all these resources are vague about what kind of passwords the user should use. To assess the protection of pa
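One common way to quantify "what kind of passwords the user should use" is a naive search-space estimate: password length times the log of the character pool actually used. This sketch is a generic illustration of that idea, not the assessment method of the paper above:

```python
import math
import string

def naive_entropy_bits(password):
    """Rough brute-force search-space estimate in bits:
    length * log2(pool size), where the pool is the union of the
    character classes the password actually draws from."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += 32  # approximate count of printable ASCII symbols
    return len(password) * math.log2(pool) if pool else 0.0
```

The estimate is deliberately crude (it ignores dictionary words and patterns), but it makes the policy point concrete: mixing character classes raises the exponent an attacker must search.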
In this paper, several types of space-time fractional partial differential equations have been solved using a special double linear integral transform, the "double Sumudu" transform. We also verify these solutions by another analytical method, the "invariant subspace method". All results are illustrated numerically and graphically.
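For context, the double Sumudu transform of a suitable function $f(x,t)$ is commonly defined in the literature (this definition is supplied here, not taken from the truncated abstract) as

$$S_2\big[f(x,t)\big](u,v) = \frac{1}{uv}\int_0^{\infty}\!\!\int_0^{\infty} e^{-\left(\frac{x}{u} + \frac{t}{v}\right)} f(x,t)\,dx\,dt,$$

a two-variable extension of the single Sumudu transform $S[f(t)](u) = \frac{1}{u}\int_0^{\infty} e^{-t/u} f(t)\,dt$, which converts derivatives in both $x$ and $t$ into algebraic expressions in $u$ and $v$.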
Expansive soil is one of the most serious problems that engineers face during the execution of infrastructure projects. Soil stabilization using chemical admixtures is one of the most traditional and widespread methods of soil improvement, and on-site soil improvement is one of the most economical solutions for many engineering applications. Using construction and demolition waste in soil stabilization, however, is still under research. The aim of this study is to identify the effect of using concrete demolition waste (CDW) in soil stabilization. A series of tests was conducted to investigate the changes in the geotechnical properties of the natural soil stabilized with CDW. From the results, it is concluded that the
Iron is one of the most abundant elements on Earth; it is essential for humans but can be a troublesome element in water supplies. In this research, an ANN model was developed to predict iron concentrations at the Al-Wahda water treatment plant in Baghdad by assessing iron concentrations at seven WTPs upstream on the Tigris River. SPSS software was used to build the ANN model. The input data were iron concentrations in the raw water for the period 2004-2011. The results indicated that the best model predicted iron concentrations at the Al-Wahda WTP with a coefficient of determination of 0.9142. The model used one hidden layer with two nodes, and the testing error was 0.834. The ANN model coul
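The architecture described (one hidden layer, two nodes) is small enough to sketch from scratch. The following is a generic illustration of such a network trained by stochastic gradient descent on toy data, not the SPSS model or the Tigris data from the study:

```python
import math
import random

def train_ann(data, hidden=2, lr=0.05, epochs=2000, seed=0):
    """Tiny regression network: one input, `hidden` tanh nodes, linear output.
    Trained by per-sample gradient descent; returns a predictor function."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # hidden -> output
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, b2 + sum(w2[j] * h[j] for j in range(hidden))

    for _ in range(epochs):
        for x, y in data:
            h, yhat = forward(x)
            err = yhat - y  # derivative of 0.5 * squared error
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err

    return lambda x: forward(x)[1]
```

With a toy linear relation between an upstream and a downstream concentration, the two-node network recovers the mapping closely; the real study's inputs were seven upstream stations rather than one.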
Face identification has been an active research area in recent years. However, the accuracy and dependability of such systems in real-life settings are still questionable. Earlier research on face identification demonstrated that LBP-based face recognition systems are preferred over others and give adequate accuracy; LBP is robust against illumination changes and is considered a high-speed algorithm. Performance metrics for such systems are calculated from time delay and accuracy. This paper introduces an improved face recognition system built in C++ with the help of the OpenCV library. Accuracy can be increased if a filter or a combination of filters is applied to the images. The accuracy increases from 95.5% (without ap
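The LBP operator at the heart of such systems is simple: each interior pixel is compared with its 8 neighbours, producing an 8-bit code, and the histogram of codes becomes the feature vector. A minimal sketch of the basic (3x3, non-circular) variant, independent of the paper's C++/OpenCV implementation:

```python
def lbp_code(img, y, x):
    """8-bit Local Binary Pattern code for interior pixel (y, x):
    compare the 8 neighbours (clockwise from top-left) with the centre;
    a neighbour >= centre contributes a 1 bit, top-left being the MSB."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[y][x]
    code = 0
    for i, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << (7 - i)
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels --
    the kind of feature vector an LBP face matcher compares."""
    h = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            h[lbp_code(img, y, x)] += 1
    return h
```

Illumination robustness follows from the construction: adding a constant to every pixel leaves all the comparisons, and hence every code, unchanged.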