Twitter data analysis is an emerging field of research that uses data collected from Twitter to address issues such as disaster response, sentiment analysis, and demographic studies. The success of such analysis relies on collecting data that are accurate and representative of the studied group or phenomenon. Many Twitter analysis applications rely on collecting the locations of the users sending the tweets, but this information is not always available. There have been several attempts at estimating location-based aspects of a tweet; however, little work has investigated the data collection methods that focus on location. In this paper, we investigate the two methods for obtaining location-based data provided by the Twitter API: the places and geocode parameters. We studied these methods to determine their accuracy and their suitability for research. The study concludes that the places method is the more accurate but excludes a lot of the data, while the geocode method provides more data but requires special attention to outliers. Copyright © Research Institute for Intelligent Computer Systems, 2018. All rights reserved.
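The outliers that the geocode method returns can be screened after collection. Below is a minimal, hypothetical sketch (the tweet structure, coordinates, and helper names are assumptions for illustration, not the paper's code) that drops returned tweets whose coordinates fall outside the queried radius, using the haversine great-circle distance:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def filter_outliers(tweets, center, radius_km):
    """Keep only geotagged tweets whose coordinates fall within the query radius."""
    lat0, lon0 = center
    return [t for t in tweets
            if t.get("coordinates") is not None
            and haversine_km(lat0, lon0, *t["coordinates"]) <= radius_km]

# Hypothetical tweets with (lat, lon) pairs; query centred on Baghdad (33.31, 44.36)
tweets = [
    {"id": 1, "coordinates": (33.30, 44.40)},   # a few km away: kept
    {"id": 2, "coordinates": (36.19, 44.01)},   # roughly 300 km away: dropped
    {"id": 3, "coordinates": None},             # no geotag: dropped
]
kept = filter_outliers(tweets, (33.31, 44.36), 50)
```

A filter like this addresses the geocode method's outlier problem noted in the conclusion without discarding the larger volume of data it returns.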
Reliable data transfer and energy efficiency are essential considerations for network performance in resource-constrained underwater environments. One efficient approach to data routing in underwater wireless sensor networks (UWSNs) is clustering, in which data packets are transferred from sensor nodes to a cluster head (CH). Data packets are then forwarded to a sink node in a single hop or over multiple hops, which can increase the energy depletion of the CH compared to other nodes. While several mechanisms have been proposed for cluster formation and CH selection to ensure efficient delivery of data packets, less attention has been given to massive data co
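Because CH energy depletion is central to the problem above, a common CH-selection heuristic is to rotate the role to the node with the most residual energy. The sketch below illustrates that heuristic only; the node fields and values are hypothetical, and the mechanisms surveyed here may use different criteria:

```python
def select_cluster_head(nodes):
    """Pick the node with the highest residual energy as cluster head.
    A common heuristic only; actual CH-selection schemes may also weigh
    distance to the sink, node degree, or link quality."""
    return max(nodes, key=lambda n: n["energy"])

# Hypothetical cluster with per-node residual energy (normalized 0..1)
cluster = [
    {"id": "n1", "energy": 0.42},
    {"id": "n2", "energy": 0.87},
    {"id": "n3", "energy": 0.65},
]
ch = select_cluster_head(cluster)
```

Re-running the selection each round spreads the CH's extra transmit load across the cluster, delaying the first node death.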
A two-time-step stochastic multi-variable, multi-site hydrological data forecasting model was developed and verified using a case study. The philosophy of this model is to use the cross-variable correlations, cross-site correlations, and two-step time-lag correlations simultaneously to estimate the parameters of the model, which are then modified using the mutation process of a genetic algorithm optimization model. The objective function to be minimized is the Akaike information criterion (AIC) value. The case study involves four variables and three sites. The variables are the monthly air temperature, humidity, precipitation, and evaporation; the sites are Sulaimania, Chwarta, and Penjwin, which are located in northern Iraq. The model performance was
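The AIC objective can be made concrete. For a least-squares fit, one standard form is AIC = n·ln(RSS/n) + 2k, which rewards goodness of fit while penalizing extra parameters. The sketch below (with hypothetical numbers, not the case-study data) compares two candidate parameter sets the way a genetic algorithm's mutation step would when searching for the minimum:

```python
import math

def aic(n, rss, k):
    """Akaike information criterion for a least-squares fit:
    AIC = n * ln(RSS / n) + 2k, where n is the sample size,
    RSS the residual sum of squares, and k the parameter count."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical candidates: model 2 fits slightly better (lower RSS)
# but pays a 2k penalty for its extra parameters.
a1 = aic(n=120, rss=34.5, k=6)
a2 = aic(n=120, rss=33.9, k=10)
better = "model 1" if a1 < a2 else "model 2"
```

With these numbers the leaner model wins: its small loss in fit is outweighed by the parameter penalty, which is exactly why AIC is a sensible objective for a mutation-driven parameter search.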
Purpose – Cloud computing (CC) and its services have enabled the information centers of organizations to adapt their informatic and technological infrastructure, making it more suitable for developing flexible information systems that respond to the informational and knowledge needs of their users. In this context, cloud-data governance has become more complex and dynamic, requiring an in-depth understanding of the data management strategy at these centers in terms of organizational structure and regulations, people, technology, processes, and roles and responsibilities. Therefore, our paper discusses these dimensions as challenges facing information centers with regard to their data governance and the impa
Malicious software (malware) performs a malicious function that compromises a computer system's security. Many methods have been developed to improve the security of computer system resources, among them firewalls, encryption, and Intrusion Detection Systems (IDS). An IDS can detect a newly unrecognized attack attempt and raise an early alarm to inform the system about the suspicious intrusion attempt. This paper proposes a hybrid IDS for detecting intrusions, especially malware, considering both network packet and host features. The hybrid IDS is designed using Data Mining (DM) classification methods, chosen for their ability to detect new, previously unseen intrusions accurately and automatically. It uses both anomaly and misuse dete
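The anomaly-plus-misuse combination can be sketched in miniature. The rules below are purely illustrative stand-ins: the port list, rate threshold, and feature names are assumptions, and the actual system uses data-mining classifiers over packet and host features rather than fixed rules:

```python
KNOWN_BAD_PORTS = {4444, 31337}   # toy misuse "signatures" (hypothetical)
BASELINE_PKT_RATE = 100.0         # packets/s assumed learned from normal traffic

def classify(packet_rate, dst_port):
    """Hybrid check: misuse detection via a signature lookup catches
    known attacks; anomaly detection via deviation from a learned
    baseline catches previously unseen behavior."""
    if dst_port in KNOWN_BAD_PORTS:
        return "misuse"
    if packet_rate > 3 * BASELINE_PKT_RATE:
        return "anomaly"
    return "normal"
```

The design point the hybrid exploits is complementary coverage: misuse rules have low false positives on known attacks, while the anomaly path is what flags novel intrusions.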
Anomaly detection is still a difficult task. To address this problem, we propose to strengthen the DBSCAN algorithm by converting all data to a graph concept frame (CFG). As is well known, DBSCAN groups data points of the same kind into clusters, while points falling outside the behavior of any cluster are considered noise or anomalies. DBSCAN can thus detect abnormal points that lie beyond a certain set threshold from the clusters (extreme values). However, not all anomalies are of that kind, abnormal and unusual or far from a specific group: there is a type of data point that does not recur, yet is considered abnormal for its known group. The analysis showed DBSCAN using the
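DBSCAN's noise labeling, which serves as the anomaly signal above, can be reproduced in a few lines. The sketch below is a simplified pure-Python version (not the CFG extension proposed here): a point is labeled noise when it is not a core point and is not within eps of any core point:

```python
def dbscan_noise(points, eps, min_pts):
    """Return the points DBSCAN would label as noise (anomalies):
    not a core point (fewer than min_pts neighbours within eps,
    counting itself), and not within eps of any core point."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    core = [p for p in points
            if sum(dist(p, q) <= eps for q in points) >= min_pts]
    return [p for p in points
            if p not in core and all(dist(p, c) > eps for c in core)]

pts = [(0, 0), (0, 1), (1, 0), (1, 1),   # dense cluster: all core points
       (10, 10)]                          # isolated point: labeled noise
```

As the abstract notes, this distance-based criterion only catches far-away extremes; a rare point sitting close to a known cluster would not be flagged, which motivates the graph-based strengthening.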
In data transmission, a change in a single bit of the received data may lead to misunderstanding or a disaster. Each bit in the sent information has high priority, especially with information such as the address of the receiver. Detecting errors at every single bit change is therefore a key issue in the data transmission field.
The ordinary single-parity detection method can detect an odd number of errors efficiently but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, showed better results but failed to cope with an increasing number of errors.
Two novel methods were suggested to detect binary bit-change errors when transmitting data over noisy media. Those methods were: 2D-Checksum me
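The single-parity failure mode described above is easy to demonstrate. The sketch below (illustrative only, not the proposed 2D-Checksum method) shows an even-parity check catching a one-bit error but silently missing a two-bit error:

```python
def parity_bit(bits):
    """Even-parity bit: 1 if the count of 1s is odd, else 0,
    so that data + parity always has an even number of 1s."""
    return sum(bits) % 2

def parity_ok(bits, parity):
    """Detects any odd number of flipped bits; misses even counts."""
    return parity_bit(bits) == parity

data = [1, 0, 1, 1, 0, 1, 0, 0]
p = parity_bit(data)

one_flip = data.copy()
one_flip[2] ^= 1                  # single-bit error: parity changes, detected

two_flips = data.copy()
two_flips[0] ^= 1
two_flips[1] ^= 1                 # two-bit error: parity unchanged, missed
```

This is precisely the gap that two-dimensional and checksum schemes narrow, and that the two proposed methods aim to close further.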
Diarrhea is one of the most commonly encountered minor ailments in community pharmacies. It is associated with significant morbidity and mortality. However, the majority of pharmacists in Iraq do not manage diarrheal cases properly. Therefore, the current study aimed to evaluate the benefit of a new mobile application (diarrhea management step by step) in improving the pharmacist's role in the management of diarrhea. The study was conducted from 21st September to 21st October 2021 using a pre-post design via a simulated patient (SP) technique. A validated diarrhea scenario was presented to each pharmacist by the SP twice, once before and once after giving the mobile application to the pharmacist. Furthermore, pharmaci
A band ratioing method is applied to calculate the salinity index (SI) and Normalized Multi-band Drought Index (NMDI) as pre-processing for agricultural decision-making in these areas. To separate the land from other features in the scene, the classical maximum likelihood classification method is used, classifying the study area into multiple classes: healthy vegetation (HV), grasslands (GL), water (W), urban (U), and bare soil (BS). A Landsat 8 satellite image of an area in the south of Iraq is used, where the land cover is classified according to the indicator ranges of the SI and NMDI.
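For reference, the NMDI is commonly computed from the near-infrared and the two shortwave-infrared bands (Landsat 8 OLI bands 5, 6, and 7) as NMDI = (NIR − (SWIR1 − SWIR2)) / (NIR + (SWIR1 − SWIR2)). Since the abstract does not give its exact SI formula, the sketch below uses one common band-ratio variant, SI = √(green × red), as an assumed placeholder; the reflectance values are hypothetical:

```python
def nmdi(nir, swir1, swir2):
    """Normalized Multi-band Drought Index:
    (NIR - (SWIR1 - SWIR2)) / (NIR + (SWIR1 - SWIR2)).
    For Landsat 8 OLI: NIR = band 5, SWIR1 = band 6, SWIR2 = band 7."""
    d = swir1 - swir2
    return (nir - d) / (nir + d)

def salinity_index(green, red):
    """One common band-ratio SI variant, sqrt(green * red);
    the paper's exact SI formula is not given in the abstract."""
    return (green * red) ** 0.5

# Hypothetical surface reflectances for a single pixel
n = nmdi(nir=0.40, swir1=0.25, swir2=0.15)
s = salinity_index(green=0.20, red=0.30)
```

Applied per pixel across the classified bare-soil and vegetation classes, the two indices give the indicator ranges on which the land-cover decisions are based.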
Purpose: to demonstrate the possibility of moving to electronic data interchange along its dimensions (regulatory requirements, technical requirements, human requirements, and senior management support) in order to simplify work procedures along their dimensions (modern procedures, clarity of procedures, short procedures, availability of the information and means required, and simplicity of the models used). This matters for keeping abreast of recent developments in the service of municipal work through the application of electronic data interchange, which simplifies procedures and removes routine from the performance of the work of municipal departments. It was applied to the Municipality of Hilla so that the transformation