This paper presents a hybrid approach to the null values problem that combines rough set theory with an intelligent swarm algorithm. The proposed approach is a supervised learning model: a large set of complete data, called learning data, is used to derive the decision rule sets that are then applied to the incomplete-data problem. The swarm algorithm is used for feature selection, with the bees algorithm acting as a heuristic search combined with rough set theory as the evaluation function. Another feature selection algorithm, ID3, is also presented; it works as a statistical algorithm rather than an intelligent one. The two approaches are compared on their performance in null value estimation when working with rough set theory. The results obtained from most code sets show that the bees algorithm outperforms ID3 in reducing the number of extracted rules without affecting accuracy, and in increasing the accuracy of null value estimation, especially as the number of null values grows.
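The abstract gives no implementation details, but the core loop it describes (a bees-algorithm search over feature subsets, scored by a rough-set dependency measure) can be sketched as follows. All function names, parameters, and the toy dataset are illustrative assumptions, not the authors' code:

```python
import random
from collections import defaultdict

def dependency(rows, features, decision):
    """Rough-set dependency degree: fraction of rows whose equivalence
    class under the chosen features is consistent on the decision."""
    if not features:
        return 0.0
    groups = defaultdict(list)
    for r in rows:
        groups[tuple(r[f] for f in features)].append(r[decision])
    positive = sum(len(v) for v in groups.values() if len(set(v)) == 1)
    return positive / len(rows)

def bees_feature_selection(rows, features, decision,
                           scouts=10, elite=3, recruits=5, iters=20, seed=0):
    """Simplified bees algorithm: random scouts explore feature subsets,
    elite sites get a local neighbourhood search (flip one feature)."""
    rng = random.Random(seed)
    def score(s):          # maximise dependency, then prefer fewer features
        return (dependency(rows, sorted(s), decision), -len(s))
    def scout():
        return frozenset(f for f in features if rng.random() < 0.5)
    def neighbour(s):
        return s ^ {rng.choice(features)}
    sites = [scout() for _ in range(scouts)]
    for _ in range(iters):
        sites.sort(key=score, reverse=True)
        best = [max([s] + [neighbour(s) for _ in range(recruits)], key=score)
                for s in sites[:elite]]
        sites = best + [scout() for _ in range(scouts - elite)]
    return max(sites, key=score)
```

On data where the decision depends on two of three attributes, this search converges to the minimal subset with full dependency, which is the "fewer rules, same accuracy" effect the abstract reports.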
Reliable data transfer and energy efficiency are essential considerations for network performance in resource-constrained underwater environments. One of the efficient approaches to data routing in underwater wireless sensor networks (UWSNs) is clustering, in which data packets are transferred from sensor nodes to a cluster head (CH). Data packets are then forwarded to a sink node in a single hop or over multiple hops, which can increase the energy depletion of the CH compared to other nodes. While several mechanisms have been proposed for cluster formation and CH selection to ensure efficient delivery of data packets, less attention has been given to massive data co
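A common way to mitigate the CH energy depletion mentioned above is to rotate the CH role using a score that weighs residual energy against distance to the sink. The weighting, field names, and node layout below are illustrative assumptions, not a mechanism taken from this paper:

```python
import math

def select_cluster_heads(nodes, sink, k, alpha=0.7):
    """Pick k cluster heads; higher residual energy and shorter distance
    to the sink both raise a node's score (alpha weights energy)."""
    max_e = max(n["energy"] for n in nodes)
    max_d = max(math.dist((n["x"], n["y"]), sink) for n in nodes)
    def score(n):
        e = n["energy"] / max_e                       # normalised energy
        d = math.dist((n["x"], n["y"]), sink) / max_d  # normalised distance
        return alpha * e + (1 - alpha) * (1 - d)
    return sorted(nodes, key=score, reverse=True)[:k]
```

Re-running the selection each round spreads the CH burden, so no single node drains its battery forwarding everyone's packets.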
This work aims at developing a robust and feasible algorithm for estimating vehicle travel times on a highway from traffic data extracted from roadside camera image sequences. The travel time estimation strategy relies on identification of the traffic state. Individual vehicle velocities are obtained from detected vehicle positions in two consecutive images by computing the distance covered during the elapsed time, and the extracted traffic flow data are fused into a scheme that predicts vehicle travel times. The Erbil road database is used to identify road regions around road segments, which are projected into the calibrated camera
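The kinematic core of the method (speed from two consecutive detections, then travel time over a segment from the mean observed speed) reduces to a few lines. The function names and the use of ground-plane coordinates in metres are assumptions for illustration, not the paper's implementation:

```python
import math

def vehicle_speed(p1, p2, dt):
    """Speed (m/s) from a vehicle's ground-plane positions in two
    consecutive frames separated by dt seconds."""
    return math.dist(p1, p2) / dt

def segment_travel_time(length_m, speeds):
    """Estimated travel time (s) over a road segment, using the mean
    of the individual vehicle speeds observed on it."""
    return length_m / (sum(speeds) / len(speeds))
```

The accuracy of the whole pipeline therefore hinges on the camera calibration that maps image positions to ground-plane metres, which is where the road database projection comes in.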
This study aims to clarify the concept of web-based information systems, one of the important topics usually overlooked by our organizations; to design a web-based information system for managing the customer data of Al-Rasheed Bank, as a unified information system specialized in the customers' banking dealings with the bank; and to provide a suggested model for applying a virtual private network as a tool to protect the data transmitted through the web-based information system.
This study is considered important because it deals with one of the vital topics nowadays, namely: how to make it possible to use a distributed informat
Chemical pollution is a critical issue that affects public health today and the health of future generations. Consequently, it must be studied in order to discover suitable models and find descriptions that predict its behaviour in the forthcoming years. Chemical pollution data in Iraq have a wide scope and manifold sources and kinds, which characterizes them as big data that need to be studied using novel statistical methods. The research focuses on using a proposed nonparametric procedure (NP method) to develop an (OCMT) test procedure for estimating the parameters of a linear regression model on large data sets (big data) comprising many indicators associated with chemi
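The abstract is truncated before the estimator's details, but the generic problem it poses, fitting a linear regression to data too large for memory, can be illustrated with a single-pass accumulation of sufficient statistics. This sketch is plain least squares over chunks, not the proposed NP/OCMT procedure:

```python
def streaming_ols(chunks):
    """One pass over an iterable of (xs, ys) chunks; accumulates the
    sufficient statistics, then solves y = a + b*x at the end."""
    n = sx = sy = sxx = sxy = 0.0
    for xs, ys in chunks:
        n += len(xs)
        sx += sum(xs)
        sy += sum(ys)
        sxx += sum(x * x for x in xs)
        sxy += sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b
```

Because only five running sums are kept, the estimate is identical to in-memory OLS regardless of how the data are chunked.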
The aim of the research is to demonstrate the relationship and influence of the components of economic intelligence (strategic alertness, information security policy, impact policy) on achieving economic growth (creativity, competitiveness, quality improvement). A questionnaire was used as the main tool for the selected sample. Answers were analyzed using the statistical program SPSS to calculate the arithmetic mean, standard deviation, percentage weight, correlation, the F test, and the coefficient of determination (R²).
The research derives its importance from the distinguished role of information systems in the work of industrial companies and their impact on achieving economic growth rates across its various activities. T
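The statistics this analysis relies on (mean, sample standard deviation, Pearson correlation, with R² = r² for a simple regression) are standard and easy to reproduce outside SPSS. The sample values in the usage check are invented for illustration:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    """Sample standard deviation (n - 1 in the denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def pearson_r(xs, ys):
    """Pearson correlation coefficient; square it for R² in the
    simple (one-predictor) regression case."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))
```

These definitions match what SPSS reports for descriptive statistics and bivariate correlation, so results can be cross-checked independently of the package.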
The purpose of this study is to investigate research on artificial intelligence algorithms in football, specifically in relation to player performance prediction and injury prevention. To accomplish this goal, scholarly resources including Google Scholar, ResearchGate, Springer, and Scopus were used to provide a systematic examination of research conducted during the last ten years (2015–2025). Through a systematic procedure that included data collection, study selection based on predetermined criteria, categorisation by AI application in football, and assessment of major research problems, trends, and prospects, almost fifty papers were identified and analysed. Summarising AI applications in football for performance and injury p
The primary goal of root canal treatment (RCT) is to remove any necrotic or vital tissue, microbes, and their byproducts from the canal space before proceeding with the subsequent steps of the RCT procedure. Although this is difficult to attain, various efforts have been made, employing chemical and mechanical methods, to eliminate as many microorganisms as possible and to prepare the canal space to receive the obturation materials. The aim of this review is to present some of the new remedies that could be used as root canal disinfectants by summarizing recent studies on the efficacy of different natural products against the most persistent microbiota that could be responsible for most
This research aims to analyze and simulate real biochemical test data to uncover the relationships among the tests and how each of them impacts the others. The data were acquired from a private Iraqi biochemistry laboratory. However, these data have many dimensions, a high rate of null values, and a large number of patients. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), Naïve Bayes (NB
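Of the pipeline described, the two steps most tied to the stated data problems can be sketched: mean imputation of the nulls, followed by a plain K-NN classifier (one of the supervised techniques listed). This is a generic illustration assuming numeric features, not the authors' pipeline:

```python
def impute_means(rows):
    """Replace None in each numeric column with that column's mean."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) /
             max(1, sum(1 for v in c if v is not None)) for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(r)]
            for r in rows]

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k nearest training rows, using squared
    Euclidean distance."""
    order = sorted(range(len(train_x)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train_x[i], query)))
    votes = [train_y[i] for i in order[:k]]
    return max(set(votes), key=votes.count)
```

Mean imputation is the simplest choice and can blur class structure when the null rate is high, which is one reason the unsupervised clustering results on the raw data came out unclear.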
Visual analytics has become an important approach to discovering patterns in big data. As visualization already struggles with the high dimensionality of data, issues such as a concept hierarchy on each dimension add further difficulty and make visualization a prohibitive task. A data cube offers multi-perspective aggregated views of large data sets and has important applications in business and many other areas. It has high dimensionality, concept hierarchies, a vast number of cells, and comes with special exploration operations such as roll-up, drill-down, slicing, and dicing. All these issues make data cubes very difficult to explore visually. Most existing approaches visualize a data cube in 2D space and require preprocessing steps. In this paper, we propose a visu
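The cube operations named above (roll-up over a subset of dimensions, slicing at a fixed member) can be demonstrated on a toy fact table. The dict-of-tuples cube representation and the sales facts are illustrative assumptions, not this paper's data model:

```python
from collections import defaultdict

def roll_up(facts, dims, measure):
    """Aggregate the measure over the requested dimensions only;
    dropping a dimension from `dims` rolls the cube up one level."""
    cube = defaultdict(float)
    for row in facts:
        cube[tuple(row[d] for d in dims)] += row[measure]
    return dict(cube)

def slice_cube(facts, dim, member):
    """Slice: keep only the facts at one member of one dimension."""
    return [row for row in facts if row[dim] == member]
```

Even this toy version shows why visual exploration is hard: every combination of retained dimensions and slice members yields a different aggregated view, and the number of such views grows exponentially with dimensionality.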