The investigation of machine learning techniques for addressing missing well-log data has attracted considerable interest in recent years, particularly as the oil and gas sector pursues new approaches to improve data interpretation and reservoir characterization. For wells that have been in operation for many years, however, conventional measurement techniques frequently face challenges of availability, including missing well-log data, high cost, and limited precision. The objective of this study is to enhance reservoir characterization by automating well-log generation using machine-learning techniques, specifically multi-resolution graph-based clustering and the similarity threshold method. By applying these techniques, our methodology shows a notable improvement in the accuracy and efficiency of well-log predictions. Standard well logs from a reference well were used to train the machine-learning models, and conventional wireline logs were used as input to estimate facies for unclassified wells lacking core data. R-squared analysis and goodness-of-fit tests provide a numerical assessment of model performance, strengthening the validation process. The multi-resolution graph-based clustering and similarity threshold approaches achieved an accuracy of nearly 98%. Applying these techniques to data from eighteen wells produced precise results, demonstrating the effectiveness of our approach in enhancing the reliability and quality of well-log generation.
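The abstract does not include code; the sketch below only illustrates the general validation workflow it describes (training on a reference well's logs, predicting a target log for another well, and scoring with R-squared). A generic random-forest regressor stands in for the paper's multi-resolution graph-based clustering and similarity threshold methods, and the synthetic log curves and column names are assumptions.

```python
# Minimal sketch of the validation workflow: train on wireline logs from a
# reference well, predict a target log for a candidate well, score with R2.
# The estimator and synthetic data are stand-ins, not the paper's methods.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def synthetic_well(n=500):
    """Fabricate loosely correlated gamma-ray, density, neutron and sonic curves."""
    gr = rng.normal(75, 20, n)                           # gamma ray (API)
    rhob = 2.65 - 0.002 * gr + rng.normal(0, 0.02, n)    # bulk density (g/cc)
    nphi = 0.10 + 0.001 * gr + rng.normal(0, 0.01, n)    # neutron porosity (v/v)
    dt = 55 + 0.3 * gr + rng.normal(0, 2, n)             # sonic (us/ft), target log
    return pd.DataFrame({"GR": gr, "RHOB": rhob, "NPHI": nphi, "DT": dt})

reference = synthetic_well()   # well with the full log suite (training data)
candidate = synthetic_well()   # well whose DT log is treated as missing

features, target = ["GR", "RHOB", "NPHI"], "DT"
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(reference[features], reference[target])

predicted = model.predict(candidate[features])
print(f"R-squared on the candidate well: {r2_score(candidate[target], predicted):.3f}")
```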
Cloud computing represents one of the most important shifts in computing and information technology (IT). However, security and privacy remain the main obstacles to its widespread adoption. This research reviews the security and privacy challenges that affect critical data in cloud computing and identifies solutions used to address them. The questions that need answers concern: (a) user access management, (b) protecting the privacy of sensitive data, and (c) identity anonymity to protect the identity of the user and the data file. To answer these questions, a systematic literature review was conducted, together with structured interviews with several security experts working on cloud computing security, to investigate the main objectives of the propo…
One wide-ranging category of open-source data is that provided by geospatial information websites. Despite the advantages of such open-source data, including ease of access and freedom from cost, its quality is a potential issue. This article tests the horizontal positional accuracy, and the possible integration, of four web-derived geospatial datasets: OpenStreetMap (OSM), Google Maps, Google Earth and Wikimapia. The evaluation was achieved by comparing the tested data with reference field-survey data for fifty road intersections in Baghdad, Iraq. The results indicate that the free geospatial data can be used to enhance authoritative maps, especially small-scale maps.
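The abstract does not state how the accuracy figure is computed, but horizontal positional accuracy against surveyed reference points is commonly summarized as the root-mean-square error of the planimetric offsets. The sketch below shows that calculation for one dataset; the coordinate values and the assumption of a projected, metre-based system are illustrative only.

```python
# Sketch of a horizontal positional accuracy check: RMSE of the planimetric
# offsets between web-derived points and surveyed reference points.
# Coordinates are assumed to be in a projected system (metres), e.g. UTM.
import numpy as np

def horizontal_rmse(web_xy: np.ndarray, ref_xy: np.ndarray) -> float:
    """web_xy, ref_xy: arrays of shape (n, 2) holding easting/northing pairs."""
    offsets = np.hypot(web_xy[:, 0] - ref_xy[:, 0], web_xy[:, 1] - ref_xy[:, 1])
    return float(np.sqrt(np.mean(offsets ** 2)))

# Hypothetical coordinates for a few road intersections (metres).
osm = np.array([[443512.1, 3685220.4], [443890.7, 3685610.9]])
survey = np.array([[443514.0, 3685218.8], [443888.2, 3685612.5]])
print(f"OSM horizontal RMSE: {horizontal_rmse(osm, survey):.2f} m")
```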
Most industrial organizations in the world suffer from pollution caused by toxic chemical substances, which has encouraged reliance on the principle of responsible production because of its positive role in handling these chemicals and safeguarding public health. The main goal of this study is therefore to identify the role of responsible production in achieving an environmental management system, through a field study at the Northern Gas Company in Kirkuk province. The topic has acquired considerable importance because of the limited number of studies and research…
Mixed-effects conditional logistic regression is evidently more effective in studying qualitative differences in longitudinal pollution data as well as their implications for heterogeneous subgroups. This study argues that conditional logistic regression is a robust evaluation method for environmental studies, through the analysis of environmental pollution as a function of oil production and environmental factors. It has been established theoretically that the primary objective of model selection in this research is to identify the candidate model that is optimal for the conditional design. The candidate model should achieve generalizability, goodness of fit and parsimony, and establish an equilibrium between bias and variability…
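No model specification is given in the abstract; as an illustration only, the sketch below fits a plain conditional logistic regression with statsmodels, stratifying longitudinal observations by monitoring site. The variable names and simulated data are hypothetical, and the paper's mixed-effects extension is not reproduced here.

```python
# Illustrative conditional logistic regression: a binary pollution-exceedance
# indicator modelled as a function of oil production and an environmental
# covariate, with observations grouped (stratified) by monitoring site.
# Data are simulated; this is not the paper's mixed-effects specification.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(42)
n_sites, n_obs = 10, 24  # 10 sites, 24 monthly observations each

site = np.repeat(np.arange(n_sites), n_obs)
oil_production = rng.normal(100, 15, size=n_sites * n_obs)
temperature = rng.normal(30, 5, size=n_sites * n_obs)

# Simulated outcome: the chance of exceeding a pollution threshold rises with
# production, plus a site-specific baseline that the model conditions out.
site_effect = rng.normal(0, 1, size=n_sites)[site]
logit = 0.05 * (oil_production - 100) + 0.02 * (temperature - 30) + site_effect
exceeds = rng.binomial(1, 1 / (1 + np.exp(-logit)))

exog = pd.DataFrame({"oil_production": oil_production, "temperature": temperature})
result = ConditionalLogit(exceeds, exog, groups=site).fit()
print(result.summary())
```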
The soil invertebrate community plays an important role as part of the essential food chain: it is responsible for decomposition in the soil, helps soil aeration and nutrient recycling, and increases agricultural production by providing the elements necessary for photosynthesis and energy flow in ecosystems. The aim of the present study was to investigate the soil invertebrate community in a date-palm plantation in the Aljaderia district, south of Baghdad, and its relationship with some physical and chemical properties of the soil. Five randomly distributed replicates of soil samples were collected monthly. Invertebrate samples were sorted from the soil by two methods: a direct method to isolate large invertebrates and an indirect…