The investigation of machine learning techniques for addressing missing well-log data has garnered considerable interest recently, especially as the oil and gas sector pursues novel approaches to improve data interpretation and reservoir characterization. However, for wells that have been in operation for several years, conventional measurement techniques frequently encounter availability challenges, including missing well-log data, cost considerations, and precision issues. This study aims to enhance reservoir characterization by automating well-log generation using machine-learning techniques, among them multi-resolution graph-based clustering and the similarity threshold method. By applying these machine learning techniques, our methodology shows a notable improvement in the precision and effectiveness of well-log predictions. Standard well logs from a reference well were used to train the machine learning models, and conventional wireline logs were used as input to estimate facies for unclassified wells lacking core data. R-squared analysis and goodness-of-fit tests provide a numerical assessment of model performance, strengthening the validation process. The multi-resolution graph-based clustering and similarity threshold approaches have demonstrated notable results, achieving an accuracy of nearly 98%. Applying these techniques to data from eighteen wells produced precise results, demonstrating the effectiveness of our approach in enhancing the reliability and quality of well-log production.
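As a rough sketch of the workflow described above (train on logs from a cored reference well, then predict facies for uncored wells), the code below uses a random-forest classifier as a stand-in for the multi-resolution graph-based clustering and similarity threshold methods, which are not reproduced here; the file name, log mnemonics, and FACIES column are hypothetical.

```python
# Minimal sketch: facies prediction from conventional wireline logs.
# A random-forest classifier stands in for the paper's MRGC / similarity
# threshold methods; the CSV layout and curve names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

logs = pd.read_csv("reference_well_logs.csv")   # hypothetical file
X = logs[["GR", "RHOB", "NPHI", "DT"]]          # assumed input curves
y = logs["FACIES"]                              # core-derived facies labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```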
Some of the main challenges in developing an effective network-based intrusion detection system (IDS) are analyzing large volumes of network traffic and identifying the decision boundaries between normal and abnormal behavior. Deploying feature selection together with efficient classifiers in the detection system can overcome these problems. Feature selection finds the most relevant features, thereby reducing the dimensionality and the complexity of analyzing the network traffic. Moreover, building the predictive model on the most relevant features reduces the complexity of the developed model, which shortens the time needed to build the classifier and consequently improves detection performance. In this study, two different sets of select…
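As a minimal sketch of the pipeline described above (select the most relevant features, then train a lightweight classifier on them), the code below uses mutual-information ranking and a decision tree as stand-ins, since the abstract does not specify the actual criteria and classifiers; the traffic CSV with numeric features and a binary label column is hypothetical.

```python
# Minimal sketch: feature selection followed by a compact classifier.
# Mutual information and a decision tree are illustrative stand-ins.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

traffic = pd.read_csv("network_traffic.csv")     # hypothetical file, numeric features
X, y = traffic.drop(columns="label"), traffic["label"]

selector = SelectKBest(mutual_info_classif, k=10)  # keep the 10 most relevant features
X_reduced = selector.fit_transform(X, y)

clf = DecisionTreeClassifier(random_state=0)
print("CV accuracy:", cross_val_score(clf, X_reduced, y, cv=5).mean())
```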
To expedite the learning process, a group of algorithms known as parallel machine learning algorithms can be executed simultaneously on several computers or processors. As data grows in both size and complexity, and as businesses seek efficient ways to mine that data for insights, algorithms like these will become increasingly crucial. Data parallelism, model parallelism, and hybrid techniques are just some of the methods described in this article for speeding up machine learning algorithms. We also cover the benefits and threats associated with parallel machine learning, such as data splitting, communication, and scalability. We compare how well various methods perform on a variety of machine learning tasks and datasets, and we talk about…
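Of the approaches surveyed here, data parallelism is the most straightforward to show in a few lines. The sketch below is a minimal illustration, not code from the article: it splits synthetic data across worker processes, has each compute a partial gradient of a simple squared-error loss, and averages the results for one update. The linear model, loss, shard count, and learning rate are all assumptions.

```python
# Minimal data-parallelism sketch: shard the data, compute partial gradients
# in parallel worker processes, then average them for one synchronous update.
import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    X, y, w = args
    # gradient of mean squared error for a linear model on this data shard
    return 2 * X.T @ (X @ w - y) / len(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(10_000, 5)), rng.normal(size=10_000)
    w = np.zeros(5)

    shards = [(Xs, ys, w) for Xs, ys in zip(np.array_split(X, 4), np.array_split(y, 4))]
    with Pool(4) as pool:
        grads = pool.map(partial_gradient, shards)   # workers run in parallel
    w -= 0.01 * np.mean(grads, axis=0)               # one averaged update step
    print("updated weights:", w)
```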
Amputation of the upper limb significantly hinders the ability of patients to perform activities of daily living. To address this challenge, this paper introduces a novel approach that combines non-invasive methods, specifically electroencephalography (EEG) and electromyography (EMG) signals, with advanced machine learning techniques to recognize upper limb movements. The objective is to improve the control and functionality of prosthetic upper limbs through effective pattern recognition. The proposed methodology involves the fusion of EMG and EEG signals, which are processed using time-frequency domain feature extraction techniques. This enables the classification of seven distinct hand and wrist movements. The experiments conducted…
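The following minimal sketch illustrates the general fusion idea described above: time-frequency features are extracted from EMG and EEG windows, concatenated, and passed to a classifier for seven movement classes. The sampling rates, window length, band-power summary, SVM classifier, and the synthetic signals are all illustrative assumptions rather than the study's actual pipeline.

```python
# Minimal sketch: time-frequency features from EMG and EEG windows are fused
# and classified into seven movement classes. Signals here are synthetic.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def band_power_features(window, fs):
    f, t, Sxx = spectrogram(window, fs=fs, nperseg=64)
    return Sxx.mean(axis=1)          # average power per frequency bin

rng = np.random.default_rng(0)
n_trials, fs_emg, fs_eeg = 140, 1000, 250
labels = np.repeat(np.arange(7), n_trials // 7)      # seven movement classes

features = []
for _ in range(n_trials):
    emg = rng.normal(size=fs_emg)                    # 1 s synthetic EMG window
    eeg = rng.normal(size=fs_eeg)                    # 1 s synthetic EEG window
    features.append(np.concatenate([band_power_features(emg, fs_emg),
                                    band_power_features(eeg, fs_eeg)]))

print("CV accuracy:", cross_val_score(SVC(), np.array(features), labels, cv=5).mean())
```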
Statistical learning theory serves as the foundational bedrock of machine learning (ML), which in turn represents the backbone of artificial intelligence, ushering in innovative solutions for real-world challenges. Its origins can be traced to the point where statistics and computing meet, and it has evolved into a distinct scientific discipline. Machine learning can be distinguished by its fundamental branches: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Within this tapestry, supervised learning takes center stage, divided into two fundamental forms: classification and regression. Regression is tailored for continuous outcomes, while classification specializes in c…
The heterogeneous nature of carbonate reservoirs leads to severe scattering of the data; therefore, one has to be cautious in using a permeability-porosity correlation to calculate permeability unless a good correlation coefficient is available. In addition, a permeability-porosity correlation is not enough by itself, since simulation studies also require more accurate tools for reservoir description and for diagnosing flow and non-flow units. This paper evaluates the reservoir characterization of the Mishrif Formation in a southern Iraqi oil field (a heterogeneous carbonate reservoir), namely through the permeability-porosity correlation, hydraulic units (HU's), and global hydraulic elements (GHE…
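For readers unfamiliar with the hydraulic unit concept mentioned above, the sketch below shows the commonly used reservoir quality index / flow zone indicator (FZI) calculation on which hydraulic unit and global hydraulic element groupings are typically based. The input file, column names, and FZI cut-offs are assumptions for illustration, not the study's data or boundaries.

```python
# Minimal sketch of an FZI-based hydraulic-unit grouping of core data.
import numpy as np
import pandas as pd

core = pd.read_csv("mishrif_core_data.csv")     # hypothetical: PHI (fraction), K (mD)
phi, k = core["PHI"].to_numpy(), core["K"].to_numpy()

rqi = 0.0314 * np.sqrt(k / phi)                 # reservoir quality index (µm)
phi_z = phi / (1.0 - phi)                       # normalized porosity index
fzi = rqi / phi_z                               # flow zone indicator

# Group samples into hydraulic units by FZI ranges (cut-offs are illustrative)
cutoffs = [0.5, 1.0, 2.0, 4.0]
core["HU"] = np.digitize(fzi, cutoffs)
print(core.groupby("HU")[["PHI", "K"]].mean())
```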
The Jeribe Formation is the major carbonate reservoir among the Tertiary reservoirs of the Jambour oil field in northern Iraq, a field that includes faults. Engineers have difficulty managing carbonate reservoirs since they are commonly tight and heterogeneous. This research presents a geological model of the Jeribe reservoir based on its facies and reservoir characterization data (permeability, porosity, water saturation, and net-to-gross). Four wells were studied. The geological model was constructed with the Petrel 2020.3 software. The structural maps were developed using a structural contour map of the top of the Jeribe Formation. A pillar grid model with horizons and layering was designed for each zone. Following…
This paper presents a hybrid approach for solving the null-values problem; it hybridizes rough set theory with an intelligent swarm algorithm. The proposed approach is a supervised learning model. A large set of complete data, called learning data, is used to find the decision rule sets that are then used to solve the incomplete-data problem. The intelligent swarm algorithm is used for feature selection; it combines the bees algorithm, as a heuristic search algorithm, with rough set theory as the evaluation function. Another feature selection algorithm, ID3, is also presented; it works as a statistical algorithm rather than an intelligent one. A comparison between these two approaches is made of their performance in null-value estimation…
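As a rough illustration of the evaluation function mentioned above, the sketch below computes a rough-set dependency degree for candidate feature subsets; the small exhaustive search wrapped around it is a simplified stand-in for the bees algorithm, and the toy data frame is purely illustrative.

```python
# Minimal sketch: score feature subsets by rough-set dependency degree,
# i.e. the fraction of rows whose decision is fully determined by the subset.
import pandas as pd
from itertools import combinations

def dependency_degree(df, features, decision):
    # rows in groups with a single decision value form the positive region
    consistent = df.groupby(list(features))[decision].transform("nunique") == 1
    return consistent.mean()

data = pd.DataFrame({
    "outlook": ["sunny", "sunny", "rain", "rain", "overcast", "overcast"],
    "windy":   [False, True, False, True, False, True],
    "play":    ["no", "no", "yes", "no", "yes", "yes"],
})

# exhaustively score small subsets (a bees-style search would sample instead)
best = max((s for r in (1, 2) for s in combinations(["outlook", "windy"], r)),
           key=lambda s: (dependency_degree(data, s, "play"), -len(s)))
print("selected features:", best)
```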
Dust storms take place in barren and dry regions all over the world. They may be caused by intense ground winds that lift dust and sand from soft, arid land surfaces, making them rise into the air. These phenomena may have harmful effects on health, climate, infrastructure, and transportation. GIS and remote sensing have played a key role in studying dust detection. This study was conducted in Iraq with the objective of validating dust detection. These techniques have been used to derive dust indices, the Normalized Difference Dust Index (NDDI) and the Middle East Dust Index (MEDI), which are based on MODIS images and on in-situ observations based on hourly wi…
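As an illustration of one of the indices named above, the sketch below computes the NDDI from MODIS band 7 (shortwave infrared, about 2.13 µm) and band 3 (blue, about 0.47 µm) reflectances, which is how the index is commonly defined. Reading and calibrating a MODIS granule is not shown, the toy arrays and the 0.1 dust threshold are assumptions, and MEDI is not reproduced here.

```python
# Minimal NDDI sketch on toy reflectance grids standing in for a MODIS scene.
import numpy as np

def nddi(band7_reflectance, band3_reflectance):
    b7 = np.asarray(band7_reflectance, dtype=float)
    b3 = np.asarray(band3_reflectance, dtype=float)
    return (b7 - b3) / (b7 + b3)

b7 = np.array([[0.30, 0.05], [0.25, 0.10]])   # ~2.13 µm reflectance (assumed)
b3 = np.array([[0.10, 0.20], [0.08, 0.15]])   # ~0.47 µm reflectance (assumed)
index = nddi(b7, b3)
dust_mask = index > 0.1                       # illustrative cut-off, not the study's
print(index, dust_mask, sep="\n")
```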
Compressional-wave velocity (Vp) data are useful for reservoir exploration, drilling operations, stimulation, hydraulic fracturing, and development plans for a specific reservoir. Because of the different nature and behavior of the influencing parameters, Vp modeling involves considerable nonlinearity. In this study, a statistical relationship between compressional-wave velocity and petrophysical parameters was developed from wireline log data for the Jeribe Formation in the Fauqi oil field, southeast Iraq, using single and multiple linear regression. The model concentrated on predicting compressional-wave velocity from petrophysical parameters and any pair of shear-wave velocity, porosity, density, and…
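The sketch below illustrates the multiple-linear-regression step in general terms: Vp is regressed on a set of petrophysical predictors and the fit is summarized with R-squared. The input file, curve names (VS, PHI, RHOB, VP), and the use of scikit-learn are assumptions for illustration, not the study's actual data or workflow.

```python
# Minimal sketch: multiple linear regression of Vp on petrophysical predictors.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

logs = pd.read_csv("jeribe_wireline_logs.csv")   # hypothetical input file
X = logs[["VS", "PHI", "RHOB"]]                  # assumed predictor curves
y = logs["VP"]                                   # compressional-wave velocity

model = LinearRegression().fit(X, y)
print("coefficients:", dict(zip(X.columns, model.coef_)))
print("R-squared:", r2_score(y, model.predict(X)))
```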