Permeability estimation is a vital step in reservoir engineering because of its effect on reservoir characterization, perforation planning, and the economic efficiency of reservoirs. Core data and well-log data are the main sources for measuring and calculating permeability, respectively. There are multiple methods to predict permeability, such as classical, empirical, and geostatistical methods. In this research, two statistical approaches were applied and compared for permeability prediction, multiple linear regression and random forest, in the (M) reservoir interval of the (BH) Oil Field in the northern part of Iraq. The dataset was split into training and testing subsets in order to cross-validate the accuracy and performance of the algorithms. The random forest algorithm was the more accurate method, yielding a lower Root Mean Square Prediction Error (RMSPE) and a higher adjusted R-squared than the multiple linear regression algorithm on both the training and testing subsets. Thus, the random forest algorithm is more reliable for predicting permeability in non-cored intervals and for distributing it in the geological model.
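A minimal sketch of this kind of comparison, assuming hypothetical file and log-curve names (GR, RHOB, NPHI, DT) that are not from the paper; it fits both estimators on a train/test split and reports RMSE and adjusted R-squared:

```python
# Sketch: multiple linear regression vs. random forest for permeability prediction.
# Feature names and the input file are assumptions, not the study's actual data.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

logs = pd.read_csv("well_logs.csv")         # hypothetical: log readings + core permeability
X = logs[["GR", "RHOB", "NPHI", "DT"]]      # assumed predictor logs
y = np.log10(logs["CORE_PERM"])             # permeability is commonly modeled in log space

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def adjusted_r2(r2, n, p):
    """Adjusted R-squared for n samples and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=500, random_state=0)):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    adj = adjusted_r2(r2_score(y_te, pred), len(y_te), X.shape[1])
    print(type(model).__name__, f"RMSPE={rmse:.3f}", f"adj_R2={adj:.3f}")
```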
A radiological assessment of the southern part of the East Baghdad oilfield was conducted in the current study. Ten samples (scale, soil, sludge, water, and oil) were collected from the different stages of oil production. 232Th, 226Ra, and 40K in the samples were analyzed with a gamma-spectrometry system based on an HPGe detector of 40% relative efficiency. The findings indicated that the examined sites exhibit comparatively low levels of NORM contamination in contrast to other global oilfields. Nevertheless, certain areas, particularly those within the separation stages, show relatively elevated NORM concentrations exceeding the global average in soil and sludge. The maximum values of 226Ra and 232Th were found in a sludge sample; the findings indicated that ove
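For context, the standard gamma-spectrometry relation behind such measurements is A = N / (eps * P_gamma * t * m). The sketch below uses this textbook formula with purely illustrative numbers; it is not taken from the paper:

```python
# Standard specific-activity calculation for gamma spectrometry (textbook formula):
# A (Bq/kg) = net peak counts / (efficiency * gamma emission probability
#             * live time in seconds * sample mass in kg).
def specific_activity(net_counts, efficiency, gamma_prob, live_time_s, mass_kg):
    """Return specific activity in Bq/kg."""
    return net_counts / (efficiency * gamma_prob * live_time_s * mass_kg)

# Illustrative (hypothetical) numbers: 226Ra via the 609 keV line of 214Bi
# (emission probability ~0.455), 10 h live time, 0.5 kg sample.
print(specific_activity(net_counts=12000, efficiency=0.04, gamma_prob=0.455,
                        live_time_s=36000, mass_kg=0.5))   # ~36.6 Bq/kg
```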
The petrophysical characteristics of five wells drilled into the Sa'di Formation in the Halfaya oil field were evaluated using IP software to characterize the reservoir and explore hydrocarbon-bearing zones. The lithology was evaluated using the M-N cross-plot method, which showed that calcite (represented by the limestone region) is the main mineral in the Sa'di reservoir. A density-neutron cross plot used to identify the lithology likewise showed that the formation consists mainly of limestone with minor shale. Gamma-ray logs were employed to calculate the shale volume in each well. The porosity at bad-hole intervals was calculated using a sonic log and a neutron-density log at the reservoir
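The M and N lithology parameters used in such cross-plots have standard textbook definitions; a small sketch, assuming fresh-mud fluid values (dt_f = 189 us/ft, rho_f = 1.0 g/cc, phi_Nf = 1.0), rather than the study's actual parameters:

```python
# M-N cross-plot lithology parameters (standard definitions, not the paper's code).
def m_parameter(dt, rho_b, dt_f=189.0, rho_f=1.0):
    """M = 0.01 * (dt_f - dt) / (rho_b - rho_f); dt in us/ft, rho in g/cc."""
    return 0.01 * (dt_f - dt) / (rho_b - rho_f)

def n_parameter(phi_n, rho_b, phi_nf=1.0, rho_f=1.0):
    """N = (phi_Nf - phi_N) / (rho_b - rho_f)."""
    return (phi_nf - phi_n) / (rho_b - rho_f)

# Pure-limestone point (illustrative): dt = 47.5 us/ft, rho_b = 2.71 g/cc, phi_N ~ 0,
# which plots near the classic limestone location M ~ 0.83, N ~ 0.59.
print(m_parameter(47.5, 2.71), n_parameter(0.0, 2.71))
```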
Ferritin is a key mediator of immune dysregulation, especially under extreme hyperferritinemia, via direct immune-suppressive and pro-inflammatory effects. We conclude that there is a significant association between ferritin levels and the severity of COVID-19. In this paper we introduce a semi-parametric method for prediction by combining a neural network (NN) with a regression model. Two methodologies are therefore adopted in designing the model, a neural network (NN) and a regression model. The data were collected from the Private Nursing Home Hospital (مستشفى دار التمريض الخاص) for the period 11/7/2021 to 23/7/2021 and cover 100 persons: 50 with COVID-19 (12 female and 38 male) and 50 without COVID-19 (26 female and 24 male). The input variables of the NN m
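One common way to build such a semi-parametric combination is to fit the parametric (regression) part first and let a small NN learn the residual; the paper's exact scheme is not specified here, and the features below are hypothetical:

```python
# Sketch of a residual-based regression + NN combination (assumed design, not
# necessarily the paper's): linear part captures the parametric trend, the NN
# captures what the linear model misses.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # e.g., ferritin level, age, sex (hypothetical)
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(scale=0.1, size=100)

linear = LinearRegression().fit(X, y)            # parametric component
resid = y - linear.predict(X)                    # unexplained part
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                  random_state=0).fit(X, resid)  # non-parametric component

y_hat = linear.predict(X) + nn.predict(X)        # combined semi-parametric prediction
print("combined training MSE:", np.mean((y - y_hat) ** 2))
```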
In many oil fields, only BHC (borehole compensated sonic) logs are available to provide the interval transit time (Δtp), the reciprocal of the compressional wave velocity VP.
The shear wave velocity VS is needed to calculate the rock's elastic and inelastic properties and to detect gas-bearing formations. VS is also useful in fluid identification and matrix mineral identification.
Because few wells have shear wave velocity data, many empirical models have been developed to predict the shear wave velocity from the compressional wave velocity. Some are mathematical models; others use the multiple regression method or neural network techniques.
In this study a number of em
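As an example of the empirical models referred to above (not necessarily one evaluated in this study), Castagna's "mudrock line" for water-saturated clastics, Vp = 1.16*Vs + 1.36 (velocities in km/s), can be inverted to predict VS:

```python
# Castagna mudrock line inverted to predict shear velocity from compressional
# velocity; a well-known empirical relation offered here only as an illustration.
def vs_from_vp_castagna(vp_km_s):
    """Predict Vs (km/s) from Vp (km/s) via Vs = (Vp - 1.36) / 1.16."""
    return (vp_km_s - 1.36) / 1.16

# Example: a BHC transit time of 90 us/ft -> Vp = 1e6/90 ft/s, converted to km/s.
vp = (1e6 / 90) * 0.0003048
print(f"Vp = {vp:.3f} km/s, predicted Vs = {vs_from_vp_castagna(vp):.3f} km/s")
```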
With the escalation of cybercriminal activities, the demand for forensic investigations into these crimes has grown significantly. However, the concept of systematic pre-preparation for potential forensic examinations during the software design phase, known as forensic readiness, has only recently gained attention. Against the backdrop of surging urban crime rates, this study aims to conduct a rigorous and precise analysis and forecast of crime rates in Los Angeles, employing advanced Artificial Intelligence (AI) technologies. This research amalgamates diverse datasets encompassing crime history, various socio-economic indicators, and geographical locations to attain a comprehensive understanding of how crimes manifest within the city. Lev
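A minimal, hypothetical sketch of the kind of pipeline the abstract describes, merging crime history with socio-economic and location features before fitting an ML forecaster; file and column names are assumptions, not the study's data:

```python
# Sketch: merge crime counts with socio-economic indicators and fit a forecaster.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

crime = pd.read_csv("la_crime_counts.csv")    # hypothetical: area, month, lat, lon, crime_count
socio = pd.read_csv("la_socioeconomic.csv")   # hypothetical: area, income, unemployment
data = crime.merge(socio, on="area")

X = data[["month", "income", "unemployment", "lat", "lon"]]
y = data["crime_count"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```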
Support vector machine (SVM) is a popular supervised learning algorithm based on margin maximization. It has a high training cost and does not scale well to a large number of data points. We propose a multiresolution algorithm, MRH-SVM, that trains an SVM on a hierarchical data-aggregation structure, which also serves as a common data input to other learning algorithms. The proposed algorithm learns SVM models using high-level data aggregates and only visits data aggregates at more detailed levels where support vectors reside. In addition to performance improvements, the algorithm has advantages such as the ability to handle data streams and datasets with imbalanced classes. Experimental results show significant performance improvements in compa
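A hedged sketch of the multiresolution idea (not the authors' implementation): train an SVM on coarse aggregates, here cluster centroids, then drill down only into the clusters whose centroids became support vectors:

```python
# Two-level coarse-to-fine SVM training: centroids first, then only the points
# belonging to support-vector clusters. An illustration of the idea, not MRH-SVM itself.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# Coarse level: one centroid per cluster, labeled by majority vote of its members.
km = KMeans(n_clusters=100, n_init=10, random_state=0).fit(X)
centroids = km.cluster_centers_
labels = np.array([np.bincount(y[km.labels_ == c]).argmax() for c in range(100)])
coarse_svm = SVC(kernel="rbf").fit(centroids, labels)

# Fine level: revisit only clusters whose centroids are support vectors.
sv_clusters = coarse_svm.support_           # centroid indices equal cluster ids here
mask = np.isin(km.labels_, sv_clusters)
fine_svm = SVC(kernel="rbf").fit(X[mask], y[mask])
print("points used at fine level:", int(mask.sum()), "of", len(X))
```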
Real-life scheduling problems require the decision maker to consider a number of criteria before arriving at any decision. In this paper, we consider the multi-criteria scheduling problem of n jobs on a single machine to minimize a function of five criteria: total completion time (∑Ci), total tardiness (∑Ti), total earliness (∑Ei), maximum tardiness (Tmax), and maximum earliness (Emax). The single-machine total tardiness problem and total earliness problem are already NP-hard, so the considered problem is strongly NP-hard.
We apply two local search algorithms (LSAs), the descent method (DM) and the simulated annealing method (SM), to the 1//F(∑Ci, ∑Ti, ∑Ei, Tmax, Emax) problem
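A minimal sketch of simulated annealing for this problem, assuming equal criterion weights (the objective is taken as the plain sum of the five criteria) and hypothetical processing times and due dates:

```python
# Simulated annealing over job sequences on a single machine, minimizing
# sum C_i + sum T_i + sum E_i + T_max + E_max. Data and weighting are assumptions.
import math, random

p = [4, 2, 7, 3, 5]           # processing times (hypothetical)
d = [6, 5, 15, 9, 12]         # due dates (hypothetical)

def cost(seq):
    t, C, T, E = 0, [], [], []
    for j in seq:
        t += p[j]
        C.append(t)                     # completion time C_j
        T.append(max(0, t - d[j]))      # tardiness T_j
        E.append(max(0, d[j] - t))      # earliness E_j
    return sum(C) + sum(T) + sum(E) + max(T) + max(E)

def simulated_annealing(n_iter=20000, temp=50.0, alpha=0.9995):
    seq = list(range(len(p)))
    best, best_cost = seq[:], cost(seq)
    cur_cost = best_cost
    for _ in range(n_iter):
        i, j = random.sample(range(len(p)), 2)     # swap-neighborhood move
        seq[i], seq[j] = seq[j], seq[i]
        c = cost(seq)
        if c <= cur_cost or random.random() < math.exp((cur_cost - c) / temp):
            cur_cost = c
            if c < best_cost:
                best, best_cost = seq[:], c
        else:
            seq[i], seq[j] = seq[j], seq[i]        # undo rejected move
        temp *= alpha                              # geometric cooling
    return best, best_cost

print(simulated_annealing())
```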
Shadow removal is crucial for robot and machine vision, as the accuracy of object detection is greatly influenced by the uncertainty and ambiguity of the visual scene. In this paper, we introduce a new algorithm for shadow detection and removal based on Gaussian functions of different shapes, orientations, and spatial extents. Here, the contrast information of the visual scene is utilized for shadow detection and removal through five consecutive processing stages. In the first stage, contrast filtering is performed to obtain the contrast information of the image. The second stage involves a normalization process that suppresses noise and generates a balanced intensity at a specific position compared to the neighboring intensit
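A hedged sketch of the first two stages as described, using a difference-of-Gaussians as a generic contrast filter and local-energy normalization; the paper's specific Gaussian shapes and orientations are not reproduced here:

```python
# Stage 1 (contrast filtering) and stage 2 (normalization), illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_map(img, sigma_center=1.0, sigma_surround=4.0):
    """Stage 1: difference-of-Gaussians contrast filtering."""
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

def local_normalize(contrast, sigma=8.0, eps=1e-6):
    """Stage 2: divide by local energy to balance intensity against neighbors."""
    local_energy = np.sqrt(gaussian_filter(contrast ** 2, sigma)) + eps
    return contrast / local_energy

img = np.random.rand(128, 128)        # stand-in for a grayscale scene
norm = local_normalize(contrast_map(img))
print(norm.shape, float(norm.mean()))
```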
Geographic Information Systems (GIS) are playing a significant role in handling strategic applications in which data are organized as records in multiple layers of a database. Furthermore, GIS provide multiple functions such as data collection, analysis, and presentation. Geographic information systems have proven their competence in diverse fields of study by handling various problems for numerous applications. However, handling a large volume of data in GIS remains an important issue. The biggest obstacle is designing a GIS-based spatial decision-making framework that manages a broad range of specific data while achieving the right performance. It is very useful to support decision-makers by providing GIS-based decision support syste