The downhole flow profiles of wells with a single production tubing and commingled flow from more than one layer can be complicated, making it challenging to obtain the average pressure of each layer independently. Production log data can be used to monitor the impact of pressure depletion over time and to determine average pressure with Selective Inflow Performance (SIP). The SIP technique provides a method of determining the steady-state inflow relationship for each individual layer. The well flows at different stabilized surface rates, and for each rate a production log is run throughout the producing interval to record both downhole flow rates and flowing pressure. PVT data can be used to convert the measured in-situ rates to surface conditions. Different types of Inflow Performance Relationship (IPR) equations can be used for SIP interpretation, including the straight-line method, the Fetkovich method, and the Laminar Inertial Turbulent (LIT) relation. Although the SIP method strictly applies to single-phase flow, the interpreter can restrict the IPR calculations to a particular phase. This research discusses the difficulties in estimating the average reservoir pressure of wells completed in multilayered reservoirs over their production life. The SIP technique has been applied to producing wells in southern Iraq that are completed in multiple producing reservoirs previously tested with a formation tester to estimate reservoir pressure and other parameters. Two wells from the Zubair Oil Field in southern Iraq are considered, one with crossflow between perforations and the other without. An average pressure is not calculated for layer A in Well-1 because that layer contributes no flow, while the average pressure for layer B of Well-1 is 3414.49 psia.
Likewise, the average pressure for layer H of Well-2 is not calculated because there is no rate contribution from this layer, and the maximum average pressure, about 2606.26 psia, was calculated for layer G. It is also found that the presence of crossflow has no effect on the SIP calculations.
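The straight-line SIP method described above can be sketched as a linear fit per layer: the flowing pressure recorded opposite a layer is regressed against that layer's rate, and the zero-rate intercept is taken as the layer's average pressure. The rates and pressures below are hypothetical illustration data, not measurements from the Zubair wells.

```python
import numpy as np

def sip_straight_line(rates, pwf):
    """Straight-line SIP for one layer: fit p_wf = p_avg - b * q.

    rates : layer rates at each stabilized surface rate (e.g. STB/d)
    pwf   : flowing pressures (psia) recorded opposite the layer

    Returns (p_avg, b): the intercept at zero rate is the layer's
    average pressure; b is the inverse productivity index.
    """
    slope, intercept = np.polyfit(rates, pwf, 1)
    return intercept, -slope

# Hypothetical test data for one layer: three stabilized rates and
# the corresponding flowing pressures from the production log.
rates = np.array([500.0, 1000.0, 1500.0])
pwf = np.array([3300.0, 3185.0, 3070.0])

p_avg, b = sip_straight_line(rates, pwf)
print(round(p_avg, 1))  # extrapolated average layer pressure at q = 0
```

Repeating this fit for each layer, with per-layer rates obtained by differencing the downhole flow profile across the layer, yields one average pressure per contributing layer; a layer with no rate contribution gives no fit, which is why no pressure is reported for layer A of Well-1 or layer H of Well-2.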
Objective: The aim of this study is to investigate and determine the effect of using two packing techniques (conventional and a new tension technique) on the hardness of two types of heat-cured acrylic resin (Ivoclar and Qual dental types).
Methodology: This study used two types of heat-cured acrylic resin (Ivoclar and Qual dental types), which are used in the construction of complete dentures, packed with two different packing techniques (conventional and a new tension technique). A total of 40 specimens were prepared with dimensions of 2 mm thickness, 2 cm length, and 1 cm width. These specimens were sectioned and subdivided into four groups of 10 specimens each, designated as (A, A1, B
This work implements a face recognition system based on two stages: the first is a feature extraction stage and the second is a classification stage. The feature extraction stage consists of Self-Organizing Maps (SOMs) in a hierarchical format in conjunction with Gabor filters and local image sampling. Different types of SOMs were used, and the results from these SOMs were compared.
The next stage is the classification stage, which consists of a self-organizing map neural network; the goal of this stage is to find the image most similar to the input image. The proposed algorithm was implemented using C++ packages; this work is a successful classifier for a face database consisting of 20
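The core building block of both stages, a self-organizing map, can be sketched as follows. This is a minimal generic SOM trainer, not the paper's hierarchical C++ implementation; the grid size, decay schedules, and random "patch feature" inputs are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal 2-D Self-Organizing Map (illustrative sketch).

    data : (n_samples, n_features) feature vectors, e.g. Gabor-filter
           responses of local image patches.
    Returns the trained weight grid of shape (rows, cols, n_features).
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    # Node coordinates, used by the Gaussian neighbourhood function.
    coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols),
                                   indexing="ij")).astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.3   # shrinking neighbourhood
        for x in data:
            # Best-matching unit: node whose weight vector is closest to x.
            d = np.linalg.norm(w - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its neighbours toward the input.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2)
                       / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
    return w

# Toy usage: organize 50 random 8-dimensional "patch features" on a 4x4 map.
data = np.random.default_rng(1).random((50, 8))
weights = train_som(data)
print(weights.shape)  # (4, 4, 8)
```

In the classification stage, the trained map acts as a quantizer: the input image's features are mapped to their best-matching units and compared against the stored database images in the same code space.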
Image compression plays an important role in reducing the size and storage of data while significantly increasing the speed of its transmission over the Internet. Image compression has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, its use in image compression has been increasing gradually. Deep neural networks have also achieved great success in processing and compressing various images of different sizes. In this paper, we present a structure for image compression based on the use of a deep-learning Convolutional AutoEncoder (CAE), inspired by the diversity of the human eye
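The encode-downsample-decode shape of a CAE can be illustrated with a single untrained convolutional filter. This is a schematic of the data flow only, not the paper's trained network: the image, kernel, stride, and upsampling decoder are all illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel, stride=2):
    """Valid 2-D convolution with stride: the encoder's downsampling step."""
    kh, kw = kernel.shape
    h = (img.shape[0] - kh) // stride + 1
    w = (img.shape[1] - kw) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[i * stride:i * stride + kh,
                        j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def upsample(code, factor=2):
    """Nearest-neighbour upsampling: a stand-in for the learned decoder."""
    return np.repeat(np.repeat(code, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
image = rng.random((32, 32))        # toy grayscale "image"
kernel = rng.random((2, 2)) / 4     # one random (untrained) encoder filter

code = conv2d(image, kernel, stride=2)       # 16x16 latent map
reconstruction = upsample(code, factor=2)    # decoded back to 32x32

print(code.size / image.size)  # latent map holds 0.25 of the original values
```

In a real CAE the kernel weights (and a stack of such layers) are learned by minimizing reconstruction error, and the latent map is further entropy-coded to obtain the final compressed bitstream.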
Variable selection is an essential and necessary task in the statistical modeling field. Several studies have tried to develop and standardize the process of variable selection, but it is difficult to do so. The first question a researcher needs to ask is which variables are most significant for describing a given dataset's response. In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined, and the posterior distributions for all the parameters are derived. The new variable selection method is tested using four simulation datasets. The new approach is compared with some existing techniques: Ordinary Least Squares (OLS), Least Absolute Shrinkage
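Gibbs-sampler variable selection can be sketched as sampling binary inclusion indicators one at a time, each conditional on the rest. The sketch below is a toy stand-in for the paper's derived posteriors: it scores candidate models with a BIC-based approximation to the marginal likelihood, and the simulated dataset (two true predictors out of five) is an assumption for illustration.

```python
import numpy as np

def gibbs_variable_selection(X, y, n_iter=2000, seed=0):
    """Toy Gibbs-style sampler over variable-inclusion indicators gamma.

    Each iteration resamples one gamma_j conditional on the others,
    comparing the two candidate models via a BIC-based score
    (an approximation standing in for a true marginal likelihood).
    Returns the posterior inclusion frequency of each predictor.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    gamma = np.ones(p, dtype=bool)

    def log_score(g):
        if not g.any():
            rss, k = np.sum((y - y.mean()) ** 2), 0
        else:
            Xg = X[:, g]
            beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
            rss, k = np.sum((y - Xg @ beta) ** 2), int(g.sum())
        return -0.5 * (n * np.log(rss / n) + k * np.log(n))  # -BIC / 2

    counts = np.zeros(p)
    for it in range(n_iter):
        j = it % p
        g1, g0 = gamma.copy(), gamma.copy()
        g1[j], g0[j] = True, False
        # Conditional probability that gamma_j = 1 given the other indicators.
        p1 = 1.0 / (1.0 + np.exp(log_score(g0) - log_score(g1)))
        gamma[j] = rng.random() < p1
        counts += gamma
    return counts / n_iter

# Simulated data: only the first two of five predictors affect y.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)
freq = gibbs_variable_selection(X, y)
print(freq.round(2))  # high frequencies for the two true predictors
```

Predictors that genuinely explain the response are retained in almost every sweep, while noise predictors are included only occasionally, so the inclusion frequencies separate the two groups.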
In this work, the emission spectra and atomic structure of an aluminum target have been studied theoretically using the Cowan code. The Cowan code was used to calculate electron transitions between atomic configurations, including configuration interaction, by means of the Hartree-Fock method. The aluminum target can give a good emission spectrum in the XUV region at 10 nm with an oscillator strength of 1.82.
The hydrodynamic properties of laser produced plasma (LPP) were investigated for the purpose of creating a light source working in the EUV region. Such a light source is very important for lithography (semiconductor manufacturing). The improved MEDUSA (Med103) code can calculate the plasma hydrodynamic properties (velocity, electron density,
In this work, the study of
A database is characterized as a set of data that is organized and distributed in a way that allows the user to access the stored data easily and conveniently. However, in the era of big data, traditional methods of data analytics may not be able to manage and process such large amounts of data. In order to develop an efficient way of handling big data, this work studies the use of the Map-Reduce technique to handle big data distributed on the cloud. The approach was evaluated using a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear enhancement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r
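The map/shuffle/reduce phases that Hadoop distributes across a cluster can be illustrated in-process. The sketch below is not the paper's Hadoop job: the sample records, channel names, and per-channel-mean task are illustrative assumptions showing the programming model on EEG-like data.

```python
from collections import defaultdict
from functools import reduce

# Toy "EEG" records: (channel, sample_value). On Hadoop these would be
# partitioned across nodes; here the same three phases run locally.
records = [("Fp1", 2.0), ("Fp2", 4.0), ("Fp1", 6.0),
           ("Fp2", 8.0), ("Fp1", 1.0)]

def map_phase(record):
    """Map: emit a (key, partial-aggregate) pair for each input record."""
    channel, value = record
    return (channel, (value, 1))          # partial (sum, count)

def shuffle(mapped):
    """Shuffle: group all partial aggregates by key."""
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(values):
    """Reduce: merge partial (sum, count) pairs into a per-key mean."""
    total, count = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), values)
    return total / count

grouped = shuffle(map(map_phase, records))
means = {ch: reduce_phase(vals) for ch, vals in grouped.items()}
print(means)  # {'Fp1': 3.0, 'Fp2': 6.0}
```

Because the map and reduce functions are pure and commutative over partial aggregates, the same code parallelizes across cluster nodes, which is the source of the response-time reduction reported above.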
Molecular barcoding was widely recognized as a powerful tool for the identification of organisms during the past decade. The aim of this study is to use a molecular approach to identify diatoms using environmental DNA. The diatom specimens were taken from the Tigris River. Environmental DNA (eDNA) extraction and sequence analysis using the Next-Generation Sequencing (NGS) method showed that the epipelic diatom genera with the highest percentages included Achnanthidium minutissimum (Kützing) Czarnecki, 1994 (21.1%), Cocconeis placentula Ehrenberg, 1838 (21.3%), and Nitzschia palea (Kützing) W. Smith, 1856 (16.3%).
Five species of diatoms: Achnanthidiu
Inverse kinematics is a very complex problem for redundant robot manipulators. This paper presents a solution of the inverse kinematics for one redundant robot manipulator (a three-link robot) by combining two intelligent algorithms: a Genetic Algorithm (GA) and a Neural Network (NN). The inputs are the position and orientation of the three-link robot. These inputs are fed to a Back-Propagation Neural Network (BPNN), whose weights are optimized using a continuous GA. The Mean Square Error (MSE) is also computed between the estimated and desired joint angles. In this paper, a fitness function for the GA is proposed. Sine-wave and circular desired trajectories for the three-link robot end effector are simulated b
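The planar three-link geometry and the MSE criterion that the GA-BPNN scheme above relies on can be sketched as follows. The link lengths and angle values are illustrative assumptions, not the paper's parameters, and the "estimated" angles simply stand in for a BPNN's output.

```python
import numpy as np

def forward_kinematics(thetas, lengths=(1.0, 1.0, 1.0)):
    """End-effector pose of a planar three-link arm (illustrative model).

    thetas  : joint angles in radians
    lengths : link lengths (assumed unit links)
    Returns (x, y, phi), where phi is the end-effector orientation.
    """
    x = y = phi = 0.0
    for theta, L in zip(thetas, lengths):
        phi += theta                 # orientations accumulate along the chain
        x += L * np.cos(phi)
        y += L * np.sin(phi)
    return x, y, phi

def mse(estimated, desired):
    """Mean square error between estimated and desired joint angles:
    the quantity the GA minimizes when optimizing the BPNN weights."""
    e = np.asarray(estimated) - np.asarray(desired)
    return float(np.mean(e ** 2))

desired = np.array([0.3, -0.2, 0.5])                   # hypothetical targets
estimated = desired + np.array([0.01, -0.02, 0.005])   # e.g. a BPNN's output
print(round(mse(estimated, desired), 6))
```

With the forward model available, a candidate joint-angle solution can be checked by mapping it back to the end-effector pose and comparing against the desired trajectory point.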