The gas-lift method is crucial for maintaining oil production, particularly in an established field once the natural energy of the reservoirs is depleted. To maximize oil production, a major field's available gas injection rate must be distributed as efficiently as possible across its gas-lift network. Common gas-lift optimization techniques can lose their effectiveness and fail to reproduce the gas-lift optimum in a large network because the problem is multi-objective, multi-constrained, and limited by the available gas injection rate. The main objective of this research is to determine whether the genetic algorithm (GA) technique can achieve the optimum distribution of continuous gas-lift injection rates in the network of the Zubair oil field, with 10 gas-lifted wells, through numerical simulation and modeling studies. The overall field production rate is found to have increased from 15,767 STB/day to 19,847 STB/day. Sensitivity studies on reservoir pressure and water cut are carried out to assess the possible impacts of these factors on well performance over the life of the field. Examples from economic analysis deepen our understanding of the technical and economic benefits of applying gas-lift techniques in a field. Furthermore, although the idea of employing GA in this manner is not new, this work presents GA-based optimization methodologies for increasing the oil production rate by gas lifting in the Zubair oilfield. To assign gas injection rates to individual wells in a field-wide network under a limited total gas injection rate, the optimization model is laid out step by step, making it simple to understand and to use as a guide, especially for front-line production technicians involved in the design and development of gas-lift systems.
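The constrained gas-allocation step described in this abstract can be sketched as a small genetic algorithm that splits a fixed total injection rate across ten wells to maximise a sum of concave well-response curves. The coefficients `A` and `B` and the 10 MMscf/d gas limit below are invented for illustration and are not Zubair field data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical gas-lift performance curves: well i produces
# A[i] * (1 - exp(-B[i] * q)) STB/d at injection rate q (MMscf/d).
A = np.array([900.0, 1100, 1000, 950, 1200, 1050, 980, 1150, 1020, 1080])
B = np.array([0.80, 0.60, 0.70, 0.90, 0.50, 0.65, 0.75, 0.55, 0.70, 0.60])
TOTAL_GAS = 10.0      # total available injection gas, MMscf/d
N_WELLS = 10

def production(alloc):
    """Field oil rate (STB/d) for a per-well gas allocation."""
    return float(np.sum(A * (1.0 - np.exp(-B * alloc))))

def normalise(pop):
    """Rescale each chromosome so allocations honour the gas constraint."""
    return pop * (TOTAL_GAS / pop.sum(axis=1, keepdims=True))

def ga(pop_size=60, generations=200, mut_rate=0.1):
    pop = normalise(rng.random((pop_size, N_WELLS)))
    best, best_fit = pop[0].copy(), production(pop[0])
    for _ in range(generations):
        fit = np.array([production(p) for p in pop])
        if fit.max() > best_fit:                      # elitist bookkeeping
            best_fit, best = float(fit.max()), pop[fit.argmax()].copy()
        # Binary tournament selection
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[a] >= fit[b])[:, None], pop[a], pop[b])
        # Uniform crossover between shifted parent pairs
        mask = rng.random((pop_size, N_WELLS)) < 0.5
        pop = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Multiplicative mutation
        m = rng.random(pop.shape) < mut_rate
        pop = np.where(m, pop * rng.uniform(0.5, 1.5, pop.shape), pop)
        pop = normalise(np.clip(pop, 1e-6, None))
    return best, best_fit

alloc, rate = ga()    # alloc sums to TOTAL_GAS by construction
```

The normalisation step enforces the limited-gas constraint directly in the chromosome representation, so no penalty term is needed in the fitness function.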
In this paper, an algorithm for binary codebook design is used within a vector quantization (VQ) technique to improve the performance of the absolute moment block truncation coding (AMBTC) method. VQ is used to compress the bitmap output of the first stage (AMBTC). The binary codebook is generated for many images by randomly choosing code vectors from a set of binary image vectors, and this codebook is then used to compress the bitmaps of all these images. An image bitmap is selected for compression with this codebook based on the criterion of the average bitmap replacement error (ABPRE). The proposed approach is suitable for reducing bit rates.
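A minimal sketch of the two stages described above: AMBTC coding of one block, then replacement of its bitmap by the nearest binary codeword. Hamming distance is used here as a simple stand-in for the paper's ABPRE criterion, and the three-codeword codebook is a toy one, not one built from real images:

```python
import numpy as np

def ambtc_block(block):
    """AMBTC: keep the low mean, high mean, and a binary bitmap."""
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return lo, hi, bitmap

def ambtc_decode(lo, hi, bitmap):
    """Reconstruct a block from its AMBTC triple."""
    return np.where(bitmap, hi, lo)

def nearest_codeword(bitmap, codebook):
    """VQ step: replace the bitmap with the closest binary codeword."""
    flat = bitmap.ravel()
    dists = [(cw != flat).sum() for cw in codebook]  # Hamming distances
    return codebook[int(np.argmin(dists))].reshape(bitmap.shape)

block = np.arange(16, dtype=float).reshape(4, 4)
lo, hi, bm = ambtc_block(block)            # lo = 3.5, hi = 11.5
codebook = np.array([[0] * 8 + [1] * 8, [1] * 16, [0] * 16], dtype=bool)
bm_q = nearest_codeword(bm, codebook)      # exact match with codeword 0
```

Only a codebook index is transmitted per bitmap instead of the full binary plane, which is where the bit-rate reduction comes from.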
In this article, we design an optimal neural network based on a new Levenberg-Marquardt (LM) training algorithm. The traditional LM algorithm requires high memory, storage, and computational overhead because it must update Hessian approximations at every iteration. The suggested design converts the original problem into a minimization problem, using a feed-forward network to solve non-linear 3D PDEs. An optimal design is also obtained by computing the learning parameters with high precision. Examples are provided to portray the efficiency and applicability of this technique, and comparisons with other designs are conducted to demonstrate the accuracy of the proposed design.
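The LM update the abstract refers to can be illustrated on a toy curve fit. The sketch below is the generic damped Gauss-Newton step, delta = (J^T J + lambda I)^{-1} J^T r, with the standard adaptive damping schedule; it is not the paper's modified algorithm, and the exponential model is purely illustrative:

```python
import numpy as np

def residual(p, x, y):
    """r = model - data for the toy model y = a * exp(b * x)."""
    a, b = p
    return a * np.exp(b * x) - y

def jac(p, x):
    """Jacobian of the residual with respect to (a, b)."""
    a, b = p
    e = np.exp(b * x)
    return np.column_stack([e, a * x * e])

def levenberg_marquardt(p, x, y, iters=100):
    """Damped Gauss-Newton with the usual LM lambda schedule."""
    lam = 1e-2
    cost = np.sum(residual(p, x, y) ** 2)
    for _ in range(iters):
        r, J = residual(p, x, y), jac(p, x)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        p_new = p - delta
        new_cost = np.sum(residual(p_new, x, y) ** 2)
        if new_cost < cost:       # accept the step, trust the model more
            p, cost, lam = p_new, new_cost, lam * 0.5
        else:                     # reject the step, damp harder
            lam *= 2.0
    return p

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x)          # noiseless synthetic data
p_hat = levenberg_marquardt(np.array([1.0, 1.0]), x, y)
```

The J^T J product is the Hessian approximation whose per-iteration update drives the memory and compute overhead the abstract mentions; here it is only a 2x2 matrix, but for a network it scales with the number of weights.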
Background and Aim: Due to the rapid growth of data communication and multimedia system applications, security has become a critical issue in the communication and storage of images. This study aims to improve encryption and decryption for various types of images by decreasing time consumption and strengthening security. Methodology: An algorithm is proposed for encrypting images based on the Carlisle Adams and Stafford Tavares (CAST) block cipher with 3D and 2D logistic maps. A chaotic function that increases the randomness of the encrypted data and images, thereby breaking sequence correlations through the encryption procedure, is introduced. Encryption time is decreased by using three secure and private S-boxes rather than using si
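The chaotic-keystream idea can be sketched with the classic 1D logistic map. The paper couples CAST with 2D and 3D logistic maps; the 1D form below is a simplified stand-in, and the seed x0 = 0.7 and parameter r = 3.99 are arbitrary illustrative key values:

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=100):
    """Keystream bytes from x_{k+1} = r * x_k * (1 - x_k), with r near 4
    (chaotic regime). The burn-in hides the transient from the seed."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256   # quantise the orbit to a byte
    return out

# XOR the keystream with the pixel bytes; XORing again decrypts.
img = np.arange(64, dtype=np.uint8)            # stand-in "image"
ks = logistic_keystream(0.7, 3.99, img.size)
enc = np.bitwise_xor(img, ks)
dec = np.bitwise_xor(enc, ks)
```

In a CAST-based design the map output would perturb round inputs or key material rather than act as a bare XOR stream; the point here is only the sensitivity of the keystream to the chaotic parameters.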
Finding communities of connected individuals in complex networks is challenging, yet crucial for understanding real-world societies and their interactions. Recently, attention has turned to discovering the dynamics of such communities; however, detecting accurate community structures that evolve over time adds further challenges. Almost all state-of-the-art algorithms are designed on seemingly the same principle, treating the problem as a coupled optimization model that simultaneously identifies community structures and their evolution over time. Unlike these studies, the current work aims to consider three measures individually, i.e. the intra-community score, the inter-community score, and the evolution of community over
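The three measures named above can be illustrated on a toy snapshot: intra- and inter-community edge counts for a partition, plus a Jaccard overlap between a community at consecutive time steps as a simple evolution score. This is an illustrative decomposition, not the paper's exact formulation:

```python
def community_scores(edges, membership):
    """Intra-/inter-community edge counts; membership maps node -> community."""
    intra = sum(1 for u, v in edges if membership[u] == membership[v])
    return intra, len(edges) - intra

def evolution_score(comm_t, comm_t1):
    """Jaccard overlap of one community between snapshots t and t+1."""
    a, b = set(comm_t), set(comm_t1)
    return len(a & b) / len(a | b)

# Two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

intra, inter = community_scores(edges, part)    # 6 intra, 1 inter
drift = evolution_score({0, 1, 2}, {1, 2, 3})   # 0.5
```

A good dynamic partition keeps the intra score high and the inter score low at each snapshot while the evolution score stays high between adjacent snapshots; optimizing the three separately is what distinguishes this work from the coupled formulations.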
Visualization of subsurface geology is mainly regarded as the structural framework required to provide the distribution of petrophysical properties. The geological model helps in understanding the behavior of fluid flow in the porous media, which is affected by reservoir heterogeneity, in calculating the initial oil in place, and in selecting accurate new well locations. In this study, a geological model is built for the tertiary reservoir of the Qaiyarah field, relying on data from 48 wells, including well locations, formation tops, and a contour map. The structural model is constructed for the tertiary reservoir, an asymmetrical anticline consisting of two domes separated by a saddle. It is found that
The estimation of linear regression parameters is usually based on the Least Squares method, which rests on several basic assumptions, so the accuracy of the estimated model parameters depends on the validity of these assumptions. Among these assumptions are homogeneity of variance and normal distribution of the errors; when they fail, use of the classical model becomes unrealistic. The most successful alternative has been the robust estimation method, namely the minimizing maximum likelihood estimator (MM-estimator), which has proved its efficiency for this purpose. These assumptions are not achievable when the problem under study involves complex data arising from more than one model. To
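A hedged sketch of the robust idea: iteratively reweighted least squares with Huber weights, which is the M-step at the heart of MM-estimation. A full MM-estimator would start from a high-breakdown S-estimate of scale; the MAD is used here for brevity, and the data are synthetic:

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """Robust regression by IRLS with Huber weights and a MAD scale."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting point
    for _ in range(iters):
        r = y - X @ beta
        # Robust residual scale: median absolute deviation, normalised.
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = np.abs(r) / scale
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 100)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)
y[:5] += 50.0                      # gross outliers that distort OLS
beta = huber_irls(X, y)            # close to the true (1, 2)
```

Observations whose scaled residuals exceed the tuning constant c receive weights below one, so a handful of gross outliers cannot dominate the fit the way they do under plain least squares.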