Carbonate reservoirs are an essential source of hydrocarbons worldwide, and their petrophysical properties play a crucial role in hydrocarbon production. The most critical petrophysical properties of carbonate reservoirs are porosity, permeability, and water saturation. A tight reservoir is a reservoir with low porosity and permeability, meaning it is difficult for fluids to move through it. The primary goal of this study is to evaluate the reservoir properties and identify the lithology of the Sadi Formation in the Halfaya oil field, which is considered one of Iraq's most significant oilfields, located 35 km south of Amarah. The Sadi Formation consists of four units: A, B1, B2, and B3. Sadi A was excluded because it is not hydrocarbon-bearing. The structural and petrophysical models were built from data gathered from five oil wells. Data from the available well logs, including RHOB, NPHI, sonic, gamma-ray, caliper, and resistivity logs, were used to calculate the petrophysical properties. These logs were analyzed and corrected for environmental factors using IP V3.5 software, where the average formation water resistivity (Rw = 0.04), the average mud filtrate resistivity (Rmf = 0.06), and Archie's parameters (m = 2, n = 1.9, and a = 1) were determined. The porosity, permeability, water saturation, and net-to-gross thickness ratio (N/G) were calculated from the well-log data.
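The water-saturation step above follows Archie's equation, Sw = ((a·Rw)/(φ^m·Rt))^(1/n). A minimal sketch using the stated values (Rw = 0.04, a = 1, m = 2, n = 1.9); the porosity and true-resistivity inputs are hypothetical example values, not readings from the studied wells:

```python
def archie_sw(phi, rt, rw=0.04, a=1.0, m=2.0, n=1.9):
    """Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).

    phi : fractional porosity (e.g. 0.20 for 20%)
    rt  : true formation resistivity, ohm-m
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# hypothetical example: 20% porosity, true resistivity of 10 ohm-m
sw = archie_sw(phi=0.20, rt=10.0)
```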
In this study, the mobile phone traces concern an ephemeral event that draws large densities of people. This research aims to study the city pulse and the evolution of human mobility during a specific event (the Armada festival) by modelling and simulating the human mobility of the observed region from CDR (Call Detail Record) data. The pivotal questions of this research are: Why study human mobility? What are the human life patterns in the observed region inside Rouen city during the Armada festival? How can life patterns and individuals' mobility be extracted for this region from the mobile database (CDRs)? The radius of gyration parameter has been applied to elaborate human life patterns with regard to (work, off) days for
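The radius of gyration mentioned above is the root-mean-square distance of a user's visited locations from their centroid, a standard summary of how far an individual ranges. A minimal sketch; the coordinates are hypothetical, not taken from the Rouen CDR data:

```python
import numpy as np

def radius_of_gyration(positions):
    """Radius of gyration of a user's visited locations:
    r_g = sqrt(mean ||r_i - r_cm||^2), where r_cm is the centroid."""
    pts = np.asarray(positions, dtype=float)
    centroid = pts.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1))))

# hypothetical cell-tower coordinates (km) visited by one user during a day
rg = radius_of_gyration([(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)])
```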
Data hiding is the process of encoding extra information in an image by making small modifications to its pixels. To be practical, the hidden data must be perceptually invisible yet robust to common signal processing operations. This paper introduces a scheme for hiding a signature image that can be as much as 25% of the host image data and hence can be used both in digital watermarking and in image/data hiding. The proposed algorithm uses an orthogonal discrete wavelet transform with two zero moments and improved time localization, called the discrete slantlet transform, for both the host and signature images. A scaling factor in the frequency domain controls the quality of the watermarked images. Experimental results of signature image
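The scaling-factor idea can be illustrated with a transform-domain sketch. Since the slantlet transform is not available in common libraries, a 2-D DCT is substituted here purely as a stand-in transform; what is being illustrated is the embedding rule (host coefficients plus a scaled copy of the signature coefficients) and non-blind extraction. The value of alpha, the array sizes, and the transform choice are all assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(host, sig, alpha=0.1):
    """Add scaled signature coefficients to host coefficients in the transform domain."""
    c = dctn(host, norm="ortho") + alpha * dctn(sig, norm="ortho")
    return idctn(c, norm="ortho")

def extract(marked, host, alpha=0.1):
    """Recover the signature given the original host (non-blind extraction)."""
    c = (dctn(marked, norm="ortho") - dctn(host, norm="ortho")) / alpha
    return idctn(c, norm="ortho")

rng = np.random.default_rng(0)
host = rng.random((8, 8))       # toy stand-ins for real images
sig = rng.random((8, 8))
marked = embed(host, sig)
recovered = extract(marked, host)
```

A larger alpha makes the signature more robust but degrades the watermarked image, which is the quality trade-off the scaling factor controls.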
Data compression offers an attractive approach to reducing communication costs by using available bandwidth effectively. It therefore makes sense to pursue research on developing algorithms that can use the available network most effectively. It is also important to consider security, since the data being transmitted is vulnerable to attacks. The basic aim of this work is to develop a module that combines compression and encryption on the same set of data, performing the two operations simultaneously. This is achieved by embedding encryption into compression algorithms, since cryptographic ciphers and entropy coders bear a certain resemblance in the sense of secrecy. First, in the secure compression module, the given text is p
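As a simplified illustration of combining the two operations (applied sequentially here, rather than embedded inside the entropy coder as the module above does), the sketch below compresses with zlib and then XORs the result with a SHA-256-derived keystream. The key, message, and keystream construction are toy assumptions, not the paper's scheme:

```python
import hashlib
import zlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key via counter-mode SHA-256 (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def compress_encrypt(data: bytes, key: bytes) -> bytes:
    c = zlib.compress(data)                      # compress first: ciphertext is incompressible
    ks = keystream(key, len(c))
    return bytes(a ^ b for a, b in zip(c, ks))   # XOR stream encryption

def decrypt_decompress(blob: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(blob))
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, ks)))

msg = b"compress then encrypt " * 20
blob = compress_encrypt(msg, b"secret-key")
```

Note the ordering: compression must precede encryption, because a good cipher's output has near-maximal entropy and will not compress.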
In this paper, an integrated quantum neural network (QNN), a class of feedforward neural networks (FFNNs), is constructed by merging quantum computing (QC) with an artificial neural network (ANN) classifier. It is used as a data classification technique, and here the iris flower data are used as the classification signals. For this purpose, independent component analysis (ICA) is used as a feature extraction technique after normalization of these signals. The architecture of QNNs has an inherently built-in fuzziness: the hidden units of these networks develop quantized representations of the sample information provided by the training data set at various graded levels of certainty. Experimental results presented here show that
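The ICA feature-extraction step can be sketched with a minimal symmetric FastICA in NumPy. The iris data are replaced here by synthetic mixed signals, and the component count, tanh nonlinearity, and iteration settings are assumptions, not the paper's configuration:

```python
import numpy as np

def fastica(X, n_components, n_iter=200, seed=0):
    """Minimal symmetric FastICA with a tanh nonlinearity.
    X: (n_samples, n_features) -> (n_samples, n_components) independent components."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                        # center
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(d)[::-1][:n_components]
    Xw = X @ E[:, order] / np.sqrt(d[order])      # whiten: unit-variance, decorrelated
    W = rng.standard_normal((n_components, n_components))
    for _ in range(n_iter):
        G = np.tanh(Xw @ W.T)
        # fixed-point update: E[x g(w.x)] - E[g'(w.x)] w for each row w
        W = (G.T @ Xw) / len(Xw) - np.diag((1 - G ** 2).mean(axis=0)) @ W
        U, _, Vt = np.linalg.svd(W)               # symmetric decorrelation: W <- (W W^T)^(-1/2) W
        W = U @ Vt
    return Xw @ W.T

# synthetic stand-in for the feature-extraction step: two linearly mixed sources
t = np.linspace(0, 8, 500)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]
X = S @ np.array([[1.0, 0.5], [0.4, 1.0]])       # observed mixtures
components = fastica(X, n_components=2)
```

The extracted components (rather than the raw mixtures) would then be fed to the QNN classifier.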
The research problem is that there are differences between learners in information processing in general, and there is variation in learners' performance of the skill of overhead and underhand passing in volleyball. The researchers therefore decided to conduct a study to identify the relationship between information processing and the skill of overhead and underhand passing in volleyball. The researchers used the descriptive approach with the correlational method. The sample consisted of 21 students from the first stage in the College of Physical Education and Sports Science for Girls (University of Baghdad), and tests (information processing, and overhead and underhand passing) were applied to the research sample after the required
This research aims to assess the possibility of evaluating the strategic performance of the State Board for Antiquities and Heritage (SBAH) using a balanced scorecard with four perspectives (Financial, Customers, Internal Processes, and Learning and Growth). The main challenge was that the State Board uses traditional evaluation in measuring the performance of employees, activities, and projects. A case-study and field-interview methodology was adopted in this research, with a sample consisting of the Chairman of the State Board, 6 general managers, and 7 department managers who are involved in evaluating strategic performance and who provided the suitable answers on the checklists, which were analyzed according to a 7-point Likert scale. Data analysis re
In this study, we investigate the behavior of the estimated spectral density function of a stationary time series in the presence of missing values, where the series is generated by a second-order autoregressive (AR(2)) model and the error term of the AR(2) model follows various continuous distributions. The classical and Lomb periodograms were used to study the behavior of the estimated spectral density function by simulation.
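The two estimators can be sketched as follows: an AR(2) series is simulated, a fraction of the observations is deleted at random, and the classical periodogram (on the complete series) is computed alongside the Lomb periodogram (on the irregular, gapped series, via SciPy's Lomb-Scargle routine). The AR coefficients, Gaussian errors, and 20% missing rate are illustrative choices, not the study's settings:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# simulate a stationary AR(2): x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + e_t
n = 512
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]

# classical periodogram from the complete series (drop the zero frequency)
freqs = np.fft.rfftfreq(n)[1:]                  # cycles per sample
classical = np.abs(np.fft.rfft(x)[1:]) ** 2 / n

# delete 20% of the observations, then use the Lomb periodogram,
# which accepts irregular sampling times directly
keep = rng.random(n) > 0.2
t_obs = np.arange(n, dtype=float)[keep]
x_obs = x[keep]
lomb = lombscargle(t_obs, x_obs - x_obs.mean(), 2 * np.pi * freqs)
```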
The current study aims to identify the needs in the stories of the Brothers Grimm. The research sample consisted of (3) stories, namely: 1- The story of Briar Rose (Sleeping Beauty); 2- The story of Snow White; 3- The story of Little Red Riding Hood. The number of pages analyzed reached (15.5) pages. To achieve the research objectives, Murray's classification of needs was adopted, which contains (36) basic needs that are further divided into (129) sub-needs. The idea was adopted as the unit of analysis and repetition as the unit of enumeration. Reliability was established in two ways: the first was agreement between the researcher and himself over time, where the agreement coefficient reached 97%; the second was agreement between the researcher and tw
The food web is a crucial conceptual tool for understanding the dynamics of energy transfer in an ecosystem, as well as the feeding relationships among species within a community; it also reveals species interactions and community structure. Accordingly, an ecological food web system with two predators competing for prey under the influence of fear was developed and studied. The properties of the solution of the system were determined, and all potential equilibrium points were identified. The dynamic behavior in their neighborhoods was examined both locally and globally. The system's persistence requirements were derived, and all conceivable forms of local bifurcation were investigated. With the aid of MATLAB, a numerical simu
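A numerical experiment of this type can be sketched in Python (standing in for MATLAB) with an illustrative system: logistic prey whose growth is suppressed by a fear term 1/(1 + k1·y + k2·z), and two competing predators with linear functional responses. All parameter values and the functional forms below are assumptions for illustration, not the paper's model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def food_web(t, u, r=1.0, K=10.0, k1=0.3, k2=0.2,
             a1=0.4, a2=0.3, e1=0.5, e2=0.5, d1=0.2, d2=0.15):
    x, y, z = u                           # prey, predator 1, predator 2
    fear = 1.0 / (1.0 + k1 * y + k2 * z)  # fear of predators lowers prey growth
    dx = r * x * (1 - x / K) * fear - a1 * x * y - a2 * x * z
    dy = e1 * a1 * x * y - d1 * y         # conversion efficiency e_i, death rate d_i
    dz = e2 * a2 * x * z - d2 * z
    return [dx, dy, dz]

# integrate from a hypothetical positive initial state
sol = solve_ivp(food_web, (0.0, 50.0), [2.0, 1.0, 1.0], rtol=1e-8, atol=1e-10)
```

Varying a fear coefficient (k1 or k2) across runs and observing where the long-run behavior changes qualitatively is the numerical counterpart of the bifurcation analysis described above.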
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance; unfortunately, many applications have little or inadequate data for training DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is usually fed a significant amount of labeled data from which to learn representations automatically. Ultimately, a larger amount of data generally yields a better DL model, though performance is also application-dependent. This issue is the main barrier for