Amplitude variation with offset (AVO) analysis is an efficient tool for hydrocarbon detection and for identifying elastic rock properties and fluid types. In the present study it has been applied to reprocessed pre-stack 2D seismic data (1992, Caulerpa) from the north-west of the Bonaparte Basin, Australia, and the AVO response along the 2D pre-stack seismic data in the Laminaria High, NW shelf of Australia, was investigated. Three hypotheses were proposed to explain the AVO behaviour of the amplitude anomalies, in which three different factors were tested: fluid substitution, porosity, and thickness (wedge model). The AVO models with synthetic gathers were analysed using log information to find which of these is the controlling parameter on the AVO response. AVO cross-plots from the real pre-stack seismic data reveal AVO class IV (a negative intercept whose amplitude magnitude decreases with offset). This result matches our modelled fluid-substitution result for the seismic synthetics. It is concluded that fluid substitution is the controlling parameter on the AVO response and, therefore, that the high-amplitude anomaly on the seabed and on target horizon 9 results from changes in fluid content and lithology along the target horizons, whereas changing the porosity has little effect on the amplitude variation with offset in the AVO cross-plot. Finally, results from the wedge models show that a small change in thickness causes a change in the amplitude; however, this change in thickness gives a different AVO characteristic and a mismatch with the AVO result of the real 2D pre-stack seismic data. Therefore, a thin layer of constant thickness with changing fluids is the more likely cause of the high-amplitude anomalies.
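As an illustration of the intercept/gradient framework behind such cross-plots, the sketch below computes the two-term Shuey approximation R(θ) ≈ A + B·sin²θ for a hypothetical soft-sand-below-hard-shale interface. The layer values are purely illustrative (not taken from the Laminaria data); a class IV response corresponds to A < 0 with B > 0.

```python
import numpy as np

def shuey_intercept_gradient(vp1, vs1, rho1, vp2, vs2, rho2):
    """Two-term Shuey approximation: R(theta) ~ A + B * sin^2(theta).

    Layer 1 is the upper medium, layer 2 the lower medium.
    """
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    A = 0.5 * (dvp / vp + drho / rho)                      # intercept
    B = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)  # gradient
    return A, B

# Hypothetical hard shale (vp, vs in m/s, rho in g/cc) over a softer sand
A, B = shuey_intercept_gradient(3300, 1700, 2.4, 2800, 1500, 2.2)
```

With these assumed values the intercept is negative and the gradient positive, i.e. the reflection becomes less negative with offset, the class IV signature described in the abstract.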
A new distribution, the Epsilon Skew Gamma (ESΓ) distribution, first introduced by Abdulah [1], is applied to near-gamma data. We first restate the ESΓ distribution, its properties, and its characteristics, and then estimate its parameters using maximum likelihood and moment estimators. Finally, we use these estimators to fit the data with the ESΓ distribution.
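The ESΓ distribution is not available in standard libraries, so as a minimal sketch of the maximum-likelihood and method-of-moments workflow described above, the following fits a plain two-parameter gamma distribution to synthetic near-gamma data (all names and values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.5, scale=1.2, size=5000)  # synthetic near-gamma sample

# Maximum-likelihood fit (floc=0 pins the location, giving the usual
# two-parameter gamma with shape k and scale theta)
shape_hat, loc_hat, scale_hat = stats.gamma.fit(data, floc=0)

# Method-of-moments estimators for comparison:
# mean = k*theta, var = k*theta^2  ->  theta = var/mean, k = mean^2/var
m, v = data.mean(), data.var()
theta_mom = v / m
k_mom = m ** 2 / v
```

Both estimators recover the generating parameters closely on a sample of this size; the same two-step comparison (MLE vs. moments) is what the abstract carries out for the ESΓ family.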
This paper addresses how to estimate values at unmeasured points of spatial data when the spatial sample contains only a few terms, which is unfavourable for the estimation process: the larger the data set, the better the estimates at unmeasured points and the smaller the estimation variance. The idea of this paper is therefore to take advantage of secondary (auxiliary) data that are strongly correlated with the primary (basic) data, in order to estimate single unmeasured points and to measure the estimation variance. The co-kriging technique is used in this setting to build the spatial prediction process, and the idea is then applied to real data.
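As a minimal sketch of the kriging machinery on which co-kriging builds (single variable only, with an assumed exponential covariance model and synthetic sample values; not the paper's data), simple kriging estimates an unmeasured point and its estimation variance as follows:

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_range=10.0):
    """Isotropic exponential covariance model (assumed for illustration)."""
    return sill * np.exp(-np.abs(h) / corr_range)

# Synthetic 1-D sample locations and measured values of the primary variable
x = np.array([1.0, 4.0, 7.0, 12.0])
z = np.array([0.8, 1.1, 0.9, 1.4])
mean = z.mean()   # known mean assumed for simple kriging
x0 = 6.0          # unmeasured target point

C = exp_cov(x[:, None] - x[None, :])   # sample-to-sample covariances
c0 = exp_cov(x - x0)                   # sample-to-target covariances
w = np.linalg.solve(C, c0)             # kriging weights

z_hat = mean + w @ (z - mean)          # simple-kriging estimate at x0
var_hat = exp_cov(0.0) - w @ c0        # kriging (estimation) variance
```

Co-kriging extends exactly this linear system with cross-covariances to a correlated secondary variable, which is how the auxiliary data reduce the estimation variance when the primary sample is small.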
In the current digitalized world, cloud computing has become a feasible solution for the virtualization of computing resources. Although cloud computing has many advantages for outsourcing an organization's information, strong security is its main concern. Identity-authentication theft has become a central threat to the protection of cloud computing data: intruders violate the security protocols and perform attacks on organizations' or users' data. Cloud data disclosure leaves cloud users feeling insecure while using the cloud platform, and traditional cryptographic techniques are not able to stop such attacks. BB84 is the first quantum cryptography protocol.
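A minimal classical simulation of the BB84 sifting step (no eavesdropper, synthetic random bits) illustrates why the protocol yields a shared key: wherever Bob happens to measure in the same basis Alice prepared in, his bit matches hers, and the mismatched positions are discarded.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Alice: random bits, each encoded in a random basis (0 = rectilinear, 1 = diagonal)
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob measures each qubit in a random basis; a matching basis returns the
# correct bit, a mismatched basis returns a uniformly random outcome
bob_bases = rng.integers(0, 2, n)
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: both sides publicly compare bases and keep only matching positions
key_a = alice_bits[match]
key_b = bob_bits[match]
```

On average about half the positions survive sifting; in the real protocol, an eavesdropper's measurements would introduce detectable errors into the sifted key.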
So much information keeps being digitized and stored in many forms (web pages, scientific articles, books, etc.) that the task of discovering information has become more and more challenging. The requirement for new IT tools to retrieve and organize these vast amounts of information is growing step by step. Furthermore, e-learning platforms are developing to meet the intended needs of students.
The aim of this article is to utilize machine learning to determine the appropriate actions that support the learning procedure, and Latent Dirichlet Allocation (LDA) to find the topics contained in the connections proposed in a learning session. Our purpose is also to introduce a course that moves toward the student's attempts.
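A minimal LDA sketch with scikit-learn, using toy documents (the texts and parameter choices below are illustrative, not the article's corpus): each document receives a mixture over a fixed number of latent topics.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: two documents about machine learning, two about football
docs = [
    "neural networks deep learning gradient",
    "gradient descent learning optimization",
    "soccer match goal players team",
    "team players win match league",
]

X = CountVectorizer().fit_transform(docs)          # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)                      # per-document topic mixture
```

In a learning-session setting, the same per-document topic mixtures can be computed for the proposed resource links, grouping them by the latent topics they share.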
Experimental activity coefficients at infinite dilution are particularly useful for calculating the parameters needed in an expression for the excess Gibbs energy G^E. If reliable values of γ∞1 and γ∞2 are available, either from direct experiment or from a correlation, it is possible to predict the composition of the azeotrope and the vapor-liquid equilibrium over the entire composition range, since these values can be used to evaluate the two adjustable constants in any desired expression for G^E. In this study, two different methods, the MOSCED model and the SPACE model, were used to calculate γ∞1 and γ∞2.
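As a worked sketch of the two-constant idea: with the van Laar expression for G^E, the adjustable constants follow directly from the infinite-dilution values, A12 = ln γ∞1 and A21 = ln γ∞2, after which the activity coefficients are defined over the whole composition range. The numerical γ∞ values below are illustrative, not MOSCED/SPACE results.

```python
import numpy as np

# Hypothetical infinite-dilution activity coefficients for a binary pair
g1_inf, g2_inf = 6.0, 4.0

# Van Laar constants follow directly from the infinite-dilution limits
A12, A21 = np.log(g1_inf), np.log(g2_inf)

# Activity coefficients across the full composition range
x1 = np.linspace(1e-6, 1 - 1e-6, 101)
x2 = 1 - x1
ln_g1 = A12 * (A21 * x2 / (A12 * x1 + A21 * x2)) ** 2
ln_g2 = A21 * (A12 * x1 / (A12 * x1 + A21 * x2)) ** 2
g1, g2 = np.exp(ln_g1), np.exp(ln_g2)
```

At x1 → 0 the model reproduces γ∞1, and at x1 → 1 it reproduces γ∞2 for component 2, so the whole γ(x) curves, and hence the azeotrope via modified Raoult's law, are fixed by the two infinite-dilution values.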
OpenStreetMap (OSM), recognised for its current and readily accessible spatial database, frequently serves regions lacking precise data at the necessary granularity. Global collaboration among OSM contributors presents challenges to data quality and uniformity, exacerbated by the sheer volume of input and indistinct data-annotation protocols. This study presents a methodological improvement in the spatial accuracy of OSM datasets centred over Baghdad, Iraq, utilising data derived from OSM services and satellite imagery. An analytical focus was placed on two geometric correction methods: a two-dimensional polynomial affine transformation, which involves twelve coefficients, and a two-dimensional polynomial conformal transformation.
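A minimal sketch of fitting such a correction by least squares, here a second-order two-dimensional polynomial (six coefficients per coordinate, twelve in total) mapping source coordinates onto reference coordinates; the control points and the distortion model are synthetic, assumed purely for illustration:

```python
import numpy as np

def poly2_design(x, y):
    """Second-order 2-D polynomial basis: 1, x, y, x^2, xy, y^2."""
    return np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])

# Synthetic control points: source (e.g. OSM) vs. reference (e.g. satellite)
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, 30), rng.uniform(0, 100, 30)
# Assumed distortion for illustration: shift plus slight scale and shear
X_ref = 5 + 1.01 * x + 0.002 * y
Y_ref = -3 + 0.001 * x + 0.999 * y

A = poly2_design(x, y)
coef_x, *_ = np.linalg.lstsq(A, X_ref, rcond=None)  # six coefficients for X
coef_y, *_ = np.linalg.lstsq(A, Y_ref, rcond=None)  # six coefficients for Y

# Root-mean-square error of the corrected positions at the control points
rmse = np.sqrt(np.mean((A @ coef_x - X_ref) ** 2 + (A @ coef_y - Y_ref) ** 2))
```

The conformal variant is the same least-squares problem with a restricted basis (shared scale and rotation), trading flexibility for rigidity; comparing the resulting RMSE values at checkpoints is the usual way to choose between the two corrections.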
A Multiple Biometric System Based on ECG Data
Communication media form an essential foundation for influencing individuals and audiences, whether negatively or positively, through the messages they publish and present, with multiple themes and viewpoints covering all parts of the world and all age groups. Content directed at children addresses the various stages of childhood and pursues many goals, including those achieved through the digital use of educational data in television production, which serves as an intellectual and mental vehicle for delivering ideas and expressive and aesthetic connotations to children, with songs and cartoons carrying educational content in adjacent and mutually reinforcing ways.
Data-hiding strategies have recently gained popularity in different fields; digital watermarking was developed to hide copyright information in an image, visibly or invisibly. Today, 3D model technology has the potential to alter the field because it allows the production of sophisticated structures and forms that were previously impossible to achieve. In this paper, a new watermarking method for 3D models is presented. The proposed method is based on geometrical and topological properties of the 3D model surface to increase security: the geometrical properties are based on computing the mean curvature of the surface, and the topological properties on the number of edges around each vertex.
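A minimal sketch of the topological part, counting the number of edges around each vertex (the vertex valence) from a face list; the tetrahedron mesh below is a toy example, not the paper's models:

```python
import numpy as np

# Triangle faces of a tetrahedron, given as vertex indices
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

# Collect each undirected edge exactly once
edges = set()
for a, b, c in faces:
    for u, v in ((a, b), (b, c), (a, c)):
        edges.add((min(u, v), max(u, v)))

# Valence = number of edges incident to each vertex
valence = np.zeros(4, dtype=int)
for u, v in edges:
    valence[u] += 1
    valence[v] += 1
```

Every tetrahedron vertex has valence 3; on a real mesh, such topological quantities, combined with a geometric measure like mean curvature, can select which vertices carry the watermark.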
Data mining is one of the most popular analysis methods in medical research. It involves finding previously unknown patterns and correlations in datasets. Data mining encompasses various areas of biomedical research, including data collection, clinical decision support, illness or safety monitoring, public health, and inquiry research. Health analytics frequently uses computational data-mining methods such as clustering, classification, and regression. Studies of large numbers of diverse, heterogeneous documents, including biological and electronic records, have provided extensive material for medical and health studies.
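As a minimal illustration of one such computational method, the sketch below clusters synthetic "patient measurement" vectors with k-means (the data, cohort count, and parameters are all illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D measurements drawn from three latent patient cohorts
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)

# Unsupervised grouping: each sample is assigned to one of three clusters
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```

In practice the recovered clusters would then be inspected against clinical variables, the step where data mining feeds into decision support or monitoring.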