Amplitude variation with offset (AVO) analysis is an efficient tool for hydrocarbon detection and for identifying elastic rock properties and fluid types. It has been applied in the present study using reprocessed pre-stack 2D seismic data (1992, Caulerpa) from the north-west of the Bonaparte Basin, Australia. The AVO response along the 2D pre-stack seismic data in the Laminaria High, NW shelf of Australia, was also investigated. Three hypotheses were proposed to investigate the AVO behaviour of the amplitude anomalies, testing three different factors: fluid substitution, porosity, and thickness (wedge model). The AVO models with the synthetic gathers were analysed using log information to determine which of these is the controlling parameter in the AVO analysis. AVO cross plots from the real pre-stack seismic data reveal AVO class IV (a negative intercept whose magnitude decreases with offset). This result matches our modelled fluid-substitution result for the seismic synthetics. It is concluded that fluid substitution is the controlling parameter in the AVO analysis; therefore, the high-amplitude anomaly at the seabed and at target horizon 9 results from changes in fluid content and lithology along the target horizons. Changing the porosity, by contrast, has little effect on the amplitude variation with offset in the AVO cross plot. Finally, results from the wedge models show that a small change in thickness changes the amplitude; however, the resulting AVO characteristic differs from, and mismatches, the AVO result of the real 2D pre-stack seismic data. Therefore, a thin layer of constant thickness with changing fluids is the more likely cause of the high-amplitude anomalies.
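The intercept-gradient behaviour described in this abstract can be illustrated with the two-term Shuey approximation, R(θ) ≈ A + B sin²θ. The sketch below is a minimal illustration using hypothetical shale and fluid-substituted sand properties (not values from the study); it computes the intercept A and gradient B and notes the class IV signature (A < 0, B > 0) on the cross plot.

```python
import numpy as np

def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2):
    """Shuey (1985) two-term approximation R(theta) ~ A + B*sin^2(theta)
    for an interface between an upper layer (1) and a lower layer (2)."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    A = 0.5 * (dvp / vp + drho / rho)                                      # intercept
    B = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)  # gradient
    return A, B

# Hypothetical hard shale over a soft, fluid-substituted sand (illustrative only).
shale = (3200.0, 1900.0, 2.55)      # Vp (m/s), Vs (m/s), density (g/cc)
gas_sand = (2600.0, 1600.0, 2.10)

A, B = shuey_two_term(*shale, *gas_sand)
theta = np.radians(np.arange(0, 41, 5))
refl = A + B * np.sin(theta) ** 2
print(f"intercept A = {A:.3f}, gradient B = {B:.3f}")
# A < 0 with B > 0 plots in the class IV quadrant of the intercept-gradient
# cross plot: a negative reflection whose magnitude decreases with offset.
print(np.round(refl, 3))
```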
This research presents experimental results for the bending moment in flexible and rigid raft foundations resting on dense sandy soil at different embedment depths, obtained from 24 tests. A physical model of 200 mm × 200 mm in plan and 320 mm in height was constructed, with a reinforced-concrete raft foundation 10 mm thick for the flexible raft and 23 mm thick for the rigid raft. To simulate seismic excitation, a shaking-table technique was applied: the shaker was set to three frequencies (1 Hz, 2 Hz, and 3 Hz) with a displacement amplitude of 13 mm, and the foundation was placed at four embedment depths (0, 0.25B = 50 mm, 0.5B = 100 mm, and B = 200 mm), where B is the raft width. Generally, the maximum bending
A newly developed analytical method was applied for the determination of ketotifen fumarate (KTF) in pharmaceutical drugs via quenching of the continuous fluorescence of 9(10H)-acridone (ACD). The method used the flow-injection system of a new homemade ISNAG fluorimeter, with fluorescence measured at ±90° via a 2×4 solar cell. The calibration graph was linear over the range 1-45 mmol/L, with a correlation coefficient r = 0.9762 and a limit of detection of 29.785 µg/sample, obtained by stepwise dilution of the minimum concentration in the linear dynamic range of the calibration graph. The method was successfully applied to the determination of ketotifen fumarate in two different pharma
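For context, the calibration step reported above (a linear graph with its correlation coefficient) amounts to an ordinary least-squares fit of detector response against concentration. The sketch below uses hypothetical quenching data only; the concentrations, responses, and unknown-sample prediction are illustrative, and the study's detection limit was obtained by stepwise dilution rather than from the fit.

```python
import numpy as np

# Hypothetical calibration data: KTF concentration (mmol/L) vs. quenched
# ACD fluorescence (arbitrary detector units). Illustrative values only.
conc = np.array([1, 5, 10, 15, 20, 25, 30, 35, 40, 45], dtype=float)
response = np.array([980, 905, 820, 748, 660, 588, 512, 430, 355, 280], dtype=float)

# Ordinary least-squares calibration line: response = slope*conc + intercept.
slope, intercept = np.polyfit(conc, response, 1)
r = np.corrcoef(conc, response)[0, 1]
print(f"slope = {slope:.2f}, intercept = {intercept:.1f}, r = {r:.4f}")

# Predict the concentration of an unknown sample from its measured response.
unknown_response = 700.0
estimated_conc = (unknown_response - intercept) / slope
print(f"estimated concentration = {estimated_conc:.1f} mmol/L")
```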
The objective of this work is to study the influence of end-milling process parameters and of tool material and geometry on multi-response outputs for 4032 Al alloy. This is done by proposing an approach that combines the Taguchi method with grey relational analysis. Three cutting parameters were selected (spindle speed, feed rate, and depth of cut), each at three levels. Three tools of different materials and geometries were also used, and the experimental runs were designed on an L9 orthogonal array. The end-milling process with several output characteristics is then optimized using grey relational analysis. The results of analysis of variance (ANOVA) showed that the major influencing parameters on the multi-objective response w
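The grey relational analysis used to collapse several responses into a single ranking follows a standard recipe: normalize each response, compute grey relational coefficients against the ideal sequence, and average them into a grade per run. A minimal sketch with hypothetical L9 response data follows; the responses, their optimization directions, and the distinguishing coefficient ζ = 0.5 are assumptions, not the study's values.

```python
import numpy as np

# Hypothetical L9 data: each row is one run, columns are two responses,
# e.g. surface roughness (smaller-the-better) and removal rate (larger-the-better).
responses = np.array([
    [1.8, 120.0], [1.5, 150.0], [1.2, 135.0],
    [1.6, 160.0], [1.1, 140.0], [1.4, 155.0],
    [1.3, 170.0], [0.9, 145.0], [1.0, 165.0],
])
smaller_is_better = [True, False]   # optimization direction per response
zeta = 0.5                          # distinguishing coefficient

# 1) Grey relational normalization to [0, 1] per response.
norm = np.empty_like(responses)
for k, smaller in enumerate(smaller_is_better):
    col = responses[:, k]
    if smaller:
        norm[:, k] = (col.max() - col) / (col.max() - col.min())
    else:
        norm[:, k] = (col - col.min()) / (col.max() - col.min())

# 2) Grey relational coefficient against the ideal sequence (all ones).
delta = np.abs(1.0 - norm)
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3) Grey relational grade = mean coefficient per run; higher is better overall.
grade = coeff.mean(axis=1)
ranking = np.argsort(-grade) + 1    # 1-based run numbers, best first
print("grades:", np.round(grade, 3))
print("run ranking (best to worst):", ranking)
```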
Wellbore instability problems cause nonproductive time, especially during drilling operations in shale formations. These problems include stuck pipe, caving, lost circulation, and tight hole, all of which require extra time to treat and therefore add cost. The extensive hole-collapse problem is considered one of the main challenges experienced when drilling in the Zubair shale formation; in turn, it leads to nonproductive time and increased well drilling expenditure. In this study, geomechanical modeling was used to determine a suitable mud weight window to overcome these problems and improve drilling performance for well development. Three failure criteria, including Mohr–Coulomb, modifie
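A mud weight window of the kind described above is commonly bounded below by a shear (breakout) limit and above by a tensile (lost-circulation) limit. The sketch below is a simplified illustration assuming a vertical well, Kirsch stresses at the borehole wall, and an effective-stress Mohr–Coulomb criterion; the stress, pore pressure, and strength values are hypothetical, and this is not the workflow of the cited study.

```python
import math

def mud_weight_window(sigma_H, sigma_h, pore_p, ucs, phi_deg, tensile=0.0):
    """Approximate safe mud-pressure window for a vertical well.

    Lower bound: Mohr-Coulomb shear (breakout) limit from Kirsch hoop and
    radial stresses at the borehole wall; upper bound: tensile-fracture limit.
    All stresses and pressures in consistent units (e.g. MPa). Illustrative only.
    """
    q = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2   # tan^2(45 + phi/2)
    p_min = (3.0 * sigma_H - sigma_h - (1.0 - q) * pore_p - ucs) / (1.0 + q)
    p_max = 3.0 * sigma_h - sigma_H - pore_p + tensile
    return p_min, p_max

# Hypothetical in-situ state at the depth of interest (MPa).
p_min, p_max = mud_weight_window(sigma_H=45.0, sigma_h=38.0, pore_p=20.0,
                                 ucs=25.0, phi_deg=30.0, tensile=5.0)
print(f"collapse limit ~{p_min:.1f} MPa, fracture limit ~{p_max:.1f} MPa")
```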
The cross sections for the (α,n) reaction were evaluated from the available International Atomic Energy Agency (IAEA) data and other published experimental data. These cross sections represent the most recent data, complementing the well-known international libraries such as ENDF, JENDL, and JEFF. We considered an energy range from threshold to 25 MeV in 1 MeV intervals. The weighted-average cross sections of all available experimental and theoretical (JENDL) data were calculated for all the isotopes considered. The cross section of each element was then calculated from the cross sections of its isotopes, taking their natural abundances into account. A representative mathematical equation for each of the element
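The abundance weighting described here, σ_element(E) = Σ_i a_i σ_i(E), can be sketched as follows; the isotope names, abundances, and cross-section values are placeholders, not evaluated data.

```python
import numpy as np

def elemental_cross_section(isotope_xs, abundances):
    """Abundance-weighted elemental (alpha,n) cross section.

    isotope_xs: dict of isotope name -> cross sections (barns) on a common
                energy grid. abundances: dict of isotope name -> atom fraction
                (fractions sum to 1).
    """
    total = None
    for iso, xs in isotope_xs.items():
        contrib = abundances[iso] * np.asarray(xs, dtype=float)
        total = contrib if total is None else total + contrib
    return total

# Hypothetical two-isotope element on a 1 MeV grid from threshold to 25 MeV
# (placeholder values, not evaluated data).
energy = np.arange(5.0, 26.0, 1.0)                  # MeV
xs = {"X-63": 0.010 * (energy - 5.0),               # barns
      "X-65": 0.008 * (energy - 5.0)}
abund = {"X-63": 0.69, "X-65": 0.31}

sigma_elem = elemental_cross_section(xs, abund)
print(np.round(sigma_elem, 4))
```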
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, and this places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabelled data into clusters; the k-means and fuzzy c-means (FCM) algorithms are two examples of algorithms that can be used for this purpose. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved with a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic
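As a point of reference, fuzzy c-means alternates between updating cluster centres from membership-weighted means and updating memberships from distances to those centres. The sketch below is a minimal NumPy implementation applied to a hypothetical expression matrix; the data, cluster count, and fuzzifier m = 2 are assumptions, not the study's setup.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns centres and the membership matrix U
    (each row of U sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every sample to every centre (epsilon avoids division by zero).
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Hypothetical expression matrix: 60 samples x 8 features in three groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.4, size=(20, 8)) for loc in (0.0, 2.0, 4.0)])
centres, U = fuzzy_c_means(X, n_clusters=3)
hard_labels = U.argmax(axis=1)       # defuzzified assignment, comparable to k-means
print(np.bincount(hard_labels))
```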
Traffic classification refers to the task of categorizing traffic flows into application-aware classes such as chat, streaming, VoIP, etc. Most network traffic identification systems are feature-based; the features may be static signatures, port numbers, statistical characteristics, and so on. Although current flow classification methods are effective, they still lack inventive approaches to meet the needs of vital requirements such as real-time traffic classification, low power consumption, Central Processing Unit (CPU) utilization, etc. Our novel Fast Deep Packet Header Inspection (FDPHI) traffic classification proposal employs a one-dimensional convolutional neural network (1D-CNN) to automatically learn more representational c
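A 1D-CNN over raw header bytes, of the general kind the proposal employs, can be sketched in PyTorch as follows. The layer sizes, header length, and class count are illustrative assumptions and do not reproduce the FDPHI architecture.

```python
import torch
import torch.nn as nn

class HeaderCNN(nn.Module):
    """Generic 1D-CNN over raw packet-header bytes (illustrative sketch,
    not the FDPHI architecture from the paper)."""

    def __init__(self, n_classes=6, header_len=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (header_len // 4), n_classes)

    def forward(self, x):                 # x: (batch, 1, header_len), bytes scaled to [0, 1]
        z = self.features(x)
        return self.classifier(z.flatten(1))

# Hypothetical batch: 8 flows, first 64 header bytes each, classes such as
# chat / streaming / VoIP as in the abstract.
model = HeaderCNN()
x = torch.rand(8, 1, 64)
logits = model(x)
print(logits.shape)                       # torch.Size([8, 6])
```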