Amplitude variation with offset (AVO) analysis is an efficient tool for hydrocarbon detection and for identifying elastic rock properties and fluid types. In the present study it was applied to reprocessed pre-stack 2D seismic data (1992, Caulerpa) from the north-west of the Bonaparte Basin, Australia, and the AVO response along the 2D pre-stack seismic data over the Laminaria High, NW Shelf of Australia, was investigated. Three hypotheses were proposed to explain the AVO behaviour of the amplitude anomalies, in which three different factors were tested: fluid substitution, porosity, and thickness (wedge model). The AVO models with synthetic gathers were analysed using log information to determine which of these factors controls the AVO response. AVO cross-plots from the real pre-stack seismic data reveal AVO class IV (a negative intercept whose magnitude decreases with offset). This result matches our modelled fluid-substitution result for the seismic synthetics. It is concluded that fluid substitution is the controlling parameter in the AVO analysis, and that the high-amplitude anomaly at the seabed and at the target horizon is therefore the result of changes in fluid content and lithology along the target horizons, whereas changing the porosity has little effect on the amplitude variation with offset in the AVO cross-plot. Finally, results from the wedge models show that a small change in thickness causes a change in amplitude; however, this change in thickness gives a different AVO character and a mismatch with the AVO result of the real 2D pre-stack seismic data. Therefore, a thin layer of constant thickness with changing fluids is the more likely cause of the high-amplitude anomalies.
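The intercept-gradient analysis behind an AVO cross-plot can be sketched with the two-term Shuey approximation, R(θ) ≈ A + B·sin²θ, where A is the intercept and B the gradient. The reflectivity picks below are illustrative values only, not data from the study:

```python
import numpy as np

# Hypothetical reflectivity picks at increasing incidence angles
# (illustrative values, not measurements from the Laminaria High data).
theta_deg = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
reflectivity = np.array([-0.18, -0.17, -0.16, -0.145, -0.125, -0.10])

# Least-squares fit of R = A + B * sin^2(theta).
G = np.column_stack([np.ones_like(theta_deg),
                     np.sin(np.radians(theta_deg)) ** 2])
(A, B), *_ = np.linalg.lstsq(G, reflectivity, rcond=None)

# Class IV behaviour: negative intercept with positive gradient, so the
# amplitude magnitude decreases with offset.
print(f"intercept A = {A:.3f}, gradient B = {B:.3f}")
```

Plotting A against B for many time samples gives the cross-plot used to separate the background trend from anomalous (class IV) responses.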
The problem of image captioning, which involves automatically generating text to describe an image's visual content, has become feasible with developments in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image-captioning approach combines pre-trained Convolutional Neural Network (CNN) models with a Long Short-Term Memory (LSTM) network to generate image captions. The process includes two stages: the first entails training the CNN-LSTM models using baseline hyper-parameters, and the second encompasses training the CNN-LSTM models by optimizing and adjusting the hyper-parameters of
The main objective of this study is to characterize the main factors that may affect the behavior of segmental prestressed concrete beams composed of multiple segments. The 3-D finite element program ABAQUS was utilized. The experimental work was conducted on twelve simply supported segmental prestressed concrete beams divided into three groups according to the number of precast segments. All had an identical total length of 3150 mm, but each group had a different number of segments (9, 7, and 5), in other words, different segment lengths. To simulate genuine fire disasters, nine beams were exposed to a high-temperature flame for one hour; the selected temperatures were 300°C (572°F), 500°C (932°F), and 700°C (1292°F), as recomm
Different artificial neural network (ANN) architectures of the multilayer perceptron (MLP) type were trained by backpropagation (BP) and used to analyze Landsat TM images. Two different training approaches were applied: an ordinary approach (one hidden layer, M-H1-L, and two hidden layers, M-H1-H2-L) and a one-against-all strategy (one hidden layer, (M-H1-1)xL, and two hidden layers, (M-H1-H2-1)xL). Classification accuracy of up to 90% was achieved using the one-against-all strategy with the two-hidden-layer architecture. The performance of the one-against-all approach is slightly better than that of the ordinary approach.
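The one-against-all idea above, training one binary network per class and predicting by the highest score, can be sketched as follows. For brevity each "network" is reduced to a single logistic unit rather than an MLP, and the two-band toy pixels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-band "pixels" in three well-separated clusters (illustrative only).
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)

def train_binary(X, t, epochs=200, lr=0.1):
    """Logistic unit trained by gradient descent (stand-in for one MLP)."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - t) / len(X)
    return w

# One classifier per class: target is 1 for that class, 0 for all others.
W = np.array([train_binary(X, (y == k).astype(float)) for k in range(3)])

# Predict: evaluate all L classifiers and take the argmax of their scores.
Xb = np.hstack([X, np.ones((len(X), 1))])
pred = np.argmax(Xb @ W.T, axis=1)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The (M-H1-1)xL notation in the abstract corresponds to L such single-output networks, each with M inputs and H1 hidden units.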
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional algorithms in data mining and machine learning do not scale well with data size. Mining and learning from big data need time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining an
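The build-once, update-incrementally idea can be sketched with a minimal multi-resolution summary. This is an illustrative stand-in, not the authors' actual structure: each resolution keeps sufficient statistics (count, sum, sum of squares) per bucket, so coarser resolutions are cheaper to scan but lose detail:

```python
import numpy as np

class MultiResolutionSummary:
    """Illustrative sketch: per-bucket sufficient statistics at several
    resolutions, updated incrementally as instances arrive."""

    def __init__(self, bucket_sizes=(1, 10, 100)):
        self.bucket_sizes = bucket_sizes
        self.stats = {b: {} for b in bucket_sizes}  # bucket -> (n, sum, sumsq)
        self.n_seen = 0

    def add(self, value):
        """Incremental update: O(number of resolutions) per instance."""
        for b in self.bucket_sizes:
            key = self.n_seen // b
            n, s, ss = self.stats[b].get(key, (0, 0.0, 0.0))
            self.stats[b][key] = (n + 1, s + value, ss + value * value)
        self.n_seen += 1

    def means(self, bucket_size):
        """Coarser buckets: fewer summaries (fast) but less detail (lossy)."""
        return [s / n for n, s, ss in self.stats[bucket_size].values()]

summary = MultiResolutionSummary()
for v in np.arange(1000, dtype=float):
    summary.add(v)
print(len(summary.means(1)), len(summary.means(100)))  # 1000 vs 10 summaries
```

A mining algorithm can then read the resolution that matches its time budget, which is the efficiency/accuracy trade-off the abstract describes.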
Air pollution refers to the release of pollutants into the air that are detrimental to human health and to the planet as a whole. In this research, concentrations of air pollutants such as total suspended particles (TSP), carbon monoxide (CO), and carbon dioxide (CO2), together with meteorological parameters including temperature (T), relative humidity (RH), and wind speed and direction, were measured in Baghdad city at 22 measuring stations located in different regions and classified into industrial, commercial, and residential stations. Using the ArcGIS program (Spatial Analyst), different maps have been prepared for the distribution of the different pollutant
A new microphotometer was constructed in our laboratory for the determination of molybdenum(VI) through its catalytic effect on the reaction of hydrogen peroxide with potassium iodide in an acidic medium (0.01 mM H2SO4). Linearity of 97.3% was obtained for the range 5-100 ppm. The repeatability of the results was better than 0.8%, and 0.5 ppm was obtained as the L.U. The method was applied to the determination of molybdenum(VI) in a medicinal sample (Centrum), and the results of the developed method compared well with those of the conventional method.
A simulation study of using 2D tomography to reconstruct a 3D object is presented. The 2D Radon transform is used to create a 2D projection of each slice of the 3D object at different heights. The 2D back-projection and Fourier slice theorem methods are then used to reconstruct each 2D projection slice of the 3D object. The results showed the ability of the Fourier slice theorem method to reconstruct the general shape of the body with its internal structure, unlike the 2D back-projection method, which was able to reconstruct the general shape of the body only, because of the blurring artefact. Since the Fourier slice theorem method also could not remove all blurring artefacts, this research suggested the threshold technique to eliminate the
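The Fourier slice theorem underlying the reconstruction can be checked numerically: the 1D FFT of a parallel projection of a 2D slice equals the central slice of that object's 2D FFT. The block phantom below is an invented illustration, not the phantom used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
phantom = np.zeros((64, 64))
phantom[20:40, 25:45] = 1.0                  # a simple block "organ"
phantom += 0.1 * rng.standard_normal((64, 64))

projection = phantom.sum(axis=0)             # parallel projection along rows
slice_1d = np.fft.fft(projection)            # 1D FFT of the projection
central_row = np.fft.fft2(phantom)[0, :]     # k_y = 0 slice of the 2D FFT

print(np.allclose(slice_1d, central_row))    # True: the theorem holds
```

Repeating this for many angles fills the 2D Fourier plane of the slice, which is then inverted to recover the image; simple unfiltered back-projection skips this frequency-domain step, which is why it blurs.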
Evaporation is one of the major components of the hydrological cycle in nature; thus its accurate estimation is very important in the planning and management of irrigation practices and in assessing water availability and requirements. The aim of this study is to investigate the ability of a fuzzy inference system to estimate monthly pan evaporation from meteorological data. The study was carried out using 261 monthly measurements of each of temperature (T), relative humidity (RH), and wind speed (W), available from the Emara meteorological station, southern Iraq. Three different fuzzy models comprising various combinations of monthly climatic variables (temperature, wind speed, and relative humidity) were developed
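A fuzzy inference system of this kind can be sketched with a minimal zero-order Sugeno-style model. The membership functions, rules, and output values below are hypothetical illustrations, not the rule base developed in the study:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def evaporation(T, RH, W):
    """Pan evaporation (mm/month, hypothetical) from T (degC), RH (%), W (m/s)."""
    # Rule firing strengths via min (fuzzy AND); consequents are crisp constants.
    rules = [
        (min(tri(T, 25, 40, 55), tri(RH, 0, 20, 50), tri(W, 2, 6, 12)), 400.0),  # hot, dry, windy -> high
        (min(tri(T, 10, 25, 40), tri(RH, 30, 55, 80), tri(W, 0, 3, 7)), 180.0),  # mild -> moderate
        (min(tri(T, 0, 10, 22), tri(RH, 60, 85, 100), tri(W, 0, 1, 4)), 60.0),   # cool, humid, calm -> low
    ]
    # Weighted-average defuzzification.
    num = sum(w * z for w, z in rules)
    den = sum(w for w, z in rules)
    return num / den if den > 0 else 0.0

hot = evaporation(T=42, RH=15, W=6)
mild = evaporation(T=25, RH=55, W=3)
print(f"hot/dry/windy: {hot:.0f} mm, mild: {mild:.0f} mm")
```

A real model would tune the membership functions and rule consequents against the 261 monthly observations rather than fix them by hand.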
In this study, the response and behavior of machine foundations resting on dry and saturated sand were investigated experimentally. A physical model was manufactured to simulate a steady-state harmonic load at different operating frequencies. The effects of relative density, depth of embedment, and foundation area, as well as of the imposed harmonic load, were investigated. It was found that the displacement amplitude of the foundation increases with increasing amplitude of the dynamic force and operating frequency, whereas it decreases with increasing relative density of the sand, degree of saturation, depth of embedment, and contact area of the footing. The maximum displacement was noticed at 33.34 to 41.67 Hz. The maximum displacement amplitude respons
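The observed trend, amplitude growing with force and peaking in a frequency band, is consistent with the textbook single-degree-of-freedom idealization of a machine foundation under a harmonic force F0·sin(ωt), where the steady-state amplitude is X = F0 / √((k − mω²)² + (cω)²). The parameter values below are illustrative, not taken from the experiments:

```python
import math

def displacement_amplitude(F0, m, k, c, f_hz):
    """Steady-state amplitude of a mass-spring-dashpot foundation model."""
    w = 2.0 * math.pi * f_hz
    return F0 / math.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

m, k, c = 500.0, 2.0e7, 1.0e4             # mass (kg), stiffness (N/m), damping (N.s/m)
fn = math.sqrt(k / m) / (2 * math.pi)     # undamped natural frequency, about 31.8 Hz

# Amplitude scales linearly with F0 and peaks near resonance
# (compare the 33.34-41.67 Hz band reported above):
for f in (10.0, 30.0, 60.0):
    print(f"{f:5.1f} Hz -> {displacement_amplitude(1000.0, m, k, c, f):.2e} m")
```

In the experiments, sand density, saturation, embedment, and contact area effectively change m, k, and c, which is why they shift both the amplitude and the frequency of the peak.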