A seismic inversion technique is applied to 3D seismic data to predict the porosity of the carbonate Yamama Formation (Early Cretaceous) in an area of southern Iraq. A workflow is designed to guide the manual inversion procedure. The first step is a model-based inversion that converts the 3D seismic data into a 3D acoustic impedance volume, built from a low-frequency model and well data, with statistical control at each inversion stage. Then, the 3D acoustic impedance volume, the seismic data, and porosity well data are trained with multi-attribute transforms to find the best statistical attribute for extending the direct point measurements of porosity at the wells into a 3D distributed porosity volume. The final subsurface porosity model greatly improves understanding of the distribution of porosity in the reservoir zones, showing its variation both vertically and laterally. The success of the workflow encourages automating it so that it can be run faster on areas with the same characteristics as the carbonate Yamama Formation.
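Model-based inversion iteratively updates an impedance model until the synthetic seismic computed from it matches the observed trace. A minimal sketch of the forward step, deriving a reflectivity series from acoustic impedance (function and variable names are illustrative, not taken from the paper):

```python
# Forward step used in model-based inversion (illustrative sketch):
# reflection coefficient at each interface from acoustic impedance Z.
def reflectivity(impedance):
    """r_i = (Z[i+1] - Z[i]) / (Z[i+1] + Z[i]) at each layer interface."""
    return [(z2 - z1) / (z2 + z1)
            for z1, z2 in zip(impedance, impedance[1:])]

# Example: impedance increasing with depth gives positive reflections,
# and a constant-impedance interval gives zero reflectivity.
z = [4000.0, 5000.0, 5000.0, 6000.0]
print([round(r, 4) for r in reflectivity(z)])  # [0.1111, 0.0, 0.0909]
```

In a full inversion loop this reflectivity would be convolved with a wavelet and compared against the recorded trace, with the impedance model adjusted to reduce the misfit.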
An eco-epidemic model is proposed in this paper. It assumes a stage structure in the prey and a disease in the predator. Existence, uniqueness, and boundedness of the solution of the system are studied. The existence of each possible steady-state point is discussed, and the local stability condition near each steady-state point is investigated. Finally, the global dynamics of the proposed model are studied numerically.
The aim of the research is to identify the cognitive style (rigidity vs. flexibility) of third-stage students in the College of Physical Education and Sports Sciences at the University of Baghdad, to assess the impact of using the McCarthy model in learning some gymnastics skills, and to identify which groups learn the skills best. An experimental design with equal groups and pre- and post-tests was used. The research community comprised third-stage students in the academic year 2020-2021; two divisions were randomly selected, after which the cognitive-style measure was distributed to the sample, so the subjects (32 students) were distributed into four groups, and the pre te
Steganography is a technique for concealing secret data within other everyday files of the same or a different type. Hiding data has become essential to digital information security. This work aims to design a stego method that can effectively hide a message inside the images of a video file. A video steganography model is proposed by training a model to hide a video (or images) within another video using convolutional neural networks (CNNs). Using a CNN in this approach achieves one of the main goals of any steganographic method: increased security (hardness to observe and break with steganalysis programs), which was achieved in this work because the weights and architecture are randomized. Thus,
Mixture experiments model a response variable as a function of the proportions of the components of a mixture. In this research we compare the Scheffé model with the Kronecker model for mixture experiments, especially when the experimental region is restricted. Because mixture experiments suffer from high correlation and multicollinearity among the explanatory variables, which affects the computation of the Fisher information matrix of the regression model, we used the generalized inverse and a stepwise regression procedure to estimate the parameters of the mixture model.
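For reference, the second-order Scheffé model for $q$ components (a standard textbook form, not quoted from the paper) is

$$
E(y) \;=\; \sum_{i=1}^{q} \beta_i x_i \;+\; \sum_{i<j}^{q} \beta_{ij}\, x_i x_j,
\qquad \sum_{i=1}^{q} x_i = 1,\quad x_i \ge 0,
$$

while the Kronecker model expresses the same quadratic homogeneously through the Kronecker square of the proportion vector, $E(y) = \theta^{\top}(x \otimes x)$. The constraint $\sum_i x_i = 1$ is what induces the multicollinearity mentioned above, since the columns of the design matrix are linearly dependent.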
Despite the rapid development of reverse engineering techniques, 3D laser scanners can be considered the modern technology for digitizing 3D objects, but the process may be affected by environmental noise and the limitations of the scanner used. In this paper, a data pre-processing algorithm is proposed to obtain the necessary geometric features and a mathematical representation of the scanned object from its point cloud, obtained with a 3D laser scanner (Matter and Form), by isolating the noisy points. The proposed algorithm is based on continuous calculation of the chord angle between each adjacent pair of points in the point cloud. A MATLAB program has been built t
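The chord-angle idea can be sketched as follows: at each interior point, the angle between the chords to its two neighbours is computed, and a point forming a very sharp angle (a spike) is treated as noise. This is a minimal 2-D Python illustration of the principle, not the paper's MATLAB implementation; the threshold and point layout are assumptions:

```python
import math

def chord_angle(p_prev, p, p_next):
    """Angle in degrees at p between the chords p->p_prev and p->p_next."""
    ax, ay = p_prev[0] - p[0], p_prev[1] - p[1]
    bx, by = p_next[0] - p[0], p_next[1] - p[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def filter_noise(points, min_angle=90.0):
    """Keep points whose chord angle is wide enough; a sharp spike
    (small angle) is treated as a noisy outlier and removed."""
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        if chord_angle(points[i - 1], points[i], points[i + 1]) >= min_angle:
            kept.append(points[i])
    kept.append(points[-1])
    return kept

# A flat profile with one spurious spike at (2, 5): the spike forms a
# sharp chord angle and is filtered out.
pts = [(0, 0), (1, 0), (2, 5), (3, 0), (4, 0)]
print(filter_noise(pts))  # [(0, 0), (1, 0), (3, 0), (4, 0)]
```

A real scanner pipeline would work on 3-D points and choose the angle threshold from the scan density, but the per-point test is the same.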
The rapid development of telemedicine services and the requirement to exchange medical information among physicians, consultants, and health institutions have made the protection of patients' information an important priority for any future e-health system. Protecting medical information, including the cover (i.e., the medical image), has requirements that differ slightly from those for protecting other information. The cover must be preserved carefully because of its importance on the receiving side, where medical staff use this information to provide a diagnosis and save a patient's life. If the cover is tampered with, the goal of telemedicine fails. Therefore, this work provides an in
This research presents a comparative simulation study of some semi-parametric estimation methods for the partial linear single-index model. Two approaches are used to estimate this model: a two-stage procedure and MADE. Simulation was used to study the finite-sample performance of the estimation methods under different single-index models, error variances, and sample sizes, with the mean average squared error as the comparison criterion. The results showed a preference for the two-stage procedure in all the cases considered.
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences, which places a significant burden on scientists and computers. Some genomics studies rely on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that divides unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones and can be achieved with a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, whic
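As a concrete illustration of the clustering step, here is a plain k-means sketch in pure Python (the dataset, feature layout, and parameters are invented for the example; real gene-expression or imaging work would use a library implementation such as scikit-learn, and FCM would additionally assign soft membership weights rather than hard labels):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on feature vectors given as lists of floats.
    Alternates assignment (nearest center) and update (mean of group)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # Update step: each center moves to the mean of its group.
        for j, g in enumerate(groups):
            if g:
                centers[j] = [sum(col) / len(g) for col in zip(*g)]
    return centers, groups

# Two well-separated 1-D groups are recovered as two clusters of three.
pts = [[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]]
centers, groups = kmeans(pts, 2)
print(sorted(round(c[0], 1) for c in centers))  # [0.1, 10.1]
```

FCM differs only in the assignment step, where each point receives a fractional membership in every cluster instead of a single hard label.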