Rock mechanical properties are critical parameters for many development techniques applied to tight reservoirs, such as hydraulic fracturing design and determining failure criteria in wellbore instability assessment. When direct measurements of mechanical properties are not available, it is helpful to have reliable correlations for estimating these parameters. This study summarizes experimentally derived correlations for estimating shear velocity, Young's modulus, Poisson's ratio, and compressive strength. A useful correlation is also introduced to convert dynamic elastic properties obtained from log data into static elastic properties. Most of the derived equations in this paper fit the measured data well, while some equations show scatter in correlating the data due to the presence of calcite, quartz, and clay in some core samples. The brittleness index (BRI), which indicates the brittle or ductile behavior of the core samples, is also studied for the reservoir of interest. The BRI results show that the samples range from moderate to high brittleness, and the differences in BRI arise from the presence of certain minerals, as explained using X-ray diffraction (XRD) tests. The proposed correlations are compared with other correlations from the literature for validation, and the comparison shows good agreement, confirming the accuracy of the proposed equations.
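By way of illustration, a minimal sketch of a mineralogy-based brittleness index computed from XRD mineral fractions, using a common quartz/(quartz + carbonate + clay) formulation; the paper's exact definition is not reproduced here, and the fractions below are hypothetical values, not measured data:

```python
def brittleness_index(quartz_frac, carbonate_frac, clay_frac):
    """Mineralogy-based brittleness index (illustrative formulation):
    fraction of brittle mineral (quartz) over the listed mineral total."""
    total = quartz_frac + carbonate_frac + clay_frac
    if total == 0:
        raise ValueError("Mineral fractions sum to zero")
    return quartz_frac / total

# Example: hypothetical XRD weight fractions for one core sample
bri = brittleness_index(quartz_frac=0.35, carbonate_frac=0.45, clay_frac=0.20)
print(f"BRI = {bri:.2f}")  # higher values suggest more brittle behavior
```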
Spatial data analysis is performed in order to remove skewness, a measure of the asymmetry of the probability distribution. It also improves the normality (the degree to which a property follows the bell-shaped normal distribution) of properties such as porosity, permeability, and saturation, which can be visualized using histograms. Three steps of spatial analysis are involved: exploratory data analysis, variogram analysis, and finally distributing the properties using geostatistical algorithms. The Mishrif Formation (unit MB1) in the Nasiriya Oil Field was chosen to analyze and model the data for the first eight wells. The field is an anticline structure with a northwest-south
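As an illustration of the exploratory-data-analysis step described above (checking the skewness and normality of a right-skewed property such as permeability before variogram analysis), a minimal sketch; the values are illustrative and not taken from the Mishrif dataset:

```python
import numpy as np
from scipy import stats

# Illustrative permeability values (mD); real data would come from logs and cores
perm = np.array([1.2, 3.5, 0.8, 12.0, 45.0, 2.1, 7.7, 150.0, 0.5, 20.0])

print("skewness (raw):", stats.skew(perm))        # strongly right-skewed
log_perm = np.log10(perm)                         # common transform for permeability
print("skewness (log10):", stats.skew(log_perm))  # closer to symmetric / normal

# Shapiro-Wilk test as a simple normality check after the transform
w, p = stats.shapiro(log_perm)
print(f"Shapiro-Wilk p-value after transform: {p:.3f}")
```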
Geographic Information Systems (GIS) are gaining a significant role in handling strategic applications in which data are organized as records of multiple layers in a database. Furthermore, GIS provides multiple functions, such as data collection, analysis, and presentation. Geographic information systems have demonstrated their competence in diverse fields of study by handling various problems for numerous applications. However, handling a large volume of data in a GIS remains an important issue. The biggest obstacle is designing a GIS-based spatial decision-making framework that manages a broad range of specific data while achieving the required performance. It is very useful to support decision-makers by providing GIS-based decision support systems
Steganography is defined as hiding confidential information in some other chosen medium without leaving any clear evidence of changing the medium's features. Most traditional hiding methods embed the message directly in the cover medium, such as text, image, audio, or video. Some hiding techniques leave a negative effect on the cover image, so the change in the carrier medium can sometimes be detected by humans and machines. The purpose of the suggested hiding method is to make this change undetectable. The current research focuses on using a complex method based on a spiral search to prevent the detection of the hidden information by humans and machines; the Structural Similarity Index Metric (SSIM) measures are used to assess the accuracy and quality
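A minimal sketch of a spiral traversal over pixel coordinates, the kind of ordering a spiral-search embedding scheme might use to select hiding positions; this is an illustrative interpretation only, not the paper's exact algorithm:

```python
def spiral_coordinates(rows, cols):
    """Yield (row, col) pairs starting from the image centre and spiralling outward.
    Illustrative ordering for choosing embedding positions."""
    r, c = rows // 2, cols // 2
    yield r, c
    step = 1
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up
    d = 0
    while True:
        for _ in range(2):                 # two direction legs per step length
            dr, dc = directions[d % 4]
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < rows and 0 <= c < cols:
                    yield r, c
            d += 1
        step += 1
        if step > max(rows, cols) * 2:     # every in-bounds cell has been visited
            return

# Example: first ten embedding positions in an 8x8 cover image
positions = spiral_coordinates(8, 8)
print([next(positions) for _ in range(10)])
```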
Classification of imbalanced data is an important issue. Many algorithms have been developed for classification, such as Back Propagation (BP) neural networks, decision trees, and Bayesian networks, and they have been used repeatedly in many fields. These algorithms suffer from the problem of imbalanced data, in which some classes have far more instances than others. Imbalanced data result in poor performance and a bias toward the majority class at the expense of the other classes. In this paper, we propose three techniques based on the Over-Sampling (O.S.) approach for processing an imbalanced dataset, redistributing it, and converting it into a balanced dataset. These techniques are Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border
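For context, a minimal sketch of standard SMOTE over-sampling using the imbalanced-learn library; this shows the baseline technique on a synthetic dataset, not the paper's Improved SMOTE or its data:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE  # standard SMOTE, not the paper's Improved SMOTE

# Synthetic imbalanced dataset: roughly 90% majority class, 10% minority class
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=42)
print("before:", Counter(y))

# Generate synthetic minority samples until the classes are balanced
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after: ", Counter(y_res))
```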
This paper proposes feedback linearization control (FBLC) based on a function approximation technique (FAT) to regulate the vibrational motion of a smart thin plate, considering the effect of axial stretching. The FBLC involves designing a nonlinear control law that stabilizes the target dynamic system while the closed-loop dynamics remain linear with guaranteed stability. The objective of the FAT is to estimate the cubic nonlinear restoring force vector using a linear parameterization of weighting and orthogonal basis function matrices. Orthogonal Chebyshev polynomials are used as strong approximators for the adaptive scheme. The proposed control architecture is applied to a thin plate with a large deflection that stimulates the axial loading
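A minimal sketch of the Chebyshev-basis function approximation idea (representing a nonlinear restoring force as a weighted sum of orthogonal Chebyshev polynomials through a linear parameterization); the target function, fitting method, and number of terms are illustrative, not the paper's plate model or adaptive law:

```python
import numpy as np

def chebyshev_basis(x, n_terms):
    """First n_terms Chebyshev polynomials T_0..T_{n-1} evaluated at x in [-1, 1],
    built with the recurrence T_k = 2x*T_{k-1} - T_{k-2}."""
    T = [np.ones_like(x), x]
    for _ in range(2, n_terms):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:n_terms], axis=-1)           # shape: (len(x), n_terms)

# Approximate an illustrative cubic restoring force f(x) = x**3 by least squares
x = np.linspace(-1, 1, 200)
f = x**3
Phi = chebyshev_basis(x, n_terms=6)                 # basis function matrix
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)         # weight vector (linear parameterization)
print("max approximation error:", np.max(np.abs(Phi @ w - f)))
```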
Solving the Inverse Kinematics (IK) equations is a complex problem due to the nonlinearity of these equations. The choice of end-effector orientation affects whether the target location can be reached. The Forward Kinematics (FK) of the Humanoid Robotic Legs (HRL) is determined using the Denavit-Hartenberg (DH) method. The HRL has two legs with five Degrees of Freedom (DoF) each. The paper proposes using a Particle Swarm Optimization (PSO) algorithm to find the best orientation angle of the HRL end effector. The selected orientation angle is then used to solve the IK equations to reach the target location with minimum error. The performance of the proposed method is evaluated in six scenarios with different simulated positions of the legs. The proposed
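A minimal sketch of the PSO idea applied to choosing an end-effector orientation angle that minimizes position error; the cost function below is a hypothetical planar two-link stand-in, not the 5-DoF humanoid leg model or the paper's DH parameters:

```python
import numpy as np

# Hypothetical stand-in cost: reachability error of a planar two-link arm, with the
# end-effector orientation angle phi as the decision variable.
L1, L2 = 1.0, 0.8
TARGET = np.array([1.2, 0.6])

def position_error(phi):
    # Place the last joint so the final link points along phi, then check
    # whether the first link (length L1, base at origin) can reach that point.
    wrist = TARGET - L2 * np.array([np.cos(phi), np.sin(phi)])
    return abs(np.linalg.norm(wrist) - L1)   # zero when the configuration is reachable

def pso(cost, n_particles=30, n_iters=100, bounds=(-np.pi, np.pi),
        inertia=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, n_particles)            # particle positions (angles)
    v = np.zeros(n_particles)                        # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_cost)]
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        costs = np.array([cost(xi) for xi in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)]
    return gbest, cost(gbest)

best_phi, best_err = pso(position_error)
print(f"best orientation angle: {best_phi:.3f} rad, position error: {best_err:.4f}")
```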
The influx of data in bioinformatics is primarily in the form of DNA, RNA, and protein sequences. This places a significant burden on scientists and computers. Some genomics studies depend on clustering techniques to group similarly expressed genes into one cluster. Clustering is a type of unsupervised learning that can be used to divide unlabeled data into clusters. The k-means and fuzzy c-means (FCM) algorithms are examples of algorithms that can be used for clustering. Clustering is thus a common approach that divides an input space into several homogeneous zones, and it can be achieved using a variety of algorithms. This study used three models to cluster a brain tumor dataset. The first model uses FCM, which
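A minimal sketch of a textbook fuzzy c-means implementation of the kind referred to above; the data are synthetic stand-ins, not the brain tumor dataset, and this is a generic formulation rather than the study's exact model:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centres and the fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                # memberships sum to 1 per sample
    for _ in range(n_iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))            # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Illustrative 2-D data with two well-separated groups (stand-in for real feature vectors)
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(3, 0.3, (50, 2))])
centres, U = fuzzy_c_means(X, n_clusters=2)
labels = U.argmax(axis=1)                            # hard labels from fuzzy memberships
print("centres:\n", centres)
print("cluster sizes:", np.bincount(labels))
```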