Predicting vertical stress is useful for controlling geomechanical problems, since it allows the formation pore pressure to be computed and the fault regime to be classified. This study provides an in-depth look at vertical stress prediction using several methods in the Techlog 2015 software. Gardner's method yields incorrect vertical stress values because it does not start from the surface and relies only on sonic log data. The Amoco, Wendt non-acoustic, Traugott, and average techniques need only the density log as input, but they treat the observed density as a straight line, which is incorrect for computing vertical stress. The results show that the extrapolated density method follows the average of the real density, and its gradient at shallow depth is much better suited to the vertical stress calculation, while the Miller density method fits the real density very well at great depth. Calculating vertical stress has been crucial for the past 40 years because pore pressure calculations and geomechanical model building use vertical stress as an input, and bulk density is its strongest predictor. According to these results, the Miller and extrapolated techniques may be the best two methods for determining vertical stress, although the gradient of the extrapolated method at shallow depth is better than that of the Miller method. The extrapolated density approach therefore produces satisfactory vertical stress values, while the Miller values are lower than those obtained by extrapolation, probably because of the poor gradient of that method at shallow depths. Gardner's approach incorrectly gives minimum values of about 4000 psi at great depths, whereas the other methods give similar values to one another because they assume a constant bulk density from the surface down to the target depth, which is also incorrect.
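As a rough illustration of the underlying calculation, the sketch below integrates a bulk-density log downward from the surface to obtain vertical (overburden) stress; the function name, the example depths, and the constant density are illustrative assumptions, not the Techlog workflow or any of the density-reconstruction methods named above.

```python
import numpy as np

def overburden_stress_psi(depth_ft, rhob_gcc):
    """Vertical stress from cumulative integration of a bulk-density log.

    depth_ft : depths in feet, starting at (or extrapolated to) the surface
    rhob_gcc : bulk density in g/cm^3 at each depth (e.g. an extrapolated
               or Miller-type density where the log is missing)
    Returns the vertical stress in psi at each depth.
    """
    rhob = np.asarray(rhob_gcc, dtype=float)
    depth = np.asarray(depth_ft, dtype=float)
    # 1 g/cm^3 of overburden corresponds to a gradient of about 0.433 psi/ft
    gradient = 0.433 * rhob
    dz = np.diff(depth, prepend=0.0)   # interval thicknesses, measured from the surface
    return np.cumsum(gradient * dz)

# Hypothetical usage: a constant 2.3 g/cm^3 column gives roughly 0.996 psi/ft
print(overburden_stress_psi([1000, 2000, 3000], [2.3, 2.3, 2.3])[-1])  # ~2988 psi
```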
Estimations of average crash density as a function of traffic elements and characteristics can be used for making good decisions relating to planning, designing, operating, and maintaining roadway networks. This study describes the relationships between total, collision, turnover, and runover accident densities and factors such as hourly traffic flow and average spot speed on multilane rural highways in Iraq. The study is based on data collected from two sources: police stations and traffic surveys. Three highways in Wassit governorate are selected as a case study to cover the studied accident locations.
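A minimal sketch of the kind of relationship the study describes, fitting accident density against hourly flow and average spot speed by ordinary least squares; the variable names and the numbers below are made up for illustration and are not the study's data or its fitted model.

```python
import numpy as np

# Hypothetical observations: hourly flow (veh/h), spot speed (km/h), crash density (crashes/km)
flow  = np.array([400, 650, 800, 950, 1100, 1300])
speed = np.array([ 95,  90,  88,  85,   82,   78])
crash_density = np.array([0.8, 1.1, 1.4, 1.6, 2.0, 2.4])

# Multiple linear regression: density = b0 + b1*flow + b2*speed
X = np.column_stack([np.ones_like(flow, dtype=float), flow, speed])
coeffs, *_ = np.linalg.lstsq(X, crash_density, rcond=None)
print(coeffs)   # intercept and the partial effects of flow and speed
```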
There are many techniques for face recognition which compare the desired face image with a set of face images stored in a database. Most of these techniques fail if the face images are exposed to high-density noise. Therefore, it is necessary to find a robust method to recognize a corrupted face image with high-density noise. In this work, a face recognition algorithm is suggested using the combination of a de-noising filter and PCA. Many studies have shown that PCA has the ability to handle noisy images and to reduce dimensionality. However, in cases where the face images are exposed to high noise, PCA alone is of little use in removing the noise, so adding a strong filter helps to improve recognition performance.
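A minimal sketch of the combination described above, assuming a median filter as the de-noising stage (a common choice for high-density salt-and-pepper noise) and a nearest-neighbour match in the PCA (eigenface) space; the function names, the filter choice, and the parameters are illustrative, not the paper's exact algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def train_pca(train_faces, n_components=50):
    """train_faces: (n_samples, height*width) flattened gallery images."""
    mean = train_faces.mean(axis=0)
    centered = train_faces - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # rows of vt are eigenfaces
    basis = vt[:n_components]
    return mean, basis, centered @ basis.T                     # gallery projections

def recognize(noisy_face, shape, mean, basis, gallery_proj, labels, ksize=3):
    """De-noise a corrupted probe image, project it, and return the closest gallery label."""
    denoised = median_filter(noisy_face.reshape(shape), size=ksize).ravel()
    probe = (denoised - mean) @ basis.T
    return labels[np.argmin(np.linalg.norm(gallery_proj - probe, axis=1))]
```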
In this research, the focus was placed on estimating the parameters of the Hypoexponential distribution using the maximum likelihood method and a genetic algorithm. More than one criterion, including MSE, was adopted to compare the estimators using the simulation method.
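As a hedged illustration of the maximum-likelihood side of such a comparison, the sketch below fits a two-stage hypoexponential density by numerical likelihood maximisation and scores repeated simulations by MSE; the rates, sample sizes, and optimiser are arbitrary choices, and the genetic-algorithm estimator used in the research is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def hypoexp_pdf(t, l1, l2):
    """Density of the sum of two independent exponentials with rates l1 < l2."""
    return l1 * l2 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))

def neg_log_lik(params, sample):
    l1, l2 = params
    if l1 <= 0 or l2 <= l1:            # keep the parameterisation valid
        return np.inf
    return -np.sum(np.log(hypoexp_pdf(sample, l1, l2)))

def mle(sample):
    return minimize(neg_log_lik, x0=[0.5, 2.0], args=(sample,),
                    method="Nelder-Mead").x

# Simulation study: estimate repeatedly and score the estimator by MSE
rng = np.random.default_rng(0)
true_l1, true_l2 = 1.0, 3.0
est = np.array([mle(rng.exponential(1 / true_l1, 100) +
                    rng.exponential(1 / true_l2, 100)) for _ in range(200)])
print(((est - [true_l1, true_l2]) ** 2).mean(axis=0))   # MSE of each rate estimate
```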
The ground state proton, neutron and matter densities and the corresponding root mean square radii of the unstable proton-rich 17Ne and 27P exotic nuclei are studied within the framework of the two-frequency shell model. Single-particle harmonic oscillator wave functions are used in this model with two different oscillator size parameters, b_core and b_halo, the former for the core (inner) orbits and the latter for the halo (outer) orbits. Shell model calculations for the core nucleons and for the outer (halo) nucleons in these exotic nuclei are performed separately with the computer code OXBASH. The halo structure of the 17Ne and 27P nuclei is confirmed. It is found that the structure of the 17Ne and 27P nuclei has a (1d5/2)^2 configuration.
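To make the two-frequency idea concrete, the sketch below evaluates a matter root-mean-square radius from harmonic-oscillator orbits, using the textbook result <r^2>_{nl} = b^2(2n + l + 3/2) with separate size parameters for core and halo orbits; the occupancies and the b values shown are hypothetical and are not the parameters fitted in this study.

```python
import numpy as np

def ho_msr(n, l, b):
    """Mean-square radius of a harmonic-oscillator orbit (n = number of radial nodes)."""
    return b**2 * (2 * n + l + 1.5)

def matter_rms(core_orbits, halo_orbits, b_core, b_halo):
    """Two-frequency estimate: core orbits use b_core, halo orbits use b_halo.
    Each orbit is given as (occupancy, n, l)."""
    nucleons = sum(occ for occ, _, _ in core_orbits + halo_orbits)
    msr  = sum(occ * ho_msr(n, l, b_core) for occ, n, l in core_orbits)
    msr += sum(occ * ho_msr(n, l, b_halo) for occ, n, l in halo_orbits)
    return np.sqrt(msr / nucleons)

# Illustrative only: 17Ne pictured as a 15O core plus two (1d5/2)^2 halo protons,
# with hypothetical oscillator size parameters in fm.
core = [(4, 0, 0), (8, 0, 1), (3, 0, 1)]   # 1s1/2, 1p3/2, 1p1/2 occupancies
halo = [(2, 0, 2)]                          # two protons in 1d5/2
print(matter_rms(core, halo, b_core=1.75, b_halo=2.40))   # matter rms radius (fm)
```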
The design of a sampling plan was, and still is, one of the most important subjects because it gives the lowest cost compared with other designs; the lifetime statistical distribution should be known in order to obtain the best estimators of the sampling plan parameters and hence the best sampling plan.
This research deals with the design of a sampling plan when the lifetime distribution follows the Logistic distribution with location and shape parameters; this information helps in determining the number of groups and the sample size associated with rejecting or accepting the lot.
Experimental results for simulated data show the least number of groups and the smallest sample size needed to reject or accept the lot with a certain probability.
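A hedged sketch of one common way such a plan is designed; this is a generic group-acceptance formulation under a zero-failure rule, using scipy's location-scale parameterisation of the Logistic lifetime, and is not necessarily the exact procedure of the research. Items are tested in groups of a fixed size for a fixed time, the failure probability comes from the Logistic CDF, and the smallest number of groups is chosen so that the lot-acceptance probability does not exceed a given consumer's risk.

```python
from math import comb
from scipy.stats import logistic

def accept_prob(p_fail, groups, group_size, c=0):
    """Probability of accepting the lot when each of groups*group_size items
    fails before test termination with probability p_fail (accept if <= c failures)."""
    n = groups * group_size
    return sum(comb(n, i) * p_fail**i * (1 - p_fail)**(n - i) for i in range(c + 1))

def min_groups(t0, loc, scale, group_size, beta=0.10, c=0, max_groups=200):
    """Smallest number of groups whose acceptance probability is <= beta at the
    unacceptable quality level implied by a Logistic(loc, scale) lifetime."""
    p_fail = logistic.cdf(t0, loc=loc, scale=scale)   # item fails before time t0
    for g in range(1, max_groups + 1):
        if accept_prob(p_fail, g, group_size, c) <= beta:
            return g
    return None

# Hypothetical numbers: 500 h test time, Logistic(1000, 300) lifetimes, groups of 5 items
print(min_groups(t0=500, loc=1000, scale=300, group_size=5, beta=0.10))   # -> 3 groups
```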
In this study we examine variations in the structure of the perovskite compounds LaBa2Cu2O9, LaBa2CaCu3O12 and LaBa2Ca2Cu5O15 synthesized using the solid state reaction method. The samples' compositions were assessed using X-ray fluorescence (XRF) analysis. The La:Ba:Ca:Cu ratios for the samples LaBa2Cu2O9, LaBa2CaCu3O12 and LaBa2Ca2Cu5O15 were found by XRF analysis to be around 1:2:0:2, 1:2:1:3, and 1:2:2:5, respectively. The samples' structures were then analyzed using X-ray diffraction. The three samples largely consist of phases 1202, 1213, and 1225, with a trace quantity of an unknown secondary phase, based on the intensities and locations of the diffraction peaks. The parameters a, b, and c were also measured for every sample.
The main problem when dealing with fuzzy data variables is that a model representing the data cannot be formed through the Fuzzy Least Squares Estimator (FLSE) method, which gives false estimates and renders the method invalid when the problem of multicollinearity exists. To overcome this problem, the Fuzzy Bridge Regression Estimator (FBRE) method was relied upon to estimate a fuzzy linear regression model with triangular fuzzy numbers. Moreover, the problem of multicollinearity in the fuzzy data can be detected by using the Variance Inflation Factor when the input variables of the model are crisp while the output variable and the parameters are fuzzy. The results were compared.
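Since the detection step relies on the Variance Inflation Factor computed from the crisp input variables, here is a minimal sketch of that diagnostic; the data, column construction, and the usual rule-of-thumb threshold are illustrative, and the fuzzy bridge estimation itself is not reproduced.

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of a crisp design matrix X:
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j on the others."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Hypothetical crisp inputs with a nearly collinear third column
rng = np.random.default_rng(3)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
X = np.column_stack([x1, x2, x1 + 0.05 * rng.normal(size=100)])
print(vif(X))   # values well above ~10 flag multicollinearity
```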
In recent decades, tremendous success has been achieved in the advancement of chemical admixtures for Portland cement concrete. Most efforts have concentrated on improving the properties of concrete and studying the factors that influence these properties. The compressive strength is considered a valuable property and is invariably a vital element of structural design, especially high early strength development, which can provide benefits in concrete production such as reducing construction time and labor and saving formwork and energy. Like most properties of concrete, it is influenced by several factors, including the water-cement ratio, the cement type, and the curing method employed.
In this work, a joint quadrature for the numerical solution of double integrals is presented. The method is based on combining two rules of the same precision level to form a rule of higher precision. Numerical results of the present method with a lower level of precision are presented and compared with those obtained by the existing high-precision Gauss-Legendre five-point rule in two variables, which requires the same number of function evaluations. The efficiency of the proposed method is demonstrated with numerical examples. From an application point of view, the determination of the center of gravity is given special consideration in the present scheme. A convergence analysis is provided to validate the current method.
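For context, a minimal sketch of the reference scheme mentioned in the abstract, a product Gauss-Legendre rule for a double integral over a rectangle; the integrand and the five-point choice are only an illustration, and the paper's joint quadrature formula itself is not reproduced here.

```python
import numpy as np

def gauss_legendre_2d(f, a, b, c, d, n=5):
    """Product Gauss-Legendre rule with n points per axis for
    the double integral of f(x, y) over [a, b] x [c, d]."""
    x, w = np.polynomial.legendre.leggauss(n)        # nodes and weights on [-1, 1]
    # Map the reference nodes to the physical intervals
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)
    ym, yr = 0.5 * (c + d), 0.5 * (d - c)
    X, Y = np.meshgrid(xm + xr * x, ym + yr * x, indexing="ij")
    W = np.outer(w, w) * xr * yr                      # product weights with the Jacobian
    return np.sum(W * f(X, Y))

# Hypothetical check: the integral of x*y over [0,1] x [0,2] equals 1
print(gauss_legendre_2d(lambda x, y: x * y, 0, 1, 0, 2))   # ~1.0
```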