Characterization of a heterogeneous reservoir requires the representation and evaluation of petrophysical properties; the porosity-permeability relationships established within the framework of hydraulic flow units are used to estimate permeability in uncored wells. The flow unit, or hydraulic flow unit (HFU), technique divides the reservoir laterally and vertically into zones, each of which governs fluid flow within itself and differs considerably from the other flow units in the reservoir. Each flow unit can be distinguished by applying the flow zone indicator (FZI) method. The porosity-permeability relationship, supported by the flow zone indicator, is used to evaluate reservoir quality and to identify the flow units employed in reservoir zonation. In this study, the flow zone indicator was used to identify five layers belonging to Tertiary reservoirs. A porosity-permeability cross plot was then constructed, with samples grouped by FZI value and each group denoting a reservoir rock type. To extend rock-type identification to uncored wells, a cluster analysis approach using well log data was applied. Reservoir zonation was achieved with this cluster analysis, each group forming a cluster distinct from the others. Five clusters were generated in this study, and permeability estimated from these groups in uncored wells using well log data gives good results compared with various empirical methods.
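The abstract does not reproduce the FZI formulas; the following is a minimal sketch using the standard definitions commonly associated with the hydraulic flow unit method (RQI = 0.0314·√(k/φ), normalized porosity φz = φ/(1−φ), FZI = RQI/φz), with permeability in millidarcies and porosity as a fraction:

```python
import numpy as np

def flow_zone_indicator(phi, k_md):
    """Flow zone indicator from porosity (fraction) and permeability (mD).

    Standard HFU definitions (assumed, not given in the abstract):
      RQI   = 0.0314 * sqrt(k/phi)   (reservoir quality index, microns)
      phi_z = phi / (1 - phi)        (normalized porosity)
      FZI   = RQI / phi_z
    """
    phi = np.asarray(phi, dtype=float)
    k_md = np.asarray(k_md, dtype=float)
    rqi = 0.0314 * np.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

# Example: a core plug with 20% porosity and 100 mD permeability
fzi = flow_zone_indicator(0.2, 100.0)
```

Samples with similar FZI values would then be grouped into the rock types used for the porosity-permeability cross plots described above.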
Income tax imposed on foreign oil companies is considered one of the important sources of financing for the general budget in most countries of the world, and it is also used to achieve political, economic, and social goals. The concept of the tax has developed until it came to play an important role in influencing the economic conditions of a country. The aim of this research is to assess the contribution of the income tax imposed on foreign oil companies operating in Iraq to the financing of the state budget, as well as to clarify which types of contracts with these companies are in Iraq's favor, in light of Income Tax Law No. (19) of 2010 and Instructions No. (5) of 2011, which regulate the taxation process
Most companies use social media data for business. Sentiment analysis automatically gathers, analyzes, and summarizes this type of data. Managing unstructured social media data is difficult, and noisy data is a challenge to sentiment analysis. Since over 50% of the sentiment analysis process is data pre-processing, processing big social media data is challenging too. If pre-processing is carried out correctly, data accuracy may improve. The sentiment analysis workflow is also highly dependent on it. Because no pre-processing technique works well in all situations or with all data sources, choosing the most important ones is crucial, and prioritization is an excellent technique for doing so. As one of many Multi-Criteria Decision Mak
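The abstract names no specific pre-processing steps, so the following is only an illustrative sketch of the kind of noise removal typically applied to social media text (lowercasing, stripping URLs, mentions, hashtags, and punctuation); the steps chosen here are assumptions, not the paper's pipeline:

```python
import re

def preprocess(text):
    """Minimal social-media text cleaning (illustrative steps only)."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # strip URLs
    text = re.sub(r"[@#]\w+", " ", text)       # strip mentions and hashtags
    text = re.sub(r"[^a-z\s]", " ", text)      # drop punctuation and digits
    return " ".join(text.split())              # normalize whitespace

cleaned = preprocess("Check http://x.co @bob #fun!!! GREAT product")
# → "check great product"
```

In a prioritized workflow of the kind the abstract describes, each such step would be scored against criteria (e.g., accuracy impact, cost) before being included.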
The research examined the spatial autoregressive model (SAR) and the spatial error model (SEM) in an attempt to provide practical evidence of the importance of spatial analysis, with a particular focus on the value of spatial regression models that account for spatial dependence, whose presence can be tested using Moran's test. Ignoring this dependence may lead to the loss of important information about the phenomenon under study, which is ultimately reflected in the power of the statistical estimation, as these models are the link between the usual regression models and time-series models. The spatial analysis was applied to the Iraq Household Socio-Economic Survey: IHS
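The Moran test mentioned above is based on Moran's I statistic, I = (n/S₀)·Σᵢⱼ wᵢⱼ(xᵢ−x̄)(xⱼ−x̄) / Σᵢ(xᵢ−x̄)², where W = (wᵢⱼ) is the spatial weight matrix and S₀ its sum. A minimal sketch of the statistic itself (the abstract does not specify the weight scheme, so the binary adjacency below is an assumption):

```python
import numpy as np

def morans_i(x, W):
    """Moran's I for spatial autocorrelation.

    x: 1-D array of observations.
    W: (n, n) spatial weight matrix with zero diagonal.
    """
    x = np.asarray(x, dtype=float)
    W = np.asarray(W, dtype=float)
    d = x - x.mean()                      # deviations from the mean
    return len(x) / W.sum() * (d @ W @ d) / (d @ d)

# Example: four regions in a chain, binary rook adjacency (assumed weights)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
I = morans_i([1, 2, 3, 4], W)  # positive: neighbors have similar values
```

Values of I significantly above its expectation −1/(n−1) indicate positive spatial dependence, the case in which SAR or SEM models are preferred over ordinary regression.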
In recent years, Global Navigation Satellite System (GNSS) technology has frequently been employed for monitoring Earth crust deformation and movement. Such applications necessitate high positional accuracy, which can be achieved by processing GPS/GNSS data with scientific software such as BERNESE, GAMIT, and GIPSY-OASIS. Nevertheless, these scientific packages are sophisticated and have not been released as free open-source software. Therefore, this study was conducted to evaluate an alternative solution, GNSS online processing services, which offer this capability freely. In this study, eight years of GNSS raw data for the TEHN station, located in Iran, were downloaded from the UNAVCO website
Background: To evaluate the effects of three different intracoronal bleaching agents on the shear bond strength (SBS) and failure site of stainless steel and monocrystalline (sapphire) orthodontic brackets bonded to endodontically treated teeth using light-cured orthodontic adhesive in vitro. Materials and methods: Eighty extracted sound human upper first premolars were selected, endodontically treated, and randomly divided equally (according to the type of bracket used) into two main groups (n = 40 per group). Each main group was subdivided (according to the bleaching agent used) into four subgroups of 10 teeth each, as follows: a control (unbleached) group, a 35% hydrogen peroxide (HP) group, a 37% carbamide peroxide (CP) group, and s
In this paper, we introduce three robust fuzzy estimators of a location parameter based on Buckley's approach in the presence of outliers. These estimators were compared using the variance of fuzzy numbers as a criterion, and all of them outperformed Buckley's estimate. Among them, the fuzzy median was best for small and medium sample sizes, while for large sample sizes the fuzzy trimmed mean was best.
The Gumbel distribution has been treated with great care by researchers and statisticians. There are traditional methods to estimate the two parameters of the Gumbel distribution, known as maximum likelihood, the method of moments, and, more recently, the re-sampling method called the jackknife. However, these methods suffer from some mathematical difficulties when solved analytically. Accordingly, there are other non-traditional methods, such as the principle of nearest neighbors used in computer science, and especially artificial intelligence algorithms, including the genetic algorithm, the artificial neural network algorithm, and others that may be classified as meta-heuristic methods. Moreover, this principle of nearest neighbors has useful statistical featu
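Of the traditional methods named above, the method of moments has a simple closed form for the Gumbel (maximum) distribution: scale β̂ = s·√6/π and location μ̂ = x̄ − γβ̂, with γ the Euler-Mascheroni constant. A minimal sketch (the specific data and the paper's own estimators are not shown here):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moments(x):
    """Method-of-moments estimates (mu, beta) for the Gumbel (max) distribution.

    Uses E[X] = mu + gamma*beta and Var[X] = (pi*beta)^2 / 6.
    """
    x = np.asarray(x, dtype=float)
    beta = np.std(x, ddof=1) * np.sqrt(6.0) / np.pi
    mu = np.mean(x) - EULER_GAMMA * beta
    return mu, beta

# Sanity check on simulated data with known parameters
rng = np.random.default_rng(0)
sample = rng.gumbel(loc=2.0, scale=3.0, size=200_000)
mu_hat, beta_hat = gumbel_moments(sample)
```

The maximum likelihood equations, by contrast, have no closed-form solution and must be solved iteratively, which is the kind of analytical difficulty the abstract refers to.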
... Show MoreA new approach for baud time (or baud rate) estimation of a random binary signal is presented. This approach utilizes the spectrum of the signal after nonlinear processing in a way that the estimation error can be reduced by simply increasing the number of the processed samples instead of increasing the sampling rate. The spectrum of the new signal is shown to give an accurate estimate about the baud time when there is no apriory information or any restricting preassumptions. The performance of the estimator for random binary square waves perturbed by white Gaussian noise and ISI is evaluated and compared with that of the conventional estimator of the zero crossing detector.
Background: Periodontal disease is a chronic bacterial infection that affects the gingiva and the bone supporting the teeth. Smoking, an important risk factor for periodontitis, induces oxidative stress in the body and causes an imbalance between reactive oxygen species (ROS) and antioxidants such as superoxide dismutase (SOD). This study aimed to evaluate the influence of smoking on periodontal health status by estimating salivary SOD levels in non-smokers (controls) and in light and heavy smokers, and to test the correlation between SOD enzyme levels and the clinical periodontal parameters in each group. Materials and Methods: The study sample consisted of 75 males aged 35 to 50 years. Clinically, the perio
In this research, we present a nonparametric approach to estimating a copula density using different kernel density methods. Several copula functions were used: Gaussian, Gumbel, Clayton, and Frank. Through various simulation experiments, we generated the standard bivariate normal distribution at sample sizes of 50, 100, 250, and 500, under both high and low dependency. Different kernel methods were used to estimate the probability density function of the copula with the marginals of this bivariate distribution: the mirror-reflection (MR), beta kernel (BK), and transformation kernel (KD) methods. A comparison was then carried out between the three methods across all experiments using the integrated mean squared error. Furthermore, some
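Of the three methods named, mirror reflection is the simplest to sketch: the sample on [0,1]² is reflected across each edge of the unit square (nine copies in total) before applying an ordinary Gaussian product kernel, which removes the boundary bias of the plain estimator. A minimal version (the bandwidth h and the Gaussian kernel are assumptions, not the paper's exact choices):

```python
import numpy as np

def copula_kde_mr(u, v, h=0.1):
    """Mirror-reflection Gaussian KDE for a copula density on [0,1]^2.

    u, v: pseudo-observations in [0,1]. Returns c_hat(a, b) evaluating
    the estimate at a point of the unit square.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    n = len(u)
    # Nine reflected copies: {u, -u, 2-u} x {v, -v, 2-v}
    pts = np.vstack([np.column_stack((ru, rv))
                     for ru in (u, -u, 2 - u)
                     for rv in (v, -v, 2 - v)])

    def c_hat(a, b):
        d2 = (a - pts[:, 0]) ** 2 + (b - pts[:, 1]) ** 2
        # Normalize by n (not 9n): reflections fold mass back into [0,1]^2
        return np.exp(-d2 / (2 * h * h)).sum() / (n * 2 * np.pi * h * h)

    return c_hat

# Sanity check: for independent uniforms the copula density is identically 1
rng = np.random.default_rng(1)
c = copula_kde_mr(rng.uniform(size=5000), rng.uniform(size=5000))
```

The integrated mean squared error used for the comparison would then be approximated by averaging (ĉ − c)² over a grid on the unit square.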