Knowledge of permeability is critical for developing an effective reservoir description. Permeability data may be calculated from well tests, cores, and logs; of these, deriving permeability estimates from well log data is normally the lowest-cost method. This paper focuses on evaluating formation permeability in uncored intervals of the Asmari reservoir in the Abughirab field, Iraq, from core and well log data. The hydraulic flow unit (HFU) concept is closely tied to the flow zone indicator (FZI), which is a function of the reservoir quality index (RQI). Both measures are based on core porosity and permeability, and samples with similar FZI values are assumed to belong to the same HFU. A second method is also used to calculate permeability in uncored zones based on matrix density grouping, where each group has its own permeability-porosity correlation. After applying both methods and correlating the calculated permeability with core permeability data, matrix density grouping proved better than the FZI method for calculating permeability in uncored zones of this field. The estimated permeability was then distributed through the members of the Asmari reservoir, leading to the conclusion that permeability generally increases toward the southern culmination of the Abughirab field.
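The RQI/FZI relations behind the HFU concept can be sketched as follows; the core values below are hypothetical, and the constants are the standard Amaefule-style definitions rather than field-specific correlations from this study.

```python
import numpy as np

def flow_zone_indicator(k_md, phi):
    """Compute RQI, normalized porosity, and FZI from core data.

    k_md : permeability in millidarcies
    phi  : fractional porosity (0-1)
    """
    rqi = 0.0314 * np.sqrt(k_md / phi)   # reservoir quality index, microns
    phi_z = phi / (1.0 - phi)            # pore volume to matrix volume ratio
    fzi = rqi / phi_z                    # flow zone indicator
    return rqi, phi_z, fzi

# Hypothetical core plugs; samples with similar FZI fall in the same HFU
k = np.array([120.0, 15.0, 300.0])       # permeability, mD
phi = np.array([0.22, 0.18, 0.25])       # porosity, fraction
rqi, phi_z, fzi = flow_zone_indicator(k, phi)

# Inverting the definitions gives permeability from FZI and (log-derived) porosity
k_est = 1014.24 * fzi**2 * phi**3 / (1.0 - phi)**2
```

In an uncored interval, a representative FZI for each HFU would be combined with log-derived porosity in the last line to estimate permeability.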
The research involved developing a rapid, automated, and highly accurate CFIA/MZ technique for the estimation of phenylephrine hydrochloride (PHE) in pure form, dosage forms, and biological samples. The method is based on the oxidative coupling reaction of 2,4-dinitrophenylhydrazine (DNPH) with PHE in the presence of sodium periodate as an oxidizing agent in alkaline medium, forming a red-colored product with λmax at 520 nm. At a flow rate of 4.3 mL.min-1 with distilled water as the carrier, the FIA method proved to be a sensitive and economical analytical tool for the estimation of PHE.
Within the concentration range of 5-300 μg.mL-1 the calibration curve was rectilinear, with a detection limit of 3.252 μg.mL-1.
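A calibration curve of this kind can be sketched numerically; the absorbance values below are hypothetical stand-ins for the FIA peak responses, and the 3.3·σ/slope detection-limit formula is the common ICH convention, assumed here rather than taken from the paper.

```python
import numpy as np

# Hypothetical calibration data for PHE (concentration in ug/mL vs. response at 520 nm)
conc = np.array([5.0, 25.0, 50.0, 100.0, 200.0, 300.0])
absorbance = np.array([0.021, 0.101, 0.198, 0.402, 0.799, 1.205])

# Least-squares line: A = slope * C + intercept (rectilinear over the range)
slope, intercept = np.polyfit(conc, absorbance, 1)

# Detection limit via the ICH 3.3*sigma/slope convention (an assumption here);
# sigma is the standard deviation of the residuals about the fitted line
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)
lod = 3.3 * sigma / slope
```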
This paper presents a new algorithm in an important research field, semantic word similarity estimation. A new feature-based algorithm is proposed for measuring word semantic similarity for the Arabic language, a highly systematic language whose words exhibit elegant and rigorous logic. The semantic similarity score between two Arabic words is calculated as a function of their common and total taxonomical features. An Arabic knowledge source is employed to extract the taxonomical features as the set of all concepts that subsume the concepts containing the compared words. Previously developed Arabic word benchmark datasets are used for optimizing and evaluating the proposed algorithm.
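The common-versus-total feature score can be sketched as a set-overlap ratio; the exact function used in the paper and the real feature extraction from the Arabic knowledge source are not reproduced here, so the subsumer sets below are hypothetical English-labeled examples.

```python
def feature_similarity(features_a: set, features_b: set) -> float:
    """Jaccard-style ratio of common to total taxonomical features
    (an assumed form of the paper's common/total feature function)."""
    if not features_a and not features_b:
        return 0.0
    common = features_a & features_b
    total = features_a | features_b
    return len(common) / len(total)

# Hypothetical taxonomical features: all concepts subsuming each word's concept
feats_horse = {"entity", "organism", "animal", "mammal", "equine"}
feats_donkey = {"entity", "organism", "animal", "mammal", "equine", "donkey"}
feats_car = {"entity", "object", "artifact", "vehicle"}

sim_close = feature_similarity(feats_horse, feats_donkey)  # taxonomic neighbours
sim_far = feature_similarity(feats_horse, feats_car)       # distant concepts
```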
In this paper, an efficient new procedure is proposed to modify the third-order iterative method of Rostom and Fuad [Saeed, R. K. and Khthr, F. W. New third-order iterative method for solving nonlinear equations. J. Appl. Sci. 7 (2011): 916-921], using three steps based on the Newton equation, the finite difference method, and linear interpolation. A convergence analysis is given to show the efficiency and performance of the new method for solving nonlinear equations, and its efficiency is demonstrated by numerical examples.
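A generic three-step scheme of this kind can be sketched as below; this is an illustrative combination of a finite-difference Newton step and a secant (linear-interpolation) refinement, not the authors' exact method.

```python
def three_step_iteration(f, x0, tol=1e-12, max_iter=50, h=1e-6):
    """Illustrative three-step Newton-type scheme (assumed structure):
    1. Newton step with a central finite-difference derivative,
    2. a second Newton-like correction at the predicted point,
    3. a secant (linear-interpolation) refinement between the iterates."""
    x = x0
    for _ in range(max_iter):
        df = (f(x + h) - f(x - h)) / (2 * h)       # finite-difference derivative
        y = x - f(x) / df                          # step 1: Newton predictor
        dfy = (f(y + h) - f(y - h)) / (2 * h)
        z = y - f(y) / dfy                         # step 2: Newton correction
        if f(z) != f(y):                           # step 3: secant refinement
            z = z - f(z) * (z - y) / (f(z) - f(y))
        if abs(f(z)) < tol:
            return z
        x = z
    return x

# Wallis's classic test equation x^3 - 2x - 5 = 0, root near 2.0946
root = three_step_iteration(lambda x: x**3 - 2 * x - 5, 2.0)
```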
Most medical datasets suffer from missing data, due to the expense of some tests or human error while recording them. This issue degrades the performance of machine learning models because the values of some features will be missing, so specific methods are needed to impute these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian Diabetes Disease (PIDD) dataset; the proposed algorithm is called ISSA. Results were reported for the classification performance of three different classifiers: support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes.
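The imputation step can be illustrated with a simple column-mean baseline; the paper's ISSA instead searches for imputed values with the salp swarm algorithm (not reproduced here), and the matrix below is a hypothetical stand-in for PIDD records.

```python
import numpy as np

# Hypothetical feature matrix with missing entries (NaN), standing in for
# PIDD columns such as pregnancies, glucose, and BMI
X = np.array([[6.0,    148.0,  np.nan],
              [1.0,    np.nan, 26.6],
              [8.0,    183.0,  23.3],
              [np.nan, 89.0,   28.1]])

# Column-mean imputation baseline: each NaN is replaced by its column's mean;
# ISSA would instead optimize these values against classifier accuracy
col_means = np.nanmean(X, axis=0)
X_imputed = np.where(np.isnan(X), col_means, X)
```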
In data transmission, a change in a single bit of the received data may lead to misunderstanding or even disaster. Every bit of the sent information has high priority, especially information such as the receiver's address, so detecting every single-bit change is a key issue in the data transmission field.
The ordinary single-parity detection method can detect an odd number of bit errors efficiently but fails with an even number of errors. Other detection methods, such as two-dimensional parity and checksum, showed better results but still failed to cope with an increasing number of errors.
Two novel methods were suggested to detect binary bit-change errors when transmitting data over a noisy medium. Those methods were: the 2D-Checksum method and …
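The limitation described above can be demonstrated with plain parity checks; this is the textbook single-parity and two-dimensional parity scheme, not the paper's proposed 2D-Checksum variant.

```python
def parity_bit(bits):
    """Even-parity bit over a sequence of 0/1 values."""
    return sum(bits) % 2

def two_d_parity(block):
    """Row and column parity bits for a 2-D block of bits."""
    rows = [parity_bit(row) for row in block]
    cols = [parity_bit(col) for col in zip(*block)]
    return rows, cols

data = [[1, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 1, 0]]
rows, cols = two_d_parity(data)

# Flip two bits in the same row: an even number of errors per row
corrupted = [row[:] for row in data]
corrupted[0][0] ^= 1
corrupted[0][1] ^= 1
bad_rows, bad_cols = two_d_parity(corrupted)

row_detects = bad_rows != rows   # row parity misses the even-error pattern
col_detects = bad_cols != cols   # column parities catch it
```

This shows why single (row) parity misses even numbers of errors, while the second dimension of checks recovers detection capability.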
Background: Large amounts of oily wastewater and its derivatives are discharged annually from several industries into the environment. Objective: The present study aims to investigate the removal of oil content and turbidity from real oily wastewater discharged from the wet oil unit (West Qurna 1 Crude Oil Location, Basra, Iraq) using an innovative electrocoagulation reactor containing concentric aluminum tubes in monopolar mode. Methods: The influences of the operational variables, current density (1.77-7.07 mA/cm2) and electrolysis time (10-40 min), were studied using response surface methodology (RSM) and the Minitab-17 statistical program. The agitation speed was kept at 200 rpm. Energy and electrode consumption were also studied.
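The reported operating quantities can be related by back-of-the-envelope calculations; the electrode area, voltage, and treated volume below are hypothetical, since the study's reactor dimensions are not given here, and the aluminum dissolution estimate uses Faraday's law.

```python
# Assumed cell parameters (illustrative only)
current = 2.0            # A, applied current
electrode_area = 565.0   # cm^2, active tube surface (assumed)
voltage = 8.0            # V (assumed)
time_min = 30.0          # min, within the studied 10-40 min range
volume_L = 1.5           # L of treated wastewater (assumed)

# Current density in mA/cm^2, the variable studied at 1.77-7.07 mA/cm^2
current_density = 1000.0 * current / electrode_area

# Specific energy consumption in kWh per m^3 of treated water
energy_kwh_per_m3 = (voltage * current * time_min / 60.0) / (volume_L / 1000.0) / 1000.0

# Aluminum dissolved from the anode via Faraday's law (M = 26.98 g/mol, z = 3)
al_dissolved_g = current * (time_min * 60.0) * 26.98 / (3 * 96485.0)
```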
BN Rashid
Diabetes is one of the increasingly common chronic diseases, affecting millions of people around the world. Diabetes diagnosis, prediction, proper treatment, and management are essential, and machine-learning-based prediction techniques for diabetes data analysis can help in the early detection of the disease and its consequences, such as hypo/hyperglycemia. In this paper, we explored a diabetes dataset collected from the medical records of one thousand Iraqi patients. We applied three classifiers: the multilayer perceptron, K-nearest neighbour (KNN), and Random Forest. We conducted two experiments: the first used all 12 features of the dataset, where Random Forest outperformed the others with 98.8% accuracy; the second used only five attributes.
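A minimal version of one of the three classifiers can be sketched as below; the KNN implementation is generic, and the two-feature synthetic data is a hypothetical stand-in for the Iraqi patient records (the MLP and Random Forest would typically come from a library such as scikit-learn).

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal K-nearest-neighbour classifier with majority voting."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nearest = y_train[np.argsort(dists)[:k]]      # labels of k closest points
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

# Hypothetical two-feature data (e.g. HbA1c-like and glucose-like measurements)
rng = np.random.default_rng(0)
healthy = rng.normal([5.0, 90.0], [0.5, 8.0], size=(50, 2))
diabetic = rng.normal([8.0, 160.0], [0.7, 12.0], size=(50, 2))
X = np.vstack([healthy, diabetic])
y = np.array([0] * 50 + [1] * 50)

# Simple train/test split and accuracy, mirroring the paper's evaluation style
idx = rng.permutation(100)
train, test = idx[:70], idx[70:]
y_pred = knn_predict(X[train], y[train], X[test], k=3)
accuracy = (y_pred == y[test]).mean()
```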
The purpose of this research is to identify the effect of project-based learning on the development of intensive reading skills in middle school students. A one-group experimental design was chosen to suit the nature of the research and its objectives; the research group consisted of 35 students. For the purposes of the research, the following materials and tools were prepared: a list of intensive reading skills, an intensive reading skills test, a teacher's guide, and a student book. The results showed statistically significant differences at the 0.05 level in favor of post-test performance on intensive reading skills. The statistical analysis also showed that the project-based learning approach has a high …
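The pre/post significance test in a one-group design can be sketched as a paired t-test; the scores below are hypothetical (only the group size of 35 comes from the study), and the critical value 2.03 is the two-tailed 0.05 threshold for 34 degrees of freedom.

```python
import numpy as np

# Hypothetical pre-test and post-test intensive-reading scores (n = 35)
rng = np.random.default_rng(1)
pre = rng.normal(60.0, 8.0, size=35)
post = pre + rng.normal(7.0, 4.0, size=35)   # assumed average gain

# Paired t-statistic on the score differences
diff = post - pre
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))

# With df = 34, |t| > ~2.03 is significant at the 0.05 level (two-tailed)
significant = abs(t_stat) > 2.03
```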