A simple and highly sensitive cloud point extraction (CPE) procedure is proposed for the preconcentration of microgram amounts of isoxsuprine hydrochloride (ISX) in pure and pharmaceutical samples. After the diazotization-coupling of ISX with diazotized sulfadimidine in alkaline medium, the azo-dye product was quantitatively extracted into the Triton X-114-rich phase, dissolved in ethanol, and determined spectrophotometrically at 490 nm. The reaction was studied with and without extraction, and a simple comparison between the batch and CPE methods was made. Analytical variables, including the concentrations of reagent, Triton X-114, and base, as well as the incubation temperature and time, were carefully studied. Under the selected optimum conditions, the calibration curves were linear over 1-9 and 0.5-8 µg/mL, with detection limits of 0.26 and 0.09 µg/mL of ISX for the batch and CPE methods, respectively. Relative standard deviations (RSD%) better than 1.98% and 2.67%, with percentage recoveries of 100.14% and 99.63%, were obtained for the two methods, respectively. The proposed methods were successfully applied to the routine analysis of ISX in pharmaceutical formulations with high accuracy and reproducibility.
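As a small illustration of how figures such as the calibration slope and a 3σ-based detection limit are typically obtained, the following sketch uses hypothetical absorbance readings, not the paper's data:

```python
import numpy as np

# Hypothetical calibration data (µg/mL vs. absorbance at 490 nm); illustrative only.
conc = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 9.0])        # ISX, µg/mL
absorbance = np.array([0.11, 0.21, 0.32, 0.53, 0.74, 0.95])

# Least-squares calibration line: A = slope * C + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

# Detection limit from the 3-sigma criterion, using the standard deviation
# of replicate blank readings (illustrative values).
blank = np.array([0.010, 0.012, 0.009, 0.011, 0.010])
lod = 3 * blank.std(ddof=1) / slope

print(f"slope = {slope:.4f} mL/µg, intercept = {intercept:.4f}")
print(f"LOD ≈ {lod:.2f} µg/mL")
```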
One of the main techniques for performing phase behavior calculations of reservoir fluids is the equation of state. The Soave-Redlich-Kwong equation of state can be used to predict the phase behavior of petroleum fluids by treating them as multi-component systems of pure and pseudo-components. Because the Soave-Redlich-Kwong equation of state is popular in petroleum-engineering calculations, many researchers have used it to perform phase behavior analyses of reservoir fluids (Wang and Orr (2000), Ertekin and Obut (2003), Hasan (2004), and Haghtalab (2011)).
This paper presents a new flash model for reservoir fluids in gas-oil se
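A minimal sketch of the ingredients such a model builds on: the standard Soave-Redlich-Kwong pure-component parameters, Wilson-correlation K-values as an initial estimate, and a Rachford-Rice vapour-fraction solve. The fluid properties and conditions below are illustrative placeholders, and the sketch stops short of the fugacity-based K-value updates a full equation-of-state flash would iterate on:

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314  # J/(mol*K)

def srk_pure_params(Tc, Pc, omega, T):
    """Soave-Redlich-Kwong attraction and covolume terms for each component."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.42748 * R**2 * Tc**2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    return a, b

def wilson_k(Tc, Pc, omega, T, P):
    """Wilson correlation: a common first estimate of K-values in flash calculations."""
    return (Pc / P) * np.exp(5.373 * (1.0 + omega) * (1.0 - Tc / T))

def rachford_rice(z, K):
    """Solve the Rachford-Rice equation for the vapour fraction V."""
    f = lambda V: np.sum(z * (K - 1.0) / (1.0 + V * (K - 1.0)))
    return brentq(f, 1e-10, 1.0 - 1e-10)

# Illustrative three-pseudo-component feed (all properties are placeholders).
z     = np.array([0.5, 0.3, 0.2])
Tc    = np.array([190.6, 425.1, 617.7])        # K
Pc    = np.array([45.99e5, 37.96e5, 21.1e5])   # Pa
omega = np.array([0.012, 0.200, 0.490])

T, P = 350.0, 30e5
a, b = srk_pure_params(Tc, Pc, omega, T)   # component-level SRK terms (would feed mixing rules)
K = wilson_k(Tc, Pc, omega, T, P)          # initial K-value estimates
V = rachford_rice(z, K)
x = z / (1.0 + V * (K - 1.0))              # liquid-phase mole fractions
y = K * x                                  # vapour-phase mole fractions
print(f"vapour fraction ≈ {V:.3f}")
```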
The current research aims to identify the effect of teaching with multiple intelligences theory on the academic achievement of primary school students. The research sample consisted of primary school pupils and was divided into two groups: an experimental group taught according to multiple intelligences theory and a control group taught in the traditional way. The research instrument was an achievement test. The results showed statistically significant differences (at the 0.05 level) between the mean scores of students taught according to multiple intelligences theory and those taught in the traditional way in the p
A new microphotometer was constructed in our laboratory for the determination of molybdenum(VI) through its catalytic effect on the hydrogen peroxide-potassium iodide reaction in an acidic medium of 0.01 mM H2SO4. Linearity of 97.3% was obtained for the range 5-100 ppm. The repeatability of the results was better than 0.8%, and 0.5 ppm was obtained as the L.U. The method was applied to the determination of molybdenum(VI) in a medicinal sample (Centrum), and the results of the developed method compared well with those of the conventional method.
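As a brief illustration of how a repeatability figure of this kind is expressed, a sketch computing the percent relative standard deviation from hypothetical replicate readings:

```python
import numpy as np

# Hypothetical replicate readings for a single Mo(VI) standard (illustrative only).
replicates = np.array([48.9, 49.3, 49.1, 49.0, 49.2])   # ppm

rsd = 100 * replicates.std(ddof=1) / replicates.mean()
print(f"RSD = {rsd:.2f} %")   # repeatability expressed as percent RSD
```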
A group acceptance sampling plan for testing products was designed for the case where the lifetime of an item follows a log-logistic distribution. The minimum number of groups (k) required for a given group size and acceptance number is determined when various values of the consumer's risk and the test termination time are specified. All results concerning these sampling plans and the probability of acceptance are presented in tables.
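A minimal sketch of one common formulation of such a plan, in which a lot is accepted only if every one of the k groups of size r shows at most c failures by the termination time; the log-logistic parameters, group size, and risk level below are placeholders rather than the paper's values:

```python
from math import comb

def loglogistic_cdf(t, scale, shape):
    """F(t) for the log-logistic distribution with the given scale and shape."""
    return 1.0 / (1.0 + (t / scale) ** (-shape))

def prob_accept(p, r, c, k):
    """Lot acceptance probability when each of the k groups of size r
    must show at most c failures by the test termination time."""
    per_group = sum(comb(r, i) * p**i * (1 - p) ** (r - i) for i in range(c + 1))
    return per_group ** k

def min_groups(p, r, c, consumer_risk, k_max=1000):
    """Smallest k whose acceptance probability does not exceed the consumer's risk."""
    for k in range(1, k_max + 1):
        if prob_accept(p, r, c, k) <= consumer_risk:
            return k
    return None

# Illustrative numbers only (not the paper's design parameters).
shape, scale = 2.0, 1.0                       # log-logistic parameters (median life = scale)
t_term = 0.7                                  # test termination time, in units of the scale
p = loglogistic_cdf(t_term, scale, shape)     # probability an item fails before t_term
k = min_groups(p, r=5, c=1, consumer_risk=0.25)
print(f"failure probability p = {p:.3f}, minimum number of groups k = {k}")
```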
In this paper we propose a method for estimating missing values of the explanatory variables in a non-parametric multiple regression model and compare it with imputation by the arithmetic mean. The idea is to exploit the causal relationship between the variables to obtain an efficient estimate of the missing value. We rely on the kernel estimate given by the Nadaraya-Watson estimator, with the bandwidth chosen by least-squares cross-validation (LSCV), and we use a simulation study to compare the two methods.
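A minimal sketch of the Nadaraya-Watson estimator with an LSCV-selected bandwidth, applied here to predicting (imputing) a value of one variable from a related one; the data are simulated for illustration only:

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def lscv_score(h, x, y):
    """Leave-one-out least-squares cross-validation score for bandwidth h."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        errs.append(y[i] - nw_estimate(x[i], x[mask], y[mask], h))
    return np.mean(np.square(errs))

# Simulated data: y depends smoothly on x plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 80)
y = np.sin(x) + rng.normal(0, 0.2, 80)

# Pick the bandwidth that minimises the LSCV criterion over a simple grid.
grid = np.linspace(0.1, 2.0, 40)
h_opt = grid[np.argmin([lscv_score(h, x, y) for h in grid])]

# Impute a "missing" value at x = 4.2 using the selected bandwidth.
print(f"h* = {h_opt:.2f}, imputed value at x=4.2 ≈ {nw_estimate(4.2, x, y, h_opt):.3f}")
```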
In this work, an effective procedure combining a Box-Behnken design with an artificial neural network (ANN) and a genetic algorithm (GA) was used to find the optimum wt.% of the doping elements (Ce, Y, and Ge) in doped aluminizing-chromizing of Incoloy 800H. The ANN and the Box-Behnken design method were implemented to minimize the hot corrosion rate kp (10⁻¹² g² cm⁻⁴ s⁻¹) of Incoloy 800H at 900 °C. The ANN was used to estimate predicted values of the hot corrosion rate kp, and the optimal combination of doping-element wt.% giving the minimum hot corrosion rate was calculated using the genetic algorithm.
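A hedged sketch of this kind of workflow: a coded Box-Behnken design, placeholder corrosion responses, an ANN surrogate (scikit-learn's MLPRegressor), and a small real-coded genetic algorithm searching the surrogate for the lowest predicted kp. None of the numbers are the paper's data, and the 0-2 wt.% range is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Coded 3-factor Box-Behnken design (12 edge mid-points + 3 centre points).
coded = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
                 [[a, 0, b] for a in (-1, 1) for b in (-1, 1)] +
                 [[0, a, b] for a in (-1, 1) for b in (-1, 1)] +
                 [[0, 0, 0]] * 3, dtype=float)
X = 1.0 + coded                      # hypothetical (Ce, Y, Ge) wt.% levels in [0, 2]

# Placeholder hot-corrosion responses kp (10^-12 g^2 cm^-4 s^-1), not the paper's data.
kp = 2.0 + (X[:, 0] - 0.8) ** 2 + 0.5 * (X[:, 1] - 1.2) ** 2 + (X[:, 2] - 0.5) ** 2 \
     + rng.normal(0, 0.05, len(X))

# ANN surrogate of kp as a function of the doping levels.
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0).fit(X, kp)

# Minimal real-coded genetic algorithm: tournament selection, blend crossover, mutation.
pop = rng.uniform(0.0, 2.0, size=(40, 3))
for _ in range(60):
    f = ann.predict(pop)                                   # lower predicted kp is better
    idx = rng.integers(0, len(pop), size=(len(pop), 2))    # random tournament pairs
    parents = pop[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    mates = parents[rng.permutation(len(parents))]
    alpha = rng.uniform(0.0, 1.0, size=parents.shape)
    children = alpha * parents + (1.0 - alpha) * mates     # blend crossover
    children += rng.normal(0.0, 0.05, children.shape)      # Gaussian mutation
    pop = np.clip(children, 0.0, 2.0)                      # keep within the wt.% range

best = pop[np.argmin(ann.predict(pop))]
print("suggested (Ce, Y, Ge) wt.%:", np.round(best, 2),
      "| predicted kp:", round(float(ann.predict(best.reshape(1, -1))[0]), 3))
```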
The research aimed to build a structural equation model of the tourist attraction factors in the Asir Region. The research population is the people of the region, and a simple random sample of 332 individuals was selected. Factor analysis, as a reliable statistical method for this phenomenon, was used to model and test the structural model of tourism, and the data were analyzed using the SPSS and AMOS statistical packages. The study reached a number of results, the most important of which are: the tourist attraction model consists of five factors that explain 69.3% of the total variance, namely the provision of tourist services, social and historical factors, mountains, weather, and natural parks; and the differenc
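As an illustration of the exploratory step behind such a model, a sketch extracting five factors from simulated questionnaire responses and approximating the share of total variance they reproduce; the data, the number of items, and the use of scikit-learn's FactorAnalysis (rather than SPSS/AMOS) are all assumptions for the example:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Simulated questionnaire data: 332 respondents x 12 attraction items (illustrative only).
rng = np.random.default_rng(2)
latent = rng.normal(size=(332, 5))                 # five underlying factors
loadings = rng.uniform(0.4, 0.9, size=(5, 12))
items = latent @ loadings + rng.normal(0, 0.6, size=(332, 12))

Z = StandardScaler().fit_transform(items)
fa = FactorAnalysis(n_components=5, rotation="varimax").fit(Z)

# Approximate share of total variance reproduced by the five factors
# (sum of squared loadings over the number of standardized items).
explained = np.sum(fa.components_ ** 2) / Z.shape[1]
print(f"five factors reproduce ≈ {explained:.1%} of the total variance")
```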
The bi-level programming problem is to minimize or maximize an objective function while another objective function is nested within the constraints. This problem has received a great deal of attention in the programming community owing to the proliferation of applications and the use of evolutionary algorithms to address this kind of problem. Two methods for non-linear bi-level programming are used in this paper. The goal is to reach the optimal solution through simulation, using the Monte Carlo method with different small and large sample sizes. The research found that the branch-and-bound algorithm was preferable for solving the non-linear bi-level programming problem, because it gave better results.
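A minimal sketch of the Monte Carlo idea on a toy non-linear bi-level problem (the leader's and follower's objectives below are illustrative, not the paper's test problems): sample leader decisions, solve the follower's problem for each, and keep the best leader objective found:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy bi-level problem:
#   leader:   minimise F(x, y) = (x - 1)^2 + (y - 2)^2 over x in [-3, 3]
#   follower: y solves  min_y f(x, y) = (y - x^2)^2  for the leader's chosen x
def follower_response(x):
    return minimize_scalar(lambda y: (y - x**2) ** 2).x

def leader_objective(x):
    y = follower_response(x)
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

# Monte Carlo simulation: sample leader decisions, keep the best one found.
rng = np.random.default_rng(3)
samples = rng.uniform(-3.0, 3.0, size=2000)
values = np.array([leader_objective(x) for x in samples])
x_best = samples[np.argmin(values)]
print(f"x ≈ {x_best:.3f}, y ≈ {follower_response(x_best):.3f}, F ≈ {values.min():.4f}")
```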