Accurate predictive tools for VLE calculation are always needed. A new method for VLE calculation is introduced that is very simple to apply and gives very good results compared with previously used methods. It requires no physical properties; each binary system needs only two constants. The method can be applied to calculate VLE data for any binary system, whatever its polarity or group family, provided the binary system does not form an azeotrope. The method is also extended to cover a range of temperatures; this extension requires nothing beyond applying the proposed form with the same two constants per system. The method was applied to 56 binary mixtures comprising 1120 equilibrium data points with very good accuracy, and its temperature extension was applied to 13 binary systems at different temperatures, also with very good accuracy.
A sensitive spectrofluorimetric method for the determination of glibenclamide in its tablet formulations has been proposed. The method is based on dissolving glibenclamide in absolute ethanol and measuring the native fluorescence at 354 nm after excitation at 302 nm. Beer's law is obeyed over the concentration range 1.4-10 µg/mL of glibenclamide, with a limit of detection (LOD) of 0.067 µg/mL and a standard deviation of 0.614. Percent recoveries (n = 3) ranged from 94 to 103%.
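As a rough illustration of how such a calibration line and detection limit can be computed, the sketch below fits a least-squares line to fluorescence readings and applies the common LOD = 3.3·σ/slope criterion; the concentrations, intensities, and blank standard deviation are hypothetical placeholders, not data from this work.

```python
import numpy as np

# Hypothetical calibration data (concentration in ug/mL vs. fluorescence intensity);
# the values are placeholders, not measurements from the paper.
conc        = np.array([1.4, 2.0, 4.0, 6.0, 8.0, 10.0])
intensity   = np.array([41., 59., 118., 176., 239., 298.])
sigma_blank = 0.6            # standard deviation of blank readings (assumed)

# Least-squares calibration line: intensity = slope*conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

# Correlation coefficient and detection/quantification limits (common 3.3s and 10s criteria)
r   = np.corrcoef(conc, intensity)[0, 1]
lod = 3.3 * sigma_blank / slope
loq = 10.0 * sigma_blank / slope

print(f"slope = {slope:.2f}, r = {r:.4f}, LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
```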
The CdS quantum dots were prepared by the chemical reaction of a cadmium-oleylamine (Cd-oleylamine) complex with sulfur-oleylamine (S-oleylamine) at a 1:6 mole ratio. The optical properties, structure, and spectroscopy of the product quantum dots were studied. The results show the dependence of the optical properties on the crystal dimension and the formation of trap states within the energy band gap.
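The size dependence of the optical properties noted above is often quantified with the Brus effective-mass approximation for a spherical quantum dot. The sketch below evaluates it using commonly quoted bulk CdS parameters (bulk gap ≈ 2.42 eV, m_e ≈ 0.19 m0, m_h ≈ 0.80 m0, ε_r ≈ 5.7); these are literature values assumed for illustration, not results of this work.

```python
import numpy as np

# Brus effective-mass estimate of the size-dependent band gap of a spherical dot:
# E(R) = Eg_bulk + h^2/(8 R^2) * (1/me + 1/mh) - 1.8 e^2 / (4*pi*eps0*eps_r*R)
h    = 6.626e-34      # Planck constant, J.s
e    = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12      # vacuum permittivity, F/m
m0   = 9.109e-31      # electron rest mass, kg

# Commonly quoted bulk CdS parameters (literature values, assumed here)
Eg_bulk = 2.42                    # eV
me, mh  = 0.19 * m0, 0.80 * m0
eps_r   = 5.7

def brus_gap_eV(radius_nm):
    R = radius_nm * 1e-9
    confinement = h**2 / (8 * R**2) * (1/me + 1/mh) / e        # quantum confinement term, eV
    coulomb     = 1.8 * e / (4 * np.pi * eps0 * eps_r * R)     # electron-hole Coulomb term, eV
    return Eg_bulk + confinement - coulomb

for r_nm in (1.5, 2.0, 3.0, 5.0):
    print(f"R = {r_nm} nm -> Eg ~ {brus_gap_eV(r_nm):.2f} eV")
```

The blue shift of the gap with shrinking radius is what ties the measured optical properties to the crystal dimension.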
The aim of this paper is to investigate numerically the simulation of ice melting in one and two dimensions using the cell-centered finite volume method. The mathematical model is based on the heat conduction equation associated with a fixed-grid, latent-heat source approach. A fully implicit time scheme is selected for the time discretization. The ice conductivity is taken as the approximated conductivity at the interface between adjacent ice and water control volumes. The predicted temperature distribution, percentage melt fraction, interface location, and interface velocity are compared with those obtained from the exact analytical solution. A good agreement is obtained when comparing the numerical results of one …
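A minimal one-dimensional sketch of the fixed-grid latent-heat (enthalpy) approach on a cell-centred finite-volume grid is shown below. For brevity it marches explicitly in time rather than with the fully implicit scheme used in the paper, and the property values, grid, and boundary conditions are illustrative placeholders.

```python
import numpy as np

# 1-D fixed-grid latent-heat (enthalpy) sketch of ice melting on a cell-centred FV grid.
L_slab  = 0.1                      # slab length, m
N       = 50                       # number of control volumes
dx      = L_slab / N
rho, cp = 1000.0, 4200.0           # density (kg/m3) and specific heat (J/kg.K), placeholders
k_ice, k_water = 2.2, 0.6          # thermal conductivities, W/m.K
Lf      = 3.34e5                   # latent heat of fusion, J/kg
T_melt, T_wall, T_init = 0.0, 10.0, -5.0   # melting point, heated wall, initial ice temp (deg C)

T  = np.full(N, T_init)            # cell-centred temperatures
f  = np.zeros(N)                   # liquid (melt) fraction per control volume
dt = 0.05                          # s, within the explicit stability limit

for step in range(20_000):
    k = f * k_water + (1.0 - f) * k_ice              # conductivity of each control volume
    # conductivity at the face between adjacent ice and water volumes: harmonic mean
    k_face = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])
    q = np.zeros(N + 1)                              # heat flux at each face, W/m2
    q[1:-1] = -k_face * (T[1:] - T[:-1]) / dx
    q[0]    = -k[0] * (T[0] - T_wall) / (0.5 * dx)   # heated left wall (Dirichlet)
    q[-1]   = 0.0                                    # insulated right face
    dH = -(q[1:] - q[:-1]) / dx * dt / rho           # enthalpy change per unit mass
    T_star = T + dH / cp                             # provisional all-sensible update
    # excess sensible heat above T_melt drives melting until the volume is fully liquid
    df = np.clip(cp * (T_star - T_melt) / Lf, 0.0, 1.0 - f)
    f += df
    T  = T_star - df * Lf / cp

print(f"percentage melt fraction = {100.0 * f.mean():.1f} %")
```

The harmonic mean in `k_face` is one common way to approximate the conductivity at a face shared by an ice and a water control volume, in the spirit of the interface conductivity described above.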
The submerged arc welding (SAW) process is an essential metal joining process in industry. Weld quality is a very important concern for the manufacturing and construction industries, and the challenge is to establish the optimal process environment. A design of experiments using the Taguchi method (L9 orthogonal array, OA) considered three SAW parameters (welding current, arc voltage, and welding speed), each at three levels (300, 350, 400 A; 32, 36, 40 V; and 26, 28, 30 cm/min). The study examined the effect of the SAW process parameters on the mechanical properties of steel complying with ASTM A516 grade 70. The signal-to-noise (S/N) ratio was computed to identify the optimal process parameters. Percentage contributions of each parameter are validated by using an …
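To make the layout concrete, the snippet below builds the standard L9 orthogonal array, maps its columns to the stated current, voltage, and speed levels, and computes a larger-the-better S/N ratio; the response values are hypothetical placeholders used only to show the calculation, not measurements from the study.

```python
import numpy as np

# Standard L9 orthogonal array (first three columns), level indices 0-2
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Factor levels from the study: welding current (A), arc voltage (V), welding speed (cm/min)
current = [300, 350, 400]
voltage = [32, 36, 40]
speed   = [26, 28, 30]

print("run  I(A)  U(V)  v(cm/min)")
for run, (a, b, c) in enumerate(L9, start=1):
    print(f"{run:3d}  {current[a]:4d}  {voltage[b]:4d}  {speed[c]:9d}")

# Hypothetical mechanical-property responses (e.g. tensile strength, MPa) -- placeholders only
y = np.array([480., 495., 500., 510., 505., 492., 515., 508., 498.])

# Larger-the-better S/N per run: S/N = -10*log10(mean(1/y^2));
# with a single replicate per run this reduces to -10*log10(1/y^2).
sn = -10 * np.log10(1.0 / y**2)
print("S/N (dB):", np.round(sn, 2))
```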
This paper focuses on the optimization of drilling parameters by utilizing the Taguchi method to obtain the minimum surface roughness. Nine drilling experiments were performed on Al 5050 alloy using high-speed steel twist drills. Three drilling parameters (feed rate, cutting speed, and cutting tool) were used as control factors, and an L9 (3³) orthogonal array was specified for the experimental trials. The signal-to-noise (S/N) ratio and analysis of variance (ANOVA) were utilized to determine the optimum control factors that minimized the surface roughness. The results were tested with the aid of the statistical software package MINITAB-17. After the experimental trials, the tool diameter was found to be the most important factor …
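A brief sketch of the smaller-the-better S/N ratio and the factor-level means used to rank the control factors is given below; the roughness values are hypothetical stand-ins for the actual L9 results.

```python
import numpy as np

# Hypothetical surface-roughness results (Ra, um) for the nine L9 runs -- placeholders
Ra = np.array([1.8, 2.1, 2.6, 1.6, 2.3, 2.0, 1.5, 1.9, 2.4])

# Level index (0-2) of each control factor in each run, following the standard L9 layout
levels = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                   [1, 0, 1], [1, 1, 2], [1, 2, 0],
                   [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Smaller-the-better S/N ratio: S/N = -10*log10(mean(y^2)); one replicate per run
sn = -10 * np.log10(Ra**2)

# Mean S/N per level of each factor (Taguchi main-effects table);
# the factor with the largest level-to-level range contributes most.
factors = ["feed rate", "cutting speed", "cutting tool"]
for j, name in enumerate(factors):
    means = [sn[levels[:, j] == lv].mean() for lv in range(3)]
    print(f"{name:14s} level means: {np.round(means, 2)}  range: {max(means) - min(means):.2f}")
```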
A UV-Vis spectrophotometric method was developed for the determination of metoclopramide hydrochloride in pure form and in several pharmaceutical preparations, such as Permosan tablets, Meclodin syrup, and Plasil ampoules. The method is based on the diazotization reaction of metoclopramide hydrochloride with sodium nitrite and hydrochloric acid to yield the diazonium salt, which is then coupled with 3,5-dimethylphenol in the presence of sodium hydroxide to form a yellow azo dye. Calibration curves were linear in the range 0.3-6.5 µg/mL, with a correlation coefficient of 0.9993. The limits of detection and quantification were 0.18 and 0.61 µg/mL, respectively. Accuracy and precision were also determined …
Background: Appreciation of the crucial role of risk factors in the development of coronary artery disease (CAD) is one of the most significant advances in the understanding of this important disease. Extensive epidemiological research has established cigarette smoking, diabetes, hyperlipidemia, and hypertension as independent risk factors for CAD. Objective: To determine the prevalence of the four conventional risk factors (cigarette smoking, diabetes, hyperlipidemia, and hypertension) among patients with CAD and to determine the correlation of the Thrombolysis in Myocardial Infarction (TIMI) risk score with the extent of coronary artery disease (CAD) in patients with unstable angina/non-ST-elevation myocardial infarction (UA/NSTEMI). Methods: We …
The objective of the study is to determine which of the two has the better predictive ability, the logistic regression model or the linear discriminant function, using the original data first and then after reducing the dimensionality of the variables. The data come from the socio-economic family survey of Baghdad province in 2012 and comprise a sample of 615 observations with 13 variables, 12 of which are explanatory variables; the dependent variable is the number of workers and the unemployed. The two methods were compared, and the comparison showed that the logistic regression model performs better than the linear discriminant function …
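A minimal sketch of this kind of comparison, here using scikit-learn on synthetic data rather than the 2012 Baghdad survey, fits both classifiers and reports their predictive accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the survey: 615 observations, 12 explanatory variables,
# binary outcome (e.g. worker / unemployed). Placeholder data, not the real survey.
X, y = make_classification(n_samples=615, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("linear discriminant", LinearDiscriminantAnalysis())]:
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: classification accuracy = {acc:.3f}")
```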
Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effects of multiples and delineate the correct primary reflectors. Normal moveout is used to flatten the primaries, and the data are then transformed to the frequency-wavenumber (f-k) domain. The flattened primaries align along the zero axis of the f-k domain, while other reflection types (multiples and random noise) are distributed elsewhere. A dip filter applied to pass the aligned data and reject the others separates primaries from multiples after the data are transformed back from the f-k domain to the time-distance domain. For that reason, a suggested name for this technique is the normal moveout-frequency-wavenumber domain …
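A schematic sketch of the f-k dip-filter step described above: a 2-D FFT of the NMO-corrected gather, a pass zone around zero dip (flattened primaries), rejection of steeper dips, and an inverse transform back to the time-distance domain. The gather and the dip threshold are synthetic placeholders.

```python
import numpy as np

# Synthetic NMO-corrected gather: n_t time samples x n_x traces (placeholder data)
n_t, n_x = 512, 64
dt, dx = 0.004, 25.0                      # sample interval (s), trace spacing (m)
rng = np.random.default_rng(0)
gather = rng.normal(0.0, 0.1, (n_t, n_x))
gather[200, :] += 1.0                     # a flattened primary (zero dip across all traces)

# Forward 2-D FFT to the frequency-wavenumber (f-k) domain
fk = np.fft.fft2(gather)
f = np.fft.fftfreq(n_t, dt)[:, None]      # temporal frequency axis (Hz)
k = np.fft.fftfreq(n_x, dx)[None, :]      # spatial wavenumber axis (1/m)

# Dip filter: keep energy whose apparent dip |k/f| is small (near-flat primaries),
# reject steeper dips associated with residual multiple moveout and noise.
max_dip = 2.0e-4                          # s/m, illustrative threshold
with np.errstate(divide="ignore", invalid="ignore"):
    dip = np.abs(k / f)
mask = (dip <= max_dip) | (f == 0)        # pass the near-zero-dip fan (and the DC row)
filtered = np.real(np.fft.ifft2(fk * mask))   # back to the time-distance domain
```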