A fault is an error that affects system behaviour. A software metric is a value that represents the degree to which software processes work properly and indicates where faults are more likely to occur. In this research, we study the effects of removing redundancy and applying a log transformation, based on threshold values, on the identification of fault-prone classes of software. The study also compares the metric values of the original dataset with those obtained after removing redundancy and applying the log transformation. E-learning and system datasets were taken as case studies. The fault ratio ranged from 1%-31% and 0%-10% for the original datasets, and from 1%-10% and 0%-4% after removing redundancy and log transformation, respectively. These results directly affected the number of detected classes, which ranged between 1-20 and 1-7 for the original datasets and between 1-7 and 0-3 after removing redundancy and log transformation. The skewness of the dataset decreased after applying the proposed model. The classes classified as faulty need more attention in subsequent versions, either to reduce the fault ratio or to refactor them so as to increase the quality and performance of the current version of the software.
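As a hedged illustration only (the metric column names, the quantile-based threshold rule, and the toy values below are assumptions, not details from the study), the described preprocessing of removing redundancy and log-transforming metric values before thresholding could look like this:

```python
# Hypothetical sketch of the preprocessing described above: dropping redundant
# (duplicate) rows and log-transforming metric values before thresholding.
# Column names and the quantile rule are illustrative assumptions.
import numpy as np
import pandas as pd

def preprocess(df, metric_cols):
    df = df.drop_duplicates().copy()              # remove redundant (duplicate) rows
    df[metric_cols] = np.log1p(df[metric_cols])   # log transformation: log(1 + x)
    return df

def flag_fault_prone(df, metric_cols, q=0.9):
    # Flag a class when any metric exceeds its q-th quantile threshold (toy rule).
    thresholds = df[metric_cols].quantile(q)
    return (df[metric_cols] > thresholds).any(axis=1)

# Toy usage:
data = pd.DataFrame({"loc": [120, 120, 3500, 40], "wmc": [10, 10, 95, 4]})
clean = preprocess(data, ["loc", "wmc"])
print(flag_fault_prone(clean, ["loc", "wmc"]))
```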
In this research, a nonparametric technique is presented to estimate the time-varying coefficient functions for balanced longitudinal data, which are characterized by observations obtained from (n) independent subjects, each measured repeatedly at a set of (m) specific time points. Although the measurements are independent across different subjects, they are usually correlated within each subject; the applied technique is the local linear kernel (LLPK) technique. To avoid the problems of dimensionality and heavy computation, the two-step method has been used to estimate the coefficient functions by means of the aforementioned technique. Since the two-
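As a hedged sketch only (the notation below is assumed rather than quoted from the paper), a time-varying coefficient model for such data and its local linear kernel (LLPK) estimator can be written as:

```latex
% Illustrative sketch: varying-coefficient model for subject i at time t_{ij},
% and the local linear kernel estimator of the coefficient functions.
\[
  Y_i(t_{ij}) = X_i(t_{ij})^{\top}\,\beta(t_{ij}) + \varepsilon_i(t_{ij}),
  \qquad i = 1,\dots,n,\; j = 1,\dots,m,
\]
\[
  (\hat a,\hat b) \;=\; \arg\min_{a,\,b}\;
  \sum_{i=1}^{n}\sum_{j=1}^{m}
  \bigl\{ Y_{ij} - X_{ij}^{\top}\bigl[a + b\,(t_{ij}-t)\bigr] \bigr\}^{2}\,
  K_h(t_{ij}-t),
  \qquad \hat\beta(t) = \hat a,
\]
where $K_h(u) = K(u/h)/h$ is a kernel with bandwidth $h$.
```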
In this research, a group of complexes derived from a Schiff base ligand, (1E,1'E)-1,1'-(1,2-phenylene)bis(N-(2,4-dichlorophenyl)methanimine) (L), together with ortho-phenanthroline (o-phen), were prepared. The prepared complexes are M(II) [Co(II), Ni(II), Cu(II), Zn(II), Cd(II), and Hg(II)]. A range of spectroscopic and analytical techniques has been used to characterize these materials, including FTIR, 1H-NMR, LC-mass spectrometry, UV-Visible spectroscopy, molar conductance, magnetic moment, atomic absorption, and chloride content measurements. The spectral results obtained show that (o-phen) and (L) behave as neutral ligands, coordinating to the central metal ion through the donor nitrogen atoms (N2) of both compounds. The geometry sha
A simple reverse-phase high performance liquid chromatographic method for the simultaneous analysis (separation and quantification) of furosemide (FURO), carbamazepine (CARB), diazepam (DIAZ) and carvedilol (CARV) has been developed and validated. The method was carried out on a NUCLEODUR® 100-5 C18ec column (250 x 4.6 mm i.d., 5 μm), with a mobile phase comprising acetonitrile: deionized water (50:50 v/v, pH adjusted to 3.6 ± 0.05 with acetic acid) at a flow rate of 1.5 mL.min-1, and quantification was achieved at 226 nm. The retention times of FURO, CARB, DIAZ and CARV were found to be 1.90 min, 2.79 min, 5.39 min and 9.56 min, respectively. The method was validated in terms of linearity, accuracy, precision, limit of detection and limit of quantification.
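For context only, the standard ICH-style relations commonly used when reporting detection and quantification limits are sketched below; whether these exact expressions were applied in this work is an assumption.

```latex
% Standard ICH Q2-style relations, given the standard deviation of the
% response (\sigma) and the slope of the calibration curve (S).
\[
  \mathrm{LOD} \;=\; \frac{3.3\,\sigma}{S},
  \qquad
  \mathrm{LOQ} \;=\; \frac{10\,\sigma}{S}.
\]
```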
Generally, statistical methods are used in various fields of science, especially in research, where statistical analysis is carried out using several techniques according to the nature of the study and its objectives. One of these techniques is the building of statistical models, which is done through regression models. This technique is considered one of the most important statistical methods for studying the relationship between a dependent variable, also called the response variable, and other variables, called covariates. This research describes the estimation of the partial linear regression model, as well as the estimation of values that are "missing at random" (MAR). Regarding the
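As a hedged illustration (the notation is assumed, not taken from the paper), a partial linear regression model with responses missing at random can be written as:

```latex
% Illustrative sketch of a partial linear model with MAR responses;
% \delta_i = 1 when Y_i is observed and 0 otherwise.
\[
  Y_i \;=\; X_i^{\top}\beta \;+\; g(T_i) \;+\; \varepsilon_i ,
  \qquad i = 1,\dots,n,
\]
\[
  P\bigl(\delta_i = 1 \mid Y_i, X_i, T_i\bigr)
  \;=\; P\bigl(\delta_i = 1 \mid X_i, T_i\bigr),
\]
so the missingness may depend on the observed covariates $(X_i, T_i)$ but not on the unobserved response $Y_i$.
```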
The physical and elastic characteristics of rocks determine rock strength in general. Rock strength is frequently assessed using porosity well logs such as neutron and sonic logs. The essential criteria for estimating rock mechanics parameters in petroleum engineering research are the uniaxial compressive strength and the elastic modulus. Indirect estimation using well-log data is necessary to measure these variables. This study attempts to create a single regression model that can accurately forecast rock mechanics characteristics for the Harth Carbonate Formation in the Fauqi oil field. According to the findings of this study, petrophysical parameters are reliable indexes for determining rock mechanical properties, having good performance p
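Purely as a hedged sketch (the chosen predictors, column names, and toy numbers are assumptions, not the Fauqi field data), a single least-squares regression predicting a rock mechanics property from petrophysical log readings might be fitted as follows:

```python
# Hedged sketch: multiple linear regression of a rock mechanic property (UCS)
# on two petrophysical log readings. All numbers below are toy values.
import numpy as np

dt   = np.array([60.0, 70.0, 80.0, 90.0])    # sonic transit time, us/ft (toy data)
nphi = np.array([0.10, 0.15, 0.22, 0.30])    # neutron porosity, fraction (toy data)
ucs  = np.array([85.0, 62.0, 44.0, 30.0])    # measured UCS, MPa (toy data)

# Fit UCS = b0 + b1*DT + b2*NPHI by least squares.
X = np.column_stack([np.ones_like(dt), dt, nphi])
coef, *_ = np.linalg.lstsq(X, ucs, rcond=None)
print("coefficients:", coef)
print("predicted UCS:", X @ coef)
```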
In this study, the entrance surface dose (ESD) received by pediatric patients undergoing chest, abdomen and skull X-ray examinations was estimated. The study was conducted in two hospitals in Najaf city, where three radiographic systems were considered. The study participants were classified into four age groups: 0-1, 1-5, 5-10 and 10-15 years. Calculations were performed using the exposure factors kVp, mAs and focal-skin distance, together with the patient's age. The ESD was calculated for the involved patients who underwent antero-posterior (AP) chest, abdomen and skull X-ray examinations. The resulting data were analyzed and compared with international dose reference levels. For all studied radiographic examinations and all X-ray mac
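As a hedged sketch only, one commonly used indirect formula that combines these exposure factors is shown below; the exact expression, tube output, and backscatter factor used in the study are not quoted here, and the values are illustrative assumptions.

```python
# Hedged sketch of an indirect ESD estimate from exposure factors.
# OP is the measured tube output (mGy/mAs) at 80 kVp and 100 cm; the
# backscatter factor and all example numbers are assumptions.
def entrance_surface_dose(op_mGy_per_mAs_80kvp, kvp, mas, fsd_cm, bsf=1.35):
    """ESD (mGy) = OP * (kVp/80)^2 * mAs * (100/FSD)^2 * BSF."""
    return (op_mGy_per_mAs_80kvp * (kvp / 80.0) ** 2
            * mas * (100.0 / fsd_cm) ** 2 * bsf)

# Example with illustrative numbers only:
print(entrance_surface_dose(0.05, kvp=60, mas=4, fsd_cm=90))
```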
A reliability system based on the multi-component stress-strength model R(s,k) is considered in the present paper, where the stress and strength are independent and non-identically distributed, following the exponentiated family of distributions (FED) with unknown shape parameter α and known scale parameter λ equal to two and parameter θ equal to three. Different estimation methods for R(s,k) were introduced, corresponding to maximum likelihood and shrinkage estimators. Comparisons among the suggested estimators were made using simulation, based on the mean squared error (MSE) criterion.
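For reference, the standard form of the multi-component stress-strength reliability (with the notation assumed rather than quoted from the paper) is:

```latex
% A system with k strength components X_1,...,X_k ~ F subjected to a common
% stress Y ~ G survives when at least s of the strengths exceed the stress.
\[
  R_{s,k}
  \;=\; P\bigl(\text{at least } s \text{ of } X_1,\dots,X_k \text{ exceed } Y\bigr)
  \;=\; \sum_{i=s}^{k} \binom{k}{i}
        \int_{-\infty}^{\infty} \bigl[1 - F(y)\bigr]^{i}\,\bigl[F(y)\bigr]^{k-i}\, dG(y).
\]
```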
Background: Type 2 diabetes mellitus (T2DM) is considered a global disease as it affects over 150 million people worldwide, a number that is supposed to be doubled by 2025. High glucose levels, in vitro, appear to raise the extent of LDL oxidation, and glycated LDL is more prone to oxidative modification.Objective: To investigate the relationship between serum level of vitamin E and lipid profile in patients with type II DM.Methods: This study involved 28 patients suffering from type II DM diagnosed 1-4 years ago and with age ranged from 17 -60 years old, with different residence around Basra ; In addition to 56 apparently healthy persons matched in age and sex to the patients as a control group. The medical histories were taken and Gene
This study discusses a biased estimator of the negative binomial regression model known as the Liu estimator. This estimator is used to reduce the variance and overcome the problem of multicollinearity between the explanatory variables. Other estimators were also considered, such as the ridge regression and maximum likelihood estimators. This research aims at theoretical comparisons between the new estimator (the Liu estimator) and the estimators
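As a hedged sketch (the standard form from the ridge/Liu literature; the paper's exact notation is not quoted), a Liu-type estimator in a generalized linear model such as negative binomial regression can be written as:

```latex
% \hat\beta_{ML} is the maximum likelihood estimator, \hat W the estimated
% weight matrix from the iteratively reweighted fit, and 0 < d < 1 the Liu
% biasing parameter.
\[
  \hat\beta_{\mathrm{Liu}}
  \;=\; \bigl(X^{\top}\hat W X + I\bigr)^{-1}
        \bigl(X^{\top}\hat W X + d\,I\bigr)\,\hat\beta_{\mathrm{ML}},
  \qquad 0 < d < 1,
\]
which recovers $\hat\beta_{\mathrm{ML}}$ as $d \to 1$ and shrinks the coefficients as $d$ decreases.
```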