Regression testing is expensive, and this cost motivates optimization. Typically, optimization of test cases means selecting a reduced subset of test cases or prioritizing them so that potential faults are detected at an earlier phase. Many earlier studies relied on heuristic mechanisms to attain optimality while reducing or prioritizing test cases; however, those studies lacked systematic procedures for managing tied test cases. Moreover, evolutionary algorithms such as the genetic algorithm often help in reducing test suites, together with a concurrent decrease in computational runtime, but the method falls short when fault detection capacity must be examined along with other parameters. Motivated by this, the current research proposes a multifactor algorithm that incorporates genetic operators and powerful features. A factor-based prioritizer is introduced to properly handle the tied test cases that emerge during re-ordering. In addition, a Cost-based Fine Tuner (CFT) is embedded in the study to reveal stable test cases for processing. The effectiveness of the proposed minimization approach is analyzed and compared with a specific rule-based heuristic method and a standard genetic methodology. Intra-validation of the result achieved by the reduction procedure is performed graphically. For the proposed prioritization scheme, this study contrasts randomly generated sequences with the procured re-ordered test sequences over ten benchmark codes. Experimental analysis revealed that the proposed system achieved a 35-40% reduction in testing effort by identifying and executing stable, coverage-efficient test cases at an earlier phase.
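To make the reduction step concrete, the sketch below shows a generic genetic test-suite minimizer: individuals are bit-masks over the suite, and fitness rewards statement coverage while penalizing suite size. It is a minimal illustration of the genetic baseline, not the paper's multifactor algorithm or its CFT; all coverage data are hypothetical.

```python
# Minimal genetic test-suite reduction sketch (hypothetical data).
import random

COVERAGE = {  # hypothetical test -> set of covered statements
    "t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5, 6}, "t4": {1, 4, 5}, "t5": {6, 7},
}
TESTS = sorted(COVERAGE)
ALL_STMTS = set().union(*COVERAGE.values())

def fitness(mask):
    chosen = [t for t, keep in zip(TESTS, mask) if keep]
    covered = set().union(*(COVERAGE[t] for t in chosen)) if chosen else set()
    return len(covered) / len(ALL_STMTS) - 0.05 * len(chosen)  # coverage minus size penalty

def evolve(pop_size=20, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in TESTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TESTS))        # one-point crossover
            child = [bit ^ int(random.random() < p_mut)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [t for t, keep in zip(TESTS, best) if keep]

random.seed(0)
print("reduced suite:", evolve())
```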
This research studies dimension reduction methods that overcome the curse of dimensionality, the situation in which traditional methods fail to provide good parameter estimates, so the problem must be dealt with directly. Two approaches were used to handle high-dimensional data: the non-classical sliced inverse regression (SIR) method, along with the proposed weighted standard SIR (WSIR) method, and principal component analysis (PCA), the general method used for dimension reduction. Both SIR and PCA are based on linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear …
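As an illustration of the two reduction ideas, the sketch below contrasts PCA, which picks directions of maximal variance of X alone, with SIR, which slices the response and eigen-decomposes the covariance of the slice means of the standardized predictors. Data are synthetic, and the proposed WSIR weighting is not reproduced here.

```python
# PCA versus SIR dimension reduction on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 6
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

# --- PCA: leading eigenvectors of the covariance of X (ignores y) ---
C = np.cov(X, rowvar=False)
pca_dirs = np.linalg.eigh(C)[1][:, ::-1][:, :2]

# --- SIR: eigenvectors of the covariance of slice means of standardized X ---
H = 10                                                        # number of slices
Z = (X - X.mean(0)) @ np.linalg.inv(np.linalg.cholesky(C).T)  # whiten X
order = np.argsort(y)
slices = np.array_split(order, H)                             # slice by sorted y
M = sum(len(s) / n * np.outer(Z[s].mean(0), Z[s].mean(0)) for s in slices)
vals, vecs = np.linalg.eigh(M)
sir_dirs = np.linalg.solve(np.linalg.cholesky(C).T, vecs[:, ::-1][:, :2])

print("PCA directions:\n", pca_dirs.round(2))
print("SIR directions:\n", sir_dirs.round(2))
```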
This study includes the preparation of the ferrite nanoparticles CuxCe0.3-xNi0.7Fe2O4 (where x = 0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3) using the sol-gel (auto-combustion) method, with citric acid as the fuel for combustion. Tests by X-ray diffraction (XRD), field-emission scanning electron microscopy (FE-SEM), an energy-dispersive X-ray analyzer (EDX), and a vibrating sample magnetometer (VSM) showed that the compound has a face-centered cubic structure and that the lattice constant increases with increasing Cu ion content. The compound also exhibits apparent porosity and spherical particles, and …
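As background on the XRD result, the lattice constant of a cubic phase follows from Bragg's law together with the cubic plane-spacing formula; the sketch below shows the arithmetic with an illustrative peak position and Miller indices, not the paper's measured values.

```python
# Lattice constant of a cubic phase from one XRD peak (illustrative numbers).
import math

wavelength = 1.5406      # Cu K-alpha wavelength, angstroms
two_theta = 35.5         # hypothetical (311) peak position, degrees
h, k, l = 3, 1, 1        # Miller indices of the spinel's strongest peak

theta = math.radians(two_theta / 2)
d = wavelength / (2 * math.sin(theta))    # Bragg's law: lambda = 2 d sin(theta)
a = d * math.sqrt(h**2 + k**2 + l**2)     # cubic: d = a / sqrt(h^2 + k^2 + l^2)
print(f"d-spacing = {d:.4f} A, lattice constant a = {a:.4f} A")
```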
In this research, fuzzy nonparametric methods based on some smoothing techniques were applied to real data from the Iraqi stock market, specifically data on the Baghdad company for soft drinks for the year 2016 (1/1/2016-31/12/2016). A sample of 148 observations was obtained in order to construct a model of the relationship between the stock prices (low, high, modal) and the traded value. Comparing the goodness-of-fit (G.O.F.) criterion across three techniques, the lowest value of this criterion was obtained for the k-nearest-neighbour technique with the Gaussian function.
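A minimal sketch of the winning technique, k-nearest-neighbour smoothing with Gaussian kernel weights, is given below; the price series is simulated and the study's fuzzy formulation is not reproduced.

```python
# k-nearest-neighbour regression with Gaussian kernel weights (synthetic data).
import numpy as np

def knn_gaussian(x_train, y_train, x0, k=7):
    d = np.abs(x_train - x0)
    idx = np.argsort(d)[:k]                 # k nearest neighbours of x0
    h = d[idx].max() or 1.0                 # adaptive bandwidth = k-th distance
    w = np.exp(-0.5 * (d[idx] / h) ** 2)    # Gaussian kernel weights
    return np.sum(w * y_train[idx]) / np.sum(w)

rng = np.random.default_rng(1)
traded_value = np.sort(rng.uniform(0, 10, 148))   # 148 observations, as in the study
price = 2 + 0.3 * traded_value + rng.normal(0, 0.2, 148)
print(knn_gaussian(traded_value, price, x0=5.0))
```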
In this research, the weights used in generalized least squares estimation of the simple linear regression parameters are estimated for the case where the dependent variable is a two-class attribute variable (to address the heteroscedasticity problem), depending on a sequential Bayesian approach instead of the classical approach used before. The Bayesian approach provides a mechanism for tackling observations one by one in a sequential way; i.e., each new observation adds a new piece of information for estimating the probability parameter of the Bernoulli trials that govern the dependent variable in the simple linear regression equation, in addition to the information deduced from past exper…
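The sequential idea can be illustrated with the conjugate Beta-Bernoulli update, where each new observation refines the posterior for the Bernoulli probability; the sketch below uses simulated trials and is not the paper's full GLS weighting scheme.

```python
# Sequential Bayesian updating of a Bernoulli probability (simulated trials).
import random

alpha, beta = 1.0, 1.0                # Beta(1, 1) = uniform prior
random.seed(0)
for _ in range(100):                  # observations arrive one at a time
    y = int(random.random() < 0.3)    # hypothetical Bernoulli(0.3) trial
    alpha += y                        # a success updates alpha
    beta += 1 - y                     # a failure updates beta
    p_hat = alpha / (alpha + beta)    # posterior mean after this observation
print(f"posterior mean of p: {p_hat:.3f}")
```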
This research reviews the importance of estimating the nonparametric regression function using the so-called canonical kernel, which depends on re-scaling the smoothing parameter that plays a large and important role in kernel estimation and gives a sound amount of smoothing. We show the importance of this method by applying these concepts to real data on the international exchange rate of the U.S. dollar against the Japanese yen for the period from January 2007 to March 2010. The results demonstrated the preference of the nonparametric estimator with the Gaussian kernel over the other nonparametric and parametric regression estimators.
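The canonical-kernel idea can be sketched as follows: each kernel K has a canonical bandwidth delta0 = (R(K) / mu2(K)^2)^(1/5), where R(K) is the kernel's roughness and mu2(K) its second moment, and re-scaling the smoothing parameter h by delta0 makes different kernels apply a comparable amount of smoothing. The Nadaraya-Watson sketch below uses a simulated exchange-rate path, not the study's data.

```python
# Nadaraya-Watson regression with canonically re-scaled bandwidths.
import numpy as np

KERNELS = {
    # kernel function, canonical bandwidth delta0 = (R(K)/mu2(K)^2)^(1/5)
    "gaussian":     (lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi),
                     (1 / (2 * np.sqrt(np.pi))) ** 0.2),
    "epanechnikov": (lambda u: 0.75 * np.maximum(1 - u**2, 0.0),
                     15.0 ** 0.2),
}

def nadaraya_watson(x, y, grid, h, kernel):
    K, delta0 = KERNELS[kernel]
    u = (grid[:, None] - x[None, :]) / (h * delta0)   # canonical re-scaling of h
    W = K(u)
    return (W @ y) / W.sum(axis=1)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 300)
rate = 110 + 5 * np.sin(6 * t) + rng.normal(0, 0.5, t.size)   # toy USD/JPY path
grid = np.linspace(0, 1, 50)
for name in KERNELS:
    print(name, nadaraya_watson(t, rate, grid, h=0.05, kernel=name)[:3].round(2))
```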
This study aims to derive a general relation between line loads acting on two-way slab systems and the equivalent uniformly distributed loads. This relation will be useful to structural designers who are used to working with uniformly distributed loads, enabling them to use the traditional methods for analysis of two-way systems (e.g., the Direct Design Method). Two types of slab systems, slab systems with beams and flat slab systems, have been considered in this study to include the effects of aspect ratio and slab type on the proposed relation. Five aspect ratios, l2/l1 = 0.5, 0.75, 1.0, 1.5, and 2.0, have been considered for both types of two-way systems. All necessary finite element analyses have been executed with the SAFE software.
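The paper's derived relation is not reproduced here; as a naive baseline, the sketch below shows the total-load equivalence a designer might start from, spreading the line load uniformly over the panel area. All numbers are hypothetical.

```python
# Naive total-load equivalence for a line load on a two-way panel
# (a baseline, not the relation derived in the study; numbers hypothetical).
w_line = 15.0          # line load, kN/m (e.g., a partition wall)
wall_length = 5.0      # length of the loaded line, m
l1, l2 = 5.0, 7.5      # panel spans, m (aspect ratio l2/l1 = 1.5)

total_load = w_line * wall_length     # total line load, kN
w_equiv = total_load / (l1 * l2)      # equivalent uniform load, kN/m^2
print(f"equivalent uniform load = {w_equiv:.2f} kN/m^2")
```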
In this paper, we use four classification methods to classify objects and compare among these methods: k-nearest neighbours (KNN), stochastic gradient descent learning (SGD), the logistic regression algorithm (LR), and the multi-layer perceptron (MLP). We used the MCOCO dataset for classification and detection of objects; the dataset images were randomly divided into training and testing datasets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, then enhanced using the histogram equalization method and resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification methods were applied.
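A compact sketch of this comparison pipeline is shown below using scikit-learn, with the bundled digits data standing in for MCOCO (whose download and 20 x 20 preprocessing are omitted): a 7:3 split, PCA features, and the four classifiers.

```python
# PCA features + four classifiers on a stand-in dataset, 7:3 split.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SGD": SGDClassifier(random_state=0),
    "LR":  LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), PCA(n_components=30), clf)
    print(name, round(pipe.fit(X_tr, y_tr).score(X_te, y_te), 3))
```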
In this paper, the reliability of the stress-strength model is derived for the probability P(Y<X) of a component whose strength X is exposed to one independent stress Y, when X and Y follow the Gompertz Fréchet distribution with unknown shape parameters and the remaining parameters known. Different methods were used to estimate the reliability R and the Gompertz Fréchet distribution parameters: maximum likelihood, least squares, weighted least squares, regression, and ranked set sampling. A comparison of these estimators was made by a simulation study based on the mean square error (MSE) criterion. The comparison confirms that the maximum likelihood estimator performs better than the other estimators.
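To illustrate the simulation-based comparison, the sketch below estimates R = P(Y<X) by plug-in maximum likelihood and measures its MSE over repeated samples. Exponential stress and strength are used as a tractable stand-in for the Gompertz Fréchet distribution, for which R has no such simple closed form.

```python
# Monte Carlo MSE of the plug-in MLE of R = P(Y < X), exponential stand-in.
import numpy as np

rng = np.random.default_rng(3)
lam_x, lam_y = 1.0, 2.0                 # strength and stress rates
R_true = lam_y / (lam_x + lam_y)        # closed-form P(Y < X) for exponentials

def mle_R(n):
    x = rng.exponential(1 / lam_x, n)   # strength sample
    y = rng.exponential(1 / lam_y, n)   # stress sample
    lx, ly = 1 / x.mean(), 1 / y.mean() # MLEs of the rates
    return ly / (lx + ly)               # plug-in MLE of R

reps = 2000
estimates = np.array([mle_R(n=30) for _ in range(reps)])
mse = np.mean((estimates - R_true) ** 2)
print(f"true R = {R_true:.3f}, MLE mean = {estimates.mean():.3f}, MSE = {mse:.5f}")
```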
This research discusses the problem of near-perfect multicollinearity in a nonlinear regression model (the multiple logistic regression model), when the dependent variable is a qualitative binary-response variable that equals one if a response occurs and zero otherwise, by using iterative principal component estimators (IPCE) that depend on ordinary weights and conditional Bayesian weights. These estimators were applied …
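The principal-components remedy for near-perfect multicollinearity in logistic regression can be sketched as below: fit the logit on the leading components, then map the coefficients back to the original variables. The iterative and Bayesian weighting of the IPCE estimators is not reproduced; data are simulated.

```python
# Principal-components logistic regression under near-perfect collinearity.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)      # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = (rng.random(n) < 1 / (1 + np.exp(-(x1 + x3)))).astype(int)  # binary response

pca = PCA(n_components=2)                     # drop the degenerate direction
Z = pca.fit_transform(X)
logit = LogisticRegression().fit(Z, y)
beta = pca.components_.T @ logit.coef_.ravel()  # map back to original variables
print("coefficients on x1, x2, x3:", beta.round(3))
```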
In this paper, we introduce and study a new concept named couniform modules, which is a dual notion of uniform modules: an R-module M is said to be couniform if every proper submodule N of M is either zero or there exists a proper submodule N1 of N such that N/N1 is a small submodule of M/N1. Many relationships are given between this class of modules and other related classes of modules. Finally, we consider the hereditary property between an R-module M and the R-module R in case M is couniform.
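For background, the standard notions the abstract dualizes can be stated as follows (a sketch of the common definitions, not the paper's notation):

```latex
% Small submodule: N is small (superfluous) in M when it cannot
% complete any proper submodule to the whole module.
N \ll M \iff \big(\forall K \le M:\; N + K = M \Rightarrow K = M\big)

% Uniform module: every nonzero submodule is essential, i.e., any two
% nonzero submodules intersect nontrivially.
M \text{ is uniform} \iff \big(\forall\, N, K \le M,\ N \neq 0 \neq K:\; N \cap K \neq 0\big)
```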