Regression testing is expensive and therefore calls for optimization. Typically, test-case optimization means selecting a reduced subset of test cases or prioritizing them so that potential faults are detected at an earlier phase. Many earlier studies relied on heuristic mechanisms to attain optimality while reducing or prioritizing test cases; however, those studies lacked systematic procedures for handling tied test cases. Evolutionary algorithms such as the genetic algorithm often help in reducing test cases while simultaneously lowering computational runtime, but they fall short when fault detection capability must be examined alongside other parameters. Motivated by this gap, the current research proposes a multifactor algorithm that incorporates genetic operators and additional discriminating features. A factor-based prioritizer is introduced to properly handle the tied test cases that emerge during re-ordering. In addition, a Cost-based Fine Tuner (CFT) is embedded in the study to identify stable test cases for processing. The effectiveness of the proposed minimization approach is analyzed and compared with a specific rule-based heuristic method and a standard genetic methodology. Intra-validation of the result achieved from the reduction procedure is performed graphically. For the proposed prioritization scheme, randomly generated sequences are contrasted with the procured re-ordered test sequences over 10 benchmark codes. Experimental analysis revealed that the proposed system achieved a reduction of 35-40% in testing effort by identifying and executing stable, coverage-effective test cases at an earlier phase.
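The abstract does not detail the prioritizer itself; the following is only a minimal sketch of factor-based prioritization with tie-breaking, assuming coverage as the primary factor and hypothetical secondary factors (historical fault detections, execution cost) that are not taken from the paper.

```python
# Minimal sketch of factor-based test-case prioritization with tie-breaking.
# NOT the paper's algorithm: the factor names (coverage, cost, fault_history)
# and the key ordering are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    coverage: float       # fraction of statements/branches covered
    cost: float           # execution time (lower is better)
    fault_history: int    # faults this test exposed in past runs

def prioritize(tests):
    """Order tests by coverage; break coverage ties with secondary factors."""
    # Primary key: coverage (descending).
    # Tie-breakers: past fault detections (descending), then cost (ascending).
    return sorted(tests, key=lambda t: (-t.coverage, -t.fault_history, t.cost))

if __name__ == "__main__":
    suite = [
        TestCase("t1", coverage=0.80, cost=2.0, fault_history=1),
        TestCase("t2", coverage=0.80, cost=0.5, fault_history=3),  # tied coverage
        TestCase("t3", coverage=0.60, cost=0.2, fault_history=0),
    ]
    for t in prioritize(suite):
        print(t.name)   # t2 precedes t1: the tie is resolved by fault history
```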
This paper compares eight methods for generating the initial value and examines the impact of these methods on estimating the parameter of an autoregressive model. Three of the most popular estimation methods used by researchers were applied: the maximum likelihood (ML) method, the Burg method, and the least squares method. A first-order autoregressive model was studied by simulation through a number of designed simulation experiments with different sample sizes.
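A minimal simulation sketch of the kind of experiment described, assuming a zero initial value and comparing conditional least-squares and first-order Burg estimates of the AR(1) coefficient; the paper's eight initial-value schemes and its ML estimator are not reproduced here.

```python
# Sketch of an AR(1) simulation experiment comparing two estimators.
# Simplified illustration only; initial-value schemes and ML are omitted.

import numpy as np

def simulate_ar1(phi, n, sigma=1.0, x0=0.0, rng=None):
    """Generate x_t = phi * x_{t-1} + e_t, starting from the initial value x0."""
    rng = rng or np.random.default_rng()
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def ls_estimate(x):
    """Conditional least-squares estimate of phi."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

def burg_estimate(x):
    """First-order Burg (reflection coefficient) estimate of phi."""
    return 2.0 * np.sum(x[1:] * x[:-1]) / np.sum(x[1:] ** 2 + x[:-1] ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi_true, reps = 0.7, 500
    for n in (30, 100, 500):                 # different sample sizes
        ls = [ls_estimate(simulate_ar1(phi_true, n, rng=rng)) for _ in range(reps)]
        bg = [burg_estimate(simulate_ar1(phi_true, n, rng=rng)) for _ in range(reps)]
        print(n, round(float(np.mean(ls)), 3), round(float(np.mean(bg)), 3))
```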
The current paper proposes a new estimator for the linear regression model parameters under Big Data circumstances. The diversity of Big Data variables poses many challenges that interest researchers seeking new methods to estimate the parameters of the linear regression model. The data, on child labor in Iraq, were collected by the Central Statistical Organization, Iraq. Child labor is a vital phenomenon from which both society and education suffer, and it affects the future of the next generation. Two methods have been selected to estimate the parameter
In this work, the Elzaki transform (ET) introduced by Tarig Elzaki is applied to solve linear Volterra fractional integro-differential equations (LVFIDE). The fractional derivative is considered in the Riemann-Liouville sense. The procedure is based on applying the ET to the LVFIDE and using the properties of the ET and its inverse. Finally, some examples are solved to show that the method is computationally efficient and accurate.
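The abstract does not reproduce the underlying definitions; for reference, a standard statement of the Elzaki transform, the Riemann-Liouville fractional derivative, and a generic LVFIDE is sketched below. The notation is assumed here and not taken from the paper.

```latex
% Standard definitions (notation assumed, not taken from the paper):
% the Elzaki transform, the Riemann-Liouville fractional derivative,
% and a generic form of the LVFIDE being solved.
\[
  E[f(t)](v) \;=\; v \int_{0}^{\infty} f(t)\, e^{-t/v}\, dt ,
\]
\[
  {}^{RL}D^{\alpha} f(t) \;=\;
  \frac{1}{\Gamma(n-\alpha)} \frac{d^{n}}{dt^{n}}
  \int_{0}^{t} (t-s)^{\,n-\alpha-1} f(s)\, ds ,
  \qquad n-1 < \alpha \le n ,
\]
\[
  {}^{RL}D^{\alpha} u(t) \;=\; f(t) + \lambda \int_{0}^{t} K(t,s)\, u(s)\, ds .
\]
```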
A Strength Pareto Evolutionary Algorithm 2 (SPEA2) approach for solving the multi-objective Environmental/Economic Power Dispatch (EEPD) problem is presented in this paper. In the past, minimizing fuel cost was the single objective of the economic power dispatch problem. Since the Clean Air Act amendments were applied to reduce SO2 and NOx emissions from power plants, utilities have changed their strategies to reduce pollution and atmospheric emissions as well; adding emission minimization as another objective turned economic power dispatch (EPD) into a multi-objective problem with conflicting objectives. SPEA2 is the improved version of SPEA, with better fitness assignment, density estimation, an
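For context, the cost and emission objectives commonly used in the EEPD literature take the form sketched below; the paper's exact models, coefficients, and constraints may differ, so this is a generic statement rather than the authors' formulation.

```latex
% Generic EEPD objectives and constraints from the general literature:
\[
  \min \; F(P) \;=\; \sum_{i=1}^{N} \left( a_i + b_i P_i + c_i P_i^{2} \right)
  \quad \text{(fuel cost)}
\]
\[
  \min \; E(P) \;=\; \sum_{i=1}^{N}
  \left( \alpha_i + \beta_i P_i + \gamma_i P_i^{2}
         + \zeta_i \exp(\lambda_i P_i) \right)
  \quad \text{(emission)}
\]
\[
  \text{s.t.} \quad \sum_{i=1}^{N} P_i = P_D + P_L ,
  \qquad P_i^{\min} \le P_i \le P_i^{\max} .
\]
```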
Digital image manipulation has become increasingly prevalent due to the widespread availability of sophisticated image-editing tools. In copy-move forgery, a portion of an image is copied and pasted into another area within the same image. The proposed methodology begins with extracting the image's features using the Local Binary Pattern (LBP) algorithm. Two main statistical functions, Standard Deviation (STD) and Angular Second Moment (ASM), are computed for each LBP feature, capturing additional statistical information about the local textures. Next, a multi-level LBP feature selection is applied to select the most relevant features. This process involves performing LBP computation at multiple scales or levels, capturing textures at different
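A small sketch of the described feature step: an LBP map followed by STD and ASM of its distribution. The LBP parameters (P, R, method), the patch size, and the use of the LBP histogram for ASM are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: LBP map of a grayscale patch, then STD and ASM statistics.
# Parameters and patch handling are assumptions for illustration only.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_std_asm(gray, P=8, R=1.0):
    """Return (std, asm) describing a grayscale patch's LBP distribution."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)
    std = float(np.std(lbp))          # Standard Deviation of the LBP map
    asm = float(np.sum(hist ** 2))    # Angular Second Moment of its histogram
    return std, asm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
    print(lbp_std_asm(patch))
```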
Various speech enhancement algorithms (SEA) have been developed in the last few decades. Each algorithm has its advantages and disadvantages because the speech signal is affected by environmental conditions. Distortion of speech results in the loss of important features that make the signal challenging to understand. SEA aim to improve the intelligibility and quality of speech that has been degraded by different types of noise. In most applications, quality improvement is highly desirable as it can reduce listener fatigue, especially when the listener is exposed to high noise levels for extended periods (e.g., in manufacturing). SEA reduce or suppress the background noise to some degree and are sometimes called noise suppression algorithms.
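As an illustration of the class of noise-suppression methods the abstract surveys, the sketch below implements magnitude spectral subtraction, a classical technique; it is not the paper's specific algorithm, and the frame length, overlap, and noise-estimation window are assumptions.

```python
# Illustration only: magnitude spectral subtraction with overlap-add.
# Frame size, hop, and the leading-frames noise estimate are assumptions.

import numpy as np

def spectral_subtraction(noisy, noise_frames=10, frame=256, hop=128):
    """Subtract an average noise magnitude spectrum (estimated from the
    first `noise_frames` frames) from each frame, then resynthesize."""
    win = np.hanning(frame)
    n_frames = 1 + (len(noisy) - frame) // hop
    spectra = np.array([np.fft.rfft(win * noisy[i*hop:i*hop+frame])
                        for i in range(n_frames)])
    noise_mag = np.abs(spectra[:noise_frames]).mean(axis=0)   # noise estimate
    mag = np.maximum(np.abs(spectra) - noise_mag, 0.0)        # subtract, floor at 0
    clean = mag * np.exp(1j * np.angle(spectra))              # keep noisy phase
    out = np.zeros(len(noisy))
    for i in range(n_frames):                                 # overlap-add
        out[i*hop:i*hop+frame] += np.fft.irfft(clean[i], n=frame) * win
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(16000) / 8000.0
    noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.normal(size=t.size)
    print(spectral_subtraction(noisy).shape)
```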
Data-centric techniques, such as data aggregation via a modified fuzzy clustering algorithm with a Voronoi diagram, called the modified Voronoi Fuzzy Clustering Algorithm (VFCA), are presented in this paper. In the modified algorithm, the sensed area is divided into a number of Voronoi cells by applying a Voronoi diagram, and these cells are clustered by the fuzzy C-means (FCM) method to reduce the transmission distance. An appropriate cluster head (CH) is then elected for each cluster. Three parameters are used in this election process: the energy, the distance between the CH and its neighboring sensors, and the packet loss values. Furthermore, data aggregation is employed in each CH to reduce the amount of data transmission, which le
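A minimal sketch of the two steps the abstract names: fuzzy C-means clustering over sensor positions and a cluster-head election that scores energy, distance to cluster members, and packet loss. This is not the paper's VFCA; the Voronoi partitioning step is omitted, and the weights and scoring form are illustrative assumptions.

```python
# Sketch (not the paper's VFCA): plain fuzzy C-means plus a weighted
# cluster-head score; weights and score shape are assumptions.

import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means; returns (centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def elect_cluster_heads(X, U, energy, packet_loss, w=(0.5, 0.3, 0.2)):
    """Per cluster, pick the member with the best weighted score:
    high residual energy, small mean distance to members, low packet loss."""
    labels = U.argmax(axis=1)
    heads = {}
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        mean_d = np.array([np.linalg.norm(X[idx] - X[i], axis=1).mean() for i in idx])
        score = w[0] * energy[idx] - w[1] * mean_d - w[2] * packet_loss[idx]
        heads[int(k)] = int(idx[score.argmax()])
    return heads

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.random((60, 2)) * 100          # sensor coordinates, 100x100 area
    centers, U = fuzzy_c_means(pos, c=4)
    print(elect_cluster_heads(pos, U,
                              energy=rng.random(60),
                              packet_loss=rng.random(60)))
```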
There is no doubt that industrial expansion is accompanied by the problem of environmental contamination, which is closely linked to increased industrial activity on the one hand and the growing volume of industrial waste on the other, and hence to the risk posed to natural resources and ecosystems by development projects (especially industrial ones). There must therefore be a sound basis for siting industrial zones, an assessment of how well they comply with the conditions needed to preserve the environment, and strict, well-prepared laws enacted to achieve this and to avoid numerous social, economic, technical, environmental, and health errors.