Regression testing is expensive and therefore calls for optimization. Typically, test-case optimization means selecting a reduced subset of test cases or prioritizing them so that potential faults are detected at an earlier phase. Many earlier studies relied on heuristic mechanisms to attain optimality while reducing or prioritizing test cases; however, those studies lacked systematic procedures for handling tied test cases. Moreover, evolutionary algorithms such as the genetic algorithm often help reduce test suites while also cutting computational runtime, but they fall short when fault-detection capability must be examined alongside other parameters. Motivated by this, the present research proposes a multifactor algorithm incorporating genetic operators and additional powerful features. A factor-based prioritizer is introduced to properly handle tied test cases that emerge during re-ordering. In addition, a Cost-based Fine Tuner (CFT) is embedded in the study to reveal stable test cases for processing. The effectiveness of the proposed minimization approach is analyzed and compared with a specific rule-based heuristic method and a standard genetic algorithm. Intra-validation of the result achieved by the reduction procedure is performed graphically. For the proposed prioritization scheme, this study contrasts randomly generated sequences with the procured re-ordered test sequences over 10 benchmark codes. Experimental analysis showed that the proposed system achieved a 35-40% reduction in testing effort by identifying and executing stable, coverage-efficacious test cases at an earlier phase.
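The abstract does not give the algorithm itself. As a minimal illustrative sketch (not the paper's multifactor/genetic method), greedy coverage-based prioritization with a secondary cost factor to break ties can be written as follows; all test names, coverage sets, and costs here are invented assumptions:

```python
# Hypothetical sketch: greedy coverage-based test-case prioritization.
# Primary factor: how many still-uncovered branches a test adds;
# tie-breaker (cf. the tied-test-case problem): lower execution cost.

def prioritize(tests):
    """tests: dict name -> (covered_branches: set, cost: float)."""
    remaining = dict(tests)
    uncovered = set().union(*(cov for cov, _ in tests.values()))
    order = []
    while remaining:
        # max over (new coverage, -cost): ties on coverage fall to cost.
        best = max(remaining,
                   key=lambda t: (len(tests[t][0] & uncovered), -tests[t][1]))
        order.append(best)
        uncovered -= tests[best][0]
        del remaining[best]
    return order

tests = {
    "t1": ({1, 2, 3}, 5.0),
    "t2": ({2, 3}, 1.0),
    "t3": ({4}, 2.0),
    "t4": ({1, 2, 3}, 2.0),  # ties with t1 on coverage but is cheaper
}
print(prioritize(tests))  # → ['t4', 't3', 't2', 't1']
```

The cost tie-breaker is one simple stand-in for the paper's factor-based prioritizer; the actual method also weighs stability and other factors not shown here.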
An automatic text summarization system mimics how humans summarize by picking the most significant sentences in a source text. However, the complexities of the Arabic language make it challenging to obtain information quickly and effectively. The main disadvantage of the traditional approaches is that they are strictly constrained (especially for Arabic) by the accuracy of sentence feature functions, weighting schemes, and similarity calculations. On the other hand, meta-heuristic search approaches have a feature tha…
The purchase of a home and access to housing are among the most important requirements for an individual's life and stable living. House prices in general, and in Baghdad in particular, are affected by several factors, including the base area of the house, its age, the neighborhood in which the housing is located, and the basic services available. A State Space Model (SSM) was used to model house prices over the period from 2000 to 2018 and to forecast them until 2025. The research is concerned with demonstrating the importance of this model, and with characterizing it as a standard, important alternative to the models commonly used in time-series analysis, after obtaining the…
Abstract—In this study, we present experimental results of ultra-wideband (UWB) imaging oriented toward detecting small malignant breast tumors at an early stage. The technique is based on radar sensing, whereby tissues are differentiated by the dielectric contrast between the diseased tissue and the surrounding healthy tissue. The image reconstruction algorithm, referred to herein as the enhanced delay-and-sum (EDAS) algorithm, is used to identify malignant tissue in a cluttered environment with noisy data. The methods and procedures are tested using MRI-derived breast phantoms, and the results are compared with images obtained from the classical DAS variant. Incorporating a new filtering technique and multiplication procedure, t…
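For context, the classical delay-and-sum (DAS) baseline that EDAS improves upon sums each antenna's signal at the delay corresponding to a focal point and squares the result. A minimal sketch on synthetic 1-D data follows; the antenna count, delays, and noise level are illustrative assumptions, and none of EDAS's filtering or multiplication steps are reproduced:

```python
import numpy as np

# Minimal sketch of classical delay-and-sum (DAS) focusing on
# synthetic data: a coherent echo embedded at known per-antenna
# delays should produce a large focused intensity.

def das_intensity(signals, delays):
    """signals: (n_antennas, n_samples) array; delays: per-antenna
    sample delays for one focal point. Returns DAS intensity there."""
    n_ant, _ = signals.shape
    summed = sum(signals[a, delays[a]] for a in range(n_ant))
    return summed ** 2

rng = np.random.default_rng(0)
signals = rng.normal(0.0, 0.1, (4, 100))   # background noise
delays = [10, 12, 15, 11]                  # simulated scatterer echo
for a, d in enumerate(delays):
    signals[a, d] += 1.0
print(das_intensity(signals, delays))  # large: delays align with the echo
```

Summing before squaring is what makes the beamformer coherent: aligned echoes add constructively while uncorrelated clutter tends to cancel.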
With the proliferation of both Internet access and data traffic, recent breaches have brought into sharp focus the need for Network Intrusion Detection Systems (NIDS) to protect networks from increasingly complex cyberattacks. To differentiate between normal network processes and possible attacks, Intrusion Detection Systems (IDS) often employ pattern recognition and data mining techniques. Network and host intrusions, assaults, and policy violations can be automatically detected and classified by an IDS. Using Python and Scikit-Learn, the results of this study show that Machine Learning (ML) techniques such as Decision Tree (DT), Naïve Bayes (NB), and K-Nearest Neighbor (KNN) can enhance the effectiveness of an Intrusi…
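The three named classifiers can be compared with Scikit-Learn in a few lines. The sketch below uses synthetic data as a stand-in for NIDS traffic features, since the study's actual dataset and preprocessing are not given in the abstract:

```python
# Hedged sketch: comparing DT, NB, and KNN with scikit-learn on
# synthetic data standing in for intrusion-detection features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)
accs = {}
for name, clf in [("DT", DecisionTreeClassifier(random_state=42)),
                  ("NB", GaussianNB()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    accs[name] = accuracy_score(y_te, clf.predict(X_te))
    print(name, round(accs[name], 3))
```

On a real NIDS corpus one would add feature scaling (important for KNN) and report detection-oriented metrics such as precision/recall per attack class rather than accuracy alone.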
A substantial portion of today's multimedia data exists in the form of unstructured text. The unstructured nature of text, however, makes it difficult to meet users' information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing, but accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches for eliminating non-informative features, have been published previously. However, these reviews do not adequately cover recently explored approaches to TC problem-solving that utilize FS, such as optimization techniques.
The aim of a human lower-limb rehabilitation robot is to restore the ability to move and to strengthen weak muscles. This paper proposes the design of a force-position controller for a four-degree-of-freedom (4-DOF) lower-limb wearable rehabilitation robot. The robot comprises hip, knee, and ankle joints to enable the patient to move and turn in both directions. The joints are actuated by Pneumatic Muscle Actuators (PMAs), which have great potential in medical applications because of their similarity to biological muscles. The force-position control incorporates Takagi-Sugeno-Kang three-Proportional-Derivative-like Fuzzy Logic (TSK-3-PD) controllers for position control and three Proportional (3-P) controllers for force contr…
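The PD-like position loop underlying such controllers can be sketched in its plain (non-fuzzy) discrete form; the gains, time step, and the crude velocity-command joint model below are illustrative assumptions, not values from the paper:

```python
# Minimal discrete PD position-control sketch (the paper uses
# TSK fuzzy PD-like controllers for the PMA joints; this is the
# plain PD law they are modeled on, with an invented toy plant).

def pd_step(error, prev_error, kp, kd, dt):
    """Discrete PD control law: u = Kp*e + Kd*de/dt."""
    return kp * error + kd * (error - prev_error) / dt

# Drive a toy joint toward a 1.0 rad setpoint.
angle, prev_err, dt = 0.0, 1.0, 0.01
for _ in range(500):
    err = 1.0 - angle
    u = pd_step(err, prev_err, kp=8.0, kd=0.4, dt=dt)
    angle += u * dt          # crude velocity-command joint model
    prev_err = err
print(round(angle, 3))       # → 1.0 (settled at the setpoint)
```

A TSK fuzzy PD-like controller replaces the fixed Kp/Kd pair with gains blended from fuzzy rules over the error and its derivative, which helps with the PMA's nonlinear behavior.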
Great scientific progress has led to widespread information. As information accumulates in large databases, it becomes important to revise and compile this vast amount of data in order to extract hidden information, or to classify data according to their relations with each other, so that they can be exploited for technical purposes.
Data mining (DM) is well suited to this area, hence the research's interest in the K-Means algorithm for clustering data, applied in practice so that the effect on the variables can be observed by changing the sample size (n) and the number of clusters (K).
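The experiment described, running K-Means while varying K, can be sketched with scikit-learn; the synthetic blob data and the K values tried below are illustrative assumptions, not the study's data:

```python
# Sketch: K-Means clustering while varying the number of clusters K.
# Inertia (within-cluster sum of squares) shrinks as K grows, which
# is the kind of effect the study observes when changing K and n.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=1)
inertias = {}
for k in (2, 3, 4):
    km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(X)
    inertias[k] = km.inertia_
    print(k, round(km.inertia_, 1))
```

The sharp drop in inertia at the true cluster count (the "elbow") is a common way to pick K; varying n simply means refitting on samples of different sizes.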