Regression testing is expensive and therefore calls for optimization. Typically, test-case optimization means selecting a reduced subset of test cases or prioritizing them so that potential faults are detected at an earlier phase. Many earlier studies relied on heuristic-dependent mechanisms to attain optimality while reducing or prioritizing test cases; nevertheless, those studies lacked systematic procedures for handling tied test cases. Moreover, evolutionary algorithms such as genetic algorithms often help reduce the number of test cases while simultaneously lowering computational runtime, but they fall short when fault detection capability must be examined alongside other parameters. Motivated by this, the present study proposes a multifactor algorithm that incorporates genetic operators with powerful features. A factor-based prioritizer is introduced to properly handle test cases that become tied during re-ordering. In addition, a Cost-based Fine Tuner (CFT) is embedded in the approach to identify stable test cases for processing. The effectiveness of the proposed minimization approach is analyzed and compared with a specific rule-based heuristic method and a standard genetic algorithm. Intra-validation of the reduction results is performed graphically. For the proposed prioritization scheme, randomly generated sequences are contrasted with the re-ordered test sequences over 10 benchmark programs. Experimental analysis revealed that the proposed system achieved a 35-40% reduction in testing effort by identifying and executing stable, coverage-effective test cases at an earlier phase.
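As a concrete illustration of the tie-handling idea, the following Python sketch shows a greedy coverage-based prioritizer that breaks ties among equally covering test cases with a weighted multi-factor score. The factor weights, cost model, and fault-history data are assumptions for demonstration only, not the paper's actual prioritizer or CFT.

```python
# Illustrative sketch (not the paper's implementation): greedy coverage-based
# prioritization with a multi-factor tie-breaker. Test cases, coverage sets,
# costs, and fault histories below are hypothetical example data.

def prioritize(test_cases, coverage, cost, fault_history):
    """Order test cases by additional coverage; break ties with a
    weighted score of historical fault detection and inverse cost."""
    remaining = set(test_cases)
    covered = set()
    ordered = []
    while remaining:
        def gain(tc):
            return len(coverage[tc] - covered)

        best_gain = max(gain(tc) for tc in remaining)
        tied = [tc for tc in remaining if gain(tc) == best_gain]
        # Factor-based tie-breaker: prefer historically fault-revealing,
        # cheaper test cases when coverage gain is equal.
        winner = max(tied, key=lambda tc: 0.7 * fault_history[tc] + 0.3 / (1 + cost[tc]))
        ordered.append(winner)
        covered |= coverage[winner]
        remaining.remove(winner)
    return ordered

if __name__ == "__main__":
    tests = ["t1", "t2", "t3"]
    cov = {"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2}}
    cost = {"t1": 4, "t2": 2, "t3": 1}
    faults = {"t1": 0, "t2": 1, "t3": 2}
    print(prioritize(tests, cov, cost, faults))   # -> ['t3', 't2', 't1']
```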
Seventy-five isolates of Saccharomyces cerevisiae were identified after being isolated from different local sources, including decayed fruits and vegetables, vinegar, fermented pasta, baker's yeast, and an alcohol factory. Identification of the isolates was carried out by cultural, microscopical, and biochemical tests. Ethanol sensitivity testing showed that the minimal inhibitory concentration for isolate Sy18 was 16% and the lethal concentration was 17%. Isolate Sy18 was the most efficient ethanol producer at 9.36% (v/w). The ideal conditions for ethanol production from date syrup by this yeast isolate were evaluated across various temperatures, pH values, Brix levels, incubation periods, and different levels of (NH4)2HPO4. Maximum ethanol produced was 10
Clustering algorithms have recently gained attention in the related literature since they can help current intrusion detection systems in several aspects. This paper proposes genetic algorithm (GA) based clustering to distinguish patterns incoming from network traffic packets as normal or attack. Two GA-based clustering models for solving the intrusion detection problem are introduced: the first handles the numeric features of the network packet, whereas the second considers all features of the network packet. Moreover, a new mutation operator directed at binary and symbolic features is proposed. The basic concept of the proposed mutation operator depends on the most frequent value
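The excerpt stops before spelling out the exact mutation rule, so the sketch below assumes one plausible interpretation: a mutated binary or symbolic gene is replaced with the most frequent value observed at that position across the current population. The mutation rate and example data are hypothetical.

```python
import random
from collections import Counter

# Illustrative sketch of a "most frequent value" mutation for binary/symbolic
# genes. The abstract does not give the exact rule, so this assumes the mutated
# gene is set to the value most frequently observed at that position across the
# current population.

def most_frequent_value_mutation(chromosome, population, rate=0.1, rng=random):
    mutated = list(chromosome)
    for i in range(len(mutated)):
        if rng.random() < rate:
            column = [indiv[i] for indiv in population]
            mutated[i] = Counter(column).most_common(1)[0][0]
    return mutated

if __name__ == "__main__":
    population = [["tcp", 1, 0], ["udp", 1, 1], ["tcp", 0, 1]]
    child = ["udp", 0, 0]
    print(most_frequent_value_mutation(child, population, rate=1.0))
    # -> ['tcp', 1, 1] when every gene is mutated (rate=1.0)
```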
Rivest Cipher 4 (RC4) is an efficient stream cipher that is commonly used in internet protocols. However, there are several flaws in the key scheduling algorithm (KSA) of RC4. The contribution of this paper is to overcome some of these weaknesses by proposing a new, modified version of the KSA. In the modified KSA, the initial state of the array is suggested to contain random values instead of the identity permutation. Moreover, the permutation of the array is modified to depend on the key value itself. The proposed scheme's performance is assessed in terms of cipher secrecy, randomness tests, and running time under a set of experiments with variable key sizes and different plaintext sizes. The results show that RC4 with the modified KSA improves randomness and secrecy with
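For reference, here is the classic RC4 KSA together with a hedged sketch of the kind of modification described above: a non-identity (key-seeded random) initial state and a swap index that mixes in an extra key-dependent term. The exact construction used in the paper is not reproduced here; the modified variant is an assumption for illustration only.

```python
import hashlib
import random

def rc4_ksa(key):
    """Classic RC4 key-scheduling algorithm (for comparison)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def modified_ksa_sketch(key):
    """Hypothetical modified KSA: the state starts as a key-seeded random
    permutation instead of the identity, and the swap index also mixes in
    a key-dependent offset. Not the paper's exact construction."""
    seed = int.from_bytes(hashlib.sha256(bytes(key)).digest(), "big")
    rng = random.Random(seed)
    S = list(range(256))
    rng.shuffle(S)                      # random initial state, not identity
    key_offset = sum(key) % 256         # extra key-dependent term
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)] + key_offset) % 256
        S[i], S[j] = S[j], S[i]
    return S

if __name__ == "__main__":
    key = [0x01, 0x23, 0x45, 0x67, 0x89]
    print(rc4_ksa(key)[:8])
    print(modified_ksa_sketch(key)[:8])
```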
Stemming is a pre-processing step in text mining applications and is very important in most information retrieval systems. The goal of stemming is to reduce different grammatical forms of a word, and sometimes derivationally related forms, to a common base (root or stem) form, e.g., reducing a noun, adjective, verb, or adverb to its base form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if this stem is not itself a valid root. As in other languages, there is a need for an effective stemming algorithm for the indexing and retrieval of Arabic documents, yet Arabic stemming algorithms are not widely available.
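To make the idea concrete, here is a minimal light-stemming sketch for Arabic in the spirit of prefix/suffix-stripping stemmers such as Light10. It illustrates the general technique only, not the stemmer proposed in this work, and the affix lists are a small assumed subset.

```python
# Minimal Arabic light-stemming sketch (illustrative, not the paper's method).
# Strips one common prefix and one common suffix from each word.

PREFIXES = ["وال", "بال", "كال", "فال", "ال", "لل", "و"]
SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "يه", "ية", "ه", "ة", "ي"]

def light_stem(word, min_len=3):
    """Strip one known prefix and one known suffix, keeping the stem
    at least `min_len` characters long."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= min_len:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= min_len:
            word = word[:-len(s)]
            break
    return word

if __name__ == "__main__":
    for w in ["المكتبة", "مدرسون", "والكتاب"]:
        print(w, "->", light_stem(w))
```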
This study aimed to standardize the IQ test of Marten Lother Johan for use with children of the two primary-school stages aged seven years in Baghdad (Rusafa and Karkh).
The importance of this study lies in the following:
1- The importance of childhood and its role in developing personality.
2- The importance of this age, as the child is exposed to different kinds of formal teaching.
3- The capability for early detection of special categories, allowing early intervention to provide the necessary care.
4- The current test could be used as a predictive tool to screen intelligent children.
In order to achieve the study aim, the researcher had followed the
Cloud computing is a mass platform serving high volumes of data from multiple devices and numerous technologies. Cloud tenants demand fast access to their data without any disruptions. Therefore, cloud providers are struggling to ensure that every piece of data is secured and always accessible. Hence, an appropriate replication strategy capable of selecting essential data is required in cloud replication environments. This paper proposes a Crucial File Selection Strategy (CFSS) to address poor response time in a cloud replication environment. A cloud simulator called CloudSim is used to conduct the necessary experiments, and results are presented as evidence of the improvement in replication performance. The obtained an
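The CFSS criteria themselves are not given in this excerpt, so the following sketch only illustrates one plausible notion of "crucial" files: scoring each file by recency-weighted access frequency and replicating the top-k. The scoring function, half-life, and example log are assumptions.

```python
import time

# Hedged sketch of crucial-file selection for replication. This is not the
# paper's CFSS; it simply ranks files by a recency-weighted access score and
# picks the top-k as replication candidates.

def select_crucial_files(access_log, now=None, half_life=3600.0, top_k=3):
    """access_log: mapping file_id -> list of access timestamps (seconds).
    Returns the top_k file ids by a recency-weighted access score."""
    now = time.time() if now is None else now
    scores = {}
    for file_id, stamps in access_log.items():
        # Exponentially decay old accesses so recent demand dominates.
        scores[file_id] = sum(0.5 ** ((now - t) / half_life) for t in stamps)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

if __name__ == "__main__":
    now = 10_000.0
    log = {
        "f1": [9_900.0, 9_950.0, 9_990.0],   # hot, recent
        "f2": [1_000.0, 2_000.0],            # cold, old
        "f3": [9_000.0],
    }
    print(select_crucial_files(log, now=now, top_k=2))   # -> ['f1', 'f3']
```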
Several directional wells have been drilled in the Majnoon oilfield with wide variation in drilling time due to the different drilling parameters applied to each well. This paper shows the importance of proper selection of the bit, mud type, applied weight on bit (WOB), revolutions per minute (RPM), and flow rate based on previously drilled wells. Utilizing the data recorded while drilling each section of the directional wells could significantly improve drilling efficiency, reflected in a high rate of penetration (ROP). An extensive study of three directional wells with 35 degrees of inclination (MJ-51, MJ-52, and MJ-54) found that the drilling parameters applied for MJ-54 and the bit type with its associated drilling parameters to drill
This paper proposes two hybrid feature subset selection approaches based on the combination (union or intersection) of both supervised and unsupervised filter approaches before applying a wrapper, aiming to obtain low-dimensional feature subsets with high accuracy and interpretability and low time consumption. Experiments with the proposed hybrid approaches have been conducted on seven high-dimensional feature datasets. The classifiers adopted are support vector machine (SVM), linear discriminant analysis (LDA), and K-nearest neighbour (KNN). Experimental results demonstrate the advantages and usefulness of the proposed methods for feature subset selection in high-dimensional spaces in terms of the number of selected features and the time spent.
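A small scikit-learn sketch of the general scheme follows: a supervised filter (mutual information) and an unsupervised filter (variance ranking) are combined by union or intersection, and the surviving features are refined with a greedy forward wrapper around KNN. The concrete filters, parameter values, and datasets used in the paper may differ; everything here is illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def filter_union_or_intersection(X, y, k=10, mode="union"):
    """Combine a supervised filter (mutual information) with an unsupervised
    filter (variance ranking) by union or intersection of selected indices."""
    supervised = set(SelectKBest(mutual_info_classif, k=k).fit(X, y).get_support(indices=True))
    unsupervised = set(np.argsort(X.var(axis=0))[-k:])
    combined = (supervised | unsupervised) if mode == "union" else (supervised & unsupervised)
    return sorted(combined)

def forward_wrapper(X, y, candidate_idx, estimator, max_features=5):
    """Greedy forward selection over the filtered candidates, scored by CV accuracy."""
    selected, best_score = [], 0.0
    while len(selected) < max_features:
        best_feat, best_feat_score = None, best_score
        for f in candidate_idx:
            if f in selected:
                continue
            score = cross_val_score(estimator, X[:, selected + [f]], y, cv=5).mean()
            if score > best_feat_score:
                best_feat, best_feat_score = f, score
        if best_feat is None:          # no improvement: stop early
            break
        selected.append(best_feat)
        best_score = best_feat_score
    return selected, best_score

if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset
    candidates = filter_union_or_intersection(X, y, k=10, mode="intersection")
    feats, acc = forward_wrapper(X, y, candidates, KNeighborsClassifier())
    print("selected features:", feats, "cv accuracy: %.3f" % acc)
```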
Human Interactive Proofs (HIPs) are automatic inverse Turing tests intended to differentiate between people and malicious computer programs. Building a good HIP system is a challenging task, since the resulting HIP must be secure against attacks while remaining practical for humans. Text-based HIPs are one of the most popular HIP types; they exploit the ability of humans to read text images better than Optical Character Recognition (OCR) systems. However, current text-based HIPs are not well matched with the rapid development of computer vision techniques, since they are either very easily passed or very hard to solve, and this motivates
To ensure that a software/hardware product is of sufficient quality and functionality, it is essential to conduct thorough testing and evaluation of the numerous individual software components that make up the application. Many different approaches exist for testing software, including combinatorial testing and covering arrays. The difficulty of dealing with issues such as the two-way combinatorial explosion raises yet another problem: time. Using a client-server architecture, this research introduces a parallel implementation of the TWGH algorithm. Several experiments were conducted to demonstrate the efficiency of this technique. The findings were used to determine the increase in speed and co
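The TWGH algorithm itself is not detailed in this excerpt, so the sketch below only illustrates the parallelization pattern: a coordinator greedily builds a pairwise (2-way) covering suite while a pool of worker processes scores candidate tests in parallel. All function names and parameters are hypothetical.

```python
import itertools
import random
from functools import partial
from multiprocessing import Pool

def all_pairs(domains):
    """Every (param, value, param, value) combination that must be covered."""
    pairs = set()
    for p1, p2 in itertools.combinations(range(len(domains)), 2):
        for v1 in domains[p1]:
            for v2 in domains[p2]:
                pairs.add((p1, v1, p2, v2))
    return pairs

def pairs_of(test):
    return {(p1, test[p1], p2, test[p2])
            for p1, p2 in itertools.combinations(range(len(test)), 2)}

def coverage_gain(uncovered, test):
    """Number of still-uncovered pairs this candidate test would cover."""
    return len(pairs_of(test) & uncovered)

def greedy_pairwise(domains, candidates_per_round=64, workers=4, seed=0):
    rng = random.Random(seed)
    uncovered = all_pairs(domains)
    suite = []
    with Pool(workers) as pool:
        while uncovered:
            candidates = [tuple(rng.choice(d) for d in domains)
                          for _ in range(candidates_per_round)]
            # Workers score candidates in parallel; the coordinator keeps the best.
            gains = pool.map(partial(coverage_gain, uncovered), candidates)
            best_idx = max(range(len(candidates)), key=gains.__getitem__)
            if gains[best_idx] == 0:
                continue                  # resample; nothing new was covered
            best = candidates[best_idx]
            suite.append(best)
            uncovered -= pairs_of(best)
    return suite

if __name__ == "__main__":
    domains = [[0, 1], [0, 1, 2], ["a", "b"], [True, False]]
    suite = greedy_pairwise(domains)
    print(len(suite), "tests cover all 2-way interactions")
```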