Sequence covering array (SCA) generation has been an active research area in recent years. Unlike sequence-less covering arrays (CAs), the order of events matters during test case generation. This paper reviews the state of the art of SCA strategies; earlier works report that finding a minimal-size test suite is an NP-hard problem. In addition, most existing SCA generation strategies have a high order of complexity because they generate all combinatorial interactions in a one-test-at-a-time fashion. Reducing this complexity by adopting a one-parameter-at-a-time approach to SCA generation is a challenging process; in addition, this reduction facilitates support for higher strengths of coverage. Motivated by this challenge, this paper proposes a novel SCA strategy called Dynamic Event Order (DEO), in which test cases are generated in a one-parameter-at-a-time fashion. The details of DEO are presented with a step-by-step example to demonstrate the behavior and correctness of the proposed strategy. In addition, this paper makes a comparison with existing computational strategies. The practical results demonstrate that the proposed DEO strategy outperforms the existing strategies in terms of minimal test size in most cases. Moreover, the advantage of DEO grows as the number of sequences and/or the strength of coverage increases. Furthermore, the proposed DEO strategy succeeds in generating SCAs up to t=7, and it finds new upper bounds for SCA sizes. In fact, the proposed strategy can act as a research vehicle for future variant implementations.
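For concreteness, the coverage criterion behind SCAs can be checked mechanically: a test suite of event orderings is a t-way SCA if every length-t permutation of distinct events appears as a (not necessarily contiguous) subsequence of at least one test. The sketch below is an illustrative coverage checker under that definition, not the DEO generator itself; all names are ours.

```python
from itertools import permutations

def is_subsequence(seq, test):
    """True if seq occurs as a (not necessarily contiguous) subsequence of test."""
    it = iter(test)
    return all(event in it for event in seq)  # 'in' consumes the iterator

def covers_all_sequences(tests, events, t):
    """True if every length-t permutation of distinct events is covered by some test."""
    return all(
        any(is_subsequence(seq, test) for test in tests)
        for seq in permutations(events, t)
    )

events = ["a", "b", "c", "d"]
suite = [["a", "b", "c", "d"], ["d", "c", "b", "a"]]
print(covers_all_sequences(suite, events, 2))  # True: a test plus its reverse cover all 2-sequences
print(covers_all_sequences(suite, events, 3))  # False: 3-way coverage needs more tests
```

The usage lines illustrate why suite sizes grow with t: a permutation and its reverse suffice for t=2 but not for t=3, which is where search strategies such as DEO come in.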
Variable selection is an essential and necessary task in statistical modeling. Several studies have tried to develop and standardize the variable selection process, but it is difficult to do so. The first question researchers need to ask themselves is which variables are most significant for describing a given dataset's response. In this paper, a new method for variable selection using Gibbs sampling techniques is developed. First, the model is defined and the posterior distributions of all the parameters are derived. The new variable selection method is tested on four simulated datasets, and the new approach is compared with some existing techniques: Ordinary Least Squares (OLS), Least Absolute Shrinkage and Selection Operator (LASSO), …
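As an illustration of the general technique (not the paper's exact model or priors), a Kuo-Mallick style spike-and-slab Gibbs sampler for variable selection can be sketched as follows: each predictor gets an inclusion indicator gamma_j and a coefficient b_j, and the sampler alternates between the conditional posteriors of gamma, b, and the noise variance. All hyperparameters and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_variable_selection(X, y, n_iter=5000, tau2=10.0, pi=0.5, a0=1.0, b0=1.0):
    """Kuo-Mallick style Gibbs sampler (illustrative sketch).

    Assumed model: y = X (gamma * b) + eps, eps ~ N(0, sigma2 I),
    b_j ~ N(0, tau2), gamma_j ~ Bernoulli(pi), sigma2 ~ InvGamma(a0, b0).
    Returns the posterior inclusion probability of each predictor.
    """
    n, p = X.shape
    b, gamma, sigma2 = np.zeros(p), np.ones(p, dtype=int), 1.0
    incl = np.zeros(p)
    for it in range(n_iter):
        for j in range(p):
            beta = gamma * b
            # Residual with predictor j's current contribution removed.
            r = y - X @ beta + X[:, j] * beta[j]
            # --- sample gamma_j from its conditional Bernoulli ---
            ll1 = -0.5 * np.sum((r - X[:, j] * b[j]) ** 2) / sigma2  # gamma_j = 1
            ll0 = -0.5 * np.sum(r ** 2) / sigma2                      # gamma_j = 0
            m = max(ll1, ll0)                                         # numerical stability
            w1, w0 = pi * np.exp(ll1 - m), (1 - pi) * np.exp(ll0 - m)
            gamma[j] = rng.random() < w1 / (w1 + w0)
            # --- sample b_j: conjugate normal if included, from the prior otherwise ---
            if gamma[j] == 1:
                prec = X[:, j] @ X[:, j] / sigma2 + 1.0 / tau2
                b[j] = rng.normal((X[:, j] @ r / sigma2) / prec, np.sqrt(1.0 / prec))
            else:
                b[j] = rng.normal(0.0, np.sqrt(tau2))
        # --- sample sigma2 from its inverse-gamma conditional ---
        resid = y - X @ (gamma * b)
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
        if it >= n_iter // 2:            # discard the first half as burn-in
            incl += gamma
    return incl / (n_iter - n_iter // 2)
```

Predictors whose posterior inclusion probability is high (e.g. above 0.5) would be selected; shrinkage methods such as LASSO instead select by driving coefficients exactly to zero.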
The research focuses on the key issue of choosing the best ways to test financial stability in the banking sector, considering that financial stability cannot be achieved unless the financial sector in general, and the banking sector in particular, can perform their key role in meeting economic and social development requirements under the laws and regulations that govern the banking sector, as this is the only way to increase their ability to deal with any risks or negative effects experienced by banks and other financial institutions. The research goal is to evaluate the stability of the banking system in Iraq through the use of a set of econometric …
This study investigates the impact of spatial resolution enhancement on supervised classification accuracy using Landsat 9 satellite imagery, achieved through pan-sharpening techniques that leverage Sentinel-2 data. Various methods were employed to synthesize a panchromatic (PAN) band from Sentinel-2 data, including dimension-reduction algorithms and weighted averages based on correlation coefficients and standard deviation. Three pan-sharpening algorithms (Gram-Schmidt, Principal Components Analysis, Nearest Neighbour Diffusion) were applied, and their efficacy was assessed using seven fidelity criteria. Classification tasks were performed using Support Vector Machine and Maximum Likelihood algorithms. Results reveal that specific …
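A minimal sketch of one way a synthetic PAN band can be formed as a weighted average of multispectral bands, here with weights derived from each band's standard deviation (correlation-based weights would be computed analogously). The function name and weighting scheme are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np

def synthesize_pan(bands, weights=None):
    """Weighted-average PAN synthesis from co-registered multispectral bands.

    `bands` is a list of 2-D arrays on a common grid. If no weights are given,
    each band's standard deviation is used, so higher-contrast bands contribute
    more to the synthetic panchromatic image.
    """
    stack = np.stack(bands).astype(np.float64)   # shape (B, H, W)
    if weights is None:
        weights = stack.std(axis=(1, 2))         # per-band standard deviation
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()            # normalize weights to sum to 1
    return np.tensordot(weights, stack, axes=1)  # weighted average, shape (H, W)
```

The resulting synthetic PAN band would then be fed to a pan-sharpening algorithm such as Gram-Schmidt in place of a native panchromatic channel.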
The research aimed at measuring the compatibility of Big Data with the organizational ambidexterity dimensions of the Asiacell mobile telecommunications company in Iraq, in order to determine the possibility of adopting the Big Data triple as an approach to achieving organizational ambidexterity.
The study adopted the descriptive analytical approach to collect and analyze the data gathered by a questionnaire tool developed on the Likert scale after a comprehensive review of the literature related to the two basic study dimensions. The data were subjected to many statistical treatments in accordance with res…
The backpropagation (BP) algorithm is the most widely used supervised training algorithm for multi-layered feedforward neural networks. However, BP takes a long time to converge and is quite sensitive to the initial weights of a network. In this paper, a modified cuckoo search algorithm is used to obtain the set of initial weights supplied to the BP algorithm, and the BP learning rate is adjusted to improve error convergence. The performance of the proposed hybrid algorithm is compared with standard BP on simple data sets. The simulation results show that the proposed algorithm improves BP training in terms of quick convergence of the solution, as indicated by the slope of the error graph.
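A compact sketch of the hybrid idea (illustrative, not the paper's exact modification): cuckoo search with Lévy flights explores flattened weight vectors of a one-hidden-layer sigmoid network, and the best nest is returned as the initial weights handed to BP. The network shape, hyperparameters, and helper names are assumptions.

```python
import numpy as np
from math import gamma as gfn, sin, pi as mpi

rng = np.random.default_rng(1)

def levy_step(size, beta=1.5):
    """Levy-distributed steps via Mantegna's algorithm, as used in cuckoo search."""
    sigma = (gfn(1 + beta) * sin(mpi * beta / 2) /
             (gfn((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def mse(w, X, y, n_hidden):
    """MSE of a 1-hidden-layer sigmoid network with weights unpacked from w; y is (n, 1)."""
    n_in = X.shape[1]
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = w[n_in * n_hidden:].reshape(n_hidden, 1)
    h = 1 / (1 + np.exp(-X @ W1))
    return np.mean((h @ W2 - y) ** 2)

def cuckoo_init(X, y, n_hidden, n_nests=15, n_iter=100, pa=0.25):
    """Cuckoo search over flattened weight vectors; returns the best nest found.

    BP training would then start from the returned weight vector instead of a
    purely random initialization.
    """
    dim = X.shape[1] * n_hidden + n_hidden
    nests = rng.normal(0, 1, (n_nests, dim))
    fit = np.array([mse(w, X, y, n_hidden) for w in nests])
    for _ in range(n_iter):
        # Levy-flight move biased toward the current best nest.
        new = nests + 0.01 * levy_step((n_nests, dim)) * (nests - nests[fit.argmin()])
        new_fit = np.array([mse(w, X, y, n_hidden) for w in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # Abandon a fraction pa of nests and rebuild them at random.
        abandon = rng.random(n_nests) < pa
        nests[abandon] = rng.normal(0, 1, (int(abandon.sum()), dim))
        fit[abandon] = [mse(w, X, y, n_hidden) for w in nests[abandon]]
    return nests[fit.argmin()]
```

The design choice mirrors the paper's motivation: the global, gradient-free search reduces sensitivity to the starting point, after which BP's local gradient steps refine the solution quickly.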
In this research, several estimators of the hazard function are introduced using a nonparametric method, namely the kernel method, for censored data with varying bandwidths and boundary kernels. Two types of bandwidth are used: local bandwidth and global bandwidth. Moreover, four types of boundary kernel are used, namely Rectangle, Epanechnikov, Biquadratic, and Triquadratic, and the proposed function was employed with all kernel functions. Two different simulation techniques are also used across two experiments to compare these estimators. In most cases, the results show that the local bandwidth is the best for all types of boundary kernel functions.
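A minimal sketch of a kernel hazard estimator for right-censored data with a global bandwidth: the Nelson-Aalen increments d_i/Y(t_i) are smoothed with the Epanechnikov kernel, one of the boundary kernels named above. The local-bandwidth and boundary-corrected variants studied in the paper would replace the fixed h and adjust the kernel near the support edges; function names here are illustrative, and ties in the event times are ignored for simplicity.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on [-1, 1], zero elsewhere."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def kernel_hazard(t_grid, times, events, h):
    """Kernel-smoothed hazard estimate for right-censored data (illustrative).

    `times` are observed times (numpy array), `events` is 1 for an observed
    failure and 0 for a censored observation, and h is a global bandwidth.
    """
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)        # Y(t_i): subjects still at risk at t_i
    increments = events / at_risk     # Nelson-Aalen jump sizes d_i / Y(t_i)
    u = (t_grid[:, None] - times[None, :]) / h
    # hazard(t) = (1/h) * sum_i K((t - t_i)/h) * d_i / Y(t_i)
    return (epanechnikov(u) * increments).sum(axis=1) / h
```

Because the fixed kernel spills past the left edge of the support, estimates near t = 0 are biased, which is precisely what boundary kernels and local bandwidths are meant to correct.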