Sequence covering array (SCA) generation has been an active research area in recent years. Unlike sequence-less covering arrays (CAs), the order of events matters in the test case generation process. This paper reviews the state-of-the-art SCA strategies; earlier works reported that finding a minimal test suite size is an NP-hard problem. In addition, most of the existing strategies for SCA generation have a high order of complexity due to the generation of all combinatorial interactions in a one-test-at-a-time fashion. Reducing this complexity by adopting a one-parameter-at-a-time approach for SCA generation is a challenging process, and this reduction also facilitates support for higher strengths of coverage. Motivated by this challenge, this paper proposes a novel SCA strategy called Dynamic Event Order (DEO), in which test case generation is done in a one-parameter-at-a-time fashion. The details of DEO are presented with a step-by-step example to demonstrate the behavior and show the correctness of the proposed strategy. In addition, this paper makes a comparison with existing computational strategies. The practical results demonstrate that the proposed DEO strategy outperforms the existing strategies in terms of minimal test suite size in most cases. Moreover, the significance of DEO increases as the number of sequences and/or the strength of coverage increases. Furthermore, the proposed DEO strategy succeeds in generating SCAs up to t=7. Finally, the DEO strategy succeeds in finding new upper bounds for SCAs. In fact, the proposed strategy can act as a research vehicle for future variant implementations.
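As a concrete illustration of the coverage measure such SCA strategies optimize, the sketch below counts which ordered t-event subsequences a candidate test suite still misses. This is a minimal sketch of the general t-way sequence coverage criterion, not the DEO algorithm itself; the function names and the small example are our own.

```python
from itertools import combinations, permutations

def t_sequences(test, t):
    """All ordered t-event subsequences covered by one test (a permutation of events)."""
    # Any t events in a test are covered in exactly one order: the order they appear in.
    return {tuple(sub) for sub in combinations(test, t)}

def uncovered(tests, events, t):
    """t-sequences over `events` that are not yet covered by the current test suite."""
    required = set(permutations(events, t))
    covered = set()
    for test in tests:
        covered |= t_sequences(test, t)
    return required - covered

# Example: 4 events, strength t = 3
events = ["a", "b", "c", "d"]
suite = [("a", "b", "c", "d"), ("d", "c", "b", "a")]
print(len(uncovered(suite, events, t=3)))  # number of 3-sequences still to cover
```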
Municipal solid waste generation in Babylon Governorate is often affected by changes in lifestyles, population growth, social and cultural habits, and improved economic conditions. This effect makes it difficult to draw up future plans for solid waste management. In this study, municipal solid waste was divided into residential and commercial solid wastes. Residential solid wastes were represented by household wastes, while commercial solid wastes included commercial, institutional, and municipal services wastes. For residential solid wastes, relational stratified random sampling was implemented; that is, the total population was divided into clusters (by socio-income level), and a random sample was taken in e
This paper addresses the fact that most organizations today suffer from wasted time, effort, and cost, and have difficulty achieving the best performance and competing strongly. The researcher distributed 108 questionnaires to a statistically analyzable sample consisting, by intention, of general managers, department heads, and division heads. The questionnaire was formulated according to the Likert scale. Personal interviews and observations were used as additional data-collection tools, and a number of statistical methods were used for data analysis, such as simple regression and the Pearson correlation coefficient. One of the most prominent conclusions is that the company has adequate and c
A two-time-step stochastic multi-variable, multi-site hydrological data forecasting model was developed and verified using a case study. The philosophy of this model is to use the cross-variable correlations, cross-site correlations, and two-step time-lag correlations simultaneously to estimate the parameters of the model, which are then modified using the mutation process of a genetic algorithm optimization model. The objective function to be minimized is the Akaike test value. The case study has four variables and three sites. The variables are the monthly air temperature, humidity, precipitation, and evaporation; the sites are Sulaimania, Chwarta, and Penjwin, which are located in northern Iraq. The model performance was
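The abstract names the Akaike value as the quantity the genetic algorithm minimizes but does not spell out how it is computed. The sketch below assumes the common Gaussian-residual form of the Akaike Information Criterion, AIC = n*ln(RSS/n) + 2k; the function and variable names and the toy data are ours, not the paper's.

```python
import numpy as np

def akaike_criterion(residuals, n_params):
    """AIC for a Gaussian-error model: n*ln(RSS/n) + 2k (common textbook form)."""
    n = len(residuals)
    rss = np.sum(np.square(residuals))
    return n * np.log(rss / n) + 2 * n_params

# Toy usage: score one candidate parameter set produced by a GA mutation step
rng = np.random.default_rng(0)
observed = rng.normal(size=120)                        # stand-in observations
forecast = observed + rng.normal(scale=0.1, size=120)  # stand-in model forecasts
print(akaike_criterion(observed - forecast, n_params=12))
```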
Within the framework of big data, energy issues are highly significant. Despite the significance of energy, theoretical studies focusing primarily on the issue of energy within big data analytics in relation to computational intelligence algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligence algorithms, since this is critical in exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligence algorithms in big data analytics. This work highlights that big data analytics using computational intelligence algorithms generates a very high amo
Violence occurs as a daily human action all over the world; it may cause many kinds of damage to individuals as well as to society: physical, psychological, or both. Many literary authors of different genres have tried their best to portray violence by showing its negative effects, especially playwrights, because they have the chance to show people the dangers of violence through performance on stage and to warn them against such a destructive action. Violence has been a human action since the beginning of human life on this planet, when the first crime happened on earth as Cain killed his brother Abel. In our modern world, people witness daily violent actions as a result of destructive wars that have turned humans into brutal beings.
A joke is something that is said, written, or done to cause amusement or laughter. It could be a short piece or a long narrative joke, but either way it ends in a punchline, where the joke carries a second, conflicting meaning. Sometimes when we read a joke, we understand it directly and fully, but this is not always the case. When a writer writes a joke, he intends to manipulate the reader so that the reader does not get the joke at once. He does that by using puns or other wordplay. As listeners to a joke, we try to get the message depending mostly on the tone of voice, in addition to other factors concerning vocabulary and grammar. But as readers of a joke, we need other factors in order to get
Background: Prolactin is a hormone as well as a cytokine that is synthesized and secreted from the anterior pituitary gland and various extra-pituitary sites, including immune cells, under the control of a superdistal promoter that contains a single nucleotide polymorphism, -1149 G/T. Rheumatoid arthritis has been associated with increased serum prolactin levels. Objectives: To investigate the association between the extra-pituitary -1149 G/T promoter polymorphism and prolactin levels among Iraqi rheumatoid arthritis patients. Methods: We tested 73 patients with rheumatoid arthritis and 40 healthy individuals. The DNA samples were genotyped using the Polymerase Chain Reaction-Restriction Fragment Length Polymorphism (PCR-RFLP) method, and the levels of prolactin
Iris research is focused on developing techniques for identifying and locating relevant biometric features, accurate segmentation, and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which, in turn, reduces the effectiveness of the system as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method is based on a number of steps, starting from converting the grayscale eye image to a bit-plane representation and selecting the most significant bit planes, followed by a parameterization of the iris location, resulting in an accurate segmentation of the iris from the origin
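The abstract does not give implementation details for the bit-plane step. The sketch below shows one common way to decompose an 8-bit grayscale image into bit planes and keep the most significant ones; the function name, array shapes, and the random stand-in image are our own assumptions, not the paper's code.

```python
import numpy as np

def bit_planes(gray_image):
    """Decompose an 8-bit grayscale image into its 8 binary bit planes.

    Returns an array of shape (8, H, W); index 7 is the most significant plane.
    """
    img = np.asarray(gray_image, dtype=np.uint8)
    return np.stack([(img >> b) & 1 for b in range(8)])

# Toy usage: keep only the most significant planes before locating the iris
rng = np.random.default_rng(1)
eye = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for an eye image
planes = bit_planes(eye)
significant = planes[6:8]  # the two most significant bit planes
print(significant.shape)   # (2, 64, 64)
```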
The process of evaluating data (age and gender structure) is one of the important factors that help any country draw up plans and programs for the future. This study discusses the errors in the population data of the 1997 Iraqi population census, which are targeted for correction and revision to serve the purposes of planning. The population data are smoothed using a nonparametric regression estimator (the Nadaraya-Watson estimator). This estimator depends on a bandwidth (h), which can be calculated in two ways using a Bayesian method: the first when the observations' distribution is a lognormal kernel, and the second when it is a normal kernel.
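For reference, the sketch below is a minimal Nadaraya-Watson estimator. It uses a Gaussian kernel and a fixed bandwidth purely for illustration; the paper's Bayesian bandwidth selection with lognormal and normal kernels is not reproduced here, and the function names and toy data are ours.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Nadaraya-Watson estimate at x_eval using a Gaussian kernel with bandwidth h."""
    # Pairwise kernel weights between evaluation points and training points
    weights = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return weights @ y_train / weights.sum(axis=1)

# Toy usage: smooth noisy observations (illustrative data, not census data)
rng = np.random.default_rng(2)
ages = np.sort(rng.uniform(0, 80, size=200))
counts = np.sin(ages / 10) + rng.normal(scale=0.2, size=200)
grid = np.linspace(0, 80, 100)
smoothed = nadaraya_watson(ages, counts, grid, h=3.0)
print(smoothed.shape)  # (100,)
```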