Krawtchouk polynomials (KPs) and their moments are promising tools for information theory, coding theory, and signal processing, owing to the strong feature-extraction and classification capabilities of KPs. The main challenge in existing KP recurrence algorithms is the numerical error that accumulates when computing the coefficients of large polynomial sizes, particularly when the KP parameter p deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing KP coefficients at high orders. In particular, the paper develops a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation is introduced and used in the proposed algorithm; it is derived from the existing n-direction and x-direction recurrence relations. The diagonal and existing recurrence relations are then exploited to compute the KP coefficients: the KP plane is divided into four partitions, the coefficients are computed for one partition, and the symmetry relations are used to obtain the coefficients in the other partitions. The performance of the proposed recurrence algorithm was evaluated through comparisons with state-of-the-art works in terms of reconstruction error, polynomial size, and computation cost. The results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and the polynomial size N. They also show that the improvement ratio of the computed coefficients ranges from 18.64% to 81.55% compared with the existing algorithms.
Besides this, the proposed algorithm can generate polynomials of an order ∼8.5 times larger than those generated using state-of-the-art algorithms.
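For orientation, the classical three-term recurrence in n (the "n direction" mentioned above, not the paper's proposed diagonal recurrence, which is not reproduced here) can be sketched as follows; N, p, and the order below are illustrative values only:

```python
import numpy as np

def krawtchouk(n_max, N, p):
    """Classical (unnormalized) Krawtchouk polynomials K_n(x; p, N) for
    n = 0..n_max and x = 0..N, via the three-term recurrence in n:
    -x K_n = p(N-n) K_{n+1} - [p(N-n) + n(1-p)] K_n + n(1-p) K_{n-1}."""
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((n_max + 1, N + 1))
    K[0] = 1.0                              # K_0(x) = 1
    if n_max >= 1:
        K[1] = 1.0 - x / (p * N)            # K_1(x) = 1 - x/(pN)
    for n in range(1, n_max):
        a = p * (N - n)                     # multiplies K_{n+1}
        b = p * (N - n) + n * (1.0 - p)     # middle coefficient
        K[n + 1] = ((b - x) * K[n] - n * (1.0 - p) * K[n - 1]) / a
    return K
```

The division by p(N - n) in each step is the source of the numerical instability the abstract describes: for p near 0 or 1 the divisor shrinks and rounding errors are amplified as the order grows.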
The proliferation of emerging radio-frequency applications has crowded the frequency spectrum, and hence detecting a specific application's frequency without distortion is a difficult task. The goal is therefore a method that suppresses the strongest interferer in the spectrum and thereby eliminates the distortion.
This paper presents the application of the proposed tunable 6th-order notch filter to an Ultra-Wideband (UWB) Complementary Metal-Oxide-Semiconductor (CMOS) Low Noise Amplifier (LNA).
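The interferer-suppression idea can be illustrated with a purely behavioral digital model (not the CMOS circuit itself): a 6th-order notch built by cascading three second-order IIR notch biquads. The sampling rate, notch frequency, and Q below are hypothetical stand-ins:

```python
import numpy as np
from scipy import signal

def notch6_sos(f0, Q, fs):
    """Behavioral 6th-order notch: cascade of three identical
    second-order IIR notch biquads centered at f0 (Hz)."""
    b, a = signal.iirnotch(f0, Q, fs=fs)
    section = np.hstack([b, a])        # one biquad as an SOS row
    return np.vstack([section] * 3)    # three biquads -> 6th order

fs, f0 = 20e9, 2.4e9                   # hypothetical sample rate / interferer
sos = notch6_sos(f0, Q=30, fs=fs)
w, h = signal.sosfreqz(sos, worN=4096, fs=fs)
```

Cascading identical biquads deepens the notch (the attenuations in dB add) while leaving the passband essentially untouched, which mirrors the goal of removing only the strongest interferer.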
This study aimed to identify business risks using a client-strategy-analysis approach in order to improve the efficiency and effectiveness of the audit process. To establish a cognitive framework for this objective, a study of business risks and their impact on audit efficiency and effectiveness was performed, adopting the descriptive analytical method. A survey questionnaire was developed and distributed to the targeted group of audit firms holding a professional license from the Auditors Association in the Gaza Strip (63 offices). A hundred questionnaires were distributed to the study sample, of which a total of 84 were answered.
In this study, a genetic algorithm was used to predict the reaction kinetics of the Iraqi heavy-naphtha catalytic reforming process located in the Al-Doura refinery in Baghdad. A one-dimensional steady-state model was derived to describe a commercial catalytic reforming unit consisting of four catalytic reforming reactors in series.
The experimental data (reformate composition and outlet temperature) collected for each of the four reactors at different operating conditions were used to estimate the parameters of the proposed kinetic model. The kinetic model involves 24 components, with 1 to 11 carbon atoms for paraffins and 6 to 11 carbon atoms for naphthenes and aromatics, and 71 reactions. The pre-exponential Arrhenius constants and a …
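The fitting step described above can be sketched in miniature. Here a population-based optimizer (scipy's differential evolution, standing in for the paper's genetic algorithm) recovers Arrhenius parameters of a single hypothetical rate constant from synthetic data; all temperatures and "true" values are invented for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

R = 8.314                                   # gas constant, J/(mol K)
T = np.array([700.0, 750.0, 800.0, 850.0])  # hypothetical reactor temps (K)
A_true, Ea_true = 1e6, 9e4                  # hypothetical "true" kinetics
k_obs = A_true * np.exp(-Ea_true / (R * T)) # synthetic "measured" rates

def sse(theta):
    """Sum of squared errors between modeled and observed rate constants;
    log(A) is searched instead of A to keep the scale manageable."""
    logA, Ea = theta
    k = np.exp(logA) * np.exp(-Ea / (R * T))
    return np.sum((k - k_obs) ** 2)

res = differential_evolution(sse, bounds=[(5.0, 25.0), (5e4, 1.5e5)],
                             seed=0, tol=1e-12)
logA_fit, Ea_fit = res.x
k_fit = np.exp(logA_fit) * np.exp(-Ea_fit / (R * T))
```

The real problem in the abstract is far larger (24 components, 71 reactions, four reactors), but the structure is the same: a population-based search over kinetic parameters minimizing the mismatch with plant measurements.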
In this research, the focus was on estimating the parameters of the min-Gumbel distribution using the maximum likelihood method and the Bayes method. A genetic algorithm was employed to estimate the parameters for both the maximum likelihood and Bayes methods. The comparison was made using the mean squared error (MSE), where the best estimator is the one with the smallest MSE. The best estimator was found to be (BLG_GE).
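A Monte-Carlo version of the MSE comparison can be sketched for the maximum likelihood estimator alone (the Bayes and GA variants from the abstract are not reproduced); scipy's `gumbel_l` is the minimum-Gumbel distribution, and the true parameters and sample sizes below are arbitrary:

```python
import numpy as np
from scipy.stats import gumbel_l  # left-skewed Gumbel = minimum Gumbel

rng = np.random.default_rng(1)
loc_true, scale_true = 5.0, 2.0
reps, n = 100, 500

# For each replication: simulate a sample, fit (loc, scale) by ML.
est = np.array([gumbel_l.fit(gumbel_l.rvs(loc_true, scale_true,
                                          size=n, random_state=rng))
                for _ in range(reps)])

# MSE of each parameter estimate across replications.
mse_loc = np.mean((est[:, 0] - loc_true) ** 2)
mse_scale = np.mean((est[:, 1] - scale_true) ** 2)
```

Ranking estimators by these MSE values is exactly the comparison criterion the abstract describes.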
Solving real-world combinatorial problems is a major hurdle for today's researchers. Despite the limitations of any applied approach, optimization techniques can nevertheless be used to find, design, and solve a genuinely optimal solution to a particular problem. A surge of interest in population-based optimization methodologies has spawned a plethora of new and improved approaches to a wide range of engineering problems. Optimizing test suites is a combinatorial testing challenge that has been demonstrated to be an extremely difficult combinatorial optimization problem. The authors have proposed an almost infallible method for selecting combinatorial test cases. It uses a hybrid whale–gray wolf …
In the lifetime process of some systems, the data often cannot belong to a single population; in fact, they may represent several subpopulations. In such a case, a single known distribution cannot be used to model the data. Instead, a mixture of distributions is used to model the data and classify them into several subgroups. A mixture of Rayleigh distributions is well suited to lifetime processes. This paper aims to infer the model parameters with the expectation-maximization (EM) algorithm through the maximum likelihood function. The technique is applied to simulated data under several scenarios, and the accuracy of estimation is examined by the average mean square error (AMSE) and the average classification success rate (ACSR).
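The EM scheme for a Rayleigh mixture admits a compact sketch: the E-step computes component responsibilities, and the M-step uses the weighted Rayleigh MLE σ²ⱼ = Σᵢ rᵢⱼ xᵢ² / (2 Σᵢ rᵢⱼ). The initialization strategy and component counts below are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def rayleigh_pdf(x, s2):
    """Rayleigh density with scale parameter sigma^2 = s2."""
    return (x / s2) * np.exp(-x ** 2 / (2.0 * s2))

def em_rayleigh_mixture(x, k=2, iters=200, seed=0):
    """EM for a k-component Rayleigh mixture on positive data x."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                       # mixing weights
    s2 = np.sort(rng.uniform(0.2 * x.var(), 2.0 * x.var(), size=k))
    for _ in range(iters):
        # E-step: responsibilities r[j, i] of component j for point i.
        dens = np.stack([wj * rayleigh_pdf(x, sj) for wj, sj in zip(w, s2)])
        r = dens / dens.sum(axis=0)
        # M-step: update weights and scales from weighted sufficient stats.
        nj = r.sum(axis=1)
        w = nj / len(x)
        s2 = (r * x ** 2).sum(axis=1) / (2.0 * nj)
    return w, s2
```

Classification into subgroups, as the abstract describes, then follows by assigning each observation to the component with the largest responsibility.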
In combinatorial testing, the construction of covering arrays is a key challenge because of the multiple aspects that influence it. A wide range of combinatorial problems can be solved using metaheuristic and greedy techniques. Combining a greedy technique with a metaheuristic search technique such as hill climbing (HC) can produce feasible results for combinatorial tests. The metaheuristic stage deals with the tuples that may be left uncovered by the greedy stage, so the final result is assured to be near-optimal. As a result, using both greedy and HC algorithms in a single test-generation system is a good candidate if constructed correctly.
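The greedy stage of such a pipeline can be sketched for pairwise (strength-2) coverage in an AETG-like style; the hill-climbing repair stage is omitted, and the candidate-pool size and parameter counts are illustrative:

```python
import itertools
import random

def pairs(test):
    """All (param_i, val_i, param_j, val_j) interactions one test covers."""
    return {(i, test[i], j, test[j])
            for i, j in itertools.combinations(range(len(test)), 2)}

def greedy_pairwise(v, k, candidates=50, seed=0):
    """AETG-style greedy covering-array construction for k parameters with
    v values each: repeatedly pick, from a pool of random candidate tests,
    the one covering the most still-uncovered pairs."""
    rng = random.Random(seed)
    uncovered = {(i, a, j, b)
                 for i, j in itertools.combinations(range(k), 2)
                 for a in range(v) for b in range(v)}
    suite = []
    while uncovered:
        pool = [tuple(rng.randrange(v) for _ in range(k))
                for _ in range(candidates)]
        best = max(pool, key=lambda t: len(pairs(t) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    return suite
```

A hill-climbing pass would then mutate individual cells of the suite, accepting changes that keep full coverage while shrinking the suite, which is the role the abstract assigns to the metaheuristic stage.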
... Show MoreRA Ali, LK Abood, Int J Sci Res, 2017 - Cited by 2