Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is the numerical error that arises when computing the coefficients for large polynomial sizes, particularly when the KP parameter p deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of high-order KPs. In particular, the paper develops a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation is introduced and used in the proposed algorithm; it is derived from the existing n-direction and x-direction recurrence relations. The diagonal and existing recurrence relations are then exploited together to compute the KP coefficients. First, the KP plane is divided into four partitions and the coefficients are computed for one of them; the coefficients in the other partitions are then obtained from the symmetry relations. The performance of the proposed recurrence algorithm was evaluated through comparisons with state-of-the-art works in terms of reconstruction error, polynomial size, and computation cost. The obtained results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and the polynomial size N. The results also show that the improvement ratio in the number of computed coefficients ranges from 18.64% to 81.55% in comparison to the existing algorithms.
Besides this, the proposed algorithm can generate polynomials of an order ∼8.5 times larger than those generated using state-of-the-art algorithms.
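The partition-and-mirror strategy can be illustrated with a classical Krawtchouk symmetry, the self-duality K_n(x; p, N) = K_x(n; p, N), under which one triangle of the coefficient matrix determines the other. The sketch below is a simplified illustration, not the paper's algorithm: `krawtchouk_matrix` is a name chosen here, and it uses only the standard three-term recurrence in n for the hypergeometric normalization, then checks the symmetry numerically.

```python
def krawtchouk_matrix(N, p):
    """K[n][x] = K_n(x; p, N) in the hypergeometric normalization,
    via the standard three-term recurrence in n:
      p(N-n) K_{n+1}(x) = (p(N-n) + n(1-p) - x) K_n(x) - n(1-p) K_{n-1}(x).
    Illustrative sketch only, not the proposed recurrence scheme."""
    K = [[1.0] * (N + 1)]                                 # K_0(x) = 1
    K.append([1.0 - x / (p * N) for x in range(N + 1)])   # K_1(x)
    for n in range(1, N):
        K.append([((p * (N - n) + n * (1 - p) - x) * K[n][x]
                   - n * (1 - p) * K[n - 1][x]) / (p * (N - n))
                  for x in range(N + 1)])
    return K

K = krawtchouk_matrix(8, 0.3)
# Self-duality K_n(x) = K_x(n): one triangle of the matrix fixes the other.
assert all(abs(K[n][x] - K[x][n]) < 1e-6 for n in range(9) for x in range(9))
```

For p = 0.5 a further reflection symmetry, K_n(N-x) = (-1)^n K_n(x), halves the work again, which is the kind of saving that partitioning the KP plane exploits.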
The goal of this experimental study is to determine the effects of different parameters (flow rate, cuttings density, cuttings size, and hole inclination angle) on hole-cleaning efficiency. Fresh water was used as the drilling fluid in this experiment. The experiments were conducted using a flow loop approximately 14 m (46 ft) long with a transparent glass test section 3 m (9.84 ft) long and a 4 in (101.6 mm) ID; the inner metal drill pipe, with a 2 in (50.8 mm) OD, was set at a positive eccentricity of 0.5. The results obtained from this study show that hole-cleaning efficiency becomes better at a high flow rate (21 m³/hr) and increases as the hole inclination angle increases from 60 to 90 degrees due to dominated
The plethora of emerging radio-frequency applications crowds the frequency spectrum, and hence detecting a specific application's frequency without distortion is a difficult task.
The goal is to devise a method that mitigates the power of the strongest interferer in the frequency spectrum in order to eliminate the distortion.
This paper presents the application of the proposed tunable 6th-order notch filter to an Ultra-Wideband (UWB) Complementary Metal-Oxide-Semiconductor (CMOS) Low Noise Amplifier (LNA).
This study aimed to identify business risks using the client-strategy-analysis approach in order to improve the efficiency and effectiveness of the audit process. A study of business risks and their impact on the efficiency and effectiveness of the audit process was performed to establish a cognitive framework for the main objective of this study, for which the descriptive analytical method was adopted. A survey questionnaire was developed and distributed to the targeted group of audit firms that hold a professional license from the Auditors Association in the Gaza Strip (63 offices). One hundred questionnaires were distributed to the study sample, of which a total of 84 were answered and
Today's academics face a major hurdle in solving combinatorial problems in the real world. It is nevertheless possible to use optimization techniques to find, design, and solve a genuinely optimal solution to a particular problem, despite the limitations of the applied approach. A surge of interest in population-based optimization methodologies has spawned a plethora of new and improved approaches to a wide range of engineering problems. Optimizing test suites is a combinatorial testing challenge that has been demonstrated to be an extremely difficult combinatorial optimization problem and is the focus of this research. The authors have proposed an almost infallible method for selecting combinatorial test cases. It uses a hybrid whale–gray wolf
The artificial fish swarm algorithm (AFSA) is one of the key swarm intelligence algorithms. In this paper, the authors enhance AFSA with diversity operators (AFSA-DO); the diversity operators produce more diverse solutions, allowing AFSA to reach better results. AFSA-DO has been used to solve flexible job shop scheduling problems (FJSSP). The FJSSP is a significant problem in the domain of optimization and operations research, and several research papers have dealt with methods of solving it, including swarm intelligence approaches. In this paper, a set of FJSSP target samples is tested with the improved algorithm to confirm its effectiveness and evaluate its ex
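The abstract does not specify how the diversity operators work, so the following is only a hypothetical sketch of one common form: detecting duplicate individuals in the population and re-randomizing a fraction of their machine-assignment genes. The function name `diversity_operator` and the perturbation rule are assumptions, not the authors' design.

```python
import random

def diversity_operator(population, n_machines, rate=0.2, seed=None):
    """Hypothetical diversity operator: when an individual duplicates one
    already seen, re-randomize each of its machine assignments with
    probability `rate`, pushing the swarm toward more diverse solutions."""
    rng = random.Random(seed)
    seen = set()
    out = []
    for ind in population:
        if tuple(ind) in seen:  # duplicate: perturb it
            ind = [rng.randrange(n_machines) if rng.random() < rate else g
                   for g in ind]
        seen.add(tuple(ind))
        out.append(ind)
    return out

# Three individuals, each a machine assignment per operation (3 machines).
pop = [[0, 1, 2, 0], [0, 1, 2, 0], [1, 1, 0, 2]]
new_pop = diversity_operator(pop, n_machines=3, seed=1)
```

Only the duplicated second individual is a candidate for perturbation; unique individuals pass through unchanged.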
A genetic algorithm model coupled with an artificial neural network model was developed to find the optimal values of the upstream and downstream cutoff lengths, the length of the floor, and the length of the downstream protection required for a hydraulic structure. These were obtained for a given maximum head difference, depth of the impervious layer, and degree of anisotropy. The objective function to be minimized was the cost function with relative cost coefficients for the different dimensions obtained. The constraints used were those that satisfy a factor of safety of 2 against uplift-pressure failure and 3 against piping failure.
Different cases, reaching 1200 in number, were modeled and analyzed using GeoStudio, with different values of the input variables. The soil wa
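A standard way to combine a cost objective with safety-factor constraints in a genetic algorithm is a penalty function. The sketch below is only an illustration of that generic pattern under assumed names: `penalized_cost`, the coefficient list, and the penalty weight are all chosen here, and `fs_uplift`/`fs_piping` stand in for safety factors that would come from the seepage analysis.

```python
def penalized_cost(dims, cost_coeffs, fs_uplift, fs_piping):
    """Penalty-based GA fitness sketch: linear cost with relative cost
    coefficients for each dimension, plus a large penalty whenever the
    factor of safety drops below 2 (uplift) or 3 (piping)."""
    cost = sum(c * d for c, d in zip(cost_coeffs, dims))
    penalty = 0.0
    if fs_uplift < 2.0:
        penalty += 1e6 * (2.0 - fs_uplift)   # uplift-pressure violation
    if fs_piping < 3.0:
        penalty += 1e6 * (3.0 - fs_piping)   # piping violation
    return cost + penalty

feasible   = penalized_cost([1.0, 1.0], [2.0, 3.0], fs_uplift=2.5, fs_piping=3.5)
infeasible = penalized_cost([1.0, 1.0], [2.0, 3.0], fs_uplift=1.5, fs_piping=3.5)
```

A feasible design is scored by cost alone, while a design violating either safety factor is effectively removed from contention by the penalty.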
This study uses nonparametric methods for robust estimation of location and scatter, depending on the minimum covariance determinant (MCD) of a multivariate regression model. Because of the presence of outlier values, the increase in sample size, and the presence of more than one response in the multivariate regression model, it is difficult to find the median location.
A genetic algorithm (Fast-MCD Nested Extension) was used and compared with a multilayer back-propagation neural network in terms of the accuracy of the results and the speed of finding the median location, while the best sample was determined by relying on the smallest Mahalanobis distance has the stu
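The Mahalanobis-distance criterion mentioned above can be sketched directly. This is a generic 2-D illustration, not the study's implementation: `mahalanobis_sq` and the sample numbers are chosen here, and the scatter matrix stands in for a robust (e.g. MCD-based) estimate.

```python
def mahalanobis_sq(point, location, scatter):
    """Squared Mahalanobis distance of a 2-D point from a location vector
    and a 2x2 scatter matrix, using the closed-form 2x2 inverse."""
    dx = point[0] - location[0]
    dy = point[1] - location[1]
    (a, b), (c, d) = scatter
    det = a * d - b * c
    # diff' * inverse(scatter) * diff, expanded for the 2x2 case
    return (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det

center = (0.0, 0.0)
scatter = ((11.5, 11.25), (11.25, 11.5))
points = [(-2.0, -2.0), (-1.0, -2.0), (6.0, 6.0)]
d2 = [mahalanobis_sq(p, center, scatter) for p in points]
# The best sample is the one with the smallest distance; large distances
# flag candidate outliers.
best = min(range(len(points)), key=lambda i: d2[i])
```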
Traffic management at road intersections is a complex requirement that has been an important topic of research and discussion. Solutions have primarily focused on using vehicular ad hoc networks (VANETs). Key issues in VANETs are high mobility, road-setup restrictions, frequent topology variations, failed network links, and timely communication of data, which make routing packets to a particular destination problematic. To address these issues, a new dependable routing algorithm is proposed, which utilizes a wireless communication system between vehicles in urban vehicular networks. This routing is position-based and is known as the maximum distance on-demand routing algorithm (MDORA). It aims to find an optimal route on a hop-by-hop basis
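The maximum-distance idea behind position-based routing of this kind can be sketched as a greedy next-hop choice. This is a generic illustration in the spirit of MDORA, not the paper's exact algorithm: `next_hop`, the coordinates, and the transmission range are all assumptions made here.

```python
import math

def next_hop(current, destination, neighbors, tx_range):
    """Greedy position-based selection: among neighbors inside the
    transmission range that make forward progress (are closer to the
    destination than the current node), pick the farthest one from the
    current node, maximizing the distance covered per hop."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    candidates = [n for n in neighbors
                  if dist(current, n) <= tx_range
                  and dist(n, destination) < dist(current, destination)]
    if not candidates:
        return None  # no reachable neighbor makes progress
    return max(candidates, key=lambda n: dist(current, n))

# Vehicle at the origin routing toward (100, 0) with a 70 m range:
hop = next_hop((0, 0), (100, 0), [(30, 5), (60, 10), (90, 0)], tx_range=70)
```

Here (90, 0) is out of range, so the farthest in-range neighbor with forward progress, (60, 10), is selected.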
In this research, the focus was on estimating the parameters of the min-Gumbel distribution using the maximum likelihood method and the Bayes method. The genetic algorithm method was employed in estimating the parameters for both the maximum likelihood method and the Bayes method. The comparison was made using the mean squared error (MSE), where the best estimator is the one that has the least mean squared error. It was noted that the best estimator was (BLG_GE).
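The MSE comparison criterion described above can be illustrated generically. The two "estimators" below are purely hypothetical stand-ins (not the study's ML, Bayes, or BLG_GE estimators): they are simulated as unbiased with different variances, so the lower-variance one should win on MSE.

```python
import random

def mse(estimates, true_value):
    """Mean squared error of repeated-simulation estimates against the
    known true parameter value."""
    return sum((e - true_value) ** 2 for e in estimates) / len(estimates)

# Illustrative only: two hypothetical estimators of a parameter with true
# value 2.0, compared over 500 simulated replications each.
rng = random.Random(0)
est_a = [2.0 + rng.gauss(0, 0.10) for _ in range(500)]
est_b = [2.0 + rng.gauss(0, 0.25) for _ in range(500)]
best = "A" if mse(est_a, 2.0) < mse(est_b, 2.0) else "B"
```

With equal bias, the estimator with the smaller variance attains the smaller MSE, which is exactly the selection rule the study applies.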