Krawtchouk polynomials (KPs) and their moments are promising tools for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is the numerical error that arises when computing the coefficients for large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of high-order KPs. In particular, the paper develops a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation is introduced and used in the proposed algorithm; it is derived from the existing n-direction and x-direction recurrence relations. The diagonal and existing recurrence relations are then exploited together to compute the KP coefficients: the KP plane is divided into four partitions, the coefficients are computed for one partition, and the symmetry relations are exploited to obtain the coefficients in the remaining partitions. The performance of the proposed recurrence algorithm was evaluated through comparisons with state-of-the-art works in terms of reconstruction error, polynomial size, and computation cost. The results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter p and the polynomial size N. They also show that the improvement ratio in the number of computed coefficients ranges from 18.64% to 81.55% compared with the existing algorithms.
Moreover, the proposed algorithm can generate polynomials of an order ∼8.5 times larger than those generated using state-of-the-art algorithms.
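As an illustrative sketch (not the paper's proposed algorithm), the classical three-term recurrence in n for the Krawtchouk polynomials K_n(x; p, N) is p(N − n)·K_{n+1}(x) = [p(N − n) + n(1 − p) − x]·K_n(x) − n(1 − p)·K_{n−1}(x); it is exactly this kind of recurrence that becomes numerically unstable for large N and for p far from 0.5, which motivates the diagonal recurrence above. Function and variable names below are our own:

```python
def krawtchouk(n_max, x, p, N):
    """Return [K_0(x), ..., K_{n_max}(x)] via the n-direction recurrence.

    Uses K_0(x) = 1 and K_1(x) = 1 - x/(pN) as starting values.
    """
    K = [1.0, 1.0 - x / (p * N)]
    for n in range(1, n_max):
        a = p * (N - n) + n * (1 - p) - x       # middle coefficient
        K.append((a * K[n] - n * (1 - p) * K[n - 1]) / (p * (N - n)))
    return K[: n_max + 1]
```

A quick sanity check is the identity K_n(0; p, N) = 1 for every order n, which the recurrence preserves exactly at x = 0.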
To obtain a mixed model with high significance and accurate predictions, it is necessary to find a method that selects the most important variables to include in the model, especially when the data under study suffer from multicollinearity as well as high dimensionality. This research compares methods for choosing the explanatory variables and estimating the parameters of the regression model, namely Bayesian ridge regression (unbiased) and the adaptive Lasso regression model, using simulation. The MSE was used to compare the methods.
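A minimal coordinate-descent Lasso sketch (a hypothetical illustration, not the paper's estimator) shows the mechanism behind Lasso-type variable selection: the soft-thresholding update drives the coefficients of irrelevant predictors exactly to zero, even when two predictors are nearly collinear. The simulated data and all names here are our own:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the core of the Lasso update."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=500):
    """Plain coordinate descent for the Lasso (illustrative, unscaled)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            beta[j] = soft(X[:, j] @ r, lam) / (X[:, j] @ X[:, j])
    return beta

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.01 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)               # irrelevant predictor
X = np.column_stack([x1, x2, x3])
y = 3 * x1 + 0.1 * rng.normal(size=200)
beta = lasso_cd(X, y, lam=20.0)         # beta[2] is shrunk exactly to 0
```

The adaptive Lasso refines this by reweighting the penalty per coefficient, which improves selection consistency under multicollinearity.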
In the presence of deep-submicron noise, providing reliable and energy-efficient network-on-chip operation is becoming a challenging objective. In this study, the authors propose a hybrid automatic repeat request (HARQ)-based coding scheme that simultaneously reduces the crosstalk-induced bus delay and provides multi-bit error protection while achieving high energy savings. This is achieved by calculating two-dimensional parities and duplicating all the bits, which provides single-error correction and detection of up to six errors. The error correction reduces the performance degradation caused by retransmissions; combined with voltage-swing reduction, enabled by the scheme's high error-detection capability, this achieves high energy savings.
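A toy sketch of the two-dimensional parity idea (grid size and names are our own, not the paper's exact code construction): data bits are arranged in a grid, one parity bit is computed per row and per column, and a single flipped bit is located at the intersection of the one failing row parity and the one failing column parity:

```python
def parities(grid):
    """Row and column parity bits of a 2D bit grid."""
    rows = [sum(r) % 2 for r in grid]
    cols = [sum(c) % 2 for c in zip(*grid)]
    return rows, cols

def correct_single_error(grid, rows, cols):
    """Correct one flipped bit using stored row/column parities."""
    r2, c2 = parities(grid)
    bad_r = [i for i, (a, b) in enumerate(zip(rows, r2)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols, c2)) if a != b]
    if len(bad_r) == 1 and len(bad_c) == 1:      # single-bit error located
        grid[bad_r[0]][bad_c[0]] ^= 1            # flip it back
    return grid

data = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
rp, cp = parities(data)
corrupted = [row[:] for row in data]
corrupted[1][2] ^= 1                             # inject one bit error
recovered = correct_single_error(corrupted, rp, cp)
```

Multiple simultaneous parity failures signal an uncorrectable pattern, which in a HARQ scheme triggers a retransmission instead of a (possibly wrong) correction.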
The esterification of oleic acid with 2-ethylhexanol in the presence of sulfuric acid as a homogeneous catalyst was investigated in this work to produce 2-ethylhexyl oleate (biodiesel) using semi-batch reactive distillation. The effects of reaction temperature (100 to 130°C), 2-ethylhexanol:oleic acid molar ratio (1:1 to 1:3), and catalyst concentration (0.2 to 1 wt%) were studied. The highest conversion, 97%, was achieved at a reaction temperature of 130°C, a free fatty acid to alcohol molar ratio of 1:2, and a catalyst concentration of 1 wt%. A simulation was developed from the basic principles of reactive distillation using MATLAB to describe the process, and good agreement was achieved.
A simple, precise, rapid, and accurate reversed-phase high-performance liquid chromatographic method has been developed for the determination of guaifenesin in pure form, pharmaceutical formulations, and industrial effluent. Chromatography was carried out on a Supelco L7 reversed-phase column (25 cm × 4.6 mm, 5 µm), using a mixture of methanol–acetonitrile–water (80:10:10 v/v/v) as the mobile phase at a flow rate of 1.0 ml·min⁻¹. Detection was performed at 254 nm at ambient temperature. The retention time of guaifenesin was found to be 2.4 minutes. The calibration curve was linear (r = 0.9998) over the concentration range 0.08 to 0.8 mg/ml. The limit of detection (LOD) and limit of quantification (LOQ) were found to be 6 µg/ml and 18 µg/ml, respectively.
Visible-light photodetectors based on Fe2O3 were successfully fabricated by a chemical precipitation technique; the films were deposited on glass substrates and Si wafers with different Cl dopant concentrations (0, 2, 4, 6)%. X-ray diffraction analysis showed enhanced intensity with a preferred orientation along the (110) plane. The optical measurements indicated a direct allowed transition, with the band gap energy decreasing as the doping ratio varied. The current–voltage characteristics of the Fe2O3/p-Si heterojunction revealed good rectifying behavior in the dark, amplified by the intensity of the incident light, together with good photodetector properties and enhanced responsivity at wavelengths between 400 nm and 470 nm.
The lossy-FDNR-based active filter has an important property among many design realizations: a significant reduction in component count, particularly in the number of op-amps, which consume power. However, the problem with this type is the large component spread, which affects the filter performance.
In this paper, a Genetic Algorithm is applied to minimize the component spread (the capacitance and resistance spread).
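A generic real-coded GA sketch on a toy objective (not the paper's filter model) illustrates the approach: minimize the component spread max(x)/min(x) of four element values, with a penalty that keeps one product (a stand-in for a fixed RC time constant) near a target. All names, parameters, and the objective itself are illustrative assumptions:

```python
import random

def fitness(x, target=100.0):
    """Component spread plus a penalty holding x[0]*x[1] near the target."""
    return max(x) / min(x) + abs(x[0] * x[1] - target)

def ga(pop_size=40, dim=4, gens=100, seed=1):
    rnd = random.Random(seed)
    pop = [[rnd.uniform(1, 100) for _ in range(dim)] for _ in range(pop_size)]
    init_best = min(map(fitness, pop))
    for _ in range(gens):
        nxt = [min(pop, key=fitness)]                 # elitism
        while len(nxt) < pop_size:
            a = min(rnd.sample(pop, 3), key=fitness)  # tournament selection
            b = min(rnd.sample(pop, 3), key=fitness)
            child = [ai if rnd.random() < 0.5 else bi for ai, bi in zip(a, b)]
            if rnd.random() < 0.3:                    # gaussian mutation
                j = rnd.randrange(dim)
                child[j] = min(100.0, max(1.0, child[j] + rnd.gauss(0, 2)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness), init_best

best, init_fit = ga()
```

Elitism guarantees the best fitness never worsens between generations, so the returned design is at least as good as the best random starting point.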
The performance of most heuristic search methods depends on parameter choices. These parameter settings govern how new candidate solutions are generated and applied by the algorithm, and they play a key role in determining both the quality of the solution obtained and the efficiency of the search. Techniques for fine-tuning them are still an ongoing research area. The Differential Evolution (DE) algorithm is a very powerful optimization method that has become popular in many fields. Based on the prolonged research work on DE, it is now arguably one of the most outstanding stochastic optimization algorithms for real-parameter optimization. One reason for its popularity is its widely appreciated property of having only a small number of parameters.
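A minimal DE/rand/1/bin sketch on a toy sphere objective (illustration only) makes the point concrete: the three classic control parameters, population size NP, scale factor F, and crossover rate CR, are the small parameter set referred to above:

```python
import random

def de(obj, dim=5, NP=30, F=0.5, CR=0.9, gens=200, seed=3):
    """Classic DE/rand/1/bin minimizing obj over [-5, 5]^dim."""
    rnd = random.Random(seed)
    pop = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(NP)]
    for _ in range(gens):
        for i in range(NP):
            a, b, c = rnd.sample([v for j, v in enumerate(pop) if j != i], 3)
            jr = rnd.randrange(dim)              # guaranteed crossover index
            trial = [a[j] + F * (b[j] - c[j])
                     if j == jr or rnd.random() < CR else pop[i][j]
                     for j in range(dim)]
            if obj(trial) <= obj(pop[i]):        # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=obj)

sphere = lambda x: sum(v * v for v in x)
best = de(sphere)
```

The greedy one-to-one selection is what makes DE robust to its few parameters: a trial vector only replaces its parent when it is at least as good.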
Association rule mining (ARM) is a fundamental and widely used data mining technique for extracting useful information from data. Traditional ARM algorithms degrade computational efficiency by mining too many association rules that are not appropriate for a given user. Recent ARM research investigates metaheuristic algorithms that look for only a subset of high-quality rules. In this paper, a modified discrete cuckoo search algorithm for association rule mining (DCS-ARM) is proposed for this purpose. The effectiveness of the algorithm is tested against a set of well-known transactional databases. The results indicate that the proposed algorithm outperforms the existing metaheuristic methods.
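Whatever the search strategy, metaheuristic or exhaustive, every ARM variant scores a candidate rule X → Y with the same two measures, support and confidence. A minimal sketch on an invented transaction database (the data and names are illustrative, not from the paper's benchmarks):

```python
# Each transaction is the set of items bought together.
transactions = [
    {"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"},
    {"milk", "butter"}, {"bread", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """P(rhs | lhs): support of the whole rule over support of its body."""
    return support(lhs | rhs) / support(lhs)

s = support({"bread", "milk"})        # 3 of 5 transactions -> 0.6
c = confidence({"bread"}, {"milk"})   # 3 of the 4 bread buyers -> 0.75
```

A metaheuristic such as DCS-ARM searches the space of candidate rules for those whose fitness, typically a combination of these measures, is high, instead of enumerating all rules above fixed thresholds.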