This paper presents a hybrid genetic algorithm (hGA) for optimizing the maximum likelihood function ln L(φ1, θ1) of the mixed ARMA(1,1) model. The proposed hGA couples two processes: the canonical genetic algorithm (cGA), composed of three main steps (selection, local recombination, and mutation), and a local search represented by the steepest descent algorithm (sDA), which is defined by three basic parameters: frequency, probability, and number of local search iterations. The experimental design is based on simulating the cGA, hGA, and sDA algorithms with different values of the model parameters and sample size (n). The study compares these algorithms on the basis of their MSE values. One can conclude that the hGA gives good estimators of the ARMA(1,1) parameters (φ1, θ1) that are more reliable than the estimators obtained by the cGA and sDA algorithms.
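A minimal sketch of the coupling described above, assuming a conditional-sum-of-squares Gaussian log-likelihood for ARMA(1,1) and a finite-difference steepest-descent step; the population size, mutation rate, local-search frequency/probability, and pairing scheme are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_loglik(params, y):
    """Conditional (CSS) negative Gaussian log-likelihood of ARMA(1,1)."""
    phi, theta = params
    e = np.zeros_like(y)
    for t in range(1, len(y)):
        e[t] = y[t] - phi * y[t - 1] - theta * e[t - 1]
    sigma2 = np.mean(e[1:] ** 2)
    return 0.5 * len(y) * np.log(sigma2)

def steepest_descent(p, y, iters=5, step=0.01, h=1e-5):
    """Local search: a few finite-difference gradient steps."""
    for _ in range(iters):
        g = np.array([(neg_loglik(p + h * np.eye(2)[i], y) -
                       neg_loglik(p - h * np.eye(2)[i], y)) / (2 * h)
                      for i in range(2)])
        p = np.clip(p - step * g, -0.99, 0.99)   # stay inside the admissible region
    return p

def hybrid_ga(y, pop_size=30, gens=50, p_mut=0.1, p_local=0.2, local_every=5):
    pop = rng.uniform(-0.9, 0.9, size=(pop_size, 2))   # candidates (phi, theta)
    for g in range(gens):
        fit = np.array([neg_loglik(ind, y) for ind in pop])
        # tournament selection
        idx = np.array([min(rng.choice(pop_size, 2), key=lambda i: fit[i])
                        for _ in range(pop_size)])
        parents = pop[idx]
        # arithmetic recombination (illustrative pairing)
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        # mutation
        mut = rng.random((pop_size, 2)) < p_mut
        children[mut] += rng.normal(0, 0.05, mut.sum())
        children = np.clip(children, -0.99, 0.99)
        # periodic steepest-descent local search (frequency, probability, iterations)
        if g % local_every == 0:
            for i in range(pop_size):
                if rng.random() < p_local:
                    children[i] = steepest_descent(children[i], y)
        pop = children
    fit = np.array([neg_loglik(ind, y) for ind in pop])
    return pop[np.argmin(fit)]
```

On a simulated series, `hybrid_ga(y)` returns the (φ1, θ1) estimate, which would then be compared against cGA-only and sDA-only estimates via MSE.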
Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during computation of the coefficients for large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of KPs at high orders. In particular, the paper discusses the development of a new algorithm and presents a new mathematical model for computing the
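For reference, a sketch of the classical three-term recurrence in the order n for K_n(x; p, N), the kind of relation whose numerical behaviour degrades at high orders and for p far from 0.5; this is the textbook recurrence, not the new relation proposed in the paper.

```python
import numpy as np

def krawtchouk(n_max, x, p, N):
    """Classical three-term recurrence in n for K_n(x; p, N)."""
    K = np.zeros(n_max + 1)
    K[0] = 1.0
    if n_max >= 1:
        K[1] = 1.0 - x / (p * N)
    for n in range(1, n_max):
        a = p * (N - n)                      # coefficient of K_{n+1}
        b = p * (N - n) + n * (1 - p) - x    # coefficient of K_n
        c = n * (1 - p)                      # coefficient of K_{n-1}
        K[n + 1] = (b * K[n] - c * K[n - 1]) / a
    return K

# e.g. krawtchouk(8, x=3, p=0.3, N=16) returns K_0..K_8 evaluated at x = 3;
# at high orders these values grow or shrink rapidly, which is why weighted
# (normalized) forms and alternative recurrences are pursued in practice.
```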
In this research a new system identification algorithm is presented for obtaining an optimal set of mathematical models for a system with perturbed coefficients. The algorithm is then applied practically by an "On Line System Identification Circuit", based on real-time speed-response data of a permanent magnet DC motor. Such a set of mathematical models represents the physical plant against all variations that may exist in its parameters, and forms a strong mathematical foundation for stability and performance analysis in control theory problems.
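The abstract does not give the identification algorithm itself; as a point of reference, here is a sketch of one common approach, a batch least-squares fit of a first-order discrete ARX model to sampled speed-response data. The model order, signal names, and the idea of repeating the fit over perturbed operating records are illustrative assumptions, not the paper's circuit implementation.

```python
import numpy as np

def fit_first_order_arx(u, y):
    """Fit y[k] = a*y[k-1] + b*u[k-1] to input/speed samples by least squares."""
    Phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta                                      # [a, b]

# A set of models for a plant with perturbed coefficients could then be built
# by repeating the fit over records taken under different operating conditions:
# models = [fit_first_order_arx(u_rec, y_rec) for (u_rec, y_rec) in records]
```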
The cost of pile foundations is part of the superstructure cost, and it has become necessary to reduce this cost by studying the available pile types and then making a decision on the optimal pile type in terms of cost, time of production, and quality. The main objective of this study is therefore to solve the time-cost-quality trade-off (TCQT) problem by finding an optimal pile type with the target of "minimizing" cost and time while "maximizing" quality. There are many pile types in the world, but in this paper the researcher proposed five pile types, one of which is non-traditional, developed a model for the problem, and then employed the particle swarm optimization (PSO) algorithm, as one of the evolutionary algorithms, with t
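A minimal sketch of how PSO can score candidate pile types against a weighted time-cost-quality objective; the five alternatives, their attribute values, and the weights are placeholders, not data from the study, and the continuous-to-discrete mapping is one of several possible encodings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder attributes per candidate pile type: (cost, time, quality score)
piles = np.array([[120.0, 30.0, 0.70],
                  [150.0, 25.0, 0.85],
                  [100.0, 40.0, 0.60],
                  [170.0, 20.0, 0.90],
                  [140.0, 28.0, 0.80]])
w_cost, w_time, w_qual = 0.4, 0.3, 0.3

def objective(idx):
    """Minimize weighted cost and time, maximize quality (normalized columns)."""
    c, t, q = piles[idx] / piles.max(axis=0)
    return w_cost * c + w_time * t - w_qual * q

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(0, len(piles) - 1, n_particles)     # continuous positions
    v = np.zeros(n_particles)
    pbest = x.copy()
    pbest_f = np.array([objective(int(round(xi))) for xi in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, len(piles) - 1)
        f = np.array([objective(int(round(xi))) for xi in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    best = int(round(gbest))
    return best, objective(best)
```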
Convergence speed is the most important feature of the Back-Propagation (BP) algorithm, and many improvements have been proposed since its introduction in order to speed up the convergence phase. In this paper, a new modified BP algorithm called Speeding up Back-Propagation Learning (SUBPL) is proposed and compared to standard BP. Different data sets were implemented and experimented on to verify the improvement achieved by SUBPL.
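For context, here is a compact NumPy implementation of the standard BP baseline that SUBPL is compared against (one hidden layer, sigmoid activations, fixed learning rate, MSE loss); the SUBPL modification itself is not reproduced here, and the XOR data set is only an illustrative example.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=4, lr=0.5, epochs=5000):
    """Standard back-propagation on a one-hidden-layer network."""
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                 # forward pass
        out = sigmoid(h @ W2)
        err = out - y                       # output error
        d2 = err * out * (1 - out)          # deltas (sigmoid derivative)
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d2                 # weight updates
        W1 -= lr * X.T @ d1
    return W1, W2

# Example: XOR data set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_bp(X, y)
```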
In this paper, we derive and prove stability bounds for the momentum coefficient µ and the learning rate η of the back-propagation updating rule in artificial neural networks. The theoretical upper bound of the learning rate η is derived and its practical approximation is obtained.
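The updating rule referred to here is the standard gradient-descent-with-momentum form of BP, which makes explicit the roles that µ and η play in the bounds (the specific bounds derived in the paper are not reproduced):

```latex
\Delta w(t) \;=\; -\,\eta \,\nabla E\bigl(w(t)\bigr) \;+\; \mu\, \Delta w(t-1),
\qquad w(t+1) = w(t) + \Delta w(t),
\qquad 0 \le \mu < 1 .
```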
Huge numbers of medical images are generated, requiring more storage capacity and bandwidth for transfer over networks. A hybrid DWT-DCT compression algorithm is applied to compress medical images by exploiting the features of both techniques. Discrete Wavelet Transform (DWT) coding is applied to the image in the YCbCr color model, decomposing each image band into four subbands (LL, HL, LH and HH). The LL subband is transformed into low- and high-frequency components using the Discrete Cosine Transform (DCT), to be quantized by the scalar quantization that is applied to all image bands; the quantization parameters are reduced by half for the luminance band while kept the same for the chrominance bands to preserve the image quality, the zig
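A minimal sketch of the DWT-then-DCT stage on a single band, assuming the PyWavelets and SciPy libraries, an illustrative quantization step size, and even image dimensions; the YCbCr conversion, the halved luminance parameters, and the zig-zag/entropy-coding stages are omitted.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def compress_band(band, q_step=16.0, wavelet="haar"):
    """One-level 2-D DWT, DCT on the LL subband, then scalar quantization."""
    LL, (LH, HL, HH) = pywt.dwt2(band, wavelet)
    LL_dct = dctn(LL, norm="ortho")                 # concentrate LL energy
    q = lambda c: np.round(c / q_step)              # scalar quantizer
    return q(LL_dct), q(LH), q(HL), q(HH)

def reconstruct_band(coeffs, q_step=16.0, wavelet="haar"):
    LLq, LHq, HLq, HHq = coeffs
    LL = idctn(LLq * q_step, norm="ortho")          # dequantize and invert DCT
    return pywt.idwt2((LL, (LHq * q_step, HLq * q_step, HHq * q_step)), wavelet)
```

In a full codec this would be run per YCbCr band with a smaller q_step for the luminance band, followed by zig-zag scanning and entropy coding of the quantized coefficients.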
The BP algorithm is the most widely used supervised training algorithm for multi-layered feedforward neural networks. However, BP takes a long time to converge and is quite sensitive to the initial weights of a network. In this paper, a modified cuckoo search algorithm is used to obtain the optimal set of initial weights to be used by the BP algorithm, and the value of the BP learning rate is changed to improve error convergence. The performance of the proposed hybrid algorithm is compared with standard BP using simple data sets. The simulation results show that the proposed algorithm improves BP training in terms of quick convergence of the solution, as judged by the slope of the error graph.
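A sketch of the hybrid idea: a Lévy-flight, cuckoo-search-style search over flattened weight vectors to pick good initial weights, which are then handed to a BP training routine. The Lévy exponent, nest count, and abandonment rate are illustrative, and the paper's specific modification is not reproduced.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(3)

def levy_step(size, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_init(loss, dim, n_nests=15, iters=200, pa=0.25, alpha=0.01):
    """Search for a low-loss initial weight vector before handing it to BP."""
    nests = rng.normal(0, 0.5, (n_nests, dim))
    fit = np.array([loss(n) for n in nests])
    for _ in range(iters):
        i = rng.integers(n_nests)
        new = nests[i] + alpha * levy_step(dim)          # Levy flight
        j = rng.integers(n_nests)
        if loss(new) < fit[j]:                           # replace a random nest
            nests[j], fit[j] = new, loss(new)
        worst = np.argsort(fit)[-int(pa * n_nests):]     # abandon worst nests
        nests[worst] = rng.normal(0, 0.5, (len(worst), dim))
        fit[worst] = [loss(n) for n in nests[worst]]
    return nests[np.argmin(fit)]

# loss() would flatten/unflatten the network weights and return training error;
# the returned vector becomes the initial weight setting for the subsequent BP run.
```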
The utility of DNA sequencing in the diagnosis and prognosis of diseases is vital for assessing the risk of genetic disorders, particularly for asymptomatic individuals with a genetic predisposition. Such diagnostic approaches are integral in guiding health and lifestyle decisions and in preparing families with the foreknowledge needed to anticipate potential genetic abnormalities. The present study explores implementing a define-by-run deep learning (DL) model optimized using the Tree-structured Parzen Estimator algorithm to enhance the precision of genetic diagnostic tools. Unlike conventional models, the define-by-run model bolsters accuracy through dynamic adaptation to the data during the learning process and iterative optimization
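The "define-by-run" phrasing together with the Tree-structured Parzen Estimator corresponds to the pattern popularized by the Optuna library; a minimal sketch under that assumption, with a scikit-learn MLP standing in for the paper's DL model and a synthetic data set as a placeholder for encoded DNA-sequence features.

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Placeholder for encoded DNA-sequence features and disease labels
X, y = make_classification(n_samples=500, n_features=40, random_state=0)

def objective(trial):
    # Define-by-run: the search space is built while the trial executes
    n_layers = trial.suggest_int("n_layers", 1, 3)
    layers = tuple(trial.suggest_int(f"units_{i}", 16, 128) for i in range(n_layers))
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    clf = MLPClassifier(hidden_layer_sizes=layers, learning_rate_init=lr,
                        max_iter=300, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```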
In this study, we compare the LASSO and SCAD methods, two penalization methods for dealing with partial quantile regression models. The Nadaraya-Watson kernel was used to estimate the non-parametric part; in addition, the rule-of-thumb method was used to estimate the smoothing bandwidth (h). The penalty methods proved efficient in estimating the regression coefficients, but the SCAD method was the best according to the mean squared error (MSE) criterion, after the missing data were estimated using the mean imputation method.
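For concreteness, the penalized check-loss objective and the Nadaraya-Watson form underlying the comparison can be written as follows, with ρ_τ the quantile check function, p_λ either the LASSO or SCAD penalty, and z_i the quantity being smoothed (response or partial residual, depending on the estimation step):

```latex
\hat{\beta} \;=\; \arg\min_{\beta}\;
\sum_{i=1}^{n} \rho_{\tau}\!\bigl(y_i - x_i^{\top}\beta - \hat{g}(t_i)\bigr)
\;+\; \sum_{j=1}^{p} p_{\lambda}\bigl(|\beta_j|\bigr),
\qquad \rho_{\tau}(u) = u\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
```
```latex
\hat{g}(t) \;=\;
\frac{\sum_{i=1}^{n} K\!\left(\dfrac{t - t_i}{h}\right) z_i}
     {\sum_{i=1}^{n} K\!\left(\dfrac{t - t_i}{h}\right)} .
```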