In many areas, such as simulation, numerical analysis, computer programming, decision-making, entertainment, and coding, random number inputs are required. A pseudo-random number generator derives its output sequence from a seed value. In this paper, a hybrid method for pseudo-random number generation is proposed that merges Linear Feedback Shift Registers (LFSR) with a Linear Congruential Generator (LCG). Each method separately generates a group of numbers with a large key-space. A higher level of secrecy is also gained by combining the internal numbers generated from the LFSR with the LCG output (the adoption of roots in non-linear iteration loops). LCG and LFSR are linear structures, and the outputs of such Random Number Generators (RNGs) are predictable, while the proposed hybrid avoids this predictability. The results were tested for randomness, for correlation between the keys, and for the effect of changing the initial state on the generated keys; the generated sequences passed these tests and resist brute-force and differential attacks.
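As a rough sketch of how two such linear generators can be merged, the following Python snippet XORs the output of a 16-bit Fibonacci LFSR with the high bits of a Numerical Recipes LCG; the taps, constants, and combining rule here are illustrative assumptions, not the construction used in the paper.

```python
# Hypothetical sketch of an LFSR+LCG hybrid generator (illustrative
# parameters only; not the paper's exact construction).

def lfsr16_step(state):
    """One step of a 16-bit Fibonacci LFSR with taps at bits 16, 14, 13, 11."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def lcg_step(x, a=1664525, c=1013904223, m=2**32):
    """One step of a linear congruential generator (Numerical Recipes constants)."""
    return (a * x + c) % m

def hybrid_stream(seed_lfsr, seed_lcg, n):
    """Yield n 16-bit outputs: the LCG's high bits XOR-ed with the LFSR state."""
    s, x = seed_lfsr, seed_lcg
    for _ in range(n):
        s = lfsr16_step(s)
        x = lcg_step(x)
        yield (x >> 16) ^ s  # mixing the two linear streams

print(list(hybrid_stream(0xACE1, 12345, 5)))
```

Because each source is linear on its own, the mixing step is what breaks the linear structure that an attacker could otherwise solve for.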
Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing. This is due to the special capabilities of KPs in feature extraction and classification processes. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during the computation of the coefficients for large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation to compute the coefficients of KPs at high orders. In particular, this paper discusses the development of a new algorithm and presents a new mathematical model for computing the KP coefficients at high orders.
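For context, a minimal sketch of the classical three-term recurrence for K_n(x; p, N) in the hypergeometric normalization is shown below; this is the baseline relation whose floating-point error accumulates at high orders (especially for p far from 0.5), not the new recurrence proposed in the paper.

```python
# Baseline three-term recurrence for Krawtchouk polynomials
# K_n(x; p, N) in the hypergeometric normalization (K_0 = 1):
#   p(N-n) K_{n+1}(x) = [p(N-n) + n(1-p) - x] K_n(x) - n(1-p) K_{n-1}(x).
# This classical relation exhibits the error growth the paper targets;
# it is NOT the paper's proposed algorithm.

def krawtchouk(n_max, x, p, N):
    """Return [K_0(x), ..., K_{n_max}(x)] for parameters p and N."""
    K = [1.0, 1.0 - x / (p * N)]
    for n in range(1, n_max):
        K.append(((p * (N - n) + n * (1 - p) - x) * K[n]
                  - n * (1 - p) * K[n - 1]) / (p * (N - n)))
    return K[:n_max + 1]

# Example: orders 0..4 at x = 3 with p = 0.3, N = 10.
print(krawtchouk(4, 3, 0.3, 10))
```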
This paper proposes a novel meta-heuristic optimization algorithm called the fine-tuning meta-heuristic algorithm (FTMA) for solving global optimization problems. In this algorithm, the solutions are fine-tuned using the fundamental steps in meta-heuristic optimization, namely exploration, exploitation, and randomization, in such a way that if one step improves the solution, it is unnecessary to execute the remaining steps. The performance of the proposed FTMA has been compared with that of five other optimization algorithms over ten benchmark test functions. Nine of them are well known and already exist in the literature, while the tenth is proposed by the authors and introduced in this article. One test trial was shown to …
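A minimal sketch of that early-exit structure is given below; the three operators (move toward a random peer, move toward the current best, uniform re-sampling) are generic placeholders, since the excerpt does not specify FTMA's exact update rules.

```python
import random

# Sketch of the fine-tuning idea described above: try exploration, then
# exploitation, then randomization, and skip the remaining steps as soon
# as one of them improves the solution.  The operators are placeholders,
# not FTMA's exact update rules.

def ftma(f, dim, bounds, pop=20, iters=200):
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        best = X[min(range(pop), key=F.__getitem__)]
        for i in range(pop):
            r = random.choice(X)  # random peer
            steps = (
                [xi + random.uniform(-1, 1) * (ri - xi) for xi, ri in zip(X[i], r)],    # exploration
                [xi + random.uniform(0, 1) * (bi - xi) for xi, bi in zip(X[i], best)],  # exploitation
                [random.uniform(lo, hi) for _ in range(dim)],                           # randomization
            )
            for cand in steps:  # early exit: stop after the first improvement
                fc = f(cand)
                if fc < F[i]:
                    X[i], F[i] = cand, fc
                    break
    i = min(range(pop), key=F.__getitem__)
    return X[i], F[i]

# Example: minimize the sphere function in 5 dimensions.
print(ftma(lambda x: sum(v * v for v in x), 5, (-10, 10)))
```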
In many video and image processing applications, the frames are partitioned into blocks, which are extracted and processed sequentially. In this paper, we propose a fast algorithm for calculating features of overlapping image blocks. We assume the features are projections of the block onto separable 2D basis functions (usually orthogonal polynomials), where we benefit from the symmetry with respect to the spatial variables. The main idea is based on the construction of auxiliary matrices that virtually extend the original image and make it possible to avoid time-consuming computation in loops. These matrices can be pre-calculated, stored, and used repeatedly, since they are independent of the image itself. We validated experimentally the …
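The auxiliary-matrix construction itself is not reproduced here, but the following sketch shows the separability being exploited: each horizontal strip of rows is projected once and then reused for every horizontal block offset, instead of recomputing the full 2D projection per block.

```python
import numpy as np

# Illustration of the separable structure (not the paper's auxiliary-matrix
# trick): features of a b x b block are F = B @ block @ B.T for a 1D basis
# matrix B.  Projecting each row strip once and reusing it across all
# horizontal offsets already removes redundant work for overlapping blocks.

def block_features(img, B):
    """Features of all overlapping b x b blocks; B has shape (n_feat, b)."""
    H, W = img.shape
    b = B.shape[1]
    out = np.empty((H - b + 1, W - b + 1, B.shape[0], B.shape[0]))
    for i in range(H - b + 1):
        strip = B @ img[i:i + b, :]  # project the rows once per strip
        for j in range(W - b + 1):
            out[i, j] = strip[:, j:j + b] @ B.T
    return out

# Example with a toy basis: block mean and linear trend.
B = np.vstack([np.ones(4) / 4, np.linspace(-1, 1, 4)])
feats = block_features(np.random.rand(16, 16), B)
print(feats.shape)  # (13, 13, 2, 2)
```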
In this research, a new system identification algorithm is presented for obtaining an optimal set of mathematical models for a system with perturbed coefficients; the algorithm is then applied practically by an “On Line System Identification Circuit”, based on real-time speed-response data of a permanent-magnet DC motor. Such a set of mathematical models represents the physical plant against all variations that may exist in its parameters, and forms a strong mathematical foundation for stability and performance analysis in control-theory problems.
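As a minimal illustration of fitting one such model from speed-response data (not the paper's circuit or its perturbed-coefficient model set), the sketch below identifies a first-order discrete model w[k+1] = a·w[k] + b·u[k] by least squares.

```python
import numpy as np

# Minimal least-squares identification sketch (not the paper's circuit):
# fit a first-order discrete model  w[k+1] = a*w[k] + b*u[k]  to sampled
# speed-response data of a DC motor under a step input.

def identify_first_order(w, u):
    """Estimate (a, b) from speed samples w and input samples u."""
    Phi = np.column_stack([w[:-1], u[:-1]])  # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, w[1:], rcond=None)
    return theta  # [a, b]

# Synthetic demo data: true a = 0.9, b = 0.5, step input, small noise.
rng = np.random.default_rng(0)
u = np.ones(200)
w = np.zeros(200)
for k in range(199):
    w[k + 1] = 0.9 * w[k] + 0.5 * u[k] + 0.01 * rng.standard_normal()

print(identify_first_order(w, u))  # close to [0.9, 0.5]
```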
For several applications, it is very important to have an edge detection technique that matches human visual contour perception and is less sensitive to noise. The edge detection algorithm described in this paper is based on the results obtained by Maximum a posteriori (MAP) and Maximum Entropy (ME) deblurring algorithms. The technique makes a trade-off between sharpening and smoothing the noisy image. One advantage of the described algorithm is that it is less sensitive to noise than the Marr and Geuen techniques, which are considered to be the best edge detection algorithms in terms of matching human visual contour perception.
Ericson’s formula describes the partial level density (PLD) of pre-equilibrium reactions and its corrections. The PLD with pairing correction can be calculated using four methods, namely pairing, improved pairing, exact Pauli, and back-shift energy corrections. The variations in the PLD values of each of the four formulas were examined for the strontium (88Sr), yttrium (89Y), and zirconium (90Zr) isotones. The results show that the PLD values using the pairing and improved-pairing corrections do not vary across the different isotones. However, a small change in the PLD values is observed when the exact Pauli correction and the back-shift energy correction are utilised. The change in the PLD values using the back-shift energy correction is bigger.
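For reference, a common statement of Ericson's PLD formula for n = p + h excitons (p particles, h holes) at excitation energy E, with single-particle level density g, is given below; in this form the pairing correction enters by shifting E by the pairing energy Δ (the exact shift used in the paper may differ):

```latex
\rho(p,h,E) = \frac{g\,(gE)^{\,n-1}}{p!\,h!\,(n-1)!}, \qquad n = p + h,
\qquad
\rho_{\text{pairing}}(p,h,E) = \frac{g\,\bigl(g(E-\Delta)\bigr)^{\,n-1}}{p!\,h!\,(n-1)!}.
```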
This research gives an introduction to the multiple intelligences (MI) theory and its implications for the classroom. It presents a unit plan based upon the MI theory, followed by a report which explains the application of the plan by the researcher with first-class students of the Computer Department in the College of Sciences / University of Al-Mustansiryia, together with the teacher's and the students' reactions to it. The research starts with a short introduction to the MI theory, a theory that could help students learn better in a relaxed learning situation. It was first presented by Howard Gardner when he published his book "Frames of Mind" in 1983, in which he describes how the brain has multiple intelligences.
In this paper, Bayes estimators of the parameter of the Maxwell distribution have been derived along with the maximum likelihood estimator. The non-informative priors, the Jeffreys prior and the extension of the Jeffreys prior, have been considered under two different loss functions, the squared error loss function and the modified squared error loss function, for comparison purposes. A simulation study has been developed in order to gain insight into the performance on small, moderate, and large samples. The performance of these estimators has been explored numerically under different conditions. The efficiency of the estimators was compared according to the mean square error (MSE). The results of the comparison by MSE show that the efficiency of the Bayes estimators …
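A sketch of such a Monte Carlo comparison is given below, under an assumed rate-type parameterization f(x; t) ∝ t^{3/2} x² e^{−t x²/2} (so scipy's scale a corresponds to t = 1/a²). Under this parameterization the MLE is 3n/Σx², and the posterior under an extended Jeffreys prior π(t) ∝ t^{−c} is Gamma(3n/2 − c + 1, rate Σx²/2), whose mean is the Bayes estimator under squared error loss; the paper's own parameterization, prior constant, and modified loss may differ.

```python
import numpy as np
from scipy.stats import maxwell

# Monte Carlo comparison sketch (assumed parameterization, not necessarily
# the paper's): Maxwell pdf f(x; t) ~ t^{3/2} x^2 exp(-t x^2 / 2), so
# scipy's scale a corresponds to t = 1/a^2.  MLE: t_hat = 3n / sum(x^2).
# Under the extended Jeffreys prior pi(t) ~ t^{-c}, the posterior is
# Gamma(3n/2 - c + 1, rate = sum(x^2)/2); its mean is the Bayes estimator
# under squared error loss.

def mse_comparison(t_true=2.0, n=20, c=1.5, reps=5000, seed=1):
    rng = np.random.default_rng(seed)
    scale = 1.0 / np.sqrt(t_true)
    mse_mle = mse_bayes = 0.0
    for _ in range(reps):
        x = maxwell.rvs(scale=scale, size=n, random_state=rng)
        T = np.sum(x**2)
        t_mle = 3 * n / T
        t_bayes = (3 * n - 2 * c + 2) / T  # posterior mean
        mse_mle += (t_mle - t_true) ** 2
        mse_bayes += (t_bayes - t_true) ** 2
    return mse_mle / reps, mse_bayes / reps

print(mse_comparison())
```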
The Hartha Formation is an overburden horizon in the X-oilfield that generates a lot of Non-Productive Time (NPT) associated with drilling-mud losses. This study has been conducted to investigate the loss events in this formation and to provide geological interpretations based on datasets from nine wells in this field of interest. The interpretation was based on different analyses, including wireline logs, cuttings descriptions, image logs, and analog data. Seismic and coherency data were also used to formulate the geological interpretations and to calibrate them against the loss events of the Hartha Fm.
The results revealed that the upper part of the Hartha Fm. was identified as an interval capable of creating potential …