This paper derives the EDITRK4 technique, an exponentially fitted diagonally implicit Runge-Kutta (RK) method for solving ODEs. The approach is designed to integrate exactly initial value problems (IVPs) whose solutions are linear combinations of exponential functions, with the problem's dominant frequency used to improve the precision of the method. EDITRK4 is a new three-stage fourth-order exponentially fitted diagonally implicit method for solving IVPs whose solutions are exponential in form. Variants of the modified scheme must be derived for ODEs of different orders, and numerical comparisons are made when the same problem is reduced to a system of first-order equations that can be solved by conventional RK methods. The findings show that the new approach is more efficient than previously published methods.
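As an illustration of the conventional baseline mentioned above (not the EDITRK4 scheme itself, whose coefficients are not reproduced here), the following Python sketch reduces a hypothetical third-order IVP to a first-order system and integrates it with the classical fourth-order RK method; the test problem, frequency, and step count are illustrative assumptions only.

import numpy as np

# Hypothetical test problem: y''' = -W**2 * y' (solution built from sin/cos of W*x),
# reduced to the first-order system u' = f(x, u) with u = [y, y', y''].
W = 2.0

def f(x, u):
    y, dy, d2y = u
    return np.array([dy, d2y, -W**2 * dy])

def rk4_step(f, x, u, h):
    # One step of the classical fourth-order Runge-Kutta method.
    k1 = f(x, u)
    k2 = f(x + h / 2, u + h / 2 * k1)
    k3 = f(x + h / 2, u + h / 2 * k2)
    k4 = f(x + h, u + h * k3)
    return u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, u0, x0, x_end, n):
    h = (x_end - x0) / n
    u = np.asarray(u0, dtype=float)
    for i in range(n):
        u = rk4_step(f, x0 + i * h, u, h)
    return u

# y(0) = 0, y'(0) = W, y''(0) = 0  ->  exact solution y(x) = sin(W*x)
u_end = integrate(f, [0.0, W, 0.0], 0.0, 1.0, 200)
print("RK4 y(1):", u_end[0], " exact:", np.sin(W * 1.0))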
This research aims to analyze and model real biochemical test data in order to uncover the relationships among the tests and how each of them affects the others. The data were acquired from an Iraqi private biochemical laboratory. However, these data have many dimensions, a high rate of null values, and a large number of patients. Several experiments were first applied to these data using unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), and Naïve Bayes (NB)
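A minimal Python sketch of the supervised stage is given below, using scikit-learn on a synthetic stand-in dataset with injected missing values rather than the laboratory data; the classifiers match those listed, but the imputation and scaling choices are illustrative assumptions, not the paper's actual pipeline.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the biochemical test table; inject nulls to mimic
# the high rate of missing values in the real data.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.15] = np.nan

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(),
    "NB": GaussianNB(),
}

for name, clf in models.items():
    # Impute the null values and scale the features before each classifier.
    pipe = make_pipeline(SimpleImputer(strategy="median"), StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")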
The Artificial Neural Network methodology is an important and relatively new approach for building models for analysis, data evaluation, forecasting, and control without relying on a pre-specified model or classical statistical method to describe the behavior of a statistical phenomenon. The methodology works by learning from the data to reach a robust optimal model that represents the phenomenon, and the resulting model can then be used at any time and under any conditions. The Box-Jenkins (ARMAX) approach is used for comparison. This paper relies on received-power measurements to build a robust model for forecasting, analyzing, and controlling that power; the received power comes from
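The following Python sketch illustrates the two modelling styles being compared, on a synthetic stand-in for a received-power series: a small neural network trained on lagged values versus a Box-Jenkins-style ARMA fit. The series, lag count, and model orders are illustrative assumptions and do not reproduce the paper's data or models.

import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic stand-in for a received-power series.
rng = np.random.default_rng(0)
t = np.arange(500)
power = 10 + 2 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.3, t.size)

# Neural-network model: predict the next value from the previous `lags` values.
lags = 5
X = np.column_stack([power[i:i - lags] for i in range(lags)])
y = power[lags:]
split = 400
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])
mlp_rmse = np.sqrt(np.mean((mlp.predict(X[split:]) - y[split:]) ** 2))

# Box-Jenkins-style baseline (ARMAX reduces to ARMA with no exogenous inputs here).
arma = SARIMAX(power[:split + lags], order=(2, 0, 1), trend="c").fit(disp=False)
arma_pred = arma.forecast(steps=y[split:].size)
arma_rmse = np.sqrt(np.mean((arma_pred - y[split:]) ** 2))

print(f"MLP RMSE: {mlp_rmse:.3f}   ARMA RMSE: {arma_rmse:.3f}")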
The great importance that distinguishes factorial experiments has made them desirable for use and application in many fields, particularly agriculture, which is considered the broadest area of application for experimental designs.
The second case of the factorial experiment, in which researchers face great difficulty, is the unbalanced case, by which we mean that the frequencies of the factorial treatments are not equal (that is, an unequal number of blocks or experimental units is allocated per treatment).
In this study, the performance of an adaptive optics (AO) system was analyzed through a numerical computer simulation implemented in MATLAB. A phase screen was generated by converting computer-generated random numbers into two-dimensional arrays of phase values, on a grid of sample points, with the appropriate statistics. Von Karman turbulence was created from its power spectral density. Simulated point spread functions (PSFs) and modulation transfer functions (MTFs) for different values of the Fried coherence diameter (r0) were used to characterize the strength of atmospheric turbulence. To evaluate the effectiveness of the optical system (telescope), the Strehl ratio (S) was computed. The compensation procedure for an AO system
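A rough Python analogue of this simulation pipeline is sketched below: complex Gaussian noise is filtered by the square root of a von Karman power spectral density to obtain a phase screen, then aberrated and diffraction-limited PSFs are formed for a circular pupil and the Strehl ratio is estimated. The grid size, r0, outer scale, and normalization conventions are illustrative assumptions, not the paper's MATLAB settings.

import numpy as np

# Grid and turbulence parameters (illustrative values).
N = 256            # samples per side
D = 1.0            # aperture diameter [m]
dx = D / N         # sample spacing [m]
r0 = 0.1           # Fried coherence diameter [m]
L0 = 25.0          # von Karman outer scale [m]

# Spatial-frequency grid [1/m].
fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)
f = np.sqrt(FX**2 + FY**2)

# Von Karman phase power spectral density: 0.023 r0^(-5/3) (f^2 + 1/L0^2)^(-11/6).
psd = 0.023 * r0**(-5/3) * (f**2 + 1 / L0**2)**(-11/6)
psd[0, 0] = 0.0    # remove the piston (zero-frequency) term

# Filter complex Gaussian white noise by the square root of the PSD
# (normalization conventions for FFT-based screens vary between references).
rng = np.random.default_rng(1)
noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
df = 1.0 / (N * dx)
screen = np.real(np.fft.ifft2(noise * np.sqrt(psd) * df)) * N**2

# Circular pupil and PSFs with and without the turbulent phase.
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
pupil = (np.sqrt(X**2 + Y**2) <= D / 2).astype(float)
psf_turb = np.abs(np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * screen))))**2
psf_ideal = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2

# Strehl ratio approximated as the ratio of peak intensities,
# aberrated vs. diffraction-limited.
strehl = psf_turb.max() / psf_ideal.max()
print(f"Strehl ratio ~ {strehl:.3f}")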
Alongside the development of high-speed rail, rail flaw detection is of great importance for ensuring railway safety, especially as train speeds and loads increase. Several conventional inspection methods, such as visual, acoustic, and electromagnetic inspection, have been introduced in the past. However, these methods face several challenges in terms of detection speed and accuracy. Combined inspection methods have emerged as a promising approach to overcome these limitations. Nondestructive testing (NDT) techniques in conjunction with artificial intelligence approaches have tremendous potential and viability, because the detection accuracy can very likely be improved, as has been proven in various conventional nondestructive testing
Non-orthogonal Multiple Access (NOMA) is a multiple-access technique that allows multiple users to share the same communication resources, increasing spectral efficiency and throughput. NOMA has been shown to provide significant performance gains over orthogonal multiple access (OMA) in terms of spectral efficiency and throughput. In this paper, two NOMA scenarios, involving two users and multiple users (four users), are analyzed and simulated to evaluate NOMA's performance. The simulation results indicate that the achievable sum rate is 16.7 bps/Hz for the two-user scenario and 20.69 bps/Hz for the multi-user scenario at a transmit power of 25 dBm. The BER values for the two-user scenario are 0.004202 and 0.001564 for
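A minimal sketch of the rate computation for the two-user downlink case is given below, assuming the standard power-domain NOMA model with successive interference cancellation (SIC) at the near user; the channel gains, noise power, and power-allocation coefficients are illustrative and are not the values behind the figures quoted above.

import numpy as np

# Illustrative parameters (not the paper's channel model or settings).
p_dbm = 25.0
P = 10 ** ((p_dbm - 30) / 10)        # transmit power [W]
noise = 1e-12                        # noise power [W]
h_near, h_far = 1e-5, 1e-6           # channel power gains |h|^2 (near/strong, far/weak user)
a_near, a_far = 0.2, 0.8             # power allocation (more power to the far user)

# Far user decodes its own signal, treating the near user's signal as interference.
r_far = np.log2(1 + a_far * P * h_far / (a_near * P * h_far + noise))

# Near user first removes the far user's signal via SIC,
# then decodes its own signal interference-free.
r_near = np.log2(1 + a_near * P * h_near / noise)

print(f"Near-user rate: {r_near:.2f} bps/Hz")
print(f"Far-user rate:  {r_far:.2f} bps/Hz")
print(f"Sum rate:       {r_near + r_far:.2f} bps/Hz")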
Phishing is an internet crime carried out by imitating a legitimate host website in order to steal confidential information. Many researchers have developed phishing classification models, but these are limited in real-time and computational efficiency. This paper presents an ensemble learning model composed of a decision tree (DTree) and Naïve Bayes (NBayes), combined by the stacking method with DTree as the base learner. The aim is to combine the simplicity and effectiveness of DTree with the lower time complexity of NBayes. The models were integrated and appraised independently during training, and the class probabilities from each model were averaged, weighted by their accuracy on the training data, during the testing process. The present results of the empirical study on phishing websites
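One plausible reading of this design, sketched below in Python with scikit-learn, is a stacking ensemble with a decision tree as the base learner and Gaussian Naïve Bayes as the meta-learner; the synthetic dataset and hyperparameters are illustrative, not the study's.

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a phishing-website feature set (e.g. URL length,
# presence of an IP address, number of subdomains, ...).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Decision tree as base learner; its out-of-fold class probabilities are
# combined by a Naive Bayes meta-learner.
stack = StackingClassifier(
    estimators=[("dtree", DecisionTreeClassifier(random_state=0))],
    final_estimator=GaussianNB(),
    stack_method="predict_proba",
    cv=5,
)
stack.fit(X_train, y_train)
print("Stacked accuracy:", accuracy_score(y_test, stack.predict(X_test)))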
Krawtchouk polynomials (KPs) and their moments are promising tools for applications in information theory, coding theory, and signal processing. This is due to the special capabilities of KPs in feature extraction and classification processes. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during the computation of the coefficients for large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 towards 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of KPs at high orders. In particular, this paper discusses the development of a new algorithm and presents a new mathematical model for computing the coefficients
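For context, the classical three-term recurrence that such algorithms build on (not the new relation proposed in the paper) can be sketched in Python as follows; it uses the unnormalized hypergeometric form of the KPs and illustrates where round-off grows at high orders and extreme p.

import numpy as np

def krawtchouk_classical(order, N, p, x):
    # Evaluate the (unnormalized) Krawtchouk polynomials K_0..K_order at the
    # points x using the classical three-term recurrence:
    #   p(N - n) K_{n+1}(x) = [p(N - n) + n(1 - p) - x] K_n(x) - n(1 - p) K_{n-1}(x),
    # with K_0(x) = 1 and K_1(x) = 1 - x / (p N).
    x = np.asarray(x, dtype=float)
    K = np.zeros((order + 1, x.size))
    K[0] = 1.0
    if order >= 1:
        K[1] = 1.0 - x / (p * N)
    for n in range(1, order):
        K[n + 1] = ((p * (N - n) + n * (1 - p) - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (N - n))
    return K

# For p far from 0.5 and high orders the values span many orders of magnitude,
# so the recurrence accumulates round-off error -- the problem the paper addresses.
N, p = 100, 0.1
K = krawtchouk_classical(20, N, p, np.arange(N + 1))
print("K_20 value range:", K[20].min(), K[20].max())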