This research aims to analyze and simulate real biochemical test data in order to uncover the relationships among the tests and how each of them affects the others. The data were acquired from a private Iraqi biochemical laboratory; however, they have many dimensions, a high rate of null values, and a large number of patients. Several experiments were applied to these data, beginning with unsupervised techniques such as hierarchical clustering and k-means, but the results were not clear. A preprocessing step was then performed to make the dataset analyzable by supervised techniques such as Linear Discriminant Analysis (LDA), Classification and Regression Tree (CART), Logistic Regression (LR), K-Nearest Neighbor (K-NN), and Naïve Bayes (NB) ...
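As an illustration of the kind of classifier comparison described above, the following is a minimal sketch using scikit-learn; the dataset file name, the target column, and the cross-validation settings are illustrative assumptions, not details taken from the study.

# Hedged sketch: compare the supervised models named in the abstract on a
# preprocessed biochemical dataset (file name and target column are hypothetical).
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("biochemical_tests.csv")                    # hypothetical file
X, y = df.drop(columns=["diagnosis"]), df["diagnosis"]       # hypothetical target

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(),
    "NB": GaussianNB(),
}

for name, model in models.items():
    # Impute nulls and scale the features before each classifier.
    pipe = make_pipeline(SimpleImputer(strategy="median"), StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")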
The great importance that distinguishes factorial experiments has made them a desirable subject for use and application in many fields, particularly agriculture, which is considered the broadest area of application for experimental designs.
The second case of the factorial experiment, which researchers find very difficult to deal with, is the unbalanced case, meaning that the frequencies of the factorial treatments are not equal (that is, an unequal number of blocks or experimental units is allocated per treatment ...
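As background for the unbalanced case, the usual two-factor factorial model with possibly unequal cell frequencies can be written as follows; the notation is the standard textbook one and is not taken from the study itself:

y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}, \qquad i = 1,\dots,a,\ j = 1,\dots,b,\ k = 1,\dots,n_{ij},

where the design is balanced when all cell sizes n_{ij} are equal and unbalanced when they are not; in the unbalanced case the factor effects are no longer orthogonal, which is what makes the analysis difficult.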
Alongside the development of high-speed rail, rail flaw detection is of great importance for ensuring railway safety, especially as train speeds and loads increase. Several conventional inspection methods, such as visual, acoustic, and electromagnetic inspection, have been introduced in the past. However, these methods face several challenges in terms of detection speed and accuracy. Combined inspection methods have emerged as a promising approach to overcome these limitations. Nondestructive testing (NDT) techniques in conjunction with artificial intelligence approaches have tremendous potential and viability, since they can considerably improve detection accuracy, as has been proven in various conventional nondestructive ...
Non-orthogonal Multiple Access (NOMA) is a multiple-access technique that allows multiple users to share the same communication resources, increasing spectral efficiency and throughput. NOMA has been shown to provide significant performance gains over orthogonal multiple access (OMA) in terms of spectral efficiency and throughput. In this paper, two NOMA scenarios, involving two users and multiple users (four users), are analyzed and simulated to evaluate NOMA's performance. The simulation results indicate that the achievable sum rate is 16.7 bps/Hz for the two-user scenario and 20.69 bps/Hz for the multi-user scenario at a transmitted power of 25 dBm. The BER for the two-user scenario is 0.004202 and 0.001564 for ...
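To show how a downlink power-domain NOMA sum rate of this kind is typically computed, the following is a minimal two-user sketch based on superposition coding and successive interference cancellation (SIC); the channel gains, power-allocation coefficients, and noise power are illustrative assumptions, not the paper's simulation parameters.

# Hedged sketch: achievable rates of a two-user downlink power-domain NOMA system.
# The channel gains, power split, and noise level below are assumed for illustration.
import numpy as np

P_dBm = 25                      # total transmit power (dBm)
P = 10 ** ((P_dBm - 30) / 10)   # convert to watts
N0 = 1e-9                       # noise power (W), assumed
a_weak, a_strong = 0.8, 0.2     # power-allocation coefficients (weak user gets more power)
g_weak, g_strong = 1e-4, 1e-3   # channel gain magnitudes |h|^2, assumed

# Weak (far) user decodes its own signal, treating the strong user's signal as interference.
R_weak = np.log2(1 + a_weak * P * g_weak / (a_strong * P * g_weak + N0))

# Strong (near) user removes the weak user's signal via SIC, then decodes its own.
R_strong = np.log2(1 + a_strong * P * g_strong / N0)

print(f"Sum rate = {R_weak + R_strong:.2f} bps/Hz")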
Phishing is an internet crime carried out by imitating the legitimate website of a host in order to steal confidential information. Many researchers have developed phishing classification models that are limited in real-time and computational efficiency. This paper presents an ensemble learning model composed of DTree and NBayes, combined by the stacking method with DTree as the base learner. The aim is to combine the simplicity and effectiveness of DTree with the lower time complexity of NBayes. The models were integrated and appraised independently during training, and the class probabilities were averaged, weighted by their accuracy on the training data, during the testing process. The present results of the empirical study on phishing websites ...
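A minimal sketch of a decision-tree/naïve-Bayes stacking ensemble of the general kind described above is given below using scikit-learn; treating NBayes as the meta-learner, as well as the dataset name and label column, are assumptions rather than details confirmed by the paper.

# Hedged sketch: stacking a decision tree (base learner) with naive Bayes.
# Using GaussianNB as the final estimator and this dataset are assumptions.
import pandas as pd
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("phishing_websites.csv")                       # hypothetical dataset
X, y = df.drop(columns=["is_phishing"]), df["is_phishing"]      # hypothetical label
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

stack = StackingClassifier(
    estimators=[("dtree", DecisionTreeClassifier(max_depth=10))],  # base learner
    final_estimator=GaussianNB(),                                  # meta-learner
    stack_method="predict_proba",                                  # pass class probabilities upward
    cv=5,
)
stack.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, stack.predict(X_test)))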
Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification. The main challenge in existing KP recurrence algorithms is numerical error, which occurs during the computation of the coefficients for large polynomial sizes, particularly when the KP parameter (p) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation for computing the coefficients of KPs at high orders. In particular, the paper discusses the development of a new algorithm and presents a new mathematical model for computing the ...
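For reference, the classical three-term recurrence in the order n for the Krawtchouk polynomial K_n(x; p, N), on which coefficient-computation algorithms of this kind are typically built, is the textbook relation (not the new recurrence proposed in the paper):

-x\,K_n(x;p,N) = p(N-n)\,K_{n+1}(x;p,N) - \bigl[p(N-n) + n(1-p)\bigr]K_n(x;p,N) + n(1-p)\,K_{n-1}(x;p,N),

with K_0(x;p,N) = 1 and K_1(x;p,N) = 1 - x/(pN); the numerical difficulties mentioned above appear when this relation is iterated to large N with p far from 0.5.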
Visual analytics has become an important approach for discovering patterns in big data. As visualization already struggles with the high dimensionality of data, issues such as a concept hierarchy on each dimension add further difficulty and make visualization a prohibitive task. The data cube offers multi-perspective aggregated views of large data sets and has important applications in business and many other areas. It has high dimensionality, concept hierarchies, a vast number of cells, and comes with special exploration operations such as roll-up, drill-down, slicing, and dicing. All these issues make data cubes very difficult to explore visually. Most existing approaches visualize a data cube in 2D space and require preprocessing steps. In this paper, we propose a visualization ...
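To make the cube operations mentioned above concrete, the following is a minimal pandas sketch of roll-up, drill-down, slicing, and dicing over a small sales-style table; the column names, hierarchy (city under country), and values are illustrative assumptions.

# Hedged sketch: data-cube style operations expressed with pandas group-bys.
# The sales table, its dimensions, and the city -> country hierarchy are hypothetical.
import pandas as pd

sales = pd.DataFrame({
    "country": ["IQ", "IQ", "JO", "JO"],
    "city":    ["Baghdad", "Basra", "Amman", "Irbid"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "amount":  [100, 150, 80, 120],
})

# Drill-down: aggregate at the finer (country, city, quarter) level.
drill_down = sales.groupby(["country", "city", "quarter"])["amount"].sum()

# Roll-up: climb the concept hierarchy from city to country.
roll_up = sales.groupby(["country", "quarter"])["amount"].sum()

# Slice: fix a single dimension value (quarter == "Q1").
slice_q1 = sales[sales["quarter"] == "Q1"]

# Dice: select a sub-cube over several dimensions at once.
dice = sales[(sales["country"] == "IQ") & (sales["quarter"].isin(["Q1", "Q2"]))]

print(roll_up)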