The support vector machine (SVM) is a supervised learning model that can be used for classification or regression, depending on the dataset. SVM classifies data points by determining the best separating hyperplane between two or more groups. Working with very large datasets, however, can cause a variety of issues, including reduced accuracy and long computation times. In this research, SVM was extended by applying several kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model was illustrated and summarized in an algorithm using the kernel trick. The proposed method was examined using three simulated datasets with different sample sizes (50, 100, 200). Non-linear SVM was compared with two standard classification methods across several features. Our study shows that the non-linear SVM method gives better results in terms of sensitivity, specificity, accuracy, and computation time. © 2024 Author(s).
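As a minimal sketch of the kernel functions the study names, the following computes each kernel value directly (using the common sigmoid/tanh form as a stand-in for the "multi-layer" kernel). The parameter values (degree, gamma, coef0) are illustrative defaults, not the ones used by the authors.

```python
# Illustrative kernel functions for non-linear SVM; parameters are assumptions.
import math

def linear_kernel(x, z):
    # Plain inner product: the baseline, non-transformed kernel.
    return sum(a * b for a, b in zip(x, z))

def poly_kernel(x, z, degree=3, coef0=1.0):
    # Polynomial kernel: (x.z + c)^d.
    return (linear_kernel(x, z) + coef0) ** degree

def rbf_kernel(x, z, gamma=0.5):
    # Radial basis kernel: exp(-gamma * ||x - z||^2).
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def sigmoid_kernel(x, z, gamma=0.5, coef0=0.0):
    # Sigmoid/"multi-layer perceptron" kernel: tanh(gamma * x.z + c).
    return math.tanh(gamma * linear_kernel(x, z) + coef0)

x, z = [1.0, 2.0], [2.0, 0.5]
print(linear_kernel(x, z))   # 3.0
print(poly_kernel(x, z))     # (3 + 1)^3 = 64.0
print(rbf_kernel(x, x))      # 1.0 for identical points
```

The kernel trick means the classifier only ever needs these pairwise values, never the explicit high-dimensional feature map.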
Data generated from modern applications and the internet in healthcare is extensive and rapidly expanding. One of the significant success factors for any application is therefore understanding and extracting meaningful information using digital analytics tools. These tools positively impact the application's performance and address the challenges involved in creating highly consistent, logical, and information-rich summaries. This paper has three main objectives. First, it presents several analytics methodologies that help analyze datasets and extract useful information from them, as preprocessing steps in any classification model, to determine the dataset's characteristics. Also, this paper provides a comparative st
This study proposes a mathematical approach and numerical experiment for a simple solution of cardiac blood flow to the heart's blood vessels. A mathematical model of human blood flow through arterial branches was studied and solved using the Navier-Stokes partial differential equations with a finite element analysis (FEA) approach. Furthermore, FEA is applied to the steady flow of two-dimensional viscous liquids through different geometries. The validity of the computational method is assessed by comparing the numerical experiments with analytical results for different functions. Numerical analysis showed that the highest blood flow velocity of 1.22 cm/s occurred in the center of the vessel, which tends to be laminar and is influe
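The laminar-flow observation can be sanity-checked with a Reynolds-number estimate. The velocity (1.22 cm/s) is from the study, but the blood properties below are typical literature values and the 4 mm vessel diameter is an illustrative assumption, not a figure from the paper.

```python
# Hedged sketch: Reynolds number for the reported peak velocity.
# density ~1060 kg/m^3 and viscosity ~0.0035 Pa.s are typical values for
# blood; the 4 mm diameter is an assumption for illustration only.
def reynolds_number(density, velocity, diameter, viscosity):
    # Re = rho * v * D / mu (dimensionless).
    return density * velocity * diameter / viscosity

re = reynolds_number(density=1060.0, velocity=0.0122, diameter=0.004,
                     viscosity=0.0035)
print(f"Re = {re:.1f} -> {'laminar' if re < 2300 else 'not laminar'}")
```

A value this far below the ~2300 transition threshold is consistent with the laminar profile the simulation reports.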
Of non-Muslim minorities in the Muslim community
Multivariate non-parametric control charts were used to monitor data generated by simulation and determine whether they fall within the control limits, since non-parametric methods do not require any assumptions about the distribution of the data. This research aims to apply multivariate non-parametric quality-control methods, namely the multivariate Wilcoxon signed-rank chart, kernel principal component analysis (KPCA), and the k-nearest neighbor (KNN)
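A minimal sketch of the k-nearest-neighbour charting idea: each new observation is scored by its mean distance to its k nearest in-control reference points, and a large score flags a possible out-of-control point. The reference data and threshold here are illustrative, not values from the study.

```python
# Hedged sketch of a KNN monitoring statistic for a non-parametric
# control chart; reference points and k are illustrative assumptions.
import math

def knn_statistic(point, reference, k=3):
    # Mean distance from `point` to its k nearest in-control observations.
    dists = sorted(math.dist(point, r) for r in reference)
    return sum(dists[:k]) / k

reference = [(0.0, 0.0), (0.1, -0.1), (-0.1, 0.2), (0.2, 0.1), (-0.2, -0.1)]
in_control = knn_statistic((0.05, 0.05), reference)
out_of_control = knn_statistic((3.0, 3.0), reference)
print(in_control, out_of_control)  # small score vs. clearly larger score
```

Because the statistic is distance-based, no distributional assumption on the process data is required, which is the appeal of the non-parametric charts compared in the study.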
Synthesis, Composition, Spectral, Geometry and Antibacterial Applications of Mn(II), Ni(II), Co(II), Cu(II) and Hg(II) Schiff Base Complexes of N2O2 Mixed Donor with 1,10-Phenanthroline
The logistic regression model is an important statistical model describing the relationship between a binary response variable and explanatory variables. The large number of explanatory variables usually used to explain the response leads to the problem of multicollinearity among them, which makes the estimation of the model's parameters inaccurate.
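The multicollinearity problem can be made concrete with a small check: when one explanatory variable is nearly a linear function of another, their Pearson correlation approaches 1, which is exactly the condition that destabilizes the coefficient estimates. The data below are illustrative, not from the study.

```python
# Hedged sketch: detecting near-collinear explanatory variables via
# pairwise Pearson correlation; the simulated data are an assumption.
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
x1 = [random.gauss(0, 1) for _ in range(200)]
x2 = [2 * v + random.gauss(0, 0.05) for v in x1]   # nearly collinear with x1
x3 = [random.gauss(0, 1) for _ in range(200)]      # independent predictor

print(f"corr(x1, x2) = {pearson(x1, x2):.3f}")  # close to 1: multicollinearity
print(f"corr(x1, x3) = {pearson(x1, x3):.3f}")  # close to 0: no problem
```

In practice, a correlation (or variance inflation factor) this extreme signals that one of the two variables should be dropped or the estimator regularized before fitting the logistic model.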
In this paper, a new analytical method is introduced to find the general solution of linear partial differential equations. In this method, the Laplace transform (LT) and the Sumudu transform (ST) are each used independently along with canonical coordinates. The strength of this method is that it is easy to implement and does not require initial conditions.
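For reference, the two transforms the method applies are standardly defined as follows; these are the textbook forms, not a derivation taken from the paper.

```latex
% Standard definitions of the Laplace and Sumudu transforms (textbook form).
\mathcal{L}[f(t)](s) = \int_{0}^{\infty} e^{-st} f(t)\, dt,
\qquad
\mathcal{S}[f(t)](u) = \int_{0}^{\infty} f(ut)\, e^{-t}\, dt .
```

The Sumudu transform is a scaled variant of the Laplace transform that preserves units, which is often cited as the reason for using the two side by side.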
Image compression is a serious issue in computer storage and transmission that makes efficient use of the redundancy embedded within an image itself; in addition, it may exploit human vision or perception limitations to reduce imperceivable information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques, and the latter incorporates the near-lossless com
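The modelling-plus-residual idea can be sketched with the simplest possible predictor: estimate each pixel from its left neighbour, keep the (mostly small) residuals, and zero out residuals below a threshold for near-lossless coding. The 1-D previous-pixel predictor and the threshold value are illustrative simplifications, not the authors' polynomial model.

```python
# Hedged sketch of predictive coding with thresholding; the predictor and
# threshold are assumptions for illustration, not the paper's method.
def encode(row, threshold=2):
    prev, residuals = 0, []
    for pixel in row:
        r = pixel - prev
        r = 0 if abs(r) <= threshold else r  # thresholding: near-lossless
        residuals.append(r)
        prev = prev + r                      # track what the decoder will see
    return residuals

def decode(residuals):
    prev, row = 0, []
    for r in residuals:
        prev += r
        row.append(prev)
    return row

row = [100, 101, 103, 110, 111, 200]
res = encode(row)
print(res)           # [100, 0, 3, 7, 0, 90]: small residuals in smooth regions
print(decode(res))   # each pixel within `threshold` of the original
```

Smooth regions produce long runs of zero residuals, which is where the compression gain comes from; the threshold bounds the reconstruction error per pixel.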