The support vector machine (SVM) is a type of supervised learning model that can be used for classification or regression depending on the dataset. SVM classifies data points by determining the best separating hyperplane between two or more groups. Working with enormous datasets, however, can cause a variety of issues, including poor accuracy and long computation times. In this research, SVM was extended by applying several kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model was illustrated and summarized in an algorithm using the kernel trick. The proposed method was examined using three simulated datasets with different sample sizes (50, 100, 200). A comparison between non-linear SVM and two standard classification methods was carried out on several criteria. Our study shows that the non-linear SVM method gives better results in terms of sensitivity, specificity, accuracy, and computation time. © 2024 Author(s).
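A minimal sketch of the kernel comparison described above, not the authors' code: it trains an SVM with several kernels on a synthetic binary dataset and reports sensitivity, specificity, accuracy, and run time. The "multi-layer" kernel is assumed here to correspond to the sigmoid (MLP-like) kernel available in scikit-learn.

```python
# Hedged sketch: compare SVM kernels on synthetic data (not the paper's datasets).
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:   # "sigmoid" assumed for "multi-layer"
    start = time.perf_counter()
    clf = SVC(kernel=kernel, gamma="scale").fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"{kernel:8s} sens={sensitivity:.2f} spec={specificity:.2f} "
          f"acc={accuracy:.2f} time={elapsed:.4f}s")
```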
Nowadays, it is convenient to use a search engine to find the information we need, but the results can sometimes be misleading because of differing media reports. The Recommender System (RS) is popular with businesses because it can provide information to users and thereby attract more revenue for companies. However, such systems sometimes recommend information that users do not need. For this reason, this paper proposes an architecture for a recommender system based on user-oriented preferences, called UOP-RS. To evaluate UOP-RS, this paper focuses on movie theatre information and collects the movie database from the IMDb website, which provides information …
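An illustrative sketch only, not the UOP-RS implementation: a simple user-oriented preference recommender over a toy movie-rating matrix that scores unseen movies by the ratings of similar users (cosine similarity). Movie titles and ratings are made up for the example.

```python
# Hedged sketch of a user-preference recommender (hypothetical data, not UOP-RS).
import numpy as np

movies = ["Movie A", "Movie B", "Movie C", "Movie D"]
# rows = users, columns = movies, 0 = not rated
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def recommend(user_idx, ratings, movies, top_n=2):
    norms = np.linalg.norm(ratings, axis=1)
    sims = (ratings @ ratings[user_idx]) / (norms * norms[user_idx] + 1e-9)
    sims[user_idx] = 0.0                      # ignore self-similarity
    scores = sims @ ratings                   # weighted votes from similar users
    scores[ratings[user_idx] > 0] = -np.inf   # hide already-rated movies
    ranked = [movies[i] for i in np.argsort(scores)[::-1] if np.isfinite(scores[i])]
    return ranked[:top_n]

print(recommend(0, ratings, movies))          # e.g. ['Movie C']
```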
Regression testing is a crucial phase in the software development lifecycle that ensures new changes or updates to a software system do not introduce defects or adversely affect existing functionality. However, as software systems grow in complexity, the number of test cases in the regression suite can become large, which results in more testing time and resource consumption. In addition, the presence of redundant and faulty test cases can reduce the efficiency of the regression testing process. Therefore, this paper presents a new Hybrid Framework to Exclude Similar & Faulty Test Cases in Regression Testing (ETCPM) that utilizes automated code analysis techniques and historical test execution data to …
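A minimal sketch of the general idea, not the ETCPM framework itself: it filters a regression suite by dropping test cases whose coverage is near-duplicated by an already-kept case (Jaccard similarity) and by dropping cases flagged as faulty in historical execution data. The threshold, coverage sets, and test names are hypothetical.

```python
# Hedged sketch: exclude similar and historically faulty test cases.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def select_tests(coverage: dict, faulty: set, sim_threshold: float = 0.9):
    kept = []
    for test, covered in sorted(coverage.items(), key=lambda kv: -len(kv[1])):
        if test in faulty:
            continue                      # exclude historically faulty tests
        if any(jaccard(covered, coverage[k]) >= sim_threshold for k in kept):
            continue                      # exclude near-duplicate coverage
        kept.append(test)
    return kept

coverage = {
    "t1": {"mod_a", "mod_b"},
    "t2": {"mod_a", "mod_b"},      # redundant with t1
    "t3": {"mod_c"},
    "t4": {"mod_d"},               # known faulty/flaky
}
print(select_tests(coverage, faulty={"t4"}))   # ['t1', 't3']
```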
Triticale is a hybrid of wheat and rye grown for use as animal feed. In Florida, due to its soft coat, triticale is highly vulnerable to Sitophilus oryzae L. (rice weevil), and there is interest in developing methods to detect early-instar larvae so that infestations can be targeted before they become economically damaging. The objective of this study was to develop prediction models of the degree of infestation for triticale seed infested with rice weevils at different growth stages. Spectral signatures were tested as a method to detect rice weevils in triticale seed. Groups of seeds at 11 different levels (degrees) of infestation, 0–62%, were obtained by combining different ratios of infested and uninfested seeds. A spectrophotometer was …
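An illustrative sketch only, on synthetic data rather than the study's measurements: it fits a regression model that predicts infestation degree from spectral reflectance values, which is the general approach the abstract describes. The number of bands and the spectra are invented for the example.

```python
# Hedged sketch: regress infestation degree on (synthetic) spectral reflectance.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_bands = 50                                              # hypothetical band count
infestation = np.repeat(np.linspace(0, 62, 11), 5)        # 11 levels, 5 replicates
# Synthetic spectra whose shape drifts slightly with infestation degree
spectra = (rng.normal(0.5, 0.02, (infestation.size, n_bands))
           + np.outer(infestation / 62.0, np.linspace(0, 0.1, n_bands)))

model = LinearRegression().fit(spectra, infestation)
print("R^2 on training data:",
      round(r2_score(infestation, model.predict(spectra)), 3))
```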
In this paper, we study a non-parametric model in which the response variable has missing observations (non-response) under the missing completely at random (MCAR) mechanism. We then suggest kernel-based non-parametric single imputation for the missing values and compare it with nearest neighbor imputation using a simulation study covering several models and different cases of sample size, variance, and rate of missing data.
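A minimal sketch under MCAR, assuming a single covariate x and response y (not the paper's simulation design): Nadaraya-Watson kernel-based single imputation for the missing responses versus nearest neighbor imputation. The bandwidth and data-generating model are illustrative.

```python
# Hedged sketch: kernel-based single imputation vs. nearest neighbor imputation under MCAR.
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)
missing = rng.random(n) < 0.2                 # MCAR: ~20% of responses missing

def kernel_impute(x, y, missing, h=0.1):
    y_imp = y.copy()
    obs = ~missing
    for i in np.where(missing)[0]:
        w = np.exp(-0.5 * ((x[obs] - x[i]) / h) ** 2)   # Gaussian kernel weights
        y_imp[i] = np.sum(w * y[obs]) / np.sum(w)
    return y_imp

def nn_impute(x, y, missing):
    y_imp = y.copy()
    obs_idx = np.where(~missing)[0]
    for i in np.where(missing)[0]:
        y_imp[i] = y[obs_idx[np.argmin(np.abs(x[obs_idx] - x[i]))]]
    return y_imp

true_y = np.sin(2 * np.pi * x)
for name, imp in [("kernel", kernel_impute(x, y, missing)),
                  ("nearest", nn_impute(x, y, missing))]:
    mse = np.mean((imp[missing] - true_y[missing]) ** 2)
    print(f"{name}: MSE on imputed values = {mse:.4f}")
```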
In this study, different methods were used for estimating the location and scale parameters of the extreme value distribution, such as maximum likelihood estimation (MLE), the method of moments (ME), and approximation estimators based on percentiles, known as White's method, since the extreme value distribution is one of the exponential-family distributions. Ordinary least squares (OLS), weighted least squares (WLS), ridge regression (Rig), and adjusted ridge regression (ARig) estimators were also used. Two parameters for the expected value of the percentile as an estimator for the distribution f…
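A small sketch, not the paper's code: estimating the location and scale of a type-I extreme value (Gumbel) distribution by maximum likelihood and by the method of moments, using the standard moment relations mean = mu + gamma*beta and variance = (pi^2/6)*beta^2, where gamma is Euler's constant.

```python
# Hedged sketch: MLE and method-of-moments estimates for the Gumbel (extreme value) distribution.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(2)
sample = gumbel_r.rvs(loc=10.0, scale=2.0, size=200, random_state=rng)

# Maximum likelihood estimation
mu_mle, beta_mle = gumbel_r.fit(sample)

# Method of moments estimation
euler_gamma = 0.5772156649
beta_mom = np.sqrt(6.0 * np.var(sample, ddof=1)) / np.pi
mu_mom = np.mean(sample) - euler_gamma * beta_mom

print(f"MLE: mu={mu_mle:.3f}, beta={beta_mle:.3f}")
print(f"MoM: mu={mu_mom:.3f}, beta={beta_mom:.3f}")
```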
The thermal performance of a closed wet cooling tower has been investigated experimentally and theoretically in this work. The theoretical model is based on heat and mass transfer equations and heat and mass balance equations established for the steady-state case. A new small indirect cooling tower was used for conducting the experiments. The cooling capacity of the cooling tower is 1 kW at an inlet water temperature of 38 °C, a water mass velocity of 2.3 kg/m²·s, and an air wet-bulb temperature of 26 °C. This study investigates the relationship between the saturation efficiency, cooling capacity, and coefficient of performance of the closed wet cooling tower and different operating parameters such as wet-bulb temperature, variable air and spray water fl…
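An illustrative calculation only, assuming the common definitions Q = m_w * cp_w * (T_w,in - T_w,out) for cooling capacity and efficiency = (T_w,in - T_w,out) / (T_w,in - T_wb,in) for the saturation efficiency of a closed wet cooling tower; apart from the inlet conditions quoted in the abstract, the values are hypothetical, not the study's measurements.

```python
# Hedged sketch: saturation efficiency and cooling capacity from assumed operating values.
cp_w = 4.186            # kJ/kg.K, specific heat of water
m_w = 0.02              # kg/s, assumed water mass flow rate
T_w_in = 38.0           # C, inlet water temperature (from the abstract)
T_wb_in = 26.0          # C, air wet-bulb temperature (from the abstract)
T_w_out = 32.0          # C, assumed outlet water temperature

Q = m_w * cp_w * (T_w_in - T_w_out)                    # cooling capacity, kW
efficiency = (T_w_in - T_w_out) / (T_w_in - T_wb_in)   # saturation efficiency
print(f"Q = {Q:.2f} kW, saturation efficiency = {efficiency:.2f}")
```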
Previously, many empirical models have been used to predict corrosion rates under different CO2 corrosion conditions. Most of these models do not predict the corrosion rate exactly; moreover, they determine the effects of variables by holding some variables constant and changing the values of others to obtain the regression model. As a result, the experiments become numerous and costly. In this paper, response surface methodology (RSM) is proposed to optimize the experiments and reduce the number of experimental runs. The experiments studied the effects of temperature (40–60 °C), pH (3–5), acetic acid (HAc) concentration (1000–3000 ppm), and rotation speed (1000–1500 rpm) on the CO2 corrosion performance of t…
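A minimal sketch of the response-surface idea, not the study's model: it fits a second-order polynomial surface for corrosion rate over the factor ranges quoted in the abstract. The corrosion-rate values are synthetic placeholders just to make the example runnable.

```python
# Hedged sketch: quadratic response surface over the abstract's factor ranges (synthetic response).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 30
T = rng.uniform(40, 60, n)          # temperature, C
pH = rng.uniform(3, 5, n)
HAc = rng.uniform(1000, 3000, n)    # acetic acid concentration, ppm
rpm = rng.uniform(1000, 1500, n)    # rotation speed
X = np.column_stack([T, pH, HAc, rpm])

# Synthetic corrosion rate (placeholder relationship, not measured data)
y = 0.05 * T - 0.8 * pH + 0.0004 * HAc + 0.001 * rpm + rng.normal(0, 0.1, n)

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
print("Predicted rate at T=50, pH=4, HAc=2000, rpm=1250:",
      round(float(rsm.predict([[50, 4, 2000, 1250]])[0]), 3))
```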
The Dagum Regression Model, introduced to address limitations in traditional econometric models, provides enhanced flexibility for analyzing data characterized by heavy tails and asymmetry, which is common in income and wealth distributions. This paper develops and applies the Dagum model, demonstrating its advantages over other distributions such as the Log-Normal and Gamma distributions. The model's parameters are estimated using Maximum Likelihood Estimation (MLE) and the Method of Moments (MoM). A simulation study evaluates both methods' performance across various sample sizes, showing that MoM tends to offer more robust and precise estimates, particularly in small samples. These findings provide valuable insights into the analysis …
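A minimal sketch, not the paper's estimator: maximum-likelihood fitting of the three-parameter Dagum distribution, whose density is assumed here to be f(x; a, b, p) = a p x^(a p - 1) / (b^(a p) (1 + (x/b)^a)^(p + 1)) for x > 0, with data simulated via the inverse CDF x = b (u^(-1/p) - 1)^(-1/a). A method-of-moments version would instead match sample moments to the distribution's theoretical moment expressions and is omitted here.

```python
# Hedged sketch: MLE for the Dagum distribution by minimizing the negative log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
a_true, b_true, p_true = 3.0, 2.0, 1.5
u = rng.uniform(size=500)
x = b_true * (u ** (-1.0 / p_true) - 1.0) ** (-1.0 / a_true)   # inverse-CDF sampling

def neg_log_lik(theta, x):
    a, b, p = theta
    if a <= 0 or b <= 0 or p <= 0:
        return np.inf
    z = (x / b) ** a
    return -np.sum(np.log(a) + np.log(p) + (a * p - 1) * np.log(x)
                   - a * p * np.log(b) - (p + 1) * np.log1p(z))

res = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(x,), method="Nelder-Mead")
print("MLE (a, b, p):", np.round(res.x, 3))
```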
We propose a new method for detecting abnormality in cerebral tissues within Magnetic Resonance Images (MRI). The proposed classifier comprises cerebral tissue extraction, image division into angular and distance span vectors, extraction of four features for each portion, and classification to determine the location of the abnormality. The threshold value and region of interest are determined using operator input and the Otsu algorithm. A novel division of brain slice images is introduced via angular and distance span vectors of 24° and 15 pixels, respectively. Rotation invariance of the angular span vector is determined. Automatic categorization of images into normal and abnormal brain tissue is performed using a Support Vector Machine (SVM). St…
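An illustrative sketch only, on a synthetic image rather than the paper's pipeline: it divides a brain-slice image into 24-degree angular sectors and 15-pixel distance rings around the image center and extracts a simple mean-intensity feature per region; such region features could then be fed to an SVM classifier.

```python
# Hedged sketch: angular/distance partitioning of an image and per-region features.
import numpy as np

img = np.random.default_rng(5).random((128, 128))      # stand-in for an MRI slice
cy, cx = np.array(img.shape) / 2.0
yy, xx = np.indices(img.shape)
angle = (np.degrees(np.arctan2(yy - cy, xx - cx)) + 360) % 360
radius = np.hypot(yy - cy, xx - cx)

angular_bin = (angle // 24).astype(int)                 # 15 sectors of 24 degrees each
distance_bin = (radius // 15).astype(int)               # rings 15 pixels wide

features = []
for a in range(angular_bin.max() + 1):
    for d in range(distance_bin.max() + 1):
        mask = (angular_bin == a) & (distance_bin == d)
        if mask.any():
            features.append(img[mask].mean())           # one illustrative feature per region
print("feature vector length:", len(features))
```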
This research presents a new algorithm for classifying shadow and water bodies in high-resolution (4-meter) satellite images of Baghdad city. The equations of the color space components C1-C2-C3 were modified: the color space component C3 (blue) was used to discriminate shadow, and C1 (red) was used to detect water bodies (the river). The new technique was successfully tested on many images from Google Earth and Ikonos. Experimental results show that this algorithm effectively detects all types of shadows in one color and also detects the water bodies in another color. The benefit of this new technique is to discriminate between shadow and water in a fast Matlab pro…
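A minimal sketch, assuming the standard C1-C2-C3 color-space definitions C1 = arctan(R / max(G, B)), C2 = arctan(G / max(R, B)), C3 = arctan(B / max(R, G)); the shadow and water masks below use simple illustrative thresholds, not the modified equations or thresholds used in the paper.

```python
# Hedged sketch: C1-C2-C3 components with illustrative thresholding for shadow and water.
import numpy as np

def c1c2c3(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6                                   # avoid division by zero
    c1 = np.arctan(r / (np.maximum(g, b) + eps))
    c2 = np.arctan(g / (np.maximum(r, b) + eps))
    c3 = np.arctan(b / (np.maximum(r, g) + eps))
    return c1, c2, c3

rgb = np.random.default_rng(6).random((64, 64, 3))   # stand-in for a satellite image
c1, c2, c3 = c1c2c3(rgb)
shadow_mask = c3 > 0.9        # illustrative threshold on the "blue" component
water_mask = c1 < 0.5         # illustrative threshold on the "red" component
print("shadow pixels:", int(shadow_mask.sum()), "water pixels:", int(water_mask.sum()))
```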