Researchers need to understand the differences between parametric and nonparametric regression models and how they work with the available information about the relationship between the response and explanatory variables and the distribution of the random errors. This paper proposes a new kernel function for nonparametric regression and employs it with the Nadaraya-Watson kernel estimator alongside the Gaussian kernel function. The proposed kernel function (AMS) is then compared to the Gaussian kernel and to the traditional parametric method, ordinary least squares (OLS). The objective of this study is to examine the effectiveness of nonparametric regression and to identify the best-performing model when employing the Nadaraya-Watson kernel estimator with the proposed kernel function (AMS), the Gaussian kernel, and the ordinary least squares (OLS) method. It also determines which method yields the most accurate results when analyzing nonparametric regression models and provides valuable insights for practitioners looking to apply these techniques in real-world scenarios. Criteria such as generalized cross-validation (GCV), mean squared error (MSE), and the coefficient of determination are used to select the most efficient estimated model. Simulated data were used to evaluate the performance and efficiency of the estimators under different sample sizes. The simulation results illustrate that the Nadaraya-Watson kernel estimator using the proposed kernel function (AMS) exhibited favorable and superior performance compared to the other methods. The coefficients of determination attained values as high as 98%, 99%, and 99%. The proposed function (AMS) yielded the lowest MSE and GCV values across all samples. This suggests that the model can generate precise predictions and improve performance on the data under study.
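As an illustration of the estimator discussed in this abstract, the following is a minimal Python sketch of the Nadaraya-Watson estimator with the standard Gaussian kernel, together with the MSE and GCV criteria. The proposed (AMS) kernel is not defined in the abstract, so only the Gaussian version is shown; the simulated data, the bandwidth h, and the function names are illustrative assumptions, not the paper's design.

```python
import numpy as np

def gaussian_kernel(u):
    """Gaussian kernel K(u) = exp(-u^2 / 2) / sqrt(2 * pi)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

# Simulated data, only in the general spirit of a simulation study
rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, n)

h = 0.05  # illustrative bandwidth (in practice chosen via GCV/MSE criteria)

# Nadaraya-Watson fit written through its smoother ("hat") matrix S
W = gaussian_kernel((x[:, None] - x[None, :]) / h)
S = W / W.sum(axis=1, keepdims=True)
fitted = S @ y

mse = np.mean((y - fitted)**2)
gcv = n * np.sum((y - fitted)**2) / (n - np.trace(S))**2  # generalized cross-validation
print(f"MSE = {mse:.4f}, GCV = {gcv:.4f}")
```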
Circular data (circular observations) are periodic data measured on the unit circle in radians or degrees. Because of their cyclical nature, they are fundamentally different from linear data, which are compatible with the mathematical representation of the usual linear regression model. Circular data arise in a wide variety of scientific, medical, economic and social fields. Circular (angular) regression is one of the most important statistical methods for representing such data, and there are several methods of estimating it, both parametric and nonparametric. This work therefore uses three circular regression models, two parametric and one nonparametric, including the (DM) model, maximum likelihood estimation (MLE), and a circular shrinkage model …
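For readers unfamiliar with why circular observations cannot be treated as ordinary linear data, the following minimal Python sketch contrasts the circular mean direction with the arithmetic mean of angles; the function names and example values are illustrative assumptions.

```python
import numpy as np

def circular_mean(theta):
    """Mean direction of angles (radians): atan2 of the averaged sines and cosines."""
    return np.arctan2(np.mean(np.sin(theta)), np.mean(np.cos(theta)))

def circular_residual(theta, theta_hat):
    """Signed angular difference wrapped to (-pi, pi]."""
    d = theta - theta_hat
    return np.arctan2(np.sin(d), np.cos(d))

angles = np.deg2rad([350.0, 10.0, 5.0])   # observations clustered near 0 on the circle
print(np.rad2deg(circular_mean(angles)))  # about 1.7 degrees, not the arithmetic 121.7
```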
In general, researchers, and statisticians in particular, have usually turned to nonparametric regression models when parametric methods fail to analyze the models precisely. In that case the parametric methods are of little use, so researchers turn to nonparametric methods for their ease of programming. Nonparametric methods can also be used to suggest a parametric regression model for subsequent use. Moreover, an advantage of nonparametric methods is that they can address the problem of multicollinearity between explanatory variables combined with nonlinear data. This problem can be handled by kernel ridge regression, which depends on …
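A minimal sketch of the kernel ridge regression idea mentioned above, assuming a Gaussian (RBF) kernel and near-collinear simulated predictors; the function names, the penalty lam, and the kernel parameter gamma are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_gram(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||X_i - Z_j||^2)."""
    sq = ((X[:, None, :] - Z[None, :, :])**2).sum(axis=2)
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
    """Dual coefficients alpha = (K + lam * I)^{-1} y."""
    K = gaussian_gram(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def kernel_ridge_predict(Xnew, X, alpha, gamma=1.0):
    return gaussian_gram(Xnew, X, gamma) @ alpha

# Two strongly correlated explanatory variables with a nonlinear response
rng = np.random.default_rng(1)
x1 = rng.normal(size=80)
x2 = x1 + rng.normal(scale=0.05, size=80)      # near-collinear with x1
X = np.column_stack([x1, x2])
y = np.sin(x1) + rng.normal(scale=0.1, size=80)

alpha = kernel_ridge_fit(X, y, lam=0.1, gamma=0.5)
y_hat = kernel_ridge_predict(X, X, alpha, gamma=0.5)
print("in-sample MSE:", np.mean((y - y_hat)**2))
```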
In this research, some robust nonparametric methods were used to estimate the semiparametric regression model, and these methods were then compared using the MSE criterion under different sample sizes, variance levels, contamination rates, and three different models. The methods are (S-LLS) S-estimation with local linear smoothing, (M-LLS) M-estimation with local linear smoothing, (S-NW) S-estimation with Nadaraya-Watson smoothing, and (M-NW) M-estimation with Nadaraya-Watson smoothing.
The results for the first model showed that the (S-LLS) method was the best for large sample sizes, while for small sample sizes the …
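The abstract does not give the weight functions or algorithmic details, so the following is only a rough Python sketch of the M-NW idea: Nadaraya-Watson kernel weights iteratively reweighted by Huber-type robustness weights. The constants, data, and names are assumptions for illustration.

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2)

def huber_weight(r, c=1.345):
    """Huber psi(r)/r weights: 1 inside [-c, c], c/|r| outside."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def m_nw(x, y, h, n_iter=10):
    """M-type Nadaraya-Watson: kernel weights iteratively downweighted at outliers."""
    n = len(x)
    K = gauss((x[:, None] - x[None, :]) / h)   # kernel weight matrix
    w = np.ones(n)                             # robustness weights
    for _ in range(n_iter):
        W = K * w[None, :]
        fit = (W @ y) / W.sum(axis=1)
        r = y - fit
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale (MAD)
        w = huber_weight(r / s)
    return fit

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 100))
y = np.cos(2 * np.pi * x) + rng.normal(0, 0.15, 100)
y[::10] += 3.0                                 # 10% contamination with outliers
print("robust fit MSE vs the clean curve:",
      np.mean((m_nw(x, y, h=0.07) - np.cos(2 * np.pi * x))**2))
```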
Dimension reduction and variable selection are very important topics in multivariate statistical analysis. When two or more of the predictor variables are linked by complete or incomplete regression relationships, a multicollinearity problem occurs, which constitutes a violation of one of the basic assumptions of the ordinary least squares method and leads to incorrect estimation results.
Several methods have been proposed to address this problem, including partial least squares (PLS), which is used to reduce the dimension of the regression analysis by means of linear transformations that convert a set of highly correlated variables into a set of new independent and unrelated variables …
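A brief sketch of the PLS idea described here, using scikit-learn's PLSRegression on simulated, highly correlated predictors; the number of components and the data-generating setup are illustrative assumptions, not the study's design.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Highly correlated predictors (p close to n) driven by a few latent factors
rng = np.random.default_rng(3)
n, p = 40, 30
base = rng.normal(size=(n, 5))
X = base @ rng.normal(size=(5, p)) + 0.01 * rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=n)

pls = PLSRegression(n_components=3)   # a few orthogonal latent components
pls.fit(X, y)
y_hat = pls.predict(X).ravel()
print("PLS in-sample MSE:", np.mean((y - y_hat)**2))
```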
In this paper, the Monte Carlo simulation method was used to compare the robust circular S-estimator with the circular least squares method, both when the data contain no outliers and when an outlier is present. Contamination was introduced in two directions: the first is contamination at high-leverage points, representing contamination in the circular independent variable, and the second is contamination in the vertical variable, representing the circular dependent variable. Three comparison criteria were used: the median standard error (Median SE), the median of the mean squared errors (Median MSE), and the median of the mean cosines of the circular residuals (Median A(k)). It was concluded that the least squares method is better than the …
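A small sketch, under illustrative assumptions, of the A(k) criterion as described in the abstract (the mean cosine of the circular residuals); taking the median of this quantity over simulation replications gives Median A(k).

```python
import numpy as np

def circ_residual(theta, theta_hat):
    """Circular residual wrapped to (-pi, pi]."""
    d = theta - theta_hat
    return np.arctan2(np.sin(d), np.cos(d))

def A_k(theta, theta_hat):
    """Mean cosine of the circular residuals; values near 1 indicate a good fit."""
    return np.mean(np.cos(circ_residual(theta, theta_hat)))

# Illustration: a fit with small angular errors scores close to 1
rng = np.random.default_rng(4)
truth = rng.uniform(0, 2 * np.pi, 50)
fitted = truth + rng.normal(0, 0.1, 50)
print("A(k) =", A_k(truth, fitted))
```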
This research reviews the importance of estimating the nonparametric regression function using the so-called canonical kernel, which depends on rescaling the smoothing parameter; the smoothing parameter plays a large and important role in kernel estimation and determines the appropriate amount of smoothing.
We have shown the importance of this method by applying these concepts to real data on the exchange rate of the U.S. dollar against the Japanese yen for the period from January 2007 to March 2010. The results demonstrated the preference for the nonparametric estimator with the Gaussian kernel over the other nonparametric and parametric regression estimators.
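Assuming the abstract refers to the standard canonical-kernel rescaling, the canonical bandwidth of a kernel K is given below; dividing the smoothing parameter by δ₀(K) makes a given bandwidth correspond to a comparable amount of smoothing across different kernels (for the Gaussian kernel, δ₀ ≈ 0.776). This formula is the usual textbook version and is an assumption on my part, not necessarily the paper's exact formulation.

```latex
% Canonical rescaling of a kernel K: the canonical bandwidth delta_0(K)
\[
\delta_0(K) = \left( \frac{R(K)}{\mu_2(K)^{2}} \right)^{1/5},
\qquad
R(K) = \int K^{2}(u)\,du, \qquad \mu_2(K) = \int u^{2} K(u)\,du ,
\]
% so that bandwidth h with kernel K smooths about as much as bandwidth
% h * delta_0(K') / delta_0(K) with another kernel K'.
```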
In this paper, we provide a proposed method for estimating missing values of the explanatory variables in a nonparametric multiple regression model and compare it with imputation by the arithmetic mean. The idea of the method is to employ the causal relationship between the variables to find an efficient estimate of the missing value. We rely on the kernel estimate given by the Nadaraya-Watson estimator and on least squares cross-validation (LSCV) to estimate the bandwidth, and we use a simulation study to compare the two methods.
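A minimal sketch of bandwidth selection by least squares cross-validation for a Nadaraya-Watson fit: the leave-one-out prediction error is computed over a grid of bandwidths and the minimizer is chosen. The grid, data, and function names are illustrative assumptions.

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2)

def lscv(h, x, y):
    """Leave-one-out least squares cross-validation score for a N-W regression fit."""
    K = gauss((x[:, None] - x[None, :]) / h)
    np.fill_diagonal(K, 0.0)                   # leave each point out of its own fit
    loo_fit = (K @ y) / K.sum(axis=1)
    return np.mean((y - loo_fit)**2)

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 120)
y = np.sin(3 * x) + rng.normal(0, 0.2, 120)

grid = np.linspace(0.02, 0.5, 50)
scores = [lscv(h, x, y) for h in grid]
h_opt = grid[int(np.argmin(scores))]
print("LSCV-selected bandwidth:", round(h_opt, 3))
```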
Multicollinearity is a problem that occurs when two or more predictor variables are correlated with each other; it constitutes a violation of one of the basic assumptions of the ordinary least squares method and yields biased estimation results. Several methods have been proposed to handle this problem. In this research, comparisons are made between the biased method and the unbiased Bayesian method using the Gamma distribution, in addition to the ordinary least squares method …
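The abstract does not name the biased estimator; as a common example of a biased method used against multicollinearity, the ridge estimator is shown below next to OLS. This is an assumption for illustration, not necessarily the method used in the research.

```latex
% OLS estimator versus the ridge (biased) estimator with shrinkage parameter k > 0:
\[
\hat{\beta}_{\mathrm{OLS}} = (X^{\top}X)^{-1}X^{\top}y,
\qquad
\hat{\beta}_{\mathrm{ridge}}(k) = (X^{\top}X + kI)^{-1}X^{\top}y .
\]
```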
This research discusses the comparison between the partial least squares regression model and tree regression. These models cover two types of statistical methods. The first, "parametric statistics", is represented by partial least squares, which is adopted when the number of variables is greater than the number of observations, as well as when the number of observations is larger than the number of variables. The second, "nonparametric statistics", is represented by tree regression, which divides the data in a hierarchical way. The regression models of the two approaches were estimated and then compared, with the comparison based on a mean squared error criterion …
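A short sketch, under assumed data and settings, of how the two models can be compared by a mean squared error criterion, using scikit-learn's PLSRegression and DecisionTreeRegressor with cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n, p = 60, 40                                  # many predictors relative to the sample size
X = rng.normal(size=(n, p))
y = X[:, 0]**2 + X[:, 1] + rng.normal(scale=0.2, size=n)

for name, model in [("PLS", PLSRegression(n_components=3)),
                    ("tree", DecisionTreeRegressor(max_depth=4, random_state=0))]:
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: cross-validated MSE = {mse:.3f}")
```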
The logistic regression model is one of the nonlinear models; it aims at obtaining highly efficient estimates and also gives the researcher an idea of the effect of the explanatory variable on the binary response variable.
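For reference, the binary logistic regression model referred to here can be written as follows (standard notation, not taken from the paper).

```latex
% Binary logistic regression: probability that the response Y equals 1 given x.
\[
P(Y = 1 \mid x) = \frac{\exp(x^{\top}\beta)}{1 + \exp(x^{\top}\beta)},
\qquad
\ln\frac{P(Y = 1 \mid x)}{1 - P(Y = 1 \mid x)} = x^{\top}\beta .
\]
```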