Regression analysis is a cornerstone of statistics, and it most often relies on the ordinary least squares (OLS) method. As is well known, however, OLS requires several conditions to hold if it is to perform accurately; when those conditions fail, its results can be unreliable, and in some cases the analysis cannot be completed at all. Among these conditions is the absence of multicollinearity, and in this work that problem is detected among the independent variables using the Farrar–Glauber test. Another requirement is linearity of the data; when it does not hold, we resort to nonparametric regression and treat the problem using kernel ridge regression, whose performance depends on estimating the bandwidth (smoothing parameter). Two different methods are therefore used to estimate this parameter, the Rule of Thumb (RULE) and the Bootstrap (BOOT), and they are compared by means of simulation.
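To make the bandwidth-selection step concrete, the following sketch chooses the smoothing parameter of an RBF kernel ridge regression by a Silverman-type rule of thumb and by a simple bootstrap; the constants, the scikit-learn estimator, and the out-of-bag scoring scheme are assumptions for illustration, not the paper's exact procedures.

```python
# Illustrative sketch only: bandwidth (smoothing parameter) selection for an
# RBF kernel ridge regression by (a) a Silverman-type rule of thumb and
# (b) a simple bootstrap that scores each candidate h on out-of-bag points.
# The rules, constants, and simulated data are assumptions, not the paper's method.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(x[:, 0]) + rng.normal(scale=0.3, size=n)   # simulated nonlinear data

def fit_predict(h, X_tr, y_tr, X_te, alpha=0.1):
    """Kernel ridge fit with bandwidth h (gamma = 1 / (2 h^2))."""
    model = KernelRidge(alpha=alpha, kernel="rbf", gamma=1.0 / (2.0 * h**2))
    return model.fit(X_tr, y_tr).predict(X_te)

# (a) Rule of thumb: Silverman-style plug-in based on the spread of x.
h_rule = 1.06 * x.std() * n ** (-1 / 5)

# (b) Bootstrap: resample, refit, and score candidate bandwidths on the
#     observations left out of each bootstrap sample.
candidates = np.linspace(0.05, 2.0, 40)
B = 50
scores = np.zeros_like(candidates)
for b in range(B):
    idx = rng.choice(n, size=n, replace=True)
    oob = np.setdiff1d(np.arange(n), idx)
    if oob.size == 0:
        continue
    for j, h in enumerate(candidates):
        pred = fit_predict(h, x[idx], y[idx], x[oob])
        scores[j] += np.mean((y[oob] - pred) ** 2)
h_boot = candidates[np.argmin(scores)]

print(f"rule-of-thumb h = {h_rule:.3f}, bootstrap h = {h_boot:.3f}")
```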
Survival and reliability analysis are among the most vital statistical topics and methods at the present time because of their importance in demographic, medical, industrial, and engineering fields. This research focuses on generating random samples from the Generalized Gamma (GG) distribution using the Inverse Transformation Method (ITM). Because the distribution function of the GG involves the incomplete gamma integral, classical estimation becomes difficult, so a numerical approximation method is illustrated and then used to estimate the survival function. The survival function was estimated by Monte Carlo simulation. The Entropy method was used for the …
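As an illustration of the inverse-transformation and Monte Carlo steps described above, the sketch below draws GG variates by numerically inverting the incomplete-gamma CDF and then estimates the survival function from the simulated sample; the Stacy parameterisation and the scipy routines are assumptions here, not necessarily the paper's own approximation.

```python
# Illustrative sketch only: Generalized Gamma (GG) variates by the inverse
# transformation method and a Monte Carlo estimate of the survival function.
# The Stacy parameterisation GG(a, c, k) and scipy's inverse regularised
# incomplete gamma are assumptions made for this demonstration.
import numpy as np
from scipy.special import gammainc, gammaincinv

a, c, k = 2.0, 1.5, 3.0            # scale, power, shape (assumed values)
rng = np.random.default_rng(3)

def gg_inverse_transform(u, a, c, k):
    """Invert F(t) = gammainc(k, (t/a)**c) numerically via gammaincinv."""
    return a * gammaincinv(k, u) ** (1.0 / c)

# Inverse transformation method: push uniform variates through the inverse CDF.
u = rng.uniform(size=100_000)
t = gg_inverse_transform(u, a, c, k)

# Monte Carlo estimate of the survival function S(t0) = P(T > t0),
# compared with the exact value 1 - gammainc(k, (t0/a)**c).
for t0 in (1.0, 2.0, 4.0):
    s_mc = np.mean(t > t0)
    s_exact = 1.0 - gammainc(k, (t0 / a) ** c)
    print(f"S({t0}) ~ {s_mc:.4f}  (exact {s_exact:.4f})")
```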
The Principal Components and Partial Least Squares methods can be regarded as very important methods in regression analysis, …
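For a concrete (assumed) illustration of the two approaches named above, the sketch below fits Principal Components Regression and Partial Least Squares to simulated collinear data with scikit-learn; the component counts and data are illustrative only, not the paper's setup.

```python
# Illustrative sketch only: Principal Components Regression (PCA followed by
# least squares) versus Partial Least Squares on simulated collinear data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n, p = 100, 10
Z = rng.normal(size=(n, 2))
X = Z @ rng.normal(size=(2, p)) + 0.05 * rng.normal(size=(n, p))  # collinear X
y = Z[:, 0] - 2 * Z[:, 1] + 0.1 * rng.normal(size=n)

pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
pls = PLSRegression(n_components=2).fit(X, y)

print("PCR R^2:", round(pcr.score(X, y), 3))
print("PLS R^2:", round(pls.score(X, y), 3))
```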
The penalized least squares method is a popular way to deal with high-dimensional data, where the number of explanatory variables is larger than the sample size. Its attractive properties are high prediction accuracy and the ability to perform estimation and variable selection at once. Penalized least squares yields a sparse model, that is, a model with few variables, which can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain the robust penalized least squares method and a robust penalized estimator, and …
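One common way to realise the robust penalized idea sketched above is to pair the Huber loss with an L1 penalty; the snippet below does this with scikit-learn's SGDRegressor on simulated high-dimensional data containing outliers, as an assumed illustration rather than the estimator developed in the paper.

```python
# Illustrative sketch only: a robust loss (Huber) combined with an L1 penalty,
# contrasted with ordinary Lasso, on high-dimensional data with outliers.
import numpy as np
from sklearn.linear_model import Lasso, SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n, p = 80, 120                                        # p > n: high-dimensional case
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 2, -1]                        # sparse true coefficients
y = X @ beta + 0.5 * rng.normal(size=n)
y[:8] += 20                                           # a few outlying responses

Xs = StandardScaler().fit_transform(X)

lasso = Lasso(alpha=0.1).fit(Xs, y)
robust = SGDRegressor(loss="huber", penalty="l1", alpha=0.01,
                      epsilon=1.35, max_iter=5000, random_state=0).fit(Xs, y)

# SGD with an L1 penalty gives approximately sparse coefficients, so small
# values are thresholded when counting selected variables.
print("nonzero coefs, lasso :", int(np.sum(lasso.coef_ != 0)))
print("nonzero coefs, robust:", int(np.sum(np.abs(robust.coef_) > 1e-3)))
```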
This article aims to estimate the partially linear model using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study the small-sample behavior under different functions, sample sizes, and variances. The results show that the wavelet smoother is the best, according to the mean average squared error criterion, in all of the cases considered.
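As an assumed illustration of kernel smoothing in a partially linear model, the sketch below uses a Nadaraya-Watson smoother inside Robinson's double-residual scheme to estimate the parametric coefficient; the bandwidth, data-generating functions, and estimator details are illustrative and need not match the paper's smoothers.

```python
# Illustrative sketch only: kernel-smoother estimation of the partially linear
# model y = x*beta + g(t) + e via Robinson's double-residual idea.
import numpy as np

rng = np.random.default_rng(6)
n = 400
t = rng.uniform(0, 1, n)
x = 0.5 * np.sin(2 * np.pi * t) + rng.normal(scale=0.5, size=n)  # x depends on t
beta_true = 2.0
g = np.cos(2 * np.pi * t)                                        # nonparametric part
y = beta_true * x + g + rng.normal(scale=0.3, size=n)

def nw_smooth(t, values, h=0.05):
    """Nadaraya-Watson estimate of E[values | t] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return (w @ values) / w.sum(axis=1)

# Double residual: remove E[.|t] from y and x, then regress residual on residual.
y_res = y - nw_smooth(t, y)
x_res = x - nw_smooth(t, x)
beta_hat = np.sum(x_res * y_res) / np.sum(x_res ** 2)

# The nonparametric part is then smoothed from the partial residuals.
g_hat = nw_smooth(t, y - beta_hat * x)
print(f"beta_hat = {beta_hat:.3f}  (true {beta_true})")
```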
Objectives: To investigate the polyomaviruses (BK, JC) in asymptomatic kidney transplant recipients and in healthy persons as controls. This is one of the first reports in Iraq on the serological detection and molecular characterization of the circulating polyomaviruses (BKV, JCV). Methodology: The present study was designed as a prospective case-control study conducted during the period from November 2015 to August 2016. A total of 97 serum and urine samples were collected randomly from 25 healthy control persons and 72 renal transplant recipients attending the Iraqi Renal Transplantation …
Ordinary Least Squares (OLS) is distinguished from Maximum Likelihood (ML) in that the exact moments of the OLS estimators are known and can be found, whereas for ML they are unknown; however, approximations to their biases correct to O(n⁻¹) can be obtained by standard methods. In this research, expressions approximating the biases of the ML estimators (the regression coefficients and the scale parameter) of the linear type-1 Extreme Value regression model for largest values are presented, using an approach that depends on finding the first, second, and third derivatives.
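For context, bias approximations of this type are usually built from the expected second- and third-order log-likelihood derivatives; a standard (Cox–Snell) form of the O(n⁻¹) bias, stated here in generic notation that is not taken from the paper, is:

```latex
% Generic Cox--Snell form of the O(n^{-1}) bias of an ML estimator \hat\theta_a,
% stated for context only (notation is not the paper's own).
% \kappa_{ij}, \kappa_{ijl} are expected second- and third-order log-likelihood
% derivatives, \kappa_{ij,l} = E(\ell_{ij}\,\ell_l), and \kappa^{ij} are elements
% of the inverse of the expected information matrix \{-\kappa_{ij}\}.
B(\hat{\theta}_a) \;=\; \sum_{i}\sum_{j,l} \kappa^{ai}\,\kappa^{jl}
\left[\tfrac{1}{2}\,\kappa_{ijl} + \kappa_{ij,l}\right] + O(n^{-2}),
\qquad a = 1,\dots,p .
```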
The research comprises three axes. The first is estimating the average completion time (in days) of oversight work for five supervisory departments in the Federal Office of Financial Supervision, and then selecting three audit outputs at the level of each of the five departments. Statistical analysis of the data showed that the completion times follow the Exponential distribution with one parameter (θ) and the Normal distribution with two parameters (μ, σ²). Four methods of estimating the parameter (θ) are introduced, as well as four methods of estimating the parameter (…
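As a minimal (assumed) illustration of fitting the two completion-time distributions named above, the sketch below estimates the exponential parameter θ and the normal parameters (μ, σ²) by maximum likelihood from simulated data; the paper's four estimation methods are not reproduced here.

```python
# Illustrative sketch only: ML fits of the Exponential(theta) and Normal(mu, sigma^2)
# models named in the abstract, on simulated completion times (days).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
days = rng.exponential(scale=12.0, size=60)        # simulated completion times

# Exponential(theta): the MLE of the mean completion time is the sample mean.
theta_hat = days.mean()

# Normal(mu, sigma^2): MLEs are the sample mean and the (biased) sample variance.
mu_hat, sigma_hat = days.mean(), days.std(ddof=0)

# The same fits via scipy, for comparison.
_, theta_scipy = stats.expon.fit(days, floc=0.0)
mu_scipy, sigma_scipy = stats.norm.fit(days)

print(f"theta_hat = {theta_hat:.2f} days (scipy: {theta_scipy:.2f})")
print(f"mu_hat = {mu_hat:.2f}, sigma_hat = {sigma_hat:.2f}")
```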
In this paper we estimate the coefficients and the scale parameter of a linear regression model whose residuals follow the type-1 extreme value distribution for largest values. This can be regarded as an improvement over studies based on the smallest values. We study two estimation methods (OLS and MLE), resorting to the Newton–Raphson (NR) and Fisher scoring methods to obtain the ML estimates because of the difficulty of the usual approach with MLE. The relative efficiency criterion is considered alongside the statistical inference procedures for the type-1 extreme value regression model for largest values: confidence intervals and hypothesis testing for both the scale parameter and the regression coefficients …
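To illustrate the numerical ML step, the sketch below fits a Gumbel (largest-values) regression by maximizing the log-likelihood with scipy's quasi-Newton optimizer in place of hand-coded Newton–Raphson or Fisher scoring; the data and starting values are assumptions for demonstration.

```python
# Illustrative sketch only: maximum-likelihood fitting of a linear regression
# whose errors follow the type-1 extreme value (Gumbel, largest-values)
# distribution, using a quasi-Newton optimizer on the negative log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])      # intercept + one regressor
beta_true, sigma_true = np.array([1.0, 2.0]), 0.5
y = X @ beta_true + sigma_true * rng.gumbel(size=n)        # Gumbel (largest values) errors

def neg_loglik(params):
    """Negative log-likelihood of the Gumbel (largest values) regression."""
    beta, log_sigma = params[:-1], params[-1]               # log-sigma keeps sigma > 0
    sigma = np.exp(log_sigma)
    z = (y - X @ beta) / sigma
    return n * log_sigma + np.sum(z + np.exp(-z))

# Start from OLS for beta and a rough scale estimate, then optimise.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
start = np.append(beta_ols, np.log(np.std(y - X @ beta_ols)))
res = minimize(neg_loglik, start, method="BFGS")

beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
print("beta_hat =", np.round(beta_hat, 3), " sigma_hat =", round(sigma_hat, 3))
```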