A Bayesian formulation of the ridge regression problem is considered, derived from a direct specification of prior information about the parameters of the general linear regression model when the data suffer from a high degree of multicollinearity. A new approach for deriving the conventional estimator of the ridge parameter proposed by Hoerl and Kennard (1970), as well as a Bayesian estimator, is presented. A numerical example is studied in order to compare the performance of these estimators.
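The conventional Hoerl–Kennard choice of the ridge parameter mentioned above can be sketched as follows. This is a minimal illustration on synthetic collinear data (none of the paper's data or its Bayesian derivation is reproduced); the formula k = p·σ̂²/(β̂′β̂) is the standard 1970 proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic collinear design (illustrative data, not from the paper).
n, p = 50, 3
z = rng.normal(size=n)
X = np.column_stack([z + 0.01 * rng.normal(size=n) for _ in range(p)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

# Ordinary least squares estimates and residual variance.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols
sigma2 = resid @ resid / (n - p)

# Hoerl-Kennard (1970) ridge parameter: k = p * sigma^2 / (beta' beta).
k = p * sigma2 / (beta_ols @ beta_ols)

# Ridge estimator: (X'X + kI)^{-1} X'y.
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
print(k, beta_ridge)
```

The ridge solution shrinks the coefficient vector relative to OLS, which is what stabilizes the estimates under multicollinearity.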
In this paper, the Response Surface Method (RSM) is used to investigate the impact of the input parameters electrode type (E.T.) [Gr, Cu and CuW], pulse current (Ip), pulse on-time (Ton), and pulse off-time (Toff) on the surface finish in EDM operations. A second-order regression model is adopted to approximate Surface Roughness (Ra), and a Central Composite Design (CCD) is used to estimate the model's coefficients for the input parameters. Experiments were performed on AISI D2 tool steel. The significant coefficients are obtained by performing an Analysis of Variance (ANOVA)
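A second-order response surface model of the kind described above can be fitted by ordinary least squares once the quadratic design matrix is built. The sketch below uses hypothetical coded factor values, not the paper's experimental runs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coded factors (Ip, Ton, Toff) and a simulated Ra response --
# illustrative values only, not the paper's experimental data.
X = rng.uniform(-1, 1, size=(20, 3))
Ra = (2.0 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 0] * X[:, 1]
      + 0.5 * X[:, 0] ** 2 + 0.1 * rng.normal(size=20))

def quad_design(X):
    """Second-order design matrix: intercept, linear, interaction, quadratic."""
    p = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(p)]
    cols += [X[:, i] * X[:, j] for i in range(p) for j in range(i + 1, p)]
    cols += [X[:, j] ** 2 for j in range(p)]
    return np.column_stack(cols)

# Least-squares estimates of the model's constant coefficients.
D = quad_design(X)
coef, *_ = np.linalg.lstsq(D, Ra, rcond=None)
print(coef.round(2))
```

With three factors the quadratic model has 1 + 3 + 3 + 3 = 10 coefficients; an ANOVA on this fit would then screen which of them are significant.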
This research aims to overcome the curse of dimensionality by using a non-linear regression method that reduces the root mean square error (RMSE): projection pursuit regression (PPR), one of the dimension-reduction methods designed for this problem. PPR is a statistical technique that finds the most informative projections in multi-dimensional data; with each projection found, the data are reduced by linear combinations along the projection. The process is repeated to produce good projections until the best projections are obtained. The main idea of PPR is to model
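A single-term version of the projection pursuit idea can be sketched directly: search for a unit direction a that minimizes the residual sum of squares after fitting a smooth ridge function to the projected data a′x. This toy implementation uses a low-degree polynomial as the ridge smoother and Nelder–Mead search, which is an assumption for illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic data whose signal lives along one projection (illustrative).
X = rng.normal(size=(200, 5))
a_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
a_true /= np.linalg.norm(a_true)
y = np.sin(X @ a_true) + 0.1 * rng.normal(size=200)

def ridge_fit(t, y, deg=5):
    """Ridge function g fitted as a low-degree polynomial in t = a'x."""
    c = np.polyfit(t, y, deg)
    return np.polyval(c, t)

def rss(a):
    a = a / np.linalg.norm(a)          # keep the direction unit length
    t = X @ a
    return np.sum((y - ridge_fit(t, y)) ** 2)

# Search for the best projection direction.
res = minimize(rss, x0=rng.normal(size=5), method="Nelder-Mead",
               options={"maxiter": 5000})
a_hat = res.x / np.linalg.norm(res.x)
print(np.round(a_hat, 2))
```

Full PPR repeats this step on the residuals, adding one projection term at a time until no further reduction is worthwhile.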
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we might come across in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters. Observation membership was also inferred and assessed for each method. The results showed that the flexible mixture model outperformed the others
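The standard way to fit a finite mixture of regressions is the EM algorithm: the E-step computes each observation's membership probabilities, and the M-step refits each component by weighted least squares. This is a generic two-component sketch on synthetic data, not any of the paper's specific methods:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Two-component mixture of simple regressions (synthetic, illustrative).
n = 300
x = rng.uniform(0, 10, n)
z = rng.random(n) < 0.5                     # latent component labels
y = np.where(z, 1.0 + 2.0 * x, 8.0 - 1.0 * x) + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), x])
K = 2
pi = np.full(K, 1 / K)                      # mixing weights
beta = rng.normal(size=(K, 2))              # per-component coefficients
sigma = np.ones(K)                          # per-component noise scales

for _ in range(200):
    # E-step: posterior membership probabilities for each observation.
    dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k]) for k in range(K)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted least squares and variance update per component.
    for k in range(K):
        w = resp[k]
        beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
        pi[k] = w.mean()

print(np.round(beta, 2), np.round(pi, 2))
```

The final `resp` matrix is the inferred observation membership: each column gives the probability that the observation belongs to each component.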
The Non-Homogeneous Poisson Process (NHPP) is a statistical subject of importance in other sciences, with wide application in areas such as queueing, repairable systems, computer and communication systems, and reliability theory, among many others. It is also used to model phenomena that do not occur at a fixed rate over time (events whose rate changes with time).
This research deals with some of the basic concepts related to the Non-Homogeneous Poisson Process. Two NHPP models were carried out, the power-law model and the Musa–Okumoto model, in order to estimate the
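For the power-law NHPP named above, the maximum likelihood estimates have a well-known closed form when the process is observed over a fixed window [0, T]. The sketch below simulates such a process by time-transforming a homogeneous Poisson process and then recovers the parameters; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Power-law NHPP: intensity lambda(t) = (beta/theta) * (t/theta)**(beta - 1),
# mean function m(t) = (t/theta)**beta.  Parameter values are illustrative.
beta_true, theta_true = 1.5, 10.0

# Simulate by time-transforming a unit-rate homogeneous Poisson process:
# if s_1 < s_2 < ... are unit-rate arrivals, then t_i = theta * s_i**(1/beta).
s = np.cumsum(rng.exponential(size=500))
t = theta_true * s ** (1 / beta_true)
T = 200.0                                   # observation window [0, T]
t = t[t <= T]
n = len(t)

# Maximum likelihood estimates for the time-truncated power-law process.
beta_hat = n / np.sum(np.log(T / t))
theta_hat = T / n ** (1 / beta_hat)
print(n, round(beta_hat, 3), round(theta_hat, 3))
```

With β > 1 the event rate grows over time, which is the typical deteriorating-system case in reliability work.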
Bayesian estimation of reliability in the stress(Y)–strength(X) model, which describes the life of a component with strength X subject to stress Y (the component fails if and only if, at any time, the applied stress exceeds its strength), has been studied; the reliability R = P(Y < X) can then be considered a measure of the component's performance. In this paper, a Bayesian analysis of R is considered when X and Y are independent Weibull random variables with common shape parameter α, in order to study the effect of each of the two different scale parameters β and λ, respectively, using three different loss functions [weighted, quadratic, and entropy] under two different prior functions [Gamma and extension of Jeffreys]
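Under one common Weibull parametrisation, R = P(Y < X) has a simple closed form when the shape parameter is shared, which a Monte Carlo check confirms. The parametrisation below (survival function exp(−β·x^α)) is an assumption for illustration; the closed form depends on it:

```python
import numpy as np

rng = np.random.default_rng(5)

# Weibull with common shape alpha and rate-style scale parameters:
# P(X > x) = exp(-beta * x**alpha), P(Y > y) = exp(-lam * y**alpha).
alpha, beta, lam = 2.0, 1.0, 3.0

# Closed form under this parametrisation: R = P(Y < X) = lam / (beta + lam).
R_exact = lam / (beta + lam)

# Monte Carlo check, using X**alpha ~ Exp(rate beta), Y**alpha ~ Exp(rate lam).
N = 200_000
X = rng.exponential(1 / beta, N) ** (1 / alpha)
Y = rng.exponential(1 / lam, N) ** (1 / alpha)
R_mc = np.mean(Y < X)
print(R_exact, round(R_mc, 3))
```

Note that R depends only on the two scale parameters, not on the common shape α, which is why the analysis can focus on β and λ.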
The objective of this study is to examine the properties of Bayes estimators of the shape parameter of the Power Function Distribution (PFD-I), using two different prior distributions for the parameter θ and different loss functions, compared with the maximum likelihood estimators. In many practical applications, we may have two different pieces of prior information about the prior distribution of the shape parameter of the Power Function Distribution, which influences the parameter estimation. So, we used two different conjugate priors for the shape parameter θ of the Power Function Distribution.
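For the power function density f(x; θ) = θ·x^(θ−1) on (0, 1), a Gamma prior on θ is conjugate, so the Bayes estimator under squared-error loss is just the posterior mean. The sketch below compares it with the MLE on simulated data; the prior hyperparameters are chosen for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Power Function Distribution on (0, 1): f(x; theta) = theta * x**(theta - 1).
theta_true = 2.5
n = 100
x = rng.random(n) ** (1 / theta_true)   # inverse-CDF sampling: F(x) = x**theta

s = -np.sum(np.log(x))                  # sufficient statistic

# Maximum likelihood estimator.
theta_mle = n / s

# Conjugate Gamma(a, b) prior (shape a, rate b; illustrative values) gives a
# Gamma(a + n, b + s) posterior; its mean is the Bayes estimator under
# squared-error loss.
a, b = 2.0, 1.0
theta_bayes = (a + n) / (b + s)
print(round(theta_mle, 3), round(theta_bayes, 3))
```

Other loss functions lead to different posterior summaries (e.g. the posterior median or a weighted mean), but the Gamma posterior itself is unchanged.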
Regression analysis is a cornerstone of statistics. It mostly relies on the ordinary least squares method, but, as is well known, that method requires several conditions to operate accurately, and without them the results can be unreliable; indeed, the absence of certain conditions can make completing the analysis impossible. Among those conditions is the multicollinearity problem, which we detect among the independent variables using the Farrar–Glauber test. In addition, when the linearity requirement of the data does not hold, we resort to the
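The overall Farrar–Glauber test builds a chi-square statistic from the determinant of the correlation matrix of the regressors: a determinant near zero signals multicollinearity. A minimal sketch on deliberately collinear synthetic data:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)

# Strongly collinear synthetic regressors (illustrative).
n, p = 100, 4
z = rng.normal(size=n)
X = np.column_stack([z + 0.1 * rng.normal(size=n) for _ in range(p)])

# Farrar-Glauber overall test: chi-square statistic from the determinant of
# the regressors' correlation matrix, with p(p-1)/2 degrees of freedom.
R = np.corrcoef(X, rowvar=False)
stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
p_value = chi2.sf(stat, df)
print(round(stat, 2), p_value)
```

A small p-value rejects the hypothesis that the regressors are orthogonal, confirming the multicollinearity the test is designed to detect.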
Semiparametric methods combine parametric and nonparametric methods; they are important in most studies that by nature require more refined procedures of accurate statistical analysis aimed at obtaining efficient estimators. The partial linear regression model is the most popular type of semiparametric model, consisting of a parametric component and a nonparametric component. Estimating the parametric component relies on certain assumptions; when those assumptions fail, the parametric component faces several problems, for example multicollinearity (the explanatory variables are interrelated with each other). To treat this problem we use
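One standard way to estimate the parametric part of a partial linear model y = Xβ + g(t) + ε is Robinson's double-residual approach: smooth both y and X against t, then regress residuals on residuals. This is a generic sketch with a Nadaraya–Watson smoother on synthetic data, not necessarily the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(8)

# Partial linear model: y = X @ beta + g(t) + noise (synthetic, illustrative).
n = 400
t = rng.uniform(0, 1, n)
X = rng.normal(size=(n, 2))
beta_true = np.array([1.5, -2.0])
y = X @ beta_true + np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=n)

def nw_smooth(t, v, h=0.05):
    """Nadaraya-Watson estimate of E[v | t] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    num, den = w @ v, w.sum(axis=1)
    return num / den if v.ndim == 1 else num / den[:, None]

# Robinson-style double-residual estimator of the parametric component:
# partial out the nonparametric effect of t from both y and X, then use OLS.
y_res = y - nw_smooth(t, y)
X_res = X - nw_smooth(t, X)
beta_hat, *_ = np.linalg.lstsq(X_res, y_res, rcond=None)
print(np.round(beta_hat, 2))
```

Because the nonparametric trend is removed before the least-squares step, β̂ is not biased by the unknown function g(t).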
In this paper, a new variable selection method is presented to select essential variables from large datasets. The new model is a modified version of the Elastic Net model. The modified Elastic Net variable selection model is summarized in an algorithm. It is applied to a Leukemia dataset that has 3051 variables (genes) and 72 samples. In practice, working with a dataset of this size directly is not feasible. The modified model is compared with some standard variable selection methods. Perfect classification is achieved by applying the modified Elastic Net model, as it has the best performance. All the calculations done for this paper are in
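The standard Elastic Net that the modified model builds on can be sketched with scikit-learn on a synthetic p ≫ n problem standing in for a gene-expression matrix. This shows only the baseline method; the paper's modification is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(9)

# High-dimensional synthetic data (p >> n), a stand-in for gene expression;
# only the first 5 of 500 variables carry signal.
n, p = 60, 500
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 2.5, -1.5, 2.0]
y = X @ beta + 0.5 * rng.normal(size=n)

# Standard Elastic Net: l1_ratio mixes the lasso (sparsity) and ridge
# (grouping) penalties; variables with nonzero coefficients are "selected".
model = ElasticNet(alpha=0.1, l1_ratio=0.7, max_iter=10_000).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(len(selected), selected[:10])
```

The l1 part of the penalty zeroes out most coefficients, which is what turns the regression fit into a variable selection procedure.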