In this paper, we investigate the connection between hierarchical models and the power prior distribution in quantile regression (QReg). For a given quantile level, we develop an expression for the power parameter that calibrates the power prior distribution for quantile regression to a corresponding hierarchical model. In addition, we estimate the relation between the power parameter and the quantile level via the hierarchical model. The proposed methodology is illustrated with a real-data example.
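As a hedged illustration of the power prior idea in a quantile-regression setting, the sketch below forms a maximum a posteriori estimate by adding the historical-data check loss, downweighted by a fixed power parameter a0, to the current-data check loss. The asymmetric-Laplace working likelihood, the flat initial prior, the simulated data, and the value of a0 are assumptions made for this example and are not taken from the paper.

```python
# Minimal sketch: power prior for quantile regression via a weighted check loss.
# Assumptions: asymmetric-Laplace working likelihood, flat initial prior,
# fixed power parameter a0, simulated current data (X, y) and historical data (X0, y0).
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile check (pinball) loss rho_tau(u)."""
    return u * (tau - (u < 0))

def neg_log_power_posterior(beta, X, y, X0, y0, tau, a0):
    """Current-data check loss plus a0 times the historical-data check loss,
    i.e. minus the log of an asymmetric-Laplace power-prior posterior kernel."""
    r = y - X @ beta
    r0 = y0 - X0 @ beta
    return check_loss(r, tau).sum() + a0 * check_loss(r0, tau).sum()

rng = np.random.default_rng(0)
n, n0 = 100, 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
X0 = np.column_stack([np.ones(n0), rng.normal(size=n0)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
y0 = X0 @ np.array([1.0, 2.0]) + rng.normal(size=n0)

tau, a0 = 0.5, 0.4   # quantile level and power parameter (illustrative values)
fit = minimize(neg_log_power_posterior, x0=np.zeros(2),
               args=(X, y, X0, y0, tau, a0), method="Nelder-Mead")
print("MAP estimate of beta at tau = 0.5:", fit.x)
```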
In this paper, some estimators for the unknown shape parameter and reliability function of the Basic Gompertz distribution were obtained, namely the maximum likelihood estimator and some Bayesian estimators under the squared-log-error loss function using Gamma and Jeffreys priors. A Monte-Carlo simulation was conducted to compare the performance of all estimates of the shape parameter and the reliability function, based on mean squared errors (MSEs) and integrated mean squared errors (IMSEs), respectively. Finally, a discussion is provided to illustrate the results, which are summarized in tables.
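As a sketch of this kind of comparison, the code below runs a small Monte-Carlo study for the shape parameter theta of a Basic Gompertz distribution, taken here as F(x) = 1 - exp(-theta(e^x - 1)), contrasting the MLE with the Bayes estimator under squared-log-error loss and a Gamma(a, b) prior. The parametrisation, prior hyperparameters, sample size, and number of replications are illustrative assumptions rather than the paper's design; the reliability function and the Jeffreys-prior case are omitted.

```python
# Monte-Carlo sketch: MLE vs Bayes (squared-log-error loss, Gamma prior)
# for the Basic Gompertz shape parameter. All numeric values are illustrative.
import numpy as np
from scipy.special import digamma

def rgompertz(n, theta, rng):
    # Basic Gompertz: F(x) = 1 - exp(-theta*(exp(x) - 1)), x > 0
    u = rng.uniform(size=n)
    return np.log(1.0 - np.log(1.0 - u) / theta)

rng = np.random.default_rng(1)
theta, n, a, b, reps = 1.5, 25, 2.0, 1.0, 2000
mse_mle = mse_bayes = 0.0
for _ in range(reps):
    x = rgompertz(n, theta, rng)
    t = np.sum(np.exp(x) - 1.0)             # sufficient statistic
    mle = n / t                              # maximum likelihood estimator
    alpha_post, beta_post = n + a, t + b     # Gamma posterior (rate parametrisation)
    # Under squared-log-error loss the Bayes estimator is exp(E[log theta | data]).
    bayes = np.exp(digamma(alpha_post)) / beta_post
    mse_mle += (mle - theta) ** 2 / reps
    mse_bayes += (bayes - theta) ** 2 / reps
print(f"MSE(MLE) = {mse_mle:.4f},  MSE(Bayes, sq-log-error) = {mse_bayes:.4f}")
```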
In this paper, the two parameters of the Exponential distribution were estimated using the Bayesian estimation method under three different loss functions: the squared-error loss function, the precautionary loss function, and the entropy loss function. An Exponential prior and a Gamma prior have been assumed for the scale parameter γ and the location parameter δ, respectively. In the Bayesian estimation, maximum likelihood estimators have been used as the initial estimators, and the Tierney-Kadane approximation has been used effectively. Based on the Monte-Carlo simulation method, those estimators were compared in terms of the mean squared errors (MSEs). The results showed that the Bayesian esti…
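To make the Tierney-Kadane step concrete, the sketch below applies the approximation to a simpler one-parameter exponential model with rate theta and a conjugate Gamma(a, b) prior under squared-error loss, so that the exact posterior mean is available as a check. The one-parameter simplification, the choice of loss, and all numeric values are illustrative assumptions, not the paper's two-parameter (scale/location) specification.

```python
# Tierney-Kadane (Laplace) approximation of a posterior mean, checked against
# the exact conjugate answer for an exponential rate with a Gamma(a, b) prior.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
theta_true, n, a, b = 2.0, 30, 1.0, 1.0
x = rng.exponential(scale=1.0 / theta_true, size=n)
S = x.sum()

def log_post(t):                      # log posterior kernel l(theta)
    return (n + a - 1) * np.log(t) - (S + b) * t

def tierney_kadane(log_post, g, bounds=(1e-6, 50.0)):
    """Approximate E[g(theta) | data] by the Tierney-Kadane ratio of Laplace terms."""
    def mode_and_sigma(f):
        res = minimize_scalar(lambda t: -f(t), bounds=bounds, method="bounded")
        m = res.x
        h = 1e-5 * max(abs(m), 1.0)
        d2 = (f(m + h) - 2 * f(m) + f(m - h)) / h**2   # numerical second derivative
        return m, np.sqrt(-1.0 / d2)
    lstar = lambda t: log_post(t) + np.log(g(t))
    m0, s0 = mode_and_sigma(log_post)
    m1, s1 = mode_and_sigma(lstar)
    return (s1 / s0) * np.exp(lstar(m1) - log_post(m0))

approx = tierney_kadane(log_post, g=lambda t: t)
exact = (n + a) / (S + b)             # Gamma posterior mean, for comparison
print(f"Tierney-Kadane: {approx:.4f}   exact posterior mean: {exact:.4f}")
```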
This paper deals with Bayesian estimation of the parameters of the Gamma distribution under the generalized weighted loss function, based on Gamma and Exponential priors for the shape and scale parameters, respectively. Moment and maximum likelihood estimators, together with Lindley's approximation, have been used effectively in the Bayesian estimation. Based on the Monte-Carlo simulation method, those estimators are compared in terms of the mean squared errors (MSEs).
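As a rough sketch of how Lindley's approximation is typically applied in problems of this kind, the code below approximates the posterior mean of a Gamma rate parameter beta (with the shape alpha treated as known) under squared-error loss and an Exponential(lam) prior, and compares it with the exact conjugate answer. The known-shape simplification, the squared-error loss (rather than the generalized weighted loss), and all numeric values are assumptions made for illustration.

```python
# Lindley's one-parameter approximation for the Bayes estimator of a Gamma rate.
import numpy as np

rng = np.random.default_rng(3)
alpha, beta_true, lam, n = 2.0, 1.5, 1.0, 40      # shape known, rate to estimate
x = rng.gamma(shape=alpha, scale=1.0 / beta_true, size=n)
S = x.sum()

beta_hat = n * alpha / S                          # MLE of the rate
sigma2 = beta_hat**2 / (n * alpha)                # -1 / l''(beta_hat)
l3 = 2 * n * alpha / beta_hat**3                  # third derivative of the log-likelihood
rho1 = -lam                                       # derivative of the log prior

# Lindley: E[u(beta)|x] ~ u + 0.5*u''*sigma2 + u'*rho'*sigma2 + 0.5*u'*l'''*sigma2^2,
# evaluated at the MLE, with u(beta) = beta (so u' = 1, u'' = 0).
bayes_lindley = beta_hat + rho1 * sigma2 + 0.5 * l3 * sigma2**2
bayes_exact = (n * alpha + 1) / (S + lam)         # conjugate Gamma posterior mean
print(f"Lindley: {bayes_lindley:.4f}   exact: {bayes_exact:.4f}   MLE: {beta_hat:.4f}")
```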
The research aims to identify the theoretical foundations for measuring and analyzing quality costs and continuous improvement, and to measure and analyze quality costs for the Directorate of Electricity Supply / Middle Euphrates with a view to the continuous improvement of the distribution of electrical energy. The problem is represented by the high costs of failure and the waste of electrical energy resulting from excesses on the network and the missing (lost) energy. Measuring and analyzing quality costs for the distribution of electrical energy and identifying continuous improvement thus leads to a reduction in missing energy and an increase in sales. The research reached many conclusions, the most important of which is the high percentage o…
In this paper, estimates have been made of the parameters and the reliability function of the Transmuted Power Function (TPF) distribution using several estimation methods, with a proposed new technique for the White, percentile, least squares, weighted least squares, and modified moment methods. A simulation was used to generate random data that follow the TPF distribution in three experiments (E1, E2, E3) with different real values of the parameters, with sample sizes (n = 10, 25, 50, and 100), N = 1000 replicated samples, and reliability times (t > 0). Comparisons have been made between the results obtained from the estimators using the mean squared error (MSE). The results showed the …
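As one concrete example of the kind of estimator being compared, the sketch below implements ordinary least-squares (probability-plot) estimation for a TPF distribution with the scale fixed at 1, so that F(x) = x^alpha and G(x) = F(1 + lambda(1 - F)). The fixed scale, the true parameter values, and the sample size are assumptions for illustration; the proposed new technique and the other methods from the abstract are not reproduced here.

```python
# Least-squares (probability-plot) estimation for the TPF distribution, scale = 1.
import numpy as np
from scipy.optimize import minimize

def tpf_cdf(x, alpha, lam):
    F = x ** alpha
    return F * (1.0 + lam * (1.0 - F))

def rtpf(n, alpha, lam, rng):
    u = rng.uniform(size=n)
    # Invert G(F) = u, a quadratic in F, keeping the root that lies in [0, 1].
    F = np.where(np.abs(lam) < 1e-12, u,
                 ((1 + lam) - np.sqrt((1 + lam) ** 2 - 4 * lam * u)) / (2 * lam))
    return F ** (1.0 / alpha)

rng = np.random.default_rng(4)
alpha_true, lam_true, n = 2.0, 0.5, 50
x = np.sort(rtpf(n, alpha_true, lam_true, rng))
p = np.arange(1, n + 1) / (n + 1)                 # plotting positions i/(n+1)

def sse(theta):
    alpha, lam = theta
    return np.sum((tpf_cdf(x, alpha, lam) - p) ** 2)

fit = minimize(sse, x0=[1.0, 0.0], bounds=[(0.1, 10.0), (-1.0, 1.0)])
print("Least-squares estimates (alpha, lambda):", fit.x)
```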
This research includes the study of dual (paired) data models with mixed random parameters, which contain two types of parameters, one random and the other fixed. The random parameter is obtained as a result of differences in the marginal tendencies of the cross-sections, while the fixed parameter is obtained as a result of differences in the fixed terms, with the random errors of each cross-section bearing the characteristic of heterogeneity of variance (heteroscedasticity) in addition to first-order serial correlation. The main objective of this research is the use of efficient methods commensurate with the paired data in the case of small samples, and to achieve this goal the feasible generalized least squa…
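A much-reduced sketch of the feasible GLS idea is given below for a panel with groupwise heteroscedasticity only: pooled OLS is fitted first, per-cross-section residual variances are estimated from its residuals, and the model is refitted by weighted least squares. The AR(1) serial-correlation correction and the random-coefficient structure described in the abstract are omitted, and the simulated design is purely illustrative.

```python
# Feasible GLS for a panel with groupwise heteroscedastic errors (no AR(1) step).
import numpy as np

rng = np.random.default_rng(5)
G, T, k = 6, 10, 2                                  # cross-sections, periods, regressors
X = np.column_stack([np.ones(G * T), rng.normal(size=(G * T, k))])
groups = np.repeat(np.arange(G), T)
sigma_g = rng.uniform(0.5, 3.0, size=G)             # unequal error variances by group
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=sigma_g[groups])

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)    # step 1: pooled OLS
resid = y - X @ beta_ols
var_g = np.array([np.var(resid[groups == g], ddof=X.shape[1]) for g in range(G)])

w = 1.0 / np.sqrt(var_g[groups])                    # step 2: weights from estimated variances
beta_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print("OLS :", np.round(beta_ols, 3))
print("FGLS:", np.round(beta_fgls, 3))
```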
This is an empirical investigation of tribal power in Iraq and its consequences for the socio-political system. A theoretical background concerning the state, kinship, the tribe, and tribal involvement in politics is presented, with examples of tribal power over people within the social context. A socio-anthropological method of data collection has been used, including semi-structured interviews with a sample of 120 respondents. The outcome revealed that the feeble and corrupted state (government) plays a vital role in encouraging the tribe to be dominant. The people of Iraq are clinging to the tribe regardless of whether they believe in it or not. Although they are aware that the tribe is a pre-state organisation and a marred shape of ci…
The binary logistic regression model is used in data classification, and it is the strongest and most flexible tool, compared with linear regression, for studying cases where the response variable is binary. In this research, some classical methods were used to estimate the parameters of the binary logistic regression model, including the maximum likelihood method, the minimum chi-square method, and weighted least squares, along with Bayes estimation, in order to choose the best estimation method given default values of the parameters, according to two different general linear regression models and different s…
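To illustrate just the maximum-likelihood piece of such a comparison, the sketch below fits a binary logistic regression by Newton-Raphson (iteratively reweighted least squares) on simulated data. The data-generating values, sample size, and tolerance are illustrative; the minimum chi-square, weighted least squares, and Bayes estimators mentioned in the abstract are not shown.

```python
# Maximum-likelihood estimation of a binary logistic regression via Newton-Raphson.
import numpy as np

rng = np.random.default_rng(6)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta = np.zeros(2)
for _ in range(25):                                 # Newton-Raphson iterations
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                            # score vector
    hess = X.T @ (X * (p * (1.0 - p))[:, None])     # observed information
    step = np.linalg.solve(hess, grad)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break
print("MLE of (intercept, slope):", np.round(beta, 3))
```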
The logistic regression model is one of the oldest and most common regression models, and it is known as one of the statistical methods used to describe and estimate the relationship between a dependent random variable and explanatory random variables. Several methods are used to estimate this model, including the bootstrap method, which is one of the estimation methods based on the principle of sampling with replacement: a resample of n elements is drawn at random, with replacement, from the N elements of the original data. It is a computational method used to assess the accuracy of statistical estimates, and for this reason it was used to obtain more accurate estimates. The ma…
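As a hedged sketch of the resampling step, the code below draws bootstrap samples of the rows with replacement, refits a logistic regression by a plain Newton-Raphson MLE on each resample, and reports the bootstrap mean and standard error of the coefficients. The simulated data, the number of replications B, and the fitting routine are assumptions made for illustration, not the paper's setup.

```python
# Non-parametric bootstrap for logistic-regression coefficients.
import numpy as np

def fit_logit(X, y, iters=25):
    """Plain Newton-Raphson MLE for a logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (y - p))
    return beta

rng = np.random.default_rng(7)
n, B = 150, 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ np.array([0.3, 1.0])))))

boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)        # draw n rows with replacement
    boot[b] = fit_logit(X[idx], y[idx])
print("bootstrap mean :", np.round(boot.mean(axis=0), 3))
print("bootstrap s.e. :", np.round(boot.std(axis=0, ddof=1), 3))
```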
Some experiments require an assessment of the extent of their usefulness in order to decide whether or not to continue providing them. This is done through the fuzzy regression discontinuity model, where the Epanechnikov kernel and the Triangular kernel were used to estimate the model on data generated from a Monte-Carlo experiment, and the results obtained were compared. It was found that the Epanechnikov kernel has the smallest mean squared error.
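For a concrete picture of how the two kernels enter the estimate, the sketch below computes a fuzzy regression-discontinuity estimate at a cutoff as the ratio of the outcome jump to the treatment-probability jump, using kernel-weighted local linear fits on each side, for both the Epanechnikov and Triangular kernels. The simulated design, bandwidth, and cutoff are assumptions made for illustration, not the paper's Monte-Carlo settings.

```python
# Fuzzy regression discontinuity with Epanechnikov and Triangular kernel weights.
import numpy as np

def epanechnikov(u): return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
def triangular(u):   return np.where(np.abs(u) <= 1, 1 - np.abs(u), 0.0)

def local_linear_at_cutoff(xc, v, w):
    """Weighted linear fit of v on the centred running variable xc;
    returns the fitted value at xc = 0 (the cutoff)."""
    X = np.column_stack([np.ones_like(xc), xc])
    WX = X * w[:, None]
    return np.linalg.solve(X.T @ WX, WX.T @ v)[0]

def fuzzy_rd(x, d, y, c, h, kernel):
    w = kernel((x - c) / h)
    left, right = (x < c) & (w > 0), (x >= c) & (w > 0)
    jump_y = local_linear_at_cutoff(x[right] - c, y[right], w[right]) \
           - local_linear_at_cutoff(x[left] - c, y[left], w[left])
    jump_d = local_linear_at_cutoff(x[right] - c, d[right], w[right]) \
           - local_linear_at_cutoff(x[left] - c, d[left], w[left])
    return jump_y / jump_d                         # outcome jump / treatment jump

rng = np.random.default_rng(8)
n, c, h, effect = 2000, 0.0, 0.5, 2.0
x = rng.uniform(-1, 1, n)
d = rng.binomial(1, np.where(x >= c, 0.8, 0.2))    # imperfect (fuzzy) compliance
y = 1.0 + 0.5 * x + effect * d + rng.normal(size=n)

for name, k in [("Epanechnikov", epanechnikov), ("Triangular", triangular)]:
    print(f"{name:12s} fuzzy-RD estimate: {fuzzy_rd(x, d, y, c, h, k):.3f}")
```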