In this article we study estimation of the variance of the normal distribution when the mean is unknown. The proposed estimator combines the unbiased estimator and the Bayes estimator of the variance, and is embedded in a double stage shrunken estimator in order to obtain higher efficiency, using two small samples of equal size.
This paper is concerned with pre-test single and double stage shrunken estimators for the mean (θ) of the normal distribution when a prior estimate (θ0) of the actual value (θ) is available, using specified shrinkage weight factors ψ(·) as well as a pre-test region (R). Expressions for the Bias [B(θ)], Mean Squared Error [MSE(θ)], Efficiency [EFF(θ)] and Expected Sample Size [E(n/θ)] of the proposed estimators are derived. Numerical results and conclusions are drawn about the selection of the different constants included in these expressions. Comparisons between the suggested estimators and the classical estimators, in the sense of Bias and Relative Efficiency, are given. Furthermore, comparisons with the earlier existing works are drawn.
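For orientation, a pre-test shrinkage estimator of this type is usually written in the following general form, where θ̂1 is the classical estimator from the first-stage sample, ψ(·) the shrinkage weight, θ0 the prior estimate and R the pre-test region; the second-stage rule shown is only an illustrative sketch and does not reproduce the paper's specific choices of ψ(·) and R.

```latex
\tilde{\theta} =
\begin{cases}
\psi(\hat{\theta}_1)\,\hat{\theta}_1 + \bigl(1-\psi(\hat{\theta}_1)\bigr)\,\theta_0, & \hat{\theta}_1 \in R \quad \text{(stop at stage one)},\\[4pt]
\hat{\theta}_{12}, & \hat{\theta}_1 \notin R \quad \text{(take a second sample)},
\end{cases}
```

Here θ̂12 denotes the estimator based on the combined samples, the Bias is B(θ̃) = E(θ̃) − θ, the Mean Squared Error is MSE(θ̃) = E(θ̃ − θ)², and the Relative Efficiency is MSE(classical)/MSE(θ̃).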
This work deals with the Kumaraswamy distribution. Kumaraswamy (1976, 1978) worked with well-known probability distribution functions such as the normal, beta and log-normal, but in (1980) Kumaraswamy developed a more general probability density function for doubly bounded random processes, which is known as Kumaraswamy's distribution. Classical maximum likelihood and Bayes methods are used to estimate the unknown shape parameter (b). The reliability function is estimated under symmetric loss functions using three types of informative priors: two single priors and one double prior. In addition, a comparison is made of the performance of these estimators with respect to the numerical solution, which is found using an expansion method.
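For reference, the Kumaraswamy density on (0, 1), the maximum likelihood estimator of the shape parameter b when the other shape parameter a is treated as known, and the corresponding reliability function take the following standard forms; this sketches the classical side only, not the Bayes derivations under the three priors.

```latex
f(x; a, b) = a\,b\,x^{a-1}\bigl(1 - x^{a}\bigr)^{b-1}, \quad 0 < x < 1,\ a, b > 0,
\qquad
\hat{b}_{\mathrm{MLE}} = \frac{-n}{\sum_{i=1}^{n}\ln\!\bigl(1 - x_i^{a}\bigr)},
\qquad
R(t) = \Pr(X > t) = \bigl(1 - t^{a}\bigr)^{b}.
```

The MLE follows from maximising the log-likelihood ℓ(b) = n ln a + n ln b + (a − 1)∑ ln x_i + (b − 1)∑ ln(1 − x_i^a) with respect to b.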
In this paper, we investigate the connection between hierarchical models and the power prior distribution in quantile regression (QReg). For a specific quantile, we develop an expression for the power parameter that calibrates the power prior distribution for quantile regression to a corresponding hierarchical model. In addition, we estimate the relation between the power parameter and the quantile level via the hierarchical model. Our proposed methodology is illustrated with a real data example.
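As background, the power prior and the asymmetric Laplace working likelihood commonly used in Bayesian quantile regression can be written as follows; the symbol a0 for the power parameter and the ALD form are standard in this literature but are stated here as assumptions, since the abstract's own notation was lost.

```latex
\pi(\beta \mid D_0, a_0) \;\propto\; L(\beta \mid D_0)^{a_0}\,\pi_0(\beta), \quad 0 \le a_0 \le 1,
\qquad
L(\beta \mid D) = \prod_{i=1}^{n} \frac{p(1-p)}{\sigma}
\exp\!\Bigl\{-\rho_p\!\Bigl(\tfrac{y_i - x_i^{\top}\beta}{\sigma}\Bigr)\Bigr\},
\quad \rho_p(u) = u\bigl(p - I(u<0)\bigr),
```

where D0 denotes the historical data and p the quantile level; in the hierarchical formulation the power parameter is typically treated as random rather than fixed.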
In this paper, two parameters of the Exponential distribution were estimated using the Bayesian estimation method under three different loss functions: the Squared error loss function, the Precautionary loss function, and the Entropy loss function. An Exponential prior and a Gamma prior have been assumed for the scale parameter γ and the location parameter δ, respectively. In the Bayesian estimation, maximum likelihood estimates have been used as the initial estimates, and the Tierney-Kadane approximation has been applied. Based on a Monte Carlo simulation, these estimators were compared in terms of their mean squared errors (MSEs). The results showed that the Bayesian esti
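For context, under these three loss functions the Bayes estimator of a generic parameter θ takes the following well-known forms, each a posterior expectation; in the paper such expectations for γ and δ are evaluated through the Tierney-Kadane approximation, which is not reproduced here.

```latex
\begin{aligned}
\text{Squared error: } & L(\hat{\theta},\theta) = (\hat{\theta}-\theta)^2, & \hat{\theta}_{SE} &= E(\theta \mid \text{data}),\\
\text{Precautionary: } & L(\hat{\theta},\theta) = \frac{(\hat{\theta}-\theta)^2}{\hat{\theta}}, & \hat{\theta}_{P} &= \sqrt{E(\theta^{2} \mid \text{data})},\\
\text{Entropy: } & L(\hat{\theta},\theta) \propto \frac{\hat{\theta}}{\theta} - \ln\frac{\hat{\theta}}{\theta} - 1, & \hat{\theta}_{E} &= \bigl[E(\theta^{-1} \mid \text{data})\bigr]^{-1}.
\end{aligned}
```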
This paper is concerned with preliminary test single stage shrinkage estimators for the mean (θ) of the normal distribution with known variance (σ²) when a prior estimate (θ0) of the actual value (θ) is available, using a specified shrinkage weight factor ψ(·) as well as a pre-test region (R). Expressions for the Bias, Mean Squared Error [MSE(·)] and Relative Efficiency [R.Eff.(·)] of the proposed estimators are derived. Numerical results and conclusions are drawn about the selection of the different constants included in these expressions. Comparisons between the suggested estimators and the usual estimators, in the sense of Relative Efficiency, are given. Furthermore, comparisons with the earlier existing works are drawn.
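As an illustration of how such a pre-test shrinkage estimator can be compared with the classical sample mean by simulation, the following sketch assumes a constant shrinkage weight ψ and a pre-test region of the form R = θ0 ± z_{α/2}·σ/√n; these are placeholder choices, not the paper's specific ψ(·) and R.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def shrinkage_vs_classical(theta, theta0, sigma, n, psi=0.5, alpha=0.05, reps=20_000):
    """Monte Carlo MSE of a preliminary-test single-stage shrinkage estimator
    for the mean of N(theta, sigma^2) with sigma known, versus the classical
    estimator (the sample mean).

    Illustrative choices only: constant shrinkage weight `psi` and pre-test
    region R = theta0 +/- z_{alpha/2} * sigma / sqrt(n).
    """
    half_width = norm.ppf(1 - alpha / 2) * sigma / np.sqrt(n)

    # Sample means are N(theta, sigma^2 / n), so simulate them directly.
    xbar = theta + sigma / np.sqrt(n) * rng.standard_normal(reps)
    in_region = np.abs(xbar - theta0) <= half_width  # pre-test against R

    # Shrink towards the prior guess theta0 when the pre-test accepts R,
    # otherwise keep the classical estimator.
    est = np.where(in_region, psi * xbar + (1.0 - psi) * theta0, xbar)

    mse_shrink = np.mean((est - theta) ** 2)
    mse_classical = sigma**2 / n
    return mse_shrink, mse_classical, mse_classical / mse_shrink  # relative efficiency

# Example: when the prior estimate equals the true mean, the relative
# efficiency of the shrinkage estimator is well above one.
print(shrinkage_vs_classical(theta=10.0, theta0=10.0, sigma=2.0, n=10))
```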
In this paper, the maximum likelihood estimator and the Bayes estimator of the reliability function for the negative exponential distribution have been derived, then a Monte Carlo simulation technique was employed to compare the performance of these estimators. The integral mean squared error (IMSE) was used as a criterion for this comparison. The simulation results showed that the Bayes estimator performed better than the maximum likelihood estimator for different sample sizes.
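For reference, with failure rate λ the reliability function of the (negative) exponential distribution, its maximum likelihood estimator, the Bayes estimator under an assumed Gamma(α, β) prior with squared error loss, and one common form of the IMSE criterion are as follows; the Gamma prior and the squared error loss are illustrative assumptions, since the abstract does not state which prior and loss were used.

```latex
R(t) = e^{-\lambda t}, \qquad
\hat{R}_{\mathrm{MLE}}(t) = e^{-\hat{\lambda} t},\ \ \hat{\lambda} = \frac{n}{\sum_i x_i}, \qquad
\hat{R}_{\mathrm{Bayes}}(t) = E\bigl(e^{-\lambda t}\mid \underline{x}\bigr)
= \Bigl(\frac{\beta + \sum_i x_i}{\beta + \sum_i x_i + t}\Bigr)^{\alpha+n},
\qquad
\mathrm{IMSE}(\hat{R}) = \frac{1}{L}\sum_{j=1}^{L}\,\frac{1}{n_t}\sum_{k=1}^{n_t}\bigl(\hat{R}_j(t_k) - R(t_k)\bigr)^{2},
```

where L is the number of simulation replications and t1, …, t_{n_t} is a grid of time points over which the squared error is averaged.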