This paper proposes new estimators that depend on both the sample and prior information, covering the cases where the two sources are equally important in the model and where they are not. The prior information is described by linear stochastic restrictions. We study the properties and performance of these estimators compared with other common estimators, using the mean squared error as the goodness-of-fit criterion. A numerical example and a simulation study are presented to illustrate the performance of the estimators.
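For reference, one standard way to formalize such estimators (not necessarily the exact form proposed in the paper) is the Theil-Goldberger mixed estimator: with sample model $y = X\beta + \varepsilon$, $\operatorname{Var}(\varepsilon) = \sigma^2 I$, and stochastic linear restrictions $r = R\beta + e$, $\operatorname{Var}(e) = \Omega$,

$$
\hat{\beta}_{\mathrm{ME}} = \left(\sigma^{-2} X'X + R'\Omega^{-1}R\right)^{-1}\left(\sigma^{-2} X'y + R'\Omega^{-1}r\right),
$$

which weights the sample and the prior information by their respective precisions; estimators for the unequal-importance case typically attach an additional weight to the restriction term.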
Mixed convection heat transfer to air inside an enclosure is investigated experimentally. The bottom wall of the enclosure is maintained at a higher temperature than the top wall, which is kept in oscillating motion, while the left and right walls are well insulated. The temperature difference between the bottom and top walls was varied several times in order to accurately characterize the temperature distribution over a considerable range of Richardson number. An adjustable-aspect-ratio box was built as a test rig to determine the effects of Richardson number and aspect ratio on the behavior of the air flow inside the enclosure. The flow fields and the average Nusselt number profiles are presented in this work.
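For orientation, the two dimensionless groups named in this abstract have the standard definitions (usual conventions, not necessarily the paper's notation)

$$
\mathrm{Ri} = \frac{\mathrm{Gr}}{\mathrm{Re}^2} = \frac{g\,\beta\,\Delta T\,L}{U^2},
\qquad
\overline{\mathrm{Nu}} = \frac{\bar{h}\,L}{k},
$$

where $\Delta T$ is the bottom-top temperature difference, $L$ a characteristic length of the enclosure, $U$ a velocity scale of the oscillating top wall, $\bar{h}$ the average heat transfer coefficient, and $k$ the thermal conductivity of air; mixed convection corresponds to $\mathrm{Ri}$ of order one.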
The logistic regression model is regarded as one of the most important regression models, and it has been among the most interesting subjects of recent studies because it plays an increasingly advanced role in the process of statistical analysis.
Ordinary estimation methods fail when dealing with data that contain outlier values, since such values have an undesirable effect on the results.
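The excerpt does not specify which robust method is used, but the failure mode it describes is easy to reproduce. A minimal sketch (hypothetical data) showing how a few mislabelled extreme points drag down an ordinary maximum-likelihood logistic fit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean synthetic data: one predictor, true slope 2, intercept 0
x = rng.normal(size=(200, 1))
p = 1.0 / (1.0 + np.exp(-2.0 * x[:, 0]))
y = rng.binomial(1, p)

# Five gross outliers: extreme x paired with a mislabelled response
x_bad = np.vstack([x, np.full((5, 1), 8.0)])
y_bad = np.concatenate([y, np.zeros(5, dtype=int)])

fit_clean = LogisticRegression(C=1e6).fit(x, y)        # ~unpenalized ML fit
fit_dirty = LogisticRegression(C=1e6).fit(x_bad, y_bad)

print("slope on clean data   :", fit_clean.coef_[0, 0])
print("slope with 5 outliers :", fit_dirty.coef_[0, 0])  # pulled toward zero
```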
In this paper, a fusion of K models of full-rank weighted nonnegative tensor factor two-dimensional deconvolution (K-wNTF2D) is proposed to separate acoustic sources that have been mixed in an underdetermined reverberant environment. The model is adapted in an unsupervised manner under the hybrid framework of the generalized expectation-maximization and multiplicative update algorithms. The derivation of the algorithm and the development of the proposed full-rank K-wNTF2D are shown. The algorithm also encodes a set of variable sparsity parameters, derived from a Gibbs distribution, into the K-wNTF2D model. This optimizes each sub-model in K-wNTF2D with the sparsity required to model the time-varying variances of the sources.
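The full K-wNTF2D model is beyond a short excerpt, but its computational core, multiplicative updates under a sparsity penalty, can be sketched on plain NMF. A minimal, assumption-laden illustration (lam is a hypothetical L1 weight standing in for the Gibbs-derived sparsity parameters, not the paper's scheme):

```python
import numpy as np

def nmf_mu(V, rank, n_iter=200, lam=0.0, rng=None):
    """Multiplicative updates for ||V - W H||_F^2 + lam * sum(H)."""
    rng = np.random.default_rng(rng)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-12                      # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((64, 100))   # nonnegative "spectrogram"
W, H = nmf_mu(V, rank=4, lam=0.1)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```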
In this paper, the methods of weighted residuals, namely the Collocation Method (CM), the Least Squares Method (LSM), and the Galerkin Method (GM), are used to solve the thin film flow (TFF) equation. The weighted residual methods were implemented to obtain an approximate solution to the TFF equation. The accuracy of the obtained results is checked by calculating the maximum error remainder functions (MER). Moreover, the outcomes were compared with the 4th-order Runge-Kutta method (RK4), and good agreement was achieved. All the evaluations were successfully implemented using the computer algebra system Mathematica® 10.
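The TFF equation itself is not reproduced in this excerpt, so as a stand-in, here is a minimal collocation sketch for the simple boundary value problem y'' = y, y(0) = 0, y(1) = 1 (a hypothetical example, not the paper's equation); its exact solution sinh(x)/sinh(1) makes the maximum-error check explicit:

```python
import numpy as np

# Trial solution: y(x) = sum_k c_k x^k, degree 5 (6 unknown coefficients)
N = 6
xs = np.linspace(0.1, 0.9, N - 2)         # interior collocation points

A = np.zeros((N, N))
b = np.zeros(N)

A[0, 0] = 1.0                             # boundary condition y(0) = 0
A[1, :] = 1.0; b[1] = 1.0                 # boundary condition y(1) = 1

# Residual R(x) = y''(x) - y(x) forced to zero at the collocation points
for i, x in enumerate(xs, start=2):
    for k in range(N):
        d2 = k * (k - 1) * x ** (k - 2) if k >= 2 else 0.0
        A[i, k] = d2 - x ** k

c = np.linalg.solve(A, b)

grid = np.linspace(0, 1, 201)
y_approx = sum(ck * grid ** k for k, ck in enumerate(c))
y_exact = np.sinh(grid) / np.sinh(1.0)
print("max error:", np.abs(y_approx - y_exact).max())   # small for this smooth BVP
```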
In this research, covariance estimates were used to estimate the population mean under stratified random sampling with combined regression estimators. Combined regression estimates employing robust variance-covariance matrix estimates were compared with combined regression estimates employing the traditional variance-covariance matrix estimates when estimating the regression parameter, using two criteria: relative efficiency (RE) and mean squared error (MSE). We found that the robust variance-covariance matrix estimates (MCD, MVE) significantly improved the quality of the combined regression estimates by reducing the effect of outliers when estimating the regression parameter. The results of the simulation study confirmed this improvement.
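A minimal sketch of the core step (hypothetical data and names): replacing the classical covariance in the regression coefficient b = s_xy / s_x^2 with a robust MCD estimate, here via scikit-learn's MinCovDet; the MVE variant and the full stratified combination are omitted.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)

# Hypothetical auxiliary variable x and study variable y, true slope 2
x = rng.normal(50, 10, size=300)
y = 2.0 * x + rng.normal(0, 5, size=300)
x[:10] += 80; y[:10] -= 150                     # contaminate 10 observations

data = np.column_stack([x, y])

S_classic = np.cov(data, rowvar=False)
S_robust = MinCovDet(random_state=0).fit(data).covariance_

b_classic = S_classic[0, 1] / S_classic[0, 0]
b_robust = S_robust[0, 1] / S_robust[0, 0]
print("classical slope:", b_classic)            # distorted by the outliers
print("MCD slope      :", b_robust)             # close to the true slope 2

# Regression estimator of the mean, given a known population mean of x
X_bar_pop, x_bar, y_bar = 50.0, x.mean(), y.mean()
print("regression estimate of mean(y):", y_bar + b_robust * (X_bar_pop - x_bar))
```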
In this work, a weighted Hölder function that approximates a Jacobi polynomial, which solves the second-order singular Sturm-Liouville equation, is discussed. This is generally equivalent to the Jacobi translations and the moduli of smoothness. This paper aims to improve methods of approximation and to find upper and lower estimates for the degree of approximation in weighted Hölder spaces by modifying the modulus of continuity and smoothness. Moreover, some properties of the moduli of smoothness, with direct and inverse results, are considered.
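For reference, the second-order singular Sturm-Liouville equation solved by the Jacobi polynomial $P_n^{(\alpha,\beta)}$ is, in its standard form ($\alpha, \beta > -1$),

$$
(1-x^2)\,y'' + \bigl[\beta - \alpha - (\alpha+\beta+2)\,x\bigr]\,y' + n\,(n+\alpha+\beta+1)\,y = 0,
$$

which is singular at $x = \pm 1$, where the leading coefficient vanishes.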
This paper deals with Bayesian estimation of the parameters of the Gamma distribution under a generalized weighted loss function, based on Gamma and exponential priors for the shape and scale parameters, respectively. Moment estimators, maximum likelihood estimators, and Lindley's approximation have been used in the Bayesian estimation. Based on the Monte Carlo simulation method, these estimators are compared in terms of their mean squared errors (MSEs).
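A standard fact behind such comparisons, assuming the generalized weighted loss is of the weighted squared-error type $L(\hat\theta, \theta) = w(\theta)(\hat\theta - \theta)^2$: the Bayes estimator is the weighted posterior mean,

$$
\hat{\theta}_{B} = \frac{E\bigl[w(\theta)\,\theta \mid \mathbf{x}\bigr]}{E\bigl[w(\theta) \mid \mathbf{x}\bigr]},
$$

which reduces to the ordinary posterior mean when $w \equiv 1$.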
The maximum likelihood, uniformly minimum variance unbiased, and minimum mean square error estimation methods, as classical estimation procedures, are frequently used for parameter estimation in statistics; they assume the parameter is a constant. The Bayes method, by contrast, assumes the parameter is a random variable, and the Bayes estimator is the estimator that minimizes the Bayes risk for each value of the random observable; for the squared error loss function, the Bayes estimator is the posterior mean. It is well known that Bayesian estimation is rarely used as a parameter estimation technique because of the difficulty of finding a prior distribution.
The interest of this paper is that
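The posterior-mean claim in the abstract above follows in one line: for squared error loss, the posterior risk decomposes as

$$
E\bigl[(\hat\theta - \theta)^2 \mid \mathbf{x}\bigr]
= \bigl(\hat\theta - E[\theta \mid \mathbf{x}]\bigr)^2 + \operatorname{Var}(\theta \mid \mathbf{x}),
$$

which is minimized exactly at $\hat\theta = E[\theta \mid \mathbf{x}]$.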
The internet, unlike traditional means of communication, has the flexibility to stimulate users and allows them to develop it. Perhaps the reason for the superiority of the internet over traditional means of communication is the possibility of change and of moving from one stage to another in a short period. This means that the internet is able to move from use to the development of use, and then to the development of means and to innovation, since innovation on the internet is a logical product of the user's interaction with the network. The internet invests in all proposals and ideas and ignores none, however simple. This is represented in social networking sites, which in fact reflect personal emotions.
The analysis of least squares (LS) is often unsuccessful when outliers are present in the studied phenomena: OLS loses its properties, including that of Best Linear Unbiased Estimator (BLUE), because outliers have a harmful effect on the results. To address this problem, new statistical methods have been developed that are not easily affected by outliers; these methods are characterized by robustness (or resistance). The Least Trimmed Squares (LTS) method is therefore a good alternative for achieving more feasible and optimal results. However, it is also possible to assume weights that take into consideration the location of the outliers in the data.
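A minimal sketch of LTS (hypothetical data; random elemental starts followed by concentration steps, in the spirit of FAST-LTS, not the paper's weighted variant):

```python
import numpy as np

def lts_fit(X, y, h=None, n_starts=50, n_csteps=20, seed=None):
    """Least Trimmed Squares: minimize the sum of the h smallest squared residuals."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])            # add intercept
    p = Xd.shape[1]
    if h is None:
        h = (n + p + 1) // 2                          # default coverage
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=p, replace=False)    # random elemental start
        beta = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)[0]
        for _ in range(n_csteps):                     # concentration (C-) steps
            keep = np.argsort((y - Xd @ beta) ** 2)[:h]
            beta = np.linalg.lstsq(Xd[keep], y[keep], rcond=None)[0]
        obj = np.sort((y - Xd @ beta) ** 2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta

# Demo: clean line y = 1 + 3x contaminated with 20% vertical outliers
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 1 + 3 * X[:, 0] + rng.normal(0, 0.3, 100)
y[:20] += 25
print("LTS (intercept, slope):", lts_fit(X, y, seed=1))
print("OLS (intercept, slope):",
      np.linalg.lstsq(np.column_stack([np.ones(100), X]), y, rcond=None)[0])
```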