Estimation of the reliability function depends on the accuracy of the data used to estimate the parameters of the probability distribution. Because some data exhibit skewness, estimating the parameters and computing the reliability function in the presence of skewness requires a distribution flexible enough to handle such data. This is the case for the data of Diyala Company for Electrical Industries, where positive skewness was observed in the data collected from the Power and Machinery Department; this calls for a distribution suited to these data and for methods that accommodate the problem and lead to accurate estimates of the reliability function. The research aims to use the method of moments to estimate the reliability function of the truncated skew-normal distribution, a parametric distribution that is flexible in handling data that are normally distributed but show some skewness. Restricting attention to the values defined in the sample space means that a cut (truncation) is made on the left side of the skew-normal distribution, and a new distribution is derived from the original skew distribution that retains the characteristics of the skew-normal distribution function. Real data representing the operating times of three machines until failure were collected from the Capacity Department of Diyala Company for Electrical Industries. The results showed that the machines under study have a good reliability index and can be relied upon at a high rate if they continue to work under the same current working conditions.
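As a hedged illustration of the left-truncated skew-normal reliability function R(t) = P(T > t | T > c) described above, the sketch below estimates it by Monte Carlo, sampling the skew normal via Azzalini's representation. The shape, location, scale, and truncation point are illustrative assumptions, not the parameters of the company's data, and this is not the paper's method-of-moments estimator:

```python
import math
import random

def sample_skew_normal(a, loc, scale, rng):
    # Azzalini's representation: with Z0, Z1 ~ N(0,1) and
    # delta = a / sqrt(1 + a^2), the variable
    # delta*|Z0| + sqrt(1 - delta^2)*Z1 is standard skew normal with shape a.
    delta = a / math.sqrt(1.0 + a * a)
    z0, z1 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return loc + scale * (delta * abs(z0) + math.sqrt(1.0 - delta * delta) * z1)

def truncated_reliability(t, a, loc, scale, c, n=200_000, seed=1):
    """Monte Carlo estimate of R(t) = P(T > t | T > c), the reliability of a
    skew-normal lifetime truncated on the left at c."""
    rng = random.Random(seed)
    kept = [x for x in (sample_skew_normal(a, loc, scale, rng)
                        for _ in range(n)) if x > c]
    return sum(x > t for x in kept) / len(kept)
```

A positive shape parameter (a > 0) reproduces the positive skewness the abstract mentions; R(c) = 1 by construction and R(t) decreases in t.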
In this paper, the problem of point estimation for the two parameters of the logistic distribution is investigated using a simulation technique. The ranked set sampling estimator method, which is a non-Bayesian procedure, and the Lindley approximation estimator method, which is a Bayesian method, were used to estimate the parameters of the logistic distribution. The two methods were compared using the mean square error (MSE) and the mean absolute percentage error (MAPE). Finally, the simulation technique was used to generate samples of many different sizes to compare the methods.
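The MSE/MAPE comparison machinery used above can be sketched as follows. This is a minimal simulation using a simple moment estimator of the logistic location parameter (not the ranked set sampling or Lindley estimators of the study, whose details the abstract does not give); the parameter values are illustrative:

```python
import math
import random

def logistic_moment_estimates(xs):
    """Moment estimators for logistic(mu, s): E[X] = mu, Var[X] = (pi*s)^2 / 3."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, math.sqrt(3.0 * var) / math.pi

def simulate_mse_mape(mu, s, n, reps, seed=7):
    """MSE and MAPE of the moment estimator of mu over `reps` replications."""
    rng = random.Random(seed)
    se = ape = 0.0
    for _ in range(reps):
        # inverse-CDF sampling: x = mu + s * ln(u / (1 - u)), u in (0, 1)
        xs = [mu + s * math.log(u / (1.0 - u))
              for u in (rng.uniform(1e-12, 1.0 - 1e-12) for _ in range(n))]
        mu_hat, _ = logistic_moment_estimates(xs)
        se += (mu_hat - mu) ** 2
        ape += abs(mu_hat - mu) / abs(mu)
    return se / reps, ape / reps
```

Running several such loops with different estimators and sample sizes and tabulating the returned MSE/MAPE pairs is the comparison design the abstract describes.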
In this study, we used the Bayesian method to estimate the scale parameter of the normal distribution, considering three different prior distributions: the square-root inverted gamma (SRIG) distribution, the non-informative prior distribution, and the natural conjugate family of priors. The Bayesian estimation was based on the squared error loss function, and it was compared with classical methods of estimating the scale parameter of the normal distribution, such as the maximum likelihood estimation and th…
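Under squared-error loss the Bayes estimator is the posterior mean. A minimal sketch for one conjugate case, assuming a known mean and an inverse-gamma prior on the variance (an illustrative stand-in; the study's SRIG prior is parameterized on the scale rather than the variance):

```python
def bayes_normal_variance(xs, mu, alpha, beta):
    """Bayes estimator (posterior mean, i.e. squared-error loss) of sigma^2
    for N(mu, sigma^2) data with known mean mu, under an
    inverse-gamma(alpha, beta) prior on sigma^2."""
    n = len(xs)
    s = sum((x - mu) ** 2 for x in xs)
    # conjugacy: posterior is inverse-gamma(alpha + n/2, beta + s/2)
    a_post, b_post = alpha + n / 2.0, beta + s / 2.0
    return b_post / (a_post - 1.0)  # mean of the inverse-gamma (needs a_post > 1)
```

With no data the estimator reduces to the prior mean beta/(alpha - 1), and as n grows it is pulled toward the classical estimate s/n.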
This research aims to review the importance of estimating the nonparametric regression function using the so-called canonical kernel, which depends on re-scaling the smoothing parameter; this plays a large and important role in kernel estimation and gives a sound amount of smoothing.
We have shown the importance of this method by applying these concepts to real data on the international exchange rate of the U.S. dollar against the Japanese yen for the period from January 2007 to March 2010. The results demonstrated the superiority of the nonparametric estimator with the Gaussian kernel over the other nonparametric and parametric regression estimators.
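The Gaussian-kernel nonparametric regression estimator referred to above can be sketched as a standard Nadaraya-Watson estimator. This is a generic illustration, not the study's exact canonical-kernel implementation; the canonical-rescaling comment states one common convention and should be treated as an assumption:

```python
import math

def nadaraya_watson(x0, xs, ys, h):
    """Gaussian-kernel Nadaraya-Watson estimate of the regression
    function m(x0) with smoothing parameter (bandwidth) h."""
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Canonical-kernel idea (one common form): a bandwidth chosen on the canonical
# scale is re-scaled per kernel so different kernels smooth by the same amount;
# for the Gaussian kernel the factor is (1 / (2 * sqrt(pi))) ** 0.2.
GAUSSIAN_CANONICAL_FACTOR = (1.0 / (2.0 * math.sqrt(math.pi))) ** 0.2
```

For constant responses the estimator returns that constant for any bandwidth, and for a small bandwidth it interpolates the nearest observation.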
Abstract
We present a study of the estimation of the reliability of the exponential distribution based on the Bayesian approach. In the Bayesian approach, the parameter of the exponential distribution is assumed to be a random variable. We derived Bayes estimators of reliability under four types of prior when the prior distribution for the scale parameter of the exponential distribution is: the inverse chi-squar…
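For one tractable case, the Bayes estimator of exponential reliability has a closed form. The sketch below assumes a conjugate Gamma prior on the rate (an illustrative choice; the study's four priors, including the inverse chi-square family, lead to analogous formulas):

```python
def bayes_exp_reliability(t, xs, a, b):
    """Posterior mean of R(t) = exp(-lam * t) for exponential lifetimes xs
    (rate lam) under a conjugate Gamma(a, b) prior (rate b) on lam.
    Uses E[exp(-lam * t)] = (b' / (b' + t)) ** a' for lam ~ Gamma(a', b')."""
    a_post, b_post = a + len(xs), b + sum(xs)
    return (b_post / (b_post + t)) ** a_post
```

The identity used is the Gamma moment generating function evaluated at -t, so the posterior mean of R(t) needs no numerical integration.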
In this paper an estimator of the reliability function for the Pareto distribution of the first kind is derived, and a simulation study using the Monte Carlo method is then carried out to compare the Bayes estimator of the reliability function with the maximum likelihood estimator. The Bayes estimator was found to be better than the maximum likelihood estimator for all sample sizes, using the integral mean square error (IMSE).
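The two estimators being compared can be sketched for a Pareto-I model with known scale k, where R(t) = (k/t)^alpha. The Gamma prior on alpha is an illustrative assumption (the paper's prior is not specified in this abstract), and the sample-generation parameters are arbitrary:

```python
import math
import random

def pareto_reliability_estimates(t, xs, k, a, b):
    """Two estimators of R(t) = (k/t)^alpha for Pareto-I data xs with known
    scale k: the MLE plug-in, and the Bayes estimator (posterior mean under
    a Gamma(a, b) prior on alpha with squared-error loss)."""
    s = sum(math.log(x / k) for x in xs)
    alpha_mle = len(xs) / s
    r_mle = (k / t) ** alpha_mle
    a_post, b_post = a + len(xs), b + s          # Gamma posterior for alpha
    r_bayes = (b_post / (b_post + math.log(t / k))) ** a_post
    return r_mle, r_bayes

# illustrative check against the true R(t): alpha = 2, k = 1, t = 2 => R = 0.25
rng = random.Random(3)
k, alpha = 1.0, 2.0
xs = [k * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(500)]
r_mle, r_bayes = pareto_reliability_estimates(2.0, xs, k, a=1.0, b=1.0)
```

Repeating this over many replications and integrating the squared error over a grid of t values gives the IMSE criterion the paper uses.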
This paper discusses the reliability R of the (2+1) cascade model with the inverse Weibull distribution. Reliability is found when strength and stress are inverse-Weibull-distributed random variables with unknown scale parameter and known shape parameter. Six estimation methods (maximum likelihood, moments, least squares, weighted least squares, regression, and percentiles) are used to estimate the reliability. The six methods are compared in a simulation study in MATLAB 2016 using two statistical criteria, the mean square error and the mean absolute percentage error; the maximum likelihood method is found to be the best of the six estimators.
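A simple two-component version of the stress-strength quantity can be sketched for intuition (this is plain R = P(X > Y), not the full (2+1) cascade model, and the parameter values are illustrative). With a common shape k the probability has a closed form that the Monte Carlo estimate can be checked against:

```python
import math
import random

def inv_weibull_sample(theta, k, rng):
    # F(x) = exp(-(theta / x)^k)  =>  x = theta * (-ln U)^(-1/k), U ~ Uniform(0,1)
    u = rng.random()
    while u == 0.0:            # avoid log(0)
        u = rng.random()
    return theta * (-math.log(u)) ** (-1.0 / k)

def stress_strength_mc(theta_x, theta_y, k, n=100_000, seed=5):
    """Monte Carlo estimate of R = P(X > Y) for strength X and stress Y,
    both inverse Weibull with common shape k and scales theta_x, theta_y."""
    rng = random.Random(seed)
    hits = sum(inv_weibull_sample(theta_x, k, rng) > inv_weibull_sample(theta_y, k, rng)
               for _ in range(n))
    return hits / n

# With a common shape k: R = theta_x**k / (theta_x**k + theta_y**k)
```

The closed form follows by integrating F_Y(x) against the density of X with the substitution u = x**(-k).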
In this research, we study the non-homogeneous Poisson process, one of the most important statistical topics with a role in scientific development, as it relates to accidents that occur in reality and are modeled as Poisson processes, since the occurrence of such accidents is tied to time, whether time changes or is stable. Our research explains the non-homogeneous Poisson process and uses one of its models, an exponentiated Weibull model with three parameters (α, β, σ), as a function to estimate the rate of occurrence of earthquakes over time in Erbil Governorate, as the governorate is adjacent to two countr…
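A non-homogeneous Poisson process with a bounded intensity can be simulated by Lewis-Shedler thinning, sketched below. The linear intensity used is a stand-in assumption for illustration only; the paper's exponentiated-Weibull rate function, whose exact form the abstract does not give, is not reproduced here:

```python
import random

def nhpp_thinning(rate, t_max, lam_max, seed=11):
    """Simulate event times of a non-homogeneous Poisson process on [0, t_max]
    by Lewis-Shedler thinning, given rate(t) <= lam_max on the interval."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)          # candidate from a homogeneous PP
        if t > t_max:
            return events
        if rng.random() <= rate(t) / lam_max:  # accept with prob rate(t)/lam_max
            events.append(t)

# illustrative increasing intensity; expected count = integral of rate = 25
rate = lambda t: 0.5 * t
events = nhpp_thinning(rate, t_max=10.0, lam_max=5.0)
```

The accepted times follow the target intensity because each candidate survives with probability proportional to rate(t).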
A condensed study was done to compare the ordinary estimators, in particular the maximum likelihood estimator and the robust estimator, for estimating the parameters of the mixed model of order one, namely the ARMA(1,1) model.
A simulation study was carried out for several variants of the model using small, moderate, and large sample sizes, and some new results were obtained. MAPE was used as the statistical criterion for comparison.
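The simulation side of such a comparison can be sketched as follows. This generates ARMA(1,1) data and recovers the autoregressive parameter with a simple autocorrelation-ratio (moment-type) estimator, exploiting rho(k) = phi * rho(k-1) for k >= 2; it is an illustration of the simulation design, not the study's ML or robust estimators:

```python
import random

def simulate_arma11(phi, theta, n, seed=2, burn=500):
    """Simulate ARMA(1,1): x_t = phi*x_{t-1} + e_t + theta*e_{t-1}, e_t ~ N(0,1)."""
    rng = random.Random(seed)
    x, e_prev, xs = 0.0, 0.0, []
    for t in range(n + burn):
        e = rng.gauss(0.0, 1.0)
        x = phi * x + e + theta * e_prev
        e_prev = e
        if t >= burn:                 # discard the burn-in transient
            xs.append(x)
    return xs

def acf(xs, k):
    """Sample autocorrelation at lag k."""
    n = len(xs)
    m = sum(xs) / n
    c0 = sum((x - m) ** 2 for x in xs) / n
    ck = sum((xs[t] - m) * (xs[t + k] - m) for t in range(n - k)) / n
    return ck / c0

xs = simulate_arma11(phi=0.6, theta=0.3, n=5000)
phi_hat = acf(xs, 2) / acf(xs, 1)    # moment estimate of phi
```

Repeating this over many seeds and sample sizes and summarizing |phi_hat - phi| / phi gives the MAPE comparison described above.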
Survival analysis is the analysis of data in the form of times from a time origin until the occurrence of an end event. In medical research, the time origin is the date of registration of the individual or patient in a study, such as a clinical trial comparing two or more types of medicine, and the endpoint is the death of the patient or the disappearance of the individual from the study. The data resulting from this process are called survival times; if the end event is not death, the resulting data are called time-to-event data. That is, survival analysis comprises the statistical steps and procedures for analyzing data in which the adopted variable is the time to an event. It could be d…
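The standard nonparametric tool for such data is the Kaplan-Meier estimator, which handles the censoring that arises when an individual disappears from the study before the end event. A minimal self-contained sketch (a generic illustration, not tied to any specific study mentioned above):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. events[i] = 1 for an observed failure,
    0 for a censored time. Returns (time, S(t)) at each distinct failure time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk, s, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = r = 0
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]     # failures at this time
            r += 1                    # everyone leaving the risk set at this time
            i += 1
        if d:
            s *= 1.0 - d / n_at_risk  # product-limit step
            curve.append((t, s))
        n_at_risk -= r
    return curve
```

Censored times reduce the risk set without producing a step, which is exactly how partial information from individuals lost to follow-up is used.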
A group acceptance sampling plan for testing products was designed for the case where the lifetime of an item follows a log-logistic distribution. The minimum number of groups (k) required for a given group size and acceptance number is determined when various values of the consumer's risk and the test termination time are specified. All results for these sampling plans and the probability of acceptance are presented in tables.
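One common formulation of such a plan accepts the lot only if every one of the k groups of r items has at most c failures by the termination time t0; the sketch below computes the acceptance probability under that assumption (the paper's exact plan structure and parameter values are not given in this abstract):

```python
from math import comb

def loglogistic_cdf(t, alpha, beta):
    """F(t) = 1 / (1 + (t/alpha)^(-beta)), the log-logistic failure probability."""
    return 1.0 / (1.0 + (t / alpha) ** (-beta))

def prob_acceptance(k, r, c, t0, alpha, beta):
    """Lot acceptance probability for a group plan: k groups of r items each,
    lot accepted if every group has at most c failures by time t0."""
    p = loglogistic_cdf(t0, alpha, beta)  # per-item failure prob. by t0
    per_group = sum(comb(r, i) * p ** i * (1.0 - p) ** (r - i)
                    for i in range(c + 1))
    return per_group ** k
```

Tables like those the abstract mentions can then be built by searching for the smallest k whose acceptance probability meets the specified consumer's risk.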