This research addresses reliability, defined as the probability that a part of the system accomplishes its function within a specified time and under stated conditions. On the theoretical side, the reliability, the reliability function, and the cumulative failure function are studied under the one-parameter Rayleigh distribution. The research aims to uncover factors that are overlooked in reliability evaluation and that cause repeated machine interruptions, in addition to problems with the data themselves. The research problem is that, although many methods exist for estimating the reliability function, most of them are not suited to data that contain anomalous or extreme values, or whose appropriate distribution is unknown; such data therefore require methods that can handle these issues. Two estimation methods are used: the robust M-estimator method and the nonparametric kernel method, and the formulas of their estimators are derived. The estimators are compared by simulation, with experiments run for different sample sizes and each experiment repeated 1000 times. The results are compared using one of the most important statistical measures, the mean squared error (MSE). The robust M-estimator proves to be the best estimation method, and the estimated reliability function is shown to decrease gradually with time, which agrees with the known properties of this function.
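As a rough illustration of the simulation and MSE comparison described above, the following Python sketch simulates one-parameter Rayleigh data, estimates the reliability function R(t) = exp(-t^2 / (2*sigma^2)), and averages the squared error over repeated experiments. The paper's robust M-estimator and kernel formulas are not reproduced in the abstract, so a simple scale estimator stands in for them; the sketch shows the comparison procedure only, not the study's exact estimators.

```python
import numpy as np

def rayleigh_reliability(t, sigma):
    """Reliability R(t) = exp(-t^2 / (2 sigma^2)) of the one-parameter Rayleigh."""
    return np.exp(-t**2 / (2.0 * sigma**2))

def simulate_mse(sigma_true=2.0, n=50, reps=1000, seed=0):
    """Repeat the experiment `reps` times and return the MSE of R_hat(t) on a grid.

    A moment-type scale estimate stands in for the paper's M-estimator / kernel
    estimators, whose formulas are not given in the abstract."""
    rng = np.random.default_rng(seed)
    t_grid = np.linspace(0.5, 5.0, 10)
    r_true = rayleigh_reliability(t_grid, sigma_true)
    sq_err = np.zeros_like(t_grid)
    for _ in range(reps):
        x = rng.rayleigh(scale=sigma_true, size=n)
        sigma_hat = np.sqrt(np.mean(x**2) / 2.0)   # MLE of the Rayleigh scale
        sq_err += (rayleigh_reliability(t_grid, sigma_hat) - r_true) ** 2
    return t_grid, sq_err / reps

t_grid, mse = simulate_mse()
print(dict(zip(np.round(t_grid, 2), np.round(mse, 5))))
```

The estimated reliability values produced this way decrease as t grows, consistent with the property noted in the abstract.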
Regression discontinuity (RD) denotes a study design in which a defined group is exposed to the effect of a treatment. The distinctive feature of this design is that the study population is divided into two groups on the basis of a specific threshold, or cutoff point, determined in advance according to the terms and requirements of the study. Attention was therefore directed at the problem of workers' retirement and at proposing a scenario in which an end-of-service reward is granted to fill the gap (the discontinuity point) where it had not been granted. The regression discontinuity method is used to study and estimate the effect of the end-of-service reward on the cutoff for insured workers, as well as …
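A minimal sketch of a sharp RD estimate at a cutoff is given below; the variable names, bandwidth, and data are illustrative and are not taken from the study, and the study's own estimator may differ.

```python
import numpy as np

def sharp_rd_effect(running, outcome, cutoff, bandwidth):
    """Estimate a sharp RD treatment effect as the jump between two local linear
    fits at the cutoff, using only observations within `bandwidth` of it."""
    x = np.asarray(running, dtype=float)
    y = np.asarray(outcome, dtype=float)
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    # Fit y = a + b*(x - cutoff) on each side; the two intercepts are the limits
    # of the regression function from below and above the cutoff.
    slope_l, intercept_l = np.polyfit(x[left] - cutoff, y[left], 1)
    slope_r, intercept_r = np.polyfit(x[right] - cutoff, y[right], 1)
    return intercept_r - intercept_l   # estimated discontinuity at the cutoff
```

In the retirement setting of the abstract, the running variable would be the quantity that defines eligibility for the end-of-service reward and the cutoff the threshold at which it is granted.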
This research pursues several objectives, including developing the tax examination process and raising its efficiency without relying on the comprehensive examination method, by applying some statistical methods to tax examination, and discussing the most important concepts related to the statistical methods used in tax examination, their importance, and how they are applied. The research is an applied study in the General Commission of Taxes. To achieve its objectives, the theoretical side uses the descriptive (analytical) approach, while the practical side applies some statistical methods to a sample of the final accounts of a contracting company (limited) and the pharmaceutical industry (…
The Gumbel distribution has received considerable attention from researchers and statisticians. There are traditional methods for estimating its two parameters, namely maximum likelihood, the method of moments, and, more recently, the resampling method known as the jackknife. However, these methods involve mathematical difficulties when solved analytically. Accordingly, there are non-traditional methods, such as the nearest-neighbours principle used in computer science, and in particular artificial intelligence algorithms, including the genetic algorithm, the artificial neural network algorithm, and others that may be classified as meta-heuristic methods. Moreover, the nearest-neighbours principle has useful statistical features…
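As a hedged illustration of the two traditional estimators named above, the sketch below fits a Gumbel distribution by maximum likelihood with scipy and by the method of moments; the jackknife, nearest-neighbour, and genetic-algorithm approaches discussed in the abstract are not shown, and the simulated data are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = stats.gumbel_r.rvs(loc=10.0, scale=2.5, size=200, random_state=rng)

# Maximum-likelihood estimates of the Gumbel location and scale parameters.
loc_mle, scale_mle = stats.gumbel_r.fit(sample)

# Method-of-moments estimates: scale = s*sqrt(6)/pi, loc = mean - gamma*scale,
# where gamma is the Euler-Mascheroni constant.
gamma = 0.5772156649015329
scale_mom = sample.std(ddof=1) * np.sqrt(6) / np.pi
loc_mom = sample.mean() - gamma * scale_mom

print(loc_mle, scale_mle, loc_mom, scale_mom)
```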
Estimation of the names and verbs of some letters to consider the grammatical industry
Recent developments in statistics have made statistical distributions a focus of research, in particular the practice of substituting fixed values for some distribution parameters to obtain a new distribution. In this study, the two-parameter Kumaraswamy distribution is examined. Its characteristics are discussed through the probability density function (p.d.f.), the cumulative distribution function (c.d.f.), the r-th moment, the reliability function, and the hazard function. The parameters of the Kumaraswamy distribution are estimated using MLE, ME, and LSE, using simulation for different sample sizes and preliminary…
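For reference, the standard two-parameter Kumaraswamy forms implied above, with shape parameters a, b > 0 and 0 < x < 1, are:

```latex
f(x) = a\,b\,x^{a-1}\,(1 - x^{a})^{\,b-1}, \qquad 0 < x < 1,\\
F(x) = 1 - (1 - x^{a})^{\,b},\\
R(x) = 1 - F(x) = (1 - x^{a})^{\,b},\\
h(x) = \frac{f(x)}{R(x)} = \frac{a\,b\,x^{a-1}}{1 - x^{a}}.
```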
In this paper we study a nonparametric model in which the response variable has missing observations (non-response) under the MCAR missing-data mechanism. We suggest kernel-based nonparametric single imputation for the missing values and compare it with nearest-neighbour imputation through simulation over several models and different cases of sample size, variance, and rate of missing data.
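A minimal sketch of the two imputation ideas follows, assuming a single fully observed covariate x and a response y missing completely at random; a Nadaraya-Watson (Gaussian-kernel) regression stands in for the kernel-based single imputation, and the paper's exact estimators may differ.

```python
import numpy as np

def kernel_impute(x, y, bandwidth=0.5):
    """Fill missing y values with a Nadaraya-Watson estimate built from the
    observed pairs (kernel-based single imputation under MCAR)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    obs = ~np.isnan(y)
    y_filled = y.copy()
    for i in np.where(~obs)[0]:
        w = np.exp(-0.5 * ((x[obs] - x[i]) / bandwidth) ** 2)  # Gaussian kernel
        y_filled[i] = np.sum(w * y[obs]) / np.sum(w)
    return y_filled

def nn_impute(x, y):
    """Fill missing y values with the y of the nearest observed x (1-NN imputation)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    obs = ~np.isnan(y)
    y_filled = y.copy()
    for i in np.where(~obs)[0]:
        j = np.argmin(np.abs(x[obs] - x[i]))
        y_filled[i] = y[obs][j]
    return y_filled
```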
This paper aims to determine the best methods for estimating the parameters of the Gumbel type-I distribution under a type-II censoring scheme. For this purpose, classical and Bayesian parameter estimation procedures are considered. Maximum likelihood estimators are used for the classical procedure, and their asymptotic distributions are derived. Explicit solutions for the Bayesian estimators cannot be obtained, so Markov chain Monte Carlo and Lindley techniques are used to estimate the unknown parameters. In Bayesian analysis it is very important to determine an appropriate combination of prior distribution and loss function; therefore, two different…
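The classical step can be sketched as follows: the Gumbel (type-I, maxima) log-likelihood under type-II censoring keeps the r smallest observations out of n and adds a censoring term for the remainder, and the MLEs are found numerically. The data, starting values, and optimizer below are illustrative assumptions; the paper's Bayesian MCMC and Lindley computations are not shown.

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_type2_negloglik(params, x_obs, n):
    """Negative log-likelihood of the Gumbel distribution under type-II censoring:
    only the r smallest observations x_obs out of n units are observed."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    x = np.sort(np.asarray(x_obs, float))
    r = x.size
    z = (x - mu) / sigma
    # log f(x) = -log(sigma) - z - exp(-z);  F(x) = exp(-exp(-z))
    loglik = np.sum(-np.log(sigma) - z - np.exp(-z))
    loglik += (n - r) * np.log1p(-np.exp(-np.exp(-z[-1])))  # (n-r) censored at x_(r)
    return -loglik

# Hypothetical censored sample: the 15 smallest of n = 20 simulated values.
rng = np.random.default_rng(2)
x_obs = np.sort(rng.gumbel(loc=5.0, scale=1.5, size=20))[:15]
res = minimize(gumbel_type2_negloglik, x0=[np.mean(x_obs), np.std(x_obs)],
               args=(x_obs, 20), method="Nelder-Mead")
print(res.x)   # approximate MLEs of (mu, sigma)
```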
This study focuses on estimating the random coefficients of the general regression and Swamy models for panel data; this type of data offers a better chance of obtaining improved estimators and indicators. Entropy methods are used to estimate the random coefficients of the general regression and Swamy panel-data models in two forms: the first is the maximum dual entropy and the second is the general maximum entropy, and the two are compared by simulation to choose the optimal method.
The results are compared using the mean squared error and the mean absolute percentage error across different cases in terms of the correlation values…
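The two comparison criteria mentioned above can be computed as in the generic sketch below (illustrative helper functions, not the study's code).

```python
import numpy as np

def mse(true, est):
    """Mean squared error between the true coefficients and their estimates."""
    true, est = np.asarray(true, float), np.asarray(est, float)
    return np.mean((true - est) ** 2)

def mape(true, est):
    """Mean absolute percentage error, in percent (true values must be nonzero)."""
    true, est = np.asarray(true, float), np.asarray(est, float)
    return 100.0 * np.mean(np.abs((true - est) / true))
```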
Abstract Objectives: Malocclusion was and remains one of the most common problems affecting the psyche and social status of the individual, so the aim of this study is to estimate the severity of malocclusion and the percentage of Iraqi patients needing orthodontic treatment. Method: A randomly selected 150 pairs of study models (48 male and 102 female) of patients attending an orthodontic clinic at the College of Dentistry, University of Baghdad, seeking treatment were included in this study. The DAI scores were collected according to WHO guidelines directly from the study models with a digital caliper, and the score was calculated using the regression equation of the 10 occlusal traits. The dental casts were classified into four groups to determine…
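For orientation, the DAI score is a weighted sum of the 10 occlusal traits plus a regression constant. The sketch below uses the rounded weights commonly published for the WHO Dental Aesthetic Index; the abstract does not reproduce the study's exact equation, so these coefficients should be treated as an assumption for illustration only.

```python
# Commonly published rounded DAI regression weights (assumption, not taken from
# the study); each trait is measured as in the WHO DAI protocol.
DAI_WEIGHTS = {
    "missing_teeth": 6,             # number of missing visible teeth
    "crowding": 1,                  # crowded incisal segments (0-2)
    "spacing": 1,                   # spaced incisal segments (0-2)
    "diastema_mm": 3,               # midline diastema (mm)
    "max_ant_irregularity_mm": 1,   # largest anterior maxillary irregularity (mm)
    "mand_ant_irregularity_mm": 1,  # largest anterior mandibular irregularity (mm)
    "ant_max_overjet_mm": 2,        # anterior maxillary overjet (mm)
    "ant_mand_overjet_mm": 4,       # anterior mandibular overjet (mm)
    "open_bite_mm": 4,              # vertical anterior open bite (mm)
    "molar_relation": 3,            # antero-posterior molar relation (0-2)
}
DAI_CONSTANT = 13

def dai_score(traits: dict) -> float:
    """Weighted sum of the 10 occlusal traits plus the regression constant."""
    return sum(DAI_WEIGHTS[k] * traits[k] for k in DAI_WEIGHTS) + DAI_CONSTANT
```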
This research aims to choose the appropriate probability distribution for the reliability analysis of an item, using data collected on the operating and stoppage times of the case study.
A probability distribution is an appropriate choice when the data lie on or close to the fitted line of the probability plot and pass a goodness-of-fit test.
Minitab 17 was used for this purpose after arranging the collected data and entering them into the program.
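The same workflow can be sketched outside Minitab, for example with scipy: fit a candidate distribution, run a goodness-of-fit test, and check the probability plot. The data, the Weibull candidate, and the tests below are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical operating-time data (hours); the study's data are not reproduced here.
times = np.array([120.0, 85.5, 200.3, 150.2, 95.7, 180.1, 60.4, 140.9, 110.6, 175.8])

# Candidate distribution: Weibull, fitted by maximum likelihood (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(times, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
ks_stat, p_value = stats.kstest(times, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.3f}, scale={scale:.3f}, KS p-value={p_value:.3f}")

# Probability-plot check: a plot correlation r close to 1 supports the candidate.
(osm, osr), (pp_slope, pp_intercept, r) = stats.probplot(
    times, dist=stats.weibull_min, sparams=(shape, loc, scale))
print(f"probability-plot correlation r = {r:.3f}")
```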