In this research, a semiparametric Bayesian method is compared with the classical method for estimating the reliability function of three systems: the k-out-of-n system, the series system, and the parallel system. Each system consists of three components: the first is the parametric component, whose failure times follow an exponential distribution, whereas the second and third are nonparametric components whose reliability estimates rely on the kernel method, with two approaches for estimating the bandwidth parameter h, and on the Kaplan-Meier method. To identify the better method for estimating the system reliability function, a simulation procedure was used for comparison across sample sizes of (14, 30, 60 and 100), with the Integral Mean Square Error (IMSE) as the comparison criterion. For the k-out-of-n system, the results indicate that the Bayesian method is preferable for samples of size (30, 60 and 100) and the classical method for samples of size (14); for the series system, the Bayesian method is best for samples of size (14, 60 and 100) and the classical method for samples of size (30); for the parallel system, the Bayesian method is preferable for all sample sizes.
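The standard combination rules for the three system structures studied above can be sketched as follows for independent components. This is a generic illustration of how component reliabilities combine into a system reliability, not the paper's semiparametric estimator itself; the function names are assumptions.

```python
import math

def series_reliability(rs):
    """Series system works only if every component works: R = prod(R_i)."""
    r = 1.0
    for ri in rs:
        r *= ri
    return r

def parallel_reliability(rs):
    """Parallel system fails only if every component fails: R = 1 - prod(1 - R_i)."""
    q = 1.0
    for ri in rs:
        q *= (1.0 - ri)
    return 1.0 - q

def k_out_of_n_reliability(r, k, n):
    """k-out-of-n system of i.i.d. components with reliability r:
    R = sum_{i=k}^{n} C(n, i) * r^i * (1-r)^(n-i)."""
    return sum(math.comb(n, i) * r**i * (1.0 - r)**(n - i)
               for i in range(k, n + 1))
```

For example, three components with reliabilities 0.9, 0.8, 0.95 give a series reliability of 0.684 and a parallel reliability of 0.999; a 2-out-of-3 system of components with r = 0.9 has reliability 0.972.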
The use of parametric models and the associated estimation methods requires many primary conditions to be met in order for those models to represent the population under study adequately. This prompted researchers to search for models more flexible than parametric models, namely nonparametric models.
In this manuscript, the so-called Nadaraya-Watson estimator was compared in two cases (fixed bandwidth and variable bandwidth) through simulation with different models and sample sizes. The simulation experiments showed that, for the first and second models, NW with fixed bandwidth was preferred fo
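As a minimal illustration of the estimator being compared, a Nadaraya-Watson regression estimate with a fixed Gaussian-kernel bandwidth h might be sketched as below; the function names are illustrative assumptions, and the variable-bandwidth case would simply let h depend on the evaluation point or the data.

```python
import math

def gaussian_kernel(u):
    """Standard Gaussian kernel K(u)."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def nadaraya_watson(x0, xs, ys, h):
    """Nadaraya-Watson estimate at x0 with fixed bandwidth h:
    m_hat(x0) = sum_i K((x0 - x_i)/h) * y_i / sum_i K((x0 - x_i)/h)."""
    weights = [gaussian_kernel((x0 - xi) / h) for xi in xs]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("no effective kernel weight at x0; increase h")
    return sum(w * y for w, y in zip(weights, ys)) / total
```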
The aim of this research is to use a robust technique based on trimming, since maximum likelihood (ML) analysis often fails in the presence of outliers in the studied phenomenon, where the MLE loses its advantages because of the bad influence of the outliers. To address this problem, new statistical methods have been developed that are not affected by outliers; these methods possess robustness or resistance. The maximum trimmed likelihood (MTL) is therefore a good alternative for achieving more acceptable and comparable results, and weights can be used to increase the efficiency of the resulting estimates and the strength of the estimation using the maximum weighted trimmed likelihood (MWTL). In order to perform t
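The trimming idea behind MTL can be sketched in its simplest setting, a normal location parameter: fit on a subset, then re-select the h observations with the largest likelihood contributions under the current fit and refit. This is an illustrative concentration-step sketch of the general principle, not the paper's exact algorithm; the function name and the initialization are assumptions.

```python
import statistics

def mtl_normal_mean(data, h, iters=20):
    """Maximum trimmed likelihood sketch for a normal mean: keep the h
    observations with the largest likelihood contributions (for a normal
    model, the h points closest to the current mean) and refit."""
    subset = sorted(data)[:h]  # arbitrary initial subset
    for _ in range(iters):
        mu = statistics.fmean(subset)
        # Concentration step: the h best-fitting points under the
        # current mean are those with the smallest squared residuals.
        subset = sorted(data, key=lambda x: (x - mu) ** 2)[:h]
    return statistics.fmean(subset)
```

With data such as [1, 2, 3, 2, 1.5, 100] and h = 5, the gross outlier 100 is trimmed away and the estimate is the mean of the remaining five points, 1.9, whereas the ordinary sample mean is pulled above 18.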
The modern French novel gained a distinctive status in the history of French literature during the first half of the twentieth century. This is due to many factors, including the new objective, descriptive literary style adopted by novelists such as Alain Robbe-Grillet, who has long been regarded as the outstanding writer of the nouveau roman, as well as its major spokesman, a representative writer and a leading theoretician of the new novel, which broke the classical rule of the single hero and evolved, through questioning the relationship of man and the world and re-evaluating the limits of contemporary fiction, into a new form of narrative.
Analysis of economic, financial, and other phenomena requires building an appropriate model that represents the causal relations between factors. Building the model depends on capturing the conditions and factors surrounding the phenomenon in a mathematical formula, and researchers aim to build that formula appropriately. Classical linear regression models are an important statistical tool, but they are used in a limited way, as the relationship between the explanatory variables and the response variable is assumed to be identifiable. To expand the representation of relationships between the variables that represent the phenomenon under discussion, we used Varying Coefficient Models
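A minimal sketch of what a varying coefficient fit involves: at each point x0, the coefficients of a model such as y = a(x) + b(x)·z are estimated by kernel-weighted least squares in a neighbourhood of x0. This generic local-fitting sketch illustrates the idea only; it is not the specific model or estimator of the paper, and all names are assumptions.

```python
import math

def varying_coefficient_fit(x0, xs, zs, ys, h):
    """Kernel-weighted least squares estimate of (a(x0), b(x0)) in the
    varying coefficient model y = a(x) + b(x) * z + error.
    Observations are weighted by a Gaussian kernel in x around x0."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    # Solve the 2x2 weighted normal equations for (a, b).
    s0 = sum(w)
    s1 = sum(wi * zi for wi, zi in zip(w, zs))
    s2 = sum(wi * zi * zi for wi, zi in zip(w, zs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * zi * yi for wi, zi, yi in zip(w, zs, ys))
    det = s0 * s2 - s1 * s1
    a = (s2 * t0 - s1 * t1) / det
    b = (s0 * t1 - s1 * t0) / det
    return a, b
```

When the true coefficients are constant, the local fit recovers them exactly at every x0, which reduces the model to ordinary linear regression; the flexibility appears when a(x) and b(x) genuinely change with x.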
The estimation of the reliability function depends on the accuracy of the data used to estimate the parameters of the probability distribution. Because some data exhibit skewness, estimating the parameters and calculating the reliability function in the presence of skewed data requires a distribution flexible enough to deal with such data. This is the case for the data of Diyala Company for Electrical Industries, where positive skewness was observed in the data collected from the Power and Machinery Department, which required a distribution that handles those data and a search for methods that accommodate this problem and lead to accurate estimates of the reliability function,
In this research, a comparison has been made between the robust (M) estimators for the cubic smoothing splines technique, to avoid the problem of non-normality or contamination in the errors, and the traditional estimation method for cubic smoothing splines, using two differentiation criteria (MADE, WASE) for different sample sizes and disparity levels to estimate the time-varying coefficient functions for balanced longitudinal data. Such data are characterized by observations obtained from (n) independent subjects, each of which is measured repeatedly at a set of specific time points (m), since the repeated measurements within subjects are almost correlated an
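The M-estimation idea used to robustify a fit can be illustrated in its simplest setting, a Huber M-estimator of location computed by iterative reweighting. This is a sketch of the general principle, not the smoothing-spline estimator itself; the tuning constant c = 1.345 is the conventional choice for the Huber function, and the function name is an assumption.

```python
def huber_m_location(data, c=1.345, iters=50):
    """Huber M-estimator of location via iteratively reweighted averaging.
    Weight w(r) = 1 if |r| <= c*s, else c*s/|r|, where s is a robust
    scale estimate (the MAD divided by 0.6745)."""
    n = len(data)
    med = sorted(data)[n // 2]
    s = sorted(abs(x - med) for x in data)[n // 2] / 0.6745 or 1.0
    mu = med
    for _ in range(iters):
        ws = [1.0 if abs(x - mu) <= c * s else c * s / abs(x - mu)
              for x in data]
        mu = sum(w * x for w, x in zip(ws, data)) / sum(ws)
    return mu
```

Points within c·s of the current estimate receive full weight, while more distant points are downweighted in proportion to their distance, so a single gross outlier barely moves the estimate.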
An indirect, simple, sensitive and applicable spectrofluorometric method has been developed for the determination of cefotaxime sodium (CEF), ciprofloxacin hydrochloride (CIP) and famotidine (FAM) using a bromate-bromide reaction system and acriflavine (AF) as a fluorescent dye. The method is based on the oxidation of the drugs with a known excess of the bromate-bromide mixture in acidic medium and subsequent determination of the unreacted oxidant by quenching the fluorescence of AF. The fluorescence intensity of the residual AF was measured at 528 nm after excitation at 402 nm. The fluorescence-concentration plots were rectilinear over the ranges 0.1-3.0, 0.05-2.6 and 0.1-3.8 µg ml⁻¹, with lower detection limits of 0.013, 0.018 and 0.021 µg ml⁻¹ an
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we might encounter in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on similarities among them. In this paper, several mixture regression-based methods were conducted under the assumption that the data come from a finite number of components. A comparison of these methods has been made according to their results in estimating the component parameters. Observation membership has also been inferred and assessed for these methods. The results showed that the flexible mixture model outperformed the others
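A minimal sketch of fitting a two-component mixture of linear regressions by EM, where observation membership is captured by the responsibilities computed in the E-step. This illustrates the general mixture-regression approach only, not the specific flexible mixture model of the paper; the function name, initialization, and two-component restriction are assumptions.

```python
import numpy as np

def em_mixture_regression(x, y, r_init=None, n_iter=100, seed=0):
    """EM for a two-component mixture of simple linear regressions.
    Model: with probability pi_j an observation follows
    y = a_j + b_j * x + N(0, sigma_j^2)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    r = rng.uniform(0.3, 0.7, n) if r_init is None else np.asarray(r_init, float)
    R = np.column_stack([r, 1.0 - r])       # responsibilities (membership)
    X = np.column_stack([np.ones(n), x])    # design matrix [1, x]
    for _ in range(n_iter):
        params, dens = [], []
        for j in range(2):
            w = R[:, j]
            # M-step: weighted least squares for component j
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            resid = y - X @ beta
            sigma2 = max(float((w * resid**2).sum() / w.sum()), 1e-8)
            pi_j = float(w.mean())
            params.append((beta, sigma2, pi_j))
            dens.append(pi_j * np.exp(-0.5 * resid**2 / sigma2)
                        / np.sqrt(2 * np.pi * sigma2))
        # E-step: update responsibilities from the component densities
        D = np.column_stack(dens)
        R = D / D.sum(axis=1, keepdims=True)
    return params, R
```

The returned responsibilities R give each observation's inferred membership probabilities, which is how membership assessment is typically read off an EM fit.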
The logistic regression model is regarded as one of the important regression models and is among the most interesting subjects in recent studies, as it plays an increasingly advanced role in the process of statistical analysis.
The ordinary estimation methods fail in dealing with data that contain outlier values, which have an undesirable effect on the results.
In this paper, an estimator of the reliability function for the Pareto distribution of the first kind has been derived, and then a simulation approach by the Monte Carlo method was used to compare the Bayes estimator of the reliability function with the maximum likelihood estimator of this function. It was found that the Bayes estimator was better than the maximum likelihood estimator for all sample sizes, using the integral mean square error (IMSE).
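The maximum likelihood side of such a comparison can be sketched as follows: for a Pareto distribution of the first kind with known scale k, the shape MLE is a_hat = n / Σ log(x_i / k), the reliability function is R(t) = (k/t)^a, and the IMSE of the plug-in estimate R_hat(t) = (k/t)^a_hat can be approximated by Monte Carlo. This is a hedged sketch: the grid, defaults, and function names are illustrative assumptions, and the Bayes side of the paper's comparison is not reproduced here.

```python
import math
import random

def pareto_sample(n, a, k, rng):
    """Inverse-CDF sampling from Pareto(shape a, scale k): X = k * U**(-1/a)."""
    return [k * (1.0 - rng.random()) ** (-1.0 / a) for _ in range(n)]

def mle_shape(xs, k):
    """MLE of the Pareto shape a with known scale k: a_hat = n / sum(log(x/k))."""
    return len(xs) / sum(math.log(x / k) for x in xs)

def imse_reliability(n=30, a=2.0, k=1.0, reps=500, grid=None, seed=0):
    """Monte Carlo approximation of the integral mean square error of the
    plug-in MLE reliability R_hat(t) = (k/t)**a_hat against R(t) = (k/t)**a,
    averaged over a grid of t values and over repeated samples."""
    rng = random.Random(seed)
    grid = grid or [1.0 + 0.1 * i for i in range(1, 40)]
    total = 0.0
    for _ in range(reps):
        a_hat = mle_shape(pareto_sample(n, a, k, rng), k)
        total += sum(((k / t) ** a_hat - (k / t) ** a) ** 2
                     for t in grid) / len(grid)
    return total / reps
```

As expected, the IMSE shrinks as the sample size grows, which is the behaviour the sample-size comparison in the paper exploits.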