In this article, the probability density function of the Rayleigh distribution is derived and its scale parameter is estimated using the ordinary least squares estimator method and the ranked set estimator method. A confidence interval for the scale parameter of the Rayleigh distribution is then constructed, and a new method is used to obtain a fuzzy scale parameter. Finally, the survival and hazard functions are constructed for two ranking functions to show which one performs best.
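For reference, the standard forms of the Rayleigh density, survival, and hazard functions with scale parameter \(\sigma\) (this parameterization is assumed here; the article may use a different symbol) are:

\[
f(t;\sigma)=\frac{t}{\sigma^{2}}\exp\!\left(-\frac{t^{2}}{2\sigma^{2}}\right),\qquad
S(t;\sigma)=\exp\!\left(-\frac{t^{2}}{2\sigma^{2}}\right),\qquad
h(t;\sigma)=\frac{f(t;\sigma)}{S(t;\sigma)}=\frac{t}{\sigma^{2}},\qquad t\ge 0 .
\]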
The method used to estimate fuzzy reliability depends on the nature of the failure-time information treated in this research. The failure-time data follow no known probability distribution and are fuzzy in character. The research covers fuzzy reliability estimation for three periods: the first from 1986 to 2013, the second from 2013 to 2033, and the third from 2033 to 2066. Four failure times were chosen to define the trapezoidal fuzzy membership function over those years, taking into account the estimates of most researchers, professional geologists, and the technician in charge of maintaining the Mosul Dam project. …
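As a reminder of the construction implied here, a trapezoidal fuzzy membership function defined by four ordered failure times \(a\le b\le c\le d\) (the symbols are illustrative, not taken from the paper) has the standard form:

\[
\mu(t)=
\begin{cases}
0, & t<a \ \text{or}\ t>d,\\[2pt]
\dfrac{t-a}{b-a}, & a\le t< b,\\[2pt]
1, & b\le t\le c,\\[2pt]
\dfrac{d-t}{d-c}, & c< t\le d.
\end{cases}
\]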
Statistical methods and statistical decision making were used to organize and analyze the primary data and obtain norms, which were then used with Geographic Information Systems (GIS) and spatial analysis programs to identify the animal-production and poultry units in strategic nutrition channels, as well as the priorities of food insecurity met through local production and, where production capacity is lacking, through imports. Poultry production is one of the most important commodities satisfying the human body's protein requirements and one of the most important criteria for measuring the development and prosperity of nations. The poultry fields of Babylon Governorate are located in the Abi Ghareg and Al_Kifil centers according to several criteria or factors, such as the popu…
Ferritin is a key mediator of immune dysregulation, particularly under severe hyperferritinemia, through direct immunosuppressive and pro-inflammatory effects. We conclude that there is a significant association between ferritin levels and the severity of COVID-19. In this paper we introduce a semi-parametric method for prediction by combining neural network (NN) and regression models. Two methodologies are therefore adopted, a neural network (NN) and a regression model, in designing the model. The data were collected from the Private Nursing Home Hospital (مستشفى دار التمريض الخاص) for the period 11/7/2021 to 23/7/2021 and comprise 100 persons: of the 50 COVID-19 patients, 12 were female and 38 male, while of the 50 non-COVID persons, 26 were female and 24 male. The input variables of the NN m…
In real situations, all observations and measurements are not exact numbers but are more or less imprecise, i.e. fuzzy. In this paper we therefore use approximate non-Bayesian computational methods to estimate the inverse Weibull parameters and reliability function from fuzzy data. The maximum likelihood and moment estimates are obtained as non-Bayesian estimates. The maximum likelihood estimators are derived numerically using two iterative techniques, namely the Newton-Raphson and the Expectation-Maximization (EM) algorithms. In addition, a Monte Carlo simulation study numerically compares the obtained estimates of the parameters and reliability function i…
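The following is a minimal sketch, not the paper's code, of Newton-Raphson maximum likelihood estimation for the inverse Weibull distribution using crisp (non-fuzzy) data. The parameterization F(x; theta, beta) = exp(-theta * x^(-beta)) and all variable names are assumptions for illustration.

```python
# Minimal sketch: inverse Weibull MLE via a 1-D Newton-Raphson iteration on the
# profile log-likelihood in beta (theta has a closed-form MLE given beta).
import numpy as np

def profile_loglik(beta, x):
    # For fixed beta, theta_hat = n / sum(x**-beta); substituting it yields the
    # profile log-likelihood as a function of beta alone.
    n = len(x)
    s = np.sum(x ** (-beta))
    return n * np.log(n / s) + n * np.log(beta) - (beta + 1) * np.sum(np.log(x)) - n

def newton_raphson_beta(x, beta0=1.0, tol=1e-8, max_iter=100, eps=1e-4):
    beta = beta0
    for _ in range(max_iter):
        # Numerical first and second derivatives of the profile log-likelihood.
        d1 = (profile_loglik(beta + eps, x) - profile_loglik(beta - eps, x)) / (2 * eps)
        d2 = (profile_loglik(beta + eps, x) - 2 * profile_loglik(beta, x)
              + profile_loglik(beta - eps, x)) / eps ** 2
        beta_new = max(beta - d1 / d2, 1e-6)   # Newton-Raphson update, kept positive
        if abs(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_true, beta_true = 2.0, 1.5
    u = rng.uniform(size=200)
    x = (-np.log(u) / theta_true) ** (-1.0 / beta_true)   # inverse-CDF sampling
    beta_hat = newton_raphson_beta(x)
    theta_hat = len(x) / np.sum(x ** (-beta_hat))
    print(f"theta_hat = {theta_hat:.3f}, beta_hat = {beta_hat:.3f}")
```

The reliability function under this parameterization is R(t) = 1 - exp(-theta * t^(-beta)), which can be evaluated at the fitted values once the iteration converges.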
The estimation of the reliability function depends on the accuracy of the data used to estimate the parameters of the probability distribution. Because some data are skewed, estimating the parameters and calculating the reliability function in the presence of such skewness requires a distribution flexible enough to handle the data. This is the case for the data of Diyala Company for Electrical Industries, where a positive skew was observed in the data collected from the Power and Machinery Department; this required a distribution suited to those data and called for methods that accommodate this problem and lead to accurate estimates of the reliability function,…
Abstract
In this study, we compare the autoregressive approximation methods (Yule-Walker equations, least squares, forward-backward least squares, and Burg's geometric and harmonic methods) to determine the optimal approximation to a time series generated from a first-order non-invertible moving average process and from a fractionally integrated noise process, with several values of d (d = 0.15, 0.25, 0.35, 0.45) and different sample sizes (small, medium, large) for the two processes. We rely on the figure-of-merit function proposed by Shibata in 1980 to determine the theoretical optimal order according to min…
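As an illustration of one of the compared methods, the sketch below (not the study's code; the MA(1) coefficient and AR order are illustrative) fits an AR(p) approximation by solving the Yule-Walker equations built from the sample autocovariances.

```python
# Minimal Yule-Walker sketch: AR(p) approximation of a series generated from a
# non-invertible MA(1) process x_t = e_t - e_{t-1}.
import numpy as np

def yule_walker(x, p):
    """Solve the Yule-Walker system R @ phi = r for the AR(p) coefficients."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Sample autocovariances gamma(0), ..., gamma(p)
    gamma = np.array([np.sum(x[:n - k] * x[k:]) / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    phi = np.linalg.solve(R, gamma[1:p + 1])
    sigma2 = gamma[0] - phi @ gamma[1:p + 1]   # innovation variance
    return phi, sigma2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    e = rng.standard_normal(500)
    x = e[1:] - 1.0 * e[:-1]                   # non-invertible MA(1), theta = 1
    phi, sigma2 = yule_walker(x, p=5)
    print("AR(5) coefficients:", np.round(phi, 3), "sigma^2:", round(sigma2, 3))
```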
In this article, we develop a new loss function as a modification of the linear exponential (LINEX) loss function, obtained by weighting the LINEX function. We derive estimators of the scale parameter, reliability, and hazard functions based on upper record values of the Lomax distribution (LD). To study the small-sample behavior of the proposed loss function, a Monte Carlo simulation compares the maximum likelihood estimator, the Bayesian estimator under the LINEX loss function, and the Bayesian estimator under the squared error (SE) loss function. The results show that the modified method is the best for estimating the scale parameter, reliability, and hazard functions.
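For context, the standard (unweighted) LINEX loss on the estimation error \(\Delta=\hat{\theta}-\theta\) with shape constant \(c\neq 0\) is commonly written as

\[
L(\Delta)=\exp(c\,\Delta)-c\,\Delta-1,
\]

and under this loss the Bayes estimator takes the form \(\hat{\theta}_{\mathrm{LINEX}}=-\frac{1}{c}\ln E\!\left[e^{-c\theta}\mid \text{data}\right]\). The weighted modification described above is the paper's contribution and is not reproduced here.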
In his opus, Ted Hughes has annexed new and fresh territories of signification to the very notion of the literary animal. Building on the earlier modernist example of the Lawrencian legacy that dwells upon the question of animalism, Hughes seems to have stepped further into the terrain of the sheer struggle when, in his hands, the grotesquerie of survival and violence energizes the topos of the literary animal in his postmodern bestiary. In Hughes’s elemental poetic process this grotesquerie and violence stages the literary animal as a vital poetic device or motif that is finally restored to the primitive power of poetry. In his “Thrushes”, he thus defamiliarizes these tiny creatures’ acts of being to bring upfront into focus thi
Fuzzy C-means (FCM) is a clustering method that groups similar data elements within a cluster according to specific measurements. Tabu search is a heuristic algorithm. In this paper, a probabilistic Tabu search for FCM is implemented to find a global clustering based on the minimum value of the fuzzy objective function. Experiments were designed for different networks and numbers of clusters; the results, based on comparing the values of the objective function for standard FCM and Tabu-FCM averaged over ten runs, show which approach performs best.
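Below is a minimal sketch of the standard FCM iteration and its objective J_m, the quantity that a probabilistic Tabu search would then try to drive to a lower value. It is not the paper's implementation; the fuzzifier m, the stopping rule, and the test data are illustrative assumptions.

```python
# Minimal standard FCM sketch: alternating updates of cluster centers and fuzzy
# memberships, returning the objective J_m = sum_i sum_j u_ij^m * ||x_i - c_j||^2.
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)          # random memberships, rows sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared distances from each point to each center (small constant avoids 0/0).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    J = np.sum((U ** m) * d2)
    return centers, U, J

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    centers, U, J = fcm(X, c=2)
    print("objective J_m =", round(J, 3))
```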
The main aim of this paper is to study how different estimators of the two unknown parameters (shape and scale) of the generalized exponential distribution behave for different sample sizes and different parameter values. In particular, the maximum likelihood, percentile, and ordinary least squares estimators were implemented for different sample sizes (small, medium, and large) and several contrasting initial values of the two parameters. Two performance indicators, mean square error and mean percentile error, were used, and comparisons between the estimation methods were carried out using a Monte Carlo simulation technique. It was obse…
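The sketch below illustrates the kind of Monte Carlo comparison described above; it is not the paper's code. It assumes the generalized exponential CDF F(x; alpha, lam) = (1 - exp(-lam*x))^alpha, compares a numerical MLE with a least-squares fit to the empirical CDF (the paper's percentile/OLS estimators may be defined differently), and uses illustrative true values, sample size, and replication count.

```python
# Minimal Monte Carlo sketch: compare two estimators of the generalized exponential
# parameters (alpha = shape, lam = rate) by mean square error over repeated samples.
import numpy as np
from scipy.optimize import minimize

def sample_ge(alpha, lam, n, rng):
    u = rng.uniform(size=n)
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam          # inverse-CDF sampling

def neg_loglik(p, x):
    a, l = p
    if a <= 0 or l <= 0:
        return np.inf
    return -np.sum(np.log(a) + np.log(l) - l * x + (a - 1) * np.log(1 - np.exp(-l * x)))

def cdf_ls_objective(p, x_sorted, probs):
    a, l = p
    if a <= 0 or l <= 0:
        return np.inf
    F = (1 - np.exp(-l * x_sorted)) ** a
    return np.sum((F - probs) ** 2)                          # least squares on the CDF

def monte_carlo(alpha=1.5, lam=1.0, n=50, reps=300, seed=3):
    rng = np.random.default_rng(seed)
    mle, ols = [], []
    for _ in range(reps):
        x = sample_ge(alpha, lam, n, rng)
        xs = np.sort(x)
        probs = np.arange(1, n + 1) / (n + 1)                # plotting positions i/(n+1)
        mle.append(minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead").x)
        ols.append(minimize(cdf_ls_objective, x0=[1.0, 1.0], args=(xs, probs),
                            method="Nelder-Mead").x)
    mle, ols, true = np.array(mle), np.array(ols), np.array([alpha, lam])
    print("MSE (MLE):", np.mean((mle - true) ** 2, axis=0))
    print("MSE (CDF least squares):", np.mean((ols - true) ** 2, axis=0))

if __name__ == "__main__":
    monte_carlo()
```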