Estimation of the reliability function depends on the accuracy of the data used to estimate the parameters of the probability distribution. Because some data sets are skewed, estimating the parameters and calculating the reliability function in the presence of skew requires a distribution flexible enough to handle such data. This is the case for the data of Diyala Company for Electrical Industries: a positive skew was observed in the data collected from the Power and Machinery Department, which called for a distribution suited to these data and for methods that accommodate this problem and lead to accurate estimates of the reliability function.
The paired-sample t-test for the difference between two means in paired data is not robust against violation of the normality assumption. In this paper, alternative robust tests are suggested using the bootstrap method, in addition to combining the bootstrap method with the W.M test. Monte Carlo simulation experiments were employed to study the performance of the test statistic of each of these three tests in terms of Type I error rates and power rates. The three tests were applied to different sample sizes generated from three distributions: the bivariate normal distribution, the bivariate contaminated normal distribution, and the bivariate exponential distribution.
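A minimal sketch of the bootstrap alternative described above, assuming the standard resampling scheme for a paired test: the differences are centered to enforce the null hypothesis, resampled with replacement, and the observed t statistic is compared against the bootstrap distribution. The W.M combination is not reproduced here, and the function name and defaults are illustrative, not the paper's implementation.

```python
import numpy as np

def bootstrap_paired_test(x, y, n_boot=10_000, seed=0):
    """Two-sided bootstrap test of H0: mean(x - y) == 0."""
    rng = np.random.default_rng(seed)
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    # observed paired t statistic
    t_obs = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    # center the differences so the resampling world satisfies H0
    d0 = d - d.mean()
    idx = rng.integers(0, n, size=(n_boot, n))
    b = d0[idx]
    t_star = b.mean(axis=1) / (b.std(axis=1, ddof=1) / np.sqrt(n))
    # bootstrap p-value: how often the null statistic is as extreme
    p = np.mean(np.abs(t_star) >= abs(t_obs))
    return t_obs, p
```

Because the null distribution is built from the data themselves, this test does not lean on the normality assumption that the classical paired t-test requires.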
Compaction curves are widely used in civil engineering, especially for road construction, embankments, etc. Obtaining a precise value of the Optimum Moisture Content (OMC) that gives the maximum dry unit weight (γd,max) is very important, since it allows the desired soil strength to be achieved, in addition to the economic benefits.
In this paper, three peak functions were used to obtain the OMC and γd,max by curve fitting to the values obtained from the Standard Proctor Test. A surface fitting was also used to model the Ohio compaction curves, which represent a very large variation of compacted soil types.
The results showed very good correlation between the values obtained from some publ
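As an illustration of the curve-fitting step, the sketch below fits one common peak function (a Gaussian) to hypothetical Standard Proctor data and reads off the OMC and γd,max from the fitted peak. The three specific peak functions used in the paper are not reproduced here, and the data points are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_peak(w, gd_max, omc, c):
    # peak value gd_max is attained exactly at w = omc
    return gd_max * np.exp(-((w - omc) ** 2) / (2.0 * c ** 2))

# hypothetical Proctor points: moisture content (%) vs dry unit weight (kN/m^3)
w = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
gd = np.array([16.2, 17.1, 17.8, 17.9, 17.3, 16.4])

popt, _ = curve_fit(gaussian_peak, w, gd,
                    p0=[gd.max(), w[np.argmax(gd)], 3.0])
gd_max_fit, omc_fit, _ = popt
```

The fitted `omc_fit` and `gd_max_fit` replace the manual, eye-balled reading of the peak from a hand-drawn compaction curve.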
Most available methods for deriving synthetic unit hydrographs (SUH) involve manual, subjective fitting of a hydrograph through a few data points. The use of probability distributions for the derivation of synthetic hydrographs has received much attention because of their similarity to unit hydrograph properties. In this paper, the use of two flexible probability distributions is presented. For each distribution, the unknown parameters were derived in terms of the time to peak (tp) and the peak discharge (Qp). A simple Matlab program was prepared to calculate these parameters, and their validity was checked by comparison with field data. Application to field data shows that the gamma and lognormal distributions fit well.
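A sketch of the idea in Python rather than Matlab: one common gamma-type SUH parameterization can be written directly in terms of tp and Qp, so that the hydrograph peaks at exactly (tp, Qp). The shape exponent `n` is an assumed free parameter here; the paper derives the distribution parameters from the fit itself.

```python
import numpy as np

def gamma_suh(t, tp, qp, n=3.0):
    """Gamma-type SUH: q(t) = Qp * [(t/tp) * exp(1 - t/tp)]**n.

    The bracketed factor equals 1 at t = tp and is < 1 elsewhere,
    so the curve attains its peak Qp exactly at the time to peak.
    """
    t = np.asarray(t, float)
    return qp * ((t / tp) * np.exp(1.0 - t / tp)) ** n

# example ordinates for tp = 6 h, Qp = 120 m^3/s
t = np.linspace(0.01, 30.0, 300)
q = gamma_suh(t, tp=6.0, qp=120.0)
```

Increasing `n` sharpens the hydrograph around the peak, which is the kind of flexibility that makes distribution-based SUHs attractive compared with fixed-shape templates.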
In this paper, we estimate the survival function for lung cancer patients using different nonparametric estimation methods, based on a sample of complete real data describing the survival duration of patients suffering from lung cancer, from diagnosis of the disease or admission to hospital, over a period of two years (from 2012 to the end of 2013). Comparisons between the mentioned estimation methods have been performed using the mean squared error (MSE) as a statistical indicator, concluding that the shrinkage method gives the best estimate of the survival function for lung cancer.
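A minimal sketch of two of the ingredients above: the plain empirical (nonparametric) survival estimate for complete data, and a generic shrinkage form S_k = k·S_emp + (1−k)·S_0 that pulls the empirical curve toward a target. The exponential target and the weight `k` are illustrative assumptions, not the paper's actual shrinkage estimator.

```python
import numpy as np

def empirical_survival(times, t):
    """Fraction of observed survival times strictly greater than t."""
    times = np.sort(np.asarray(times, float))
    return 1.0 - np.searchsorted(times, t, side="right") / times.size

def shrinkage_survival(times, t, k=0.7, mean0=12.0):
    """Shrink the empirical curve toward an assumed exponential target."""
    s_emp = empirical_survival(times, t)
    s_0 = np.exp(-np.asarray(t, float) / mean0)
    return k * s_emp + (1.0 - k) * s_0
```

Comparing such estimators by MSE against the data, as the paper does, decides how much weight the empirical curve should retain.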
Big data analysis is essential for modern applications in areas such as healthcare, assistive technology, intelligent transportation, and environment and climate monitoring. Traditional algorithms in data mining and machine learning do not scale well with data size. Mining and learning from big data require time- and memory-efficient techniques, albeit at the cost of a possible loss in accuracy. We have developed a data aggregation structure to summarize data with a large number of instances and data generated from multiple data sources. Data are aggregated at multiple resolutions, and the resolution provides a trade-off between efficiency and accuracy. The structure is built once, updated incrementally, and serves as a common data input for multiple mining an
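The multi-resolution idea can be sketched as follows, under generic assumptions (the paper's exact structure is not reproduced): each incoming value updates (count, sum) buckets at several widths, so a query can pick a coarser level for speed at the cost of bucket-granularity error, and updates are incremental.

```python
from collections import defaultdict

class MultiResolutionAggregate:
    """Summarize a stream of values at several bucket widths."""

    def __init__(self, widths=(1.0, 10.0, 100.0)):
        self.widths = widths
        # one dict of [count, sum] buckets per resolution level
        self.levels = [defaultdict(lambda: [0, 0.0]) for _ in widths]

    def add(self, x):
        """Incremental update: O(number of levels) per instance."""
        for width, level in zip(self.widths, self.levels):
            bucket = level[int(x // width)]
            bucket[0] += 1
            bucket[1] += x

    def mean(self, level=0):
        cells = self.levels[level].values()
        n = sum(c[0] for c in cells)
        return sum(c[1] for c in cells) / n if n else float("nan")

    def count_below(self, t, level):
        """Approximate count of x < t, using only whole buckets."""
        width = self.widths[level]
        return sum(c[0] for b, c in self.levels[level].items()
                   if (b + 1) * width <= t)
```

Exact aggregates (like the mean) survive coarsening, while range queries such as `count_below` lose precision at coarser levels, which is exactly the efficiency/accuracy trade-off described above.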
Transforming the common normal distribution through the generated Kummer Beta model to the Kummer Beta Generalized Normal Distribution (KBGND) has been achieved. The distribution parameters and hazard function were then estimated using the maximum likelihood (MLE) method, and these estimates were improved by employing a genetic algorithm. Simulation is used, assuming a number of models and different sample sizes. The main finding was that the common maximum likelihood (MLE) method is the best in estimating the parameters of the KBGND, compared to the genetic algorithm, according to the Mean Squared Error (MSE) and Integrated Mean Squared Error (IMSE) criteria in estimating the hazard function. While the pr
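The "refine MLE with a genetic algorithm" step can be sketched generically. The toy below maximizes a plain normal log-likelihood in (μ, σ) with a simple GA (selection, averaging crossover, Gaussian mutation); the KBGND density itself, and the paper's GA settings, are omitted and everything here is an illustrative assumption.

```python
import numpy as np

def neg_loglik(theta, x):
    """Negative normal log-likelihood (constants dropped)."""
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    return 0.5 * np.sum(((x - mu) / sigma) ** 2) + x.size * np.log(sigma)

def genetic_mle(x, pop=60, gens=80, seed=0):
    rng = np.random.default_rng(seed)
    # random initial population of (mu, sigma) candidates
    P = np.column_stack([rng.uniform(-5, 5, pop),
                         rng.uniform(0.1, 5, pop)])
    for _ in range(gens):
        fit = np.array([neg_loglik(p, x) for p in P])
        elite = P[np.argsort(fit)[: pop // 2]]            # selection
        m = pop - elite.shape[0]
        a = rng.integers(0, elite.shape[0], m)
        b = rng.integers(0, elite.shape[0], m)
        children = 0.5 * (elite[a] + elite[b])            # crossover
        children += rng.normal(0.0, 0.05, (m, 2))         # mutation
        P = np.vstack([elite, children])
    fit = np.array([neg_loglik(p, x) for p in P])
    return P[np.argmin(fit)]
```

For a likelihood as awkward as the KBGND's, this kind of population search avoids the bad local starts that derivative-based MLE can suffer from.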
... Show MoreIn this research , we study the inverse Gompertz distribution (IG) and estimate the survival function of the distribution , and the survival function was evaluated using three methods (the Maximum likelihood, least squares, and percentiles estimators) and choosing the best method estimation ,as it was found that the best method for estimating the survival function is the squares-least method because it has the lowest IMSE and for all sample sizes
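A sketch of the least-squares route: fit the IG cumulative distribution function to the empirical CDF of the sample, then report S(t) = 1 − F(t). The form F(t) = exp((a/b)(1 − e^{b/t})) used below is one common parameterization of the inverse Gompertz distribution and is assumed here; `a` and `b` are its two parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def ig_cdf(t, a, b):
    # assumed inverse Gompertz CDF; tends to 0 as t -> 0+ and to 1 as t -> inf
    return np.exp((a / b) * (1.0 - np.exp(b / np.asarray(t, float))))

def fit_ig_survival(sample):
    """Least-squares fit of the IG CDF to the empirical CDF."""
    x = np.sort(np.asarray(sample, float))
    n = x.size
    F_emp = (np.arange(1, n + 1) - 0.5) / n        # plotting positions
    (a, b), _ = curve_fit(ig_cdf, x, F_emp, p0=[1.0, 1.0],
                          bounds=([1e-6, 1e-6], [np.inf, np.inf]))
    return lambda t: 1.0 - ig_cdf(t, a, b)
```

Minimizing the squared distance to the empirical CDF is what makes this a least-squares estimator, as opposed to maximizing the likelihood or matching percentiles.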