In this research, we study the inverse Gompertz (IG) distribution and estimate its survival function. The survival function was estimated using three methods (the maximum likelihood, least squares, and percentile estimators), and the best estimation method was selected. The least-squares method was found to be the best for estimating the survival function because it has the lowest IMSE for all sample sizes.
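The comparison described in the abstract above can be illustrated with a minimal Monte Carlo sketch: simulate inverse Gompertz samples, fit the parameters by maximum likelihood and by least squares on the empirical CDF, and compare the fitted survival curves by integrated mean squared error (IMSE). The parameterization (eta, b), grid, and simulation settings here are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: compare ML and least-squares estimators of the
# inverse Gompertz survival function by simulated IMSE.
# The CDF used is F(x) = exp(-(eta/b)(e^{b/x} - 1)), a common IG form.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def ig_cdf(x, eta, b):
    return np.exp(-(eta / b) * (np.exp(b / x) - 1.0))

def ig_sample(n, eta, b):
    # Inverse-transform sampling from the IG distribution.
    u = rng.uniform(size=n)
    return b / np.log1p(-(b / eta) * np.log(u))

def neg_loglik(theta, x):
    eta, b = theta
    if eta <= 0 or b <= 0:
        return np.inf
    s = -(eta / b) * (np.exp(b / x) - 1.0)
    return -np.sum(np.log(eta) - 2 * np.log(x) + b / x + s)

def lse_loss(theta, x):
    # Least squares: fit the model CDF to the empirical plotting positions.
    eta, b = theta
    if eta <= 0 or b <= 0:
        return np.inf
    xs = np.sort(x)
    p = np.arange(1, len(x) + 1) / (len(x) + 1)
    return np.sum((ig_cdf(xs, eta, b) - p) ** 2)

def imse(loss, true=(1.0, 0.5), n=50, reps=200):
    eta0, b0 = true
    grid = np.linspace(0.2, 5.0, 100)
    dx = grid[1] - grid[0]
    s_true = 1.0 - ig_cdf(grid, eta0, b0)
    errs = []
    for _ in range(reps):
        x = ig_sample(n, eta0, b0)
        fit = minimize(loss, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
        s_hat = 1.0 - ig_cdf(grid, *fit.x)
        errs.append(np.sum((s_hat - s_true) ** 2) * dx)  # crude integral
    return float(np.mean(errs))

print("IMSE(MLE):", imse(neg_loglik))
print("IMSE(LSE):", imse(lse_loss))
```

The estimator with the smaller averaged IMSE is preferred, which is the criterion the abstract uses to select the least-squares method.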
Interest in the topic of prediction has increased in recent years, and modern methods such as artificial neural network (ANN) models have emerged. These methods are able to learn and adapt themselves to any model and do not require assumptions about the nature of the time series. In contrast, the classical forecasting methods currently in use, such as Box-Jenkins, can make series identification and modeling difficult because they impose strict conditions.
In recent years, predicting heart disease has become one of the most demanding tasks in medicine. In modern times, one person dies of heart disease every minute. Within the field of healthcare, data science is critical for analyzing large amounts of data. Because predicting heart disease is such a difficult task, it is necessary to automate the process in order to prevent the dangers connected with it and to assist health professionals in diagnosing heart disease accurately and rapidly. In this article, an efficient machine learning-based diagnosis system has been developed for the diagnosis of heart disease. The system is designed using machine learning classifiers such as Support Vector Machine (SVM), Naïve Bayes (NB), and K-Nearest Neighbors (KNN).
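The three classifiers named in the abstract above can be sketched with scikit-learn. This is not the paper's pipeline: the synthetic dataset (a stand-in for clinical features), hyperparameters, and train/test split are all illustrative assumptions.

```python
# Illustrative sketch: training SVM, Naive Bayes, and KNN classifiers
# on synthetic data standing in for a heart-disease dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Synthetic binary outcome with 13 features (a common clinical-table width).
X, y = make_classification(n_samples=500, n_features=13, n_informative=8,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Naive Bayes": GaussianNB(),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = model.score(X_te, y_te)
    print(f"{name}: test accuracy = {scores[name]:.3f}")
```

Scaling is applied inside the SVM and KNN pipelines because both are distance/margin based; Gaussian Naive Bayes is scale-invariant per feature, so it is left unscaled.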
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we might come across in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters. Observation membership was also inferred and assessed for each method. The results showed that the flexible mixture model outperformed the others.
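The core idea behind the mixture regression methods compared above can be sketched with a standard two-component EM fit: the E-step computes observation memberships (responsibilities) and the M-step refits each regression by weighted least squares. The data-generating lines, component count, and initialization are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: two-component Gaussian mixture of linear regressions via EM.
import numpy as np

rng = np.random.default_rng(1)

# Simulate data from two regression lines: y = 1 + 2x and y = 5 - x.
n = 400
x = rng.uniform(0, 4, n)
z = rng.integers(0, 2, n)                      # true (hidden) memberships
y = np.where(z == 0, 1 + 2 * x, 5 - x) + rng.normal(0, 0.4, n)
X = np.column_stack([np.ones(n), x])           # design matrix with intercept

beta = np.array([[0.0, 1.0], [4.0, 0.0]])      # initial coefficients
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

def normal_pdf(r, s):
    return np.exp(-0.5 * (r / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: posterior membership probabilities (responsibilities).
    dens = np.stack([pi[k] * normal_pdf(y - X @ beta[k], sigma[k])
                     for k in range(2)], axis=1)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares and variance update per component.
    for k in range(2):
        w = resp[:, k]
        Xw = X * w[:, None]
        beta[k] = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        r = y - X @ beta[k]
        sigma[k] = np.sqrt((w * r ** 2).sum() / w.sum())
    pi = resp.mean(axis=0)

hard = resp.argmax(axis=1)                     # inferred hard memberships
print("coefficients:", np.round(beta, 2))
print("mixing weights:", np.round(pi, 2))
```

The `resp` matrix is exactly the "observation membership" inference the abstract mentions: each row gives the probability that an observation belongs to each component.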
A non-polynomial spline (NPS) is an approximation method that relies on trigonometric and polynomial parts; the infinitely differentiable trigonometric part of the NPS compensates for the loss of smoothness inherited from the polynomial part. In this paper, we propose linear and quadratic non-polynomial spline types to solve fuzzy Volterra integral equations (FVIE) of the second kind with a weakly singular kernel (FVIEWSK) and an Abel-type kernel. The linear-type algorithm produces four parameters to form a linear spline, while the quadratic-type algorithm produces five parameters to create a quadratic spline, which approximates the exact solution more closely. These algorithms handle the kernel singularities with a simple technique.
Degenerate parabolic partial differential equations (PDEs), whose leading coefficient vanishes or is unbounded, are non-uniformly parabolic, and new theory needs to be developed for the practical applications of these rather understudied mathematical models arising in porous media, population dynamics, financial mathematics, etc. With this new challenge in mind, this paper investigates newly formulated direct and inverse problems associated with non-uniformly parabolic PDEs whose leading space- and time-dependent coefficient is allowed to vanish on a non-empty kernel set of measure zero. In the context of inverse analysis, we consider a linear but ill-posed problem.
In this paper, we present a multiple-bit error correction coding scheme based on an extended Hamming product code combined with Type-II HARQ, using shared resources, for on-chip interconnects. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption, respectively, with only a small increase in decoder delay compared to the existing three-stage iterative decoding scheme for multiple-bit error correction. The proposed code also achieves an excellent improvement in residual flit error rate and saves up to 58% of total power consumption compared to the other error correction schemes.
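The building block of an extended Hamming product code is the SECDED (single-error-correcting, double-error-detecting) extended Hamming code; a product code applies it along rows and columns. The sketch below shows only that building block, as Hamming(8,4); the product-code and HARQ layers of the paper are not reproduced here.

```python
# Illustrative sketch: extended Hamming(8,4) SECDED encode/decode.
def hamming84_encode(d):
    """d: list of 4 data bits -> 8-bit codeword (3 parity + overall parity)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # covers positions 4,5,6,7
    code = [p1, p2, d1, p3, d2, d3, d4]        # positions 1..7
    overall = 0
    for b in code:
        overall ^= b                           # extension bit: overall parity
    return code + [overall]

def hamming84_decode(r):
    """Return (data_bits, status): 'ok', 'corrected', or 'double_error'."""
    c = list(r[:7])
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3            # points at the flipped position
    parity = 0
    for b in r:
        parity ^= b
    if syndrome == 0 and parity == 0:
        status = "ok"
    elif parity == 1:                          # odd-weight error: single, fixable
        if syndrome:
            c[syndrome - 1] ^= 1
        status = "corrected"
    else:                                      # even weight, nonzero syndrome
        status = "double_error"                # detected but uncorrectable
    return [c[2], c[4], c[5], c[6]], status

word = hamming84_encode([1, 0, 1, 1])
rx = word.copy(); rx[5] ^= 1                   # inject a single-bit error
print(hamming84_decode(rx))                    # -> ([1, 0, 1, 1], 'corrected')
```

In a product-code arrangement, each flit is laid out as a matrix and this code is applied per row and per column, which is what allows multiple bit errors to be corrected across the flit.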
An innovation of our research is that it is the first to develop and reform the agricultural sector and to promote the production and productivity of this multi-source sector, of which beekeeping management is one source. The mobile (migratory) beekeeping method was applied to twenty hives in the Iraqi Mussayab project, where a variety of crops and trees grow.
The experiment proved successful and raised the honey production of a single hive to 49 kg over the previous year, surpassing the average production per hive in Babylon Province, which amounts to 13.945 kg.
Nanosilica was extracted from rice husk, which was locally collected from an Iraqi mill in the Al-Mishikhab district of Najaf Governorate, Iraq. The precipitation method was used to prepare nanosilica powder from rice husk ash, after treating the husk thermally at 700°C, then dissolving the silica in an alkaline solution to obtain a sodium silicate solution. Two samples of the final solution were collected to study the effect of filtration on sample purity using X-ray fluorescence spectrometry (XRF). The results show that the filtered sample had higher purity than the non-filtered sample. The structural analysis, investigated by X-ray diffraction (XRD), found that the nanosilica powder has an amorphous structure.
In this paper, we derived an estimator of the reliability function for the two-parameter Laplace distribution using the Bayes method with a squared-error loss function, Jeffreys' prior, and the conditional probability of the random variable given the observations. The main objective of this study is to assess the efficiency of the derived Bayesian estimator compared with the maximum likelihood and moment estimators of this function, using Monte Carlo simulation under different Laplace distribution parameters and sample sizes. The results show that the Bayes estimator was more efficient than the maximum likelihood and moment estimators for all sample sizes.
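The Monte Carlo comparison described above can be sketched as follows. Only the maximum likelihood and moment estimators of the reliability function R(t) = P(X > t) are shown, since the paper's Bayes estimator depends on its specific prior derivation; the true parameters, evaluation point, and replication count are illustrative assumptions.

```python
# Hedged sketch: simulated MSE of ML and moment estimators of the
# Laplace reliability (survival) function R(t) = P(X > t).
import numpy as np

rng = np.random.default_rng(7)
mu, b = 0.0, 1.0                               # true Laplace parameters

def laplace_R(t, mu, b):
    """Survival function of Laplace(mu, b)."""
    z = (t - mu) / b
    return np.where(t < mu, 1 - 0.5 * np.exp(z), 0.5 * np.exp(-z))

def mle(x):
    m = np.median(x)                           # ML estimate of location
    return m, np.mean(np.abs(x - m))           # ML estimate of scale

def moments(x):
    m = np.mean(x)                             # moment estimate of location
    return m, np.sqrt(np.var(x) / 2.0)         # since Var(X) = 2 b^2

t0, reps, n = 1.5, 2000, 30
true_R = laplace_R(t0, mu, b)
err = {"MLE": [], "Moments": []}
for _ in range(reps):
    x = rng.laplace(mu, b, size=n)
    for name, est in (("MLE", mle), ("Moments", moments)):
        m, s = est(x)
        err[name].append((laplace_R(t0, m, s) - true_R) ** 2)

for name, e in err.items():
    print(f"{name}: MSE of R({t0}) = {np.mean(e):.5f}")
```

A Bayes estimator would slot into the same loop as a third entry, and the paper's conclusion corresponds to it producing the smallest averaged squared error across parameter settings and sample sizes.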