Bayesian models are now used routinely across many scientific fields. This research presents a new Bayesian model for parameter estimation and forecasting based on the Gibbs sampler algorithm. Posterior distributions are derived using the inverse gamma distribution and the multivariate normal distribution as prior distributions. The new method is used to investigate and summarize the posterior distribution, and the theory and derivation of the posterior are explained in detail in this paper. The proposed approach is applied to three simulated datasets of sample sizes 100, 300, and 500, and the procedure is then extended to a real dataset, the rock intensity dataset, collected from the UCI Machine Learning Repository. The findings are discussed and summarized at the end. All calculations for this research were carried out in R software (version 4.2.2). © 2024 Author(s).
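A minimal sketch of the kind of sampler the abstract describes, assuming a normal linear regression with a multivariate normal prior on the coefficients and an inverse gamma prior on the error variance; the hyperparameters b0, B0, a0 and d0 below are illustrative defaults, not values taken from the paper.

```r
## Gibbs sampler sketch for y = X b + e with conjugate priors
## (illustrative hyperparameters; not the authors' code).
gibbs_lm <- function(y, X, n_iter = 5000, b0 = rep(0, ncol(X)),
                     B0 = diag(100, ncol(X)), a0 = 0.01, d0 = 0.01) {
  p <- ncol(X); n <- length(y)
  B0_inv <- solve(B0)
  beta <- rep(0, p); sigma2 <- 1
  draws <- matrix(NA, n_iter, p + 1)
  for (it in seq_len(n_iter)) {
    # full conditional of beta: multivariate normal
    V <- solve(B0_inv + crossprod(X) / sigma2)
    m <- V %*% (B0_inv %*% b0 + crossprod(X, y) / sigma2)
    beta <- as.vector(m + t(chol(V)) %*% rnorm(p))
    # full conditional of sigma^2: inverse gamma
    res <- y - X %*% beta
    sigma2 <- 1 / rgamma(1, a0 + n / 2, d0 + sum(res^2) / 2)
    draws[it, ] <- c(beta, sigma2)
  }
  draws
}
# usage: draws <- gibbs_lm(y, cbind(1, x1, x2))  # columns: coefficients, sigma^2
```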
Variable selection is an essential and necessary task in statistical modeling. Several studies have tried to develop and standardize the variable selection process, but it is difficult to do so. The first question a researcher must ask is which variables are most significant for describing a given dataset's response. In this paper, a new method for variable selection using Gibbs sampler techniques is developed. First, the model is defined, and the posterior distributions of all parameters are derived. The new variable selection method is tested using four simulation datasets, and the approach is compared with existing techniques such as Ordinary Least Squares (OLS) and the Least Absolute Shrinkage and Selection Operator (LASSO).
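For context, a sketch of the two baseline fits named in the comparison (not the proposed Gibbs-sampler selector itself); `train` and its response `y` are placeholder names, and the LASSO penalty is chosen by cross-validation.

```r
## Baseline competitors: OLS via lm() and the LASSO via glmnet.
library(glmnet)
fit_ols   <- lm(y ~ ., data = train)                      # ordinary least squares
x_mat     <- model.matrix(y ~ ., data = train)[, -1]      # predictor matrix without intercept
fit_lasso <- cv.glmnet(x_mat, train$y, alpha = 1)         # LASSO (alpha = 1), CV-tuned lambda
coef(fit_lasso, s = "lambda.min")                         # variables retained by the LASSO
```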
This paper deals with estimating values at unmeasured spatial locations when the spatial sample is small, a situation that is unfavourable for estimation: the larger the dataset, the better the estimates at unmeasured points and the smaller the estimation variance. The idea of this paper is therefore to take advantage of secondary (auxiliary) data that have a strong correlation with the primary data in order to estimate individual unmeasured points and to measure the estimation variance. The Co-kriging technique is used to build the spatial predictions, and the idea is then applied to real data.
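An illustrative sketch of ordinary co-kriging with the gstat package, not the paper's code; the data frame `dat` (with coordinates x, y, a primary variable `primary` and an auxiliary variable `aux`), the prediction grid `grid`, and the spherical variogram model are all placeholder assumptions.

```r
## Co-kriging sketch with gstat: joint variograms, then prediction on a grid.
library(sp)
library(gstat)
coordinates(dat)  <- ~ x + y
coordinates(grid) <- ~ x + y
g <- gstat(NULL, id = "primary", formula = primary ~ 1, data = dat)
g <- gstat(g,    id = "aux",     formula = aux ~ 1,     data = dat)
v     <- variogram(g)                              # direct and cross variograms
g_fit <- fit.lmc(v, g, vgm(1, "Sph", 800, 1))      # linear model of coregionalisation
ck    <- predict(g_fit, newdata = grid)            # co-kriging predictions and variances
```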
Inferential methods for statistical distributions have attracted a high level of interest in recent years. In real life, however, data can follow more than one distribution, and mixture models must then be fitted. One such model is the finite mixture of Rayleigh distributions, which is widely used to model lifetime data in fields such as medicine, agriculture and engineering. In this paper, we propose a new Bayesian framework by assuming conjugate priors for the square of the component parameters. This prior distribution is used in the classical Bayesian, Metropolis-Hastings (MH) and Gibbs sampler methods, and the performance of these techniques is assessed using data generated from two- and three-component mixtures.
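A minimal sketch of the Gibbs variant for a two-component Rayleigh mixture, assuming inverse gamma priors on the squared scale parameters and a Beta(1, 1) prior on the mixing weight; this is an assumed setup for illustration, not the paper's exact algorithm or hyperparameters.

```r
## Gibbs sampler sketch for a two-component Rayleigh mixture.
rayleigh_dens <- function(x, s2) (x / s2) * exp(-x^2 / (2 * s2))

gibbs_rayleigh_mix <- function(x, n_iter = 3000, a0 = 2, b0 = 1) {
  n  <- length(x)
  s2 <- c(var(x) / 2, var(x) * 2); w <- 0.5       # crude starting values
  out <- matrix(NA, n_iter, 3, dimnames = list(NULL, c("s2_1", "s2_2", "w")))
  for (it in seq_len(n_iter)) {
    # latent component allocations
    p1 <- w * rayleigh_dens(x, s2[1])
    p2 <- (1 - w) * rayleigh_dens(x, s2[2])
    z  <- rbinom(n, 1, p1 / (p1 + p2))            # 1 = component 1
    # mixing weight | z ~ Beta
    w <- rbeta(1, 1 + sum(z), 1 + n - sum(z))
    # sigma^2_k | z, x ~ inverse gamma (conjugate for the Rayleigh scale squared)
    for (k in 1:2) {
      xk    <- x[z == (2 - k)]
      s2[k] <- 1 / rgamma(1, a0 + length(xk), b0 + sum(xk^2) / 2)
    }
    out[it, ] <- c(s2, w)
  }
  out
}
# usage on simulated data: x <- sqrt(-2 * rep(c(1, 4), each = 150) * log(runif(300)))
```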
The Bayesian approach to classification and regression tree models promises to exploit prior information about the explanatory variables, both jointly and at every stage of tree construction, and to provide posterior information at each node of the fitted classification tree. Although Bayesian estimates are generally accurate, the logistic model remains a strong competitor for binary responses thanks to its flexibility and mathematical representation. The research therefore processes the data with three methods: the logistic model, the classification and regression tree model, and the Bayesian regression tree model.
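A short sketch of two of the three competitors, assuming a placeholder data frame `dat` with a two-level factor response `y` and numeric predictors; the Bayesian tree competitor requires an additional package and is omitted here.

```r
## Logistic model and classification tree on the same binary-response data.
library(rpart)
fit_logit <- glm(y ~ ., data = dat, family = binomial)    # logistic model
fit_tree  <- rpart(y ~ ., data = dat, method = "class")   # classification tree
table(observed = dat$y, tree = predict(fit_tree, type = "class"))   # in-sample confusion table
```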
Non-stationary series are a persistent problem in stationarity analysis: as some theoretical work has shown, the properties of regression analysis are lost when non-stationary series are used, and the estimated slope reflects a spurious relationship. A non-stationary series can be made stationary by adding a time variable to the analysis to remove the general trend, by adding seasonal dummy variables to remove seasonal effects, by transforming the data to exponential or logarithmic form, or by differencing d times, in which case the series is said to be integrated of order d. The theoretical side of the research is presented in parts, the first of which covers the research methodology.
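A runnable sketch of the transformations just listed, using the built-in AirPassengers series as a stand-in for the paper's data and the augmented Dickey-Fuller test from the tseries package to check stationarity.

```r
## Log transform, first difference, and a stationarity test.
library(tseries)
y   <- AirPassengers          # built-in non-stationary, seasonal monthly series
ly  <- log(y)                 # logarithmic transformation to stabilise the variance
dly <- diff(ly)               # first difference (d = 1) to remove the general trend
adf.test(dly)                 # augmented Dickey-Fuller test of stationarity
# seasonal effects can be removed with a seasonal difference, e.g. diff(dly, lag = 12)
```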
In this paper we use frequentist and Bayesian approaches to the linear regression model to predict future observations of unemployment rates in Iraq. Parameters are estimated with the ordinary least squares method for the frequentist approach and with the Markov Chain Monte Carlo (MCMC) method for the Bayesian approach; calculations are done in R. The analysis shows that the Bayesian linear regression model performs better and can be used as an alternative to the frequentist approach. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), are used to compare the performance of the estimates. The results indicate that unemployment rates will continue to increase over the next two decades.
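A sketch of the two fits on a placeholder data frame `df` containing the unemployment rate and a time index; MCMCregress from MCMCpack is one standard MCMC sampler for the normal linear model, used here only for illustration since the paper does not state which implementation it relied on.

```r
## Frequentist vs Bayesian linear regression fits.
library(MCMCpack)
fit_freq  <- lm(unemployment ~ year, data = df)              # frequentist (OLS)
fit_bayes <- MCMCregress(unemployment ~ year, data = df,
                         burnin = 1000, mcmc = 10000)        # Bayesian via MCMC
sqrt(mean(residuals(fit_freq)^2))     # RMSE of the frequentist fit
summary(fit_bayes)                    # posterior means and credible intervals
```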
This paper presents the first-order, single-variable grey model GM(1,1), which is the basis of grey system theory. The research deals with the properties of the grey model and with a set of methods for estimating the parameters of GM(1,1): the least squares method (LS), the weighted least squares method (WLS), the total least squares method (TLS) and the gradient descent method (DS). These methods were compared using two criteria, the mean square error (MSE) and the mean absolute percentage error (MAPE); after a simulation comparison, the best method was applied to real data on the consumption rates of two fuel types, heavy fuel oil (HFO) and diesel oil (D.O), and several tests were applied.
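A sketch of the standard GM(1,1) construction estimated by ordinary least squares, the first of the four estimation methods compared above; `x0` is a placeholder vector of non-negative observations and `h` the number of steps to forecast.

```r
## GM(1,1) by least squares: AGO, background values, (a, b) estimation, forecast.
gm11_ls <- function(x0, h = 1) {
  n  <- length(x0)
  x1 <- cumsum(x0)                          # accumulated generating operation (AGO)
  z1 <- 0.5 * (x1[-1] + x1[-n])             # background values z1(k)
  B  <- cbind(-z1, 1)
  Y  <- x0[-1]
  ab <- solve(t(B) %*% B, t(B) %*% Y)       # least squares estimate of (a, b)
  a <- ab[1]; b <- ab[2]
  k <- 0:(n + h - 1)
  x1_hat <- (x0[1] - b / a) * exp(-a * k) + b / a   # time response function
  c(x1_hat[1], diff(x1_hat))                # inverse AGO: fitted values plus h forecasts
}
# usage: gm11_ls(x0, h = 2)  # last two values are the two-step-ahead forecasts
```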
Survival analysis describes the time until the occurrence of an event of interest, such as death or other events important for determining what will happen to the phenomenon under study. There may be more than one possible endpoint, in which case the setting is called competing risks. The purpose of this research is to apply the dynamic approach to the analysis of discrete survival time in order to estimate the effect of covariates over time, and to model the nonlinear relationship between the covariates and the discrete hazard function using the multinomial logistic model and the multivariate Cox model.
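A sketch of the discrete-time competing-risks setup via the usual person-period expansion and a multinomial logit, assuming a placeholder data frame `surv_dat` with one row per subject: a discrete survival time `time`, an event code `cause` (0 = censored, 1 or 2 = the two competing causes) and covariates x1, x2.

```r
## Person-period expansion and multinomial logit for cause-specific discrete hazards.
library(nnet)
pp <- surv_dat[rep(seq_len(nrow(surv_dat)), surv_dat$time), ]   # one row per period at risk
pp$period <- unlist(lapply(surv_dat$time, seq_len))             # discrete time index
pp$event  <- ifelse(pp$period < pp$time, 0, pp$cause)           # event only in the last period
fit <- multinom(factor(event) ~ factor(period) + x1 + x2, data = pp)
summary(fit)   # cause-specific discrete hazards on the multinomial logit scale
```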
This research studies the linear regression model when the random errors are autocorrelated and normally distributed. Linear regression analysis describes the relationship between variables, and through this relationship the value of one variable can be predicted from the values of the others. Four estimation methods (the least squares method, the unweighted average method, the Theil method and the Laplace method) were compared using the mean square error (MSE) and simulation, with four sample sizes (15, 30, 60, 100). The results showed that the least squares method is best. The four methods were then applied to data on buckwheat production and cultivated area for the provinces of Iraq for the years 2010, 2011 and 2012.
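A simulation sketch in the spirit of this comparison: regression data with AR(1) errors are generated, the slope is estimated by ordinary least squares, and its MSE is recorded over many replications; the true coefficients, AR parameter and sample size below are arbitrary illustrative choices, and the alternative estimators would slot into the same loop.

```r
## Monte Carlo MSE of the OLS slope under AR(1)-autocorrelated errors.
set.seed(1)
reps <- 1000; n <- 30; beta <- c(2, 0.5); rho <- 0.6
mse_ols <- mean(replicate(reps, {
  x <- rnorm(n)
  e <- as.numeric(arima.sim(list(ar = rho), n = n))   # autocorrelated errors
  y <- beta[1] + beta[2] * x + e
  (coef(lm(y ~ x))[2] - beta[2])^2                    # squared error of the slope
}))
mse_ols
```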