A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we encounter in real life. Moreover, it can serve as a powerful tool for classifying observations based on similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their accuracy in estimating the component parameters; observation membership was also inferred and assessed for each method. The results showed that the flexible mixture model outperformed the others in most simulation scenarios according to the integrated mean square error and the integrated classification error.
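A minimal sketch of the EM algorithm for a two-component mixture of linear regressions, the kind of model the abstract compares. All data, starting values, and component lines below are hypothetical, chosen only to illustrate parameter estimation and membership classification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated two-component mixture of regressions (hypothetical data):
# component 0: y = 1 + 2x + noise, component 1: y = 4 - x + noise
n = 200
x = rng.uniform(0, 5, n)
z = rng.integers(0, 2, n)                         # true memberships
y = np.where(z == 0, 1 + 2 * x, 4 - x) + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), x])              # design matrix with intercept
K = 2

# Simple deterministic starting values (one line per component)
beta = np.array([[0.0, 1.0], [3.0, -0.5]])
sigma = np.ones(K)
pi_k = np.full(K, 1 / K)

for _ in range(200):
    # E-step: posterior membership probabilities (responsibilities)
    dens = np.stack([
        pi_k[k] * np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2)
        / (sigma[k] * np.sqrt(2 * np.pi))
        for k in range(K)
    ], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)

    # M-step: weighted least squares per component
    for k in range(K):
        w = r[:, k]
        beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        sigma[k] = max(np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum()), 1e-3)
    pi_k = r.mean(axis=0)

# Hard classification: assign each point to its most probable component
labels = r.argmax(axis=1)
print(np.round(beta, 2))   # should recover lines near (1, 2) and (4, -1)
```

Comparing `labels` against the true memberships `z` is the basis of classification-error criteria such as the integrated classification error mentioned above.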
The theory of probabilistic programming may be conceived in several different ways. As a method of programming, it analyses the implications of probabilistic variations in the parameter space of a linear or nonlinear programming model. The generating mechanism of such probabilistic variations in economic models may be incomplete information about changes in demand, production, and technology; specification errors in the econometric relations presumed for different economic agents; uncertainty of various sorts; and the consequences of imperfect aggregation or disaggregation of economic variables. In this research we discuss the probabilistic programming problem when the coefficient b_i is a random variable.
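One standard way to handle a random right-hand side b_i is the chance-constrained reformulation: requiring P(a·x ≤ b_i) ≥ α reduces, for a known distribution of b_i, to the deterministic constraint a·x ≤ F⁻¹(1 − α). A minimal sketch, assuming a single normally distributed b and hypothetical objective and constraint coefficients:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

# Chance constraint P(2 x1 + 3 x2 <= b) >= alpha, with b ~ N(10, 1.5^2),
# is equivalent to 2 x1 + 3 x2 <= F_b^{-1}(1 - alpha), the (1-alpha)-quantile.
alpha = 0.95
mu, sd = 10.0, 1.5
b_det = norm.ppf(1 - alpha, loc=mu, scale=sd)   # deterministic equivalent RHS

# Maximize 3 x1 + 5 x2 (linprog minimizes, so negate the objective)
res = linprog(c=[-3, -5], A_ub=[[2, 3]], b_ub=[b_det],
              bounds=[(0, None)] * 2)
print(round(-res.fun, 3))   # optimal value under the 95% chance constraint
```

Because b_det < mu, the chance constraint is tighter than simply plugging in the mean of b, which is the price paid for 95% feasibility.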
This paper deals with modelling and control of an Euler-Bernoulli smart beam interacting with a fluid medium. Several distributed piezo-patches (actuators and/or sensors) are bonded to the surface of the target beam. To model the vibrating beam properly, the effects of the piezo-patches and the hydrodynamic loads must be taken into account carefully. The partial differential equation (PDE) for the oscillating beam is derived considering the piezo-actuators as input controls. Fluid forces are decomposed into two components: 1) hydrodynamic forces due to the beam oscillations, and 2) external (disturbance) hydrodynamic loads independent of the beam motion. Then the PDE is discretized usi…
A linear engine generator with a compact double-acting free-piston mechanism allows full integration of the combustion engine and the generator, providing an alternative chemical-to-electrical energy converter with a higher volumetric power density for the electrification of automobiles, trains, and ships. This paper aims to analyse the performance of the integrated engine with alternative permanent-magnet tubular linear electrical machine topologies using a coupled dynamic model in Siemens Simcenter software. Two alternative generator configurations are compared, namely long-translator/short-stator and short-translator/long-stator linear machines. The dynamic models of the linear engine and linear generator, validated…
The transfer function model is one of the basic concepts in time series analysis. This model is used in the case of multivariate time series; its design depends on the data available in the series and on other information in the series, so the representation of the transfer function model depends on the representation of the data. In this research, the transfer function has been estimated using nonparametric methods, represented by two approaches: local linear regression and the cubic smoothing spline. The semiparametric approach is represented by the semiparametric single-index model, with four proposals. The goal of this research is to compare the capabilities of the above-mentioned m…
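The two nonparametric smoothers named above can be sketched briefly. The data, bandwidth, and smoothing parameter below are hypothetical; the smoothing-spline call uses SciPy's `UnivariateSpline`, with `s` set by the common rule of thumb s ≈ n·σ²:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 100))
y = np.sin(x) + rng.normal(0, 0.2, 100)

def local_linear(x0, x, y, h=0.5):
    """Local linear regression at x0: weighted least squares line,
    Gaussian kernel weights with bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]            # intercept = fitted value at x0

grid = np.linspace(0.5, 5.5, 50)
fit_ll = np.array([local_linear(g, x, y) for g in grid])

# Cubic smoothing spline; s controls the bias-variance trade-off
spline = UnivariateSpline(x, y, k=3, s=len(x) * 0.2 ** 2)
fit_sp = spline(grid)
```

In practice the bandwidth `h` and the smoothing parameter `s` would be chosen by cross-validation rather than fixed as here.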
In this research we assumed that the number of radiation particles emitted by time t follows a Poisson distribution with parameter θt, where θ > 0 is the intensity of radiation. We conclude that the time of the first emission is exponentially distributed with parameter θ, while the time of the k-th emission (k = 2, 3, 4, …) is gamma distributed with parameters (k, θ). We used real data to show that the Bayes estimator θ* for θ is more efficient than θ̂, the maximum likelihood estimator for θ, using the derived variances of both estimators as a statistical indicator of efficiency.
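The distributional facts above (exponential first-emission time, gamma k-th emission time) can be checked by simulation; the values of θ, k, and n below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.5          # hypothetical radiation intensity (rate)
k, n = 3, 100_000    # k-th emission, number of simulated processes

# Interarrival times of a Poisson process with rate theta are Exp(theta);
# the time of the k-th emission is the sum of k of them, i.e. Gamma(k, theta).
inter = rng.exponential(1 / theta, size=(n, k))
t_first = inter[:, 0]          # time of first emission
t_kth = inter.sum(axis=1)      # time of k-th emission

print(t_first.mean())   # ~ 1/theta      (exponential mean)
print(t_kth.mean())     # ~ k/theta      (gamma mean)
print(t_kth.var())      # ~ k/theta**2   (gamma variance)
```

The gamma variance k/θ² is the quantity that enters the efficiency comparison between the Bayes and maximum likelihood estimators of θ.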
This research presents reliability, defined as the probability that any part of the system accomplishes its function within a specified time and under the same circumstances. On the theoretical side, the reliability, the reliability function, and the cumulative failure function are studied under the one-parameter Rayleigh distribution. This research aims to uncover the many factors missed in reliability evaluation that cause constant interruptions of the machines, in addition to problems with the data. The problem of the research is that there are many methods for estimating the reliability function, but most of these methods lack suitable qualifications for the data, such…
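For the one-parameter Rayleigh distribution the three functions named above have closed forms; a short sketch, with a hypothetical scale parameter and a simulated sample for the maximum-likelihood estimate:

```python
import numpy as np

sigma = 2.0   # hypothetical scale parameter of the one-parameter Rayleigh

def pdf(t, s=sigma):
    """Rayleigh pdf: f(t) = (t / s^2) exp(-t^2 / (2 s^2)), t >= 0."""
    return (t / s**2) * np.exp(-t**2 / (2 * s**2))

def cdf(t, s=sigma):
    """Cumulative failure function: F(t) = 1 - exp(-t^2 / (2 s^2))."""
    return 1 - np.exp(-t**2 / (2 * s**2))

def reliability(t, s=sigma):
    """Reliability function: R(t) = 1 - F(t) = exp(-t^2 / (2 s^2))."""
    return np.exp(-t**2 / (2 * s**2))

print(reliability(2.0))                 # survival probability past t = 2
print(cdf(2.0) + reliability(2.0))      # = 1 by construction

# Maximum-likelihood estimate of the scale from a simulated failure-time
# sample: sigma_hat = sqrt( sum(t_i^2) / (2n) )
sample = np.random.default_rng(3).rayleigh(sigma, 1000)
sigma_hat = np.sqrt(np.mean(sample**2) / 2)
print(round(sigma_hat, 2))              # close to the true sigma = 2
```

Plugging `sigma_hat` into `reliability` gives the estimated reliability function for the observed failure data.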
This paper deals with defining the Burr-XII distribution and how to obtain its pdf and CDF, since this distribution is one of the failure distributions; it is a compound distribution of two failure models, the Gamma and the Weibull. Some equipment may have many important parts whose representing probability distributions may be of different types, so it was found that Burr, with its different compound formulas, is the best model to be studied, and its parameters were estimated to compute the mean time to failure. Here Burr-XII is considered rather than other models because it is used to model a wide variety of phenomena including crop prices, household income, option market price distributions, risk, and travel time. It has two shape parame…
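The Burr-XII pdf and CDF mentioned above have simple closed forms in its two shape parameters; a short sketch with hypothetical parameter values and a numerical sanity check:

```python
import numpy as np

c, k = 2.0, 3.0   # the two shape parameters (hypothetical values)

def burr12_pdf(x):
    """Burr-XII pdf: f(x) = c k x^(c-1) (1 + x^c)^(-(k+1)), x > 0."""
    return c * k * x**(c - 1) * (1 + x**c) ** (-(k + 1))

def burr12_cdf(x):
    """Burr-XII CDF: F(x) = 1 - (1 + x^c)^(-k)."""
    return 1 - (1 + x**c) ** (-k)

# Sanity check: the pdf integrates (midpoint rule) to F(10), nearly 1
xs = np.linspace(0, 10, 200_001)
mid = (xs[:-1] + xs[1:]) / 2
area = np.sum(burr12_pdf(mid)) * (xs[1] - xs[0])
print(round(area, 4))
print(burr12_cdf(1.0))   # 1 - 2^(-3) = 0.875
```

The compounding view matches this form: a Weibull lifetime whose scale is itself Gamma distributed has, after integrating the scale out, exactly a Burr-XII marginal distribution.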
The logistic regression model is one of the oldest and most common regression models; it is a statistical method used to describe and estimate the relationship between a dependent random variable and explanatory random variables. Several methods are used to estimate this model, including the bootstrap method, an estimation method based on the principle of sampling with replacement: a resample of n elements is drawn at random, with replacement, from the N original observations. It is a computational method used to determine the accuracy of statistical estimates, and for this reason it was used to find more accurate estimates. The ma…
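The bootstrap procedure described above can be sketched briefly: fit the logistic model, then refit it on resamples drawn with replacement to measure the accuracy of the coefficients. The data, true coefficients, and resample count below are hypothetical; the fit uses plain Newton-Raphson rather than any particular library:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: one explanatory variable, true coefficients (-1, 2)
n = 500
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-1 + 2 * x)))
y = rng.binomial(1, p)
X = np.column_stack([np.ones(n), x])

def fit_logistic(X, y, iters=25):
    """Logistic regression fitted by Newton-Raphson (IRLS)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ b))
        W = mu * (1 - mu)                    # working weights
        b = b + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    return b

b_hat = fit_logistic(X, y)

# Nonparametric bootstrap: resample n rows with replacement, B times
B = 500
boot = np.empty((B, 2))
for j in range(B):
    idx = rng.integers(0, n, n)              # sampling with replacement
    boot[j] = fit_logistic(X[idx], y[idx])

se = boot.std(axis=0)                        # bootstrap standard errors
print(np.round(b_hat, 2), np.round(se, 2))
```

The spread of the bootstrap replicates in `boot` is what provides the accuracy measure for the estimated coefficients that the abstract refers to.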