A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences from the complex data we encounter in real life, and it also serves as a powerful confirmatory tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods were applied under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters, and observation membership was also inferred and assessed for each method. The results showed that the flexible mixture model outperformed the others in most simulation scenarios according to the integrated mean squared error and the integrated classification error.
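As a hedged illustration of the general approach only (a minimal sketch, not the flexible mixture estimator or the specific methods compared in the paper), the following Python code fits a two-component mixture of linear regressions with a plain EM algorithm and recovers observation membership from the posterior responsibilities; the component count, the Gaussian error assumption, and all data are illustrative assumptions.

```python
# Minimal EM sketch for a two-component mixture of linear regressions
# (illustrative only; not the paper's flexible mixture estimator).
import numpy as np

def em_mixture_regression(X, y, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    beta = rng.normal(size=(2, p + 1))             # component coefficients
    sigma = np.ones(2)                             # component error SDs
    pi = np.array([0.5, 0.5])                      # mixing proportions
    for _ in range(n_iter):
        # E-step: posterior membership probabilities (responsibilities)
        dens = np.empty((n, 2))
        for k in range(2):
            resid = y - Xd @ beta[k]
            dens[:, k] = pi[k] * np.exp(-0.5 * (resid / sigma[k]) ** 2) / sigma[k]
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component
        for k in range(2):
            w = resp[:, k]
            W = Xd * w[:, None]
            beta[k] = np.linalg.solve(Xd.T @ W, W.T @ y)
            resid = y - Xd @ beta[k]
            sigma[k] = np.sqrt(np.sum(w * resid ** 2) / w.sum())
        pi = resp.mean(axis=0)
    return beta, sigma, pi, resp                   # resp gives inferred membership

# Toy usage: two regression lines with different slopes
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
z = rng.integers(0, 2, 300)                        # true (hidden) labels
y = np.where(z == 0, 1 + 2 * x, 5 - 1 * x) + rng.normal(0, 1, 300)
beta, sigma, pi, resp = em_mixture_regression(x[:, None], y)
labels = resp.argmax(axis=1)                        # hard membership assignment
```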
Semiparametric methods combine parametric and nonparametric methods, and they are important in studies whose nature requires more careful statistical analysis aimed at obtaining efficient estimators. The partial linear regression model is the most popular type of semiparametric model; it consists of a parametric component and a nonparametric component. Estimation of the parametric component has certain properties that depend on the assumptions made about it; in the absence of these assumptions, the parametric component suffers from several problems, for example multicollinearity (the explanatory variables are interrelated with each other). To treat this problem we use …
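For reference, the partially linear regression model referred to above is usually written as below (generic textbook notation; the treatment the paper applies to multicollinearity is not reproduced here).

```latex
% Partially linear (semiparametric) regression model, generic form:
% a parametric linear part x_i' beta and a nonparametric part g(t_i)
y_{i} = x_{i}^{\top}\beta + g(t_{i}) + \varepsilon_{i}, \qquad i = 1,\dots,n
```

Here β is the parametric component, g(·) is an unknown smooth function forming the nonparametric component, and ε_i is a random error term.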
Modern asphalt technology has adopted nanomaterials as an alternative option to ensure that asphalt pavement can survive harsh climates and repeated heavy axle loading during its service life and to prolong pavement life. This work aims to elucidate the behavior of the modified asphalt mixture fracture model and to assess the fatigue and rutting performance of hot mix asphalt (HMA) mixes using the outcomes of indirect tensile strength (IDT), semicircular bend (SCB) and rutting resistance tests. For this purpose, a single PG (64-16) asphalt binder nanomodified with 5 % SiO2 and TiO2 was investigated through a series of laboratory tests, including resilient modulus, creep compliance, tensile strength, SCB, and flow number (FN), to study their potential …
Refractive indices (nD), viscosities (η) and densities (ρ) were measured for the binary mixtures formed by dipropyl amine with 1-octanol, 1-heptanol, 1-hexanol, 1-pentanol and tert-pentyl alcohol at 298.15 K over the entire composition range. The Redlich-Kister function was used to calculate and correlate the refractive index deviations (∆nD), viscosity deviations (ηE), excess molar Gibbs free energy (∆G*E) and excess molar volumes (VmE). The coefficients and standard errors of this function were estimated. The values of ∆nD, ηE, VmE and ∆G*E were plotted against the mole fraction of dipropyl amine. In all cases the obtained ηE, ∆G*E, VmE and ∆nD values were negative at 298.15 K. The effect of the number of carbon atoms …
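For reference, the standard Redlich-Kister smoothing polynomial used to correlate excess and deviation properties, together with the usual definition of the excess molar volume from measured densities, is sketched below in generic form (the number of coefficients fitted in the paper is not specified here).

```latex
% Redlich-Kister correlation of a generic excess/deviation property Y^E
% of a binary mixture with mole fractions x_1 and x_2
Y^{E} = x_{1} x_{2} \sum_{k=0}^{m} A_{k} \, (x_{1} - x_{2})^{k}

% Excess molar volume from the measured mixture density \rho and the
% pure-component densities \rho_1, \rho_2 and molar masses M_1, M_2
V_{m}^{E} = \frac{x_{1} M_{1} + x_{2} M_{2}}{\rho}
          - \frac{x_{1} M_{1}}{\rho_{1}} - \frac{x_{2} M_{2}}{\rho_{2}}
```

The coefficients A_k are typically obtained by least squares, with the standard error of the fit reported alongside them.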
This paper shows how to estimate the parameters of the generalized exponential Rayleigh (GER) distribution by three estimation methods. The first is the maximum likelihood estimator method, the second is the moment estimation method (MEM), and the third is the ranked set sampling estimator method (RSSEM). The simulation technique is used with all of these estimation methods to find the parameters of the generalized exponential Rayleigh distribution. Finally, the mean squared error criterion is used to compare these estimation methods and determine which of them is best.
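As a hedged sketch of the simulation-and-comparison workflow only, the code below contrasts a maximum likelihood estimator and a moment estimator by Monte Carlo mean squared error; since the GER density is not reproduced here, a plain Rayleigh scale parameter is used as a stand-in, and the sample size and replication count are placeholder assumptions.

```python
# Monte Carlo comparison of two estimators by mean squared error
# (illustrative workflow only; a plain Rayleigh scale parameter is
#  used as a stand-in for the GER parameters).
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 2.0          # assumed true scale parameter
n, reps = 50, 5000        # assumed sample size and replication count

mle, mom = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.rayleigh(scale=sigma_true, size=n)
    mle[r] = np.sqrt(np.sum(x ** 2) / (2 * n))     # maximum likelihood
    mom[r] = np.mean(x) * np.sqrt(2 / np.pi)       # method of moments

mse_mle = np.mean((mle - sigma_true) ** 2)
mse_mom = np.mean((mom - sigma_true) ** 2)
print(f"MSE (MLE)     = {mse_mle:.5f}")
print(f"MSE (moments) = {mse_mom:.5f}")
```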
In this paper, some commonly used clustering techniques have been compared. A comparison was made between the agglomerative hierarchical clustering technique and the K-means family of techniques, which includes the standard K-means technique, a variant of K-means, and bisecting K-means. Although the hierarchical clustering technique is considered to be one of the best clustering methods, its usage is limited due to its time complexity. The results, which are calculated based on the analysis of the characteristics of the clustering algorithms and the nature of the data, showed that the bisecting K-means technique is the best compared with the rest of the methods used.
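A minimal sketch of such a comparison, assuming scikit-learn (version 1.1 or later for BisectingKMeans) and synthetic blob data rather than the data used in the paper, is shown below; the silhouette score is used only as one convenient internal validity index, which may differ from the paper's own evaluation criteria.

```python
# Side-by-side comparison of agglomerative clustering, K-means and
# bisecting K-means on synthetic data; data and cluster count are
# placeholder choices.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans, BisectingKMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

models = {
    "agglomerative": AgglomerativeClustering(n_clusters=4),
    "k-means": KMeans(n_clusters=4, n_init=10, random_state=0),
    "bisecting k-means": BisectingKMeans(n_clusters=4, random_state=0),
}

for name, model in models.items():
    labels = model.fit_predict(X)
    # Internal validity index used only for illustration.
    print(f"{name:18s} silhouette = {silhouette_score(X, labels):.3f}")
```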
Canonical correlation analysis is one of the common methods for analyzing data and determining the relationship between two sets of variables under study, as it depends on analyzing the variance-covariance matrix or the correlation matrix. Researchers resort to many methods to estimate the canonical correlation (CC); some are biased by outliers, while others are resistant to those values. In addition, there are criteria that check the efficiency of the estimation methods.
In our research, we dealt with robust estimation methods that depend on the correlation matrix in the analysis process to obtain a robust canonical correlation coefficient, namely the biweight …
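As a hedged sketch of correlation-matrix-based canonical correlation, the code below extracts canonical correlations from a joint correlation matrix and repeats the calculation with a robust Minimum Covariance Determinant estimate; the MCD is used only as one illustrative robust alternative, not necessarily the robust estimator adopted in the paper, and the data are synthetic.

```python
# Canonical correlations computed from a (classical or robust)
# correlation matrix; the robust estimate here is the Minimum
# Covariance Determinant, used purely for illustration.
import numpy as np
from sklearn.covariance import MinCovDet

def canonical_correlations(R, p):
    """Canonical correlations from a joint correlation matrix R,
    where the first p variables form set X and the rest form set Y."""
    Rxx, Rxy = R[:p, :p], R[:p, p:]
    Ryx, Ryy = R[p:, :p], R[p:, p:]
    M = np.linalg.solve(Rxx, Rxy) @ np.linalg.solve(Ryy, Ryx)
    eig = np.linalg.eigvals(M).real
    return np.sqrt(np.sort(np.clip(eig, 0, 1))[::-1])

def cov_to_corr(S):
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)

# Toy data: 3 X-variables and 2 Y-variables
rng = np.random.default_rng(0)
Z = rng.multivariate_normal(np.zeros(5), np.eye(5) + 0.3, size=300)

R_classic = np.corrcoef(Z, rowvar=False)
R_robust = cov_to_corr(MinCovDet(random_state=0).fit(Z).covariance_)

print("classical CC:", canonical_correlations(R_classic, p=3))
print("robust    CC:", canonical_correlations(R_robust, p=3))
```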
Linear programming currently occupies a prominent position in various fields and has wide applications, as its importance lies in being a means of studying the behavior of a large number of systems. It is also the simplest and easiest type of model that can be built to address industrial, commercial, military and other problems, and through it the optimal quantitative value can be obtained. In this research, we dealt with the post-optimality solution, or what is known as sensitivity analysis, using the principle of shadow prices. The scientific solution to any problem is not complete once the optimal solution is reached; any change in the values of the model constants, or what is known as the inputs of the model, will change …
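As a hedged sketch of how shadow prices are obtained in practice (a made-up textbook-style model, not the one analyzed in this research), the example below solves a small linear program with SciPy's HiGHS backend and reads the dual values of the resource constraints, which serve as the shadow prices in post-optimality analysis.

```python
# Small LP solved with SciPy's HiGHS solver; the constraint marginals
# (dual values) are the shadow prices used in post-optimality /
# sensitivity analysis. All coefficients here are made-up.
from scipy.optimize import linprog

# Maximize 3*x1 + 5*x2  ->  minimize -(3*x1 + 5*x2)
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0],     # resource 1:  x1          <= 4
        [0.0, 2.0],     # resource 2:        2*x2  <= 12
        [3.0, 2.0]]     # resource 3: 3*x1 + 2*x2  <= 18
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs")

print("optimal x :", res.x)       # expected (2, 6)
print("optimal z :", -res.fun)    # expected 36
# With HiGHS, res.ineqlin.marginals holds the duals of the <= rows;
# because this maximization is solved as a minimization, the shadow
# prices are the negatives of those marginals.
print("shadow prices:", -res.ineqlin.marginals)
```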
The aim of this paper is to estimate a nonlinear regression function for Saudi crude oil exports (in millions of barrels) as a function of the number of discovered fields.
By studying the behavior of the data, we show that it neither follows a linear pattern nor can be put into a known form, so there was no possibility of seeing a general trend in such exports.
We use different nonlinear estimators to estimate the regression function: a local linear estimator, a semiparametric estimator, as well as an artificial neural network (ANN) estimator.
The results proved that the ANN estimator is the best nonlinear estimator among …
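As a hedged sketch of two of the estimators named above (not the paper's exact specifications or data), the code below implements a basic Gaussian-kernel local linear smoother and fits a small scikit-learn neural network regressor to synthetic data; the bandwidth, network architecture, and data are illustrative assumptions.

```python
# Local linear kernel estimator and a small ANN regressor on toy data
# (bandwidth, architecture and data are illustrative, not the paper's).
import numpy as np
from sklearn.neural_network import MLPRegressor

def local_linear(x_train, y_train, x_eval, h=0.5):
    """Gaussian-kernel local linear estimate of E[y | x] at each x_eval point."""
    fitted = np.empty_like(x_eval, dtype=float)
    for i, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)        # kernel weights
        X = np.column_stack([np.ones_like(x_train), x_train - x0])
        WX = X * w[:, None]
        coef = np.linalg.solve(X.T @ WX, WX.T @ y_train)     # weighted LS
        fitted[i] = coef[0]                                  # intercept = m(x0)
    return fitted

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.3 * rng.normal(size=200)

grid = np.linspace(0, 10, 50)
m_ll = local_linear(x, y, grid, h=0.6)                       # local linear fit

ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                   random_state=0).fit(x[:, None], y)
m_ann = ann.predict(grid[:, None])                           # ANN fit
```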
In this work, a novel technique for obtaining accurate solutions to nonlinear problems by a multi-step combination with a Laplace-variational approach (MSLVIM) is introduced. Compared with the traditional variational approach, it overcomes all the difficulties and enables us to provide more accurate solutions, with an extended convergence region covering larger intervals, giving a continuous representation of the approximate analytic solution and better information about the solution over the whole time interval. This technique makes it easier to obtain the general Lagrange multiplier and reduces the time and calculations. It converges rapidly to the exact formula with simply computable terms, with …
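For context, the standard variational iteration correction functional on which Laplace-based multi-step variants build is recalled below in generic form (textbook notation with a linear operator L, nonlinear operator N, source term g and Lagrange multiplier λ; this is not the paper's specific multi-step Laplace formulation).

```latex
% Standard variational iteration correction functional (generic form)
u_{n+1}(t) = u_{n}(t)
  + \int_{0}^{t} \lambda(s)\,\bigl[\, L u_{n}(s) + N \tilde{u}_{n}(s) - g(s) \,\bigr]\,\mathrm{d}s
```

Here \tilde{u}_n denotes a restricted variation and λ is the general Lagrange multiplier identified via variational theory; the Laplace-based multi-step variant described above is presumed to build on this functional.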