A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we encounter in real life. Moreover, it can serve as a powerful tool for classifying observations based on the similarities among them. In this paper, several mixture regression-based methods are evaluated under the assumption that the data come from a finite number of components. The methods are compared according to their accuracy in estimating the component parameters, and observation membership is inferred and assessed for each method. The results show that the flexible mixture model outperformed the others in most simulation scenarios according to the integrated mean square error and the integrated classification error.
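As a general illustration of the idea (a sketch, not the specific mixture-regression methods compared in the paper), the following code fits a two-component univariate Gaussian mixture by the EM algorithm and recovers both the component parameters and the posterior membership of each observation. All function names, starting values, and the simulated data are illustrative assumptions.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_component(data, iters=200):
    """EM for a two-component univariate Gaussian mixture (illustrative sketch)."""
    # crude initialisation from the data range
    mu1, mu2 = min(data), max(data)
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    pi1 = 0.5
    r = []
    for _ in range(iters):
        # E-step: posterior membership probabilities (responsibilities)
        r = []
        for x in data:
            p1 = pi1 * normal_pdf(x, mu1, s1)
            p2 = (1 - pi1) * normal_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted parameter updates
        n1 = sum(r)
        n2 = len(data) - n1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or 1e-6
        pi1 = n1 / len(data)
    return (mu1, s1), (mu2, s2), pi1, r

# two well-separated simulated components
random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(6, 1) for _ in range(200)]
c1, c2, w, resp = em_two_component(data)
```

The responsibilities `resp` play the role of the inferred observation membership discussed in the abstract: values near 1 assign a point to the first component, values near 0 to the second.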
There is an implicit but fundamental assumption behind the theory underlying the time series used in estimation, namely that the series is stationary, or, in the terminology of Engle and Granger, integrated of order zero, denoted I(0). It is well known, for example, that the tables of the t-statistic are designed primarily to deal with the results of regressions that use stationary series. This assumption was treated as an axiom until the mid-seventies, when researchers conducted applied studies without taking into account the properties of the time series used prior to estimation, accepting the results of these tests and the predictive capabilities based on the applicability of the theory.
Encryption translates data into another form or symbol that only people with access to a secret key or password can read. Encrypted data are generally referred to as ciphertext, while unencrypted data are known as plaintext. Entropy can be used as a measure of the number of bits needed to code the data of an image: as the pixel values within an image are dispersed over more gray levels, the entropy increases. The aim of this research is to compare the CAST-128 method with a proposed adaptive key against the RSA encryption method for video frames, to determine the more accurate method with the highest entropy. The first method is achieved by applying the CAST-128 and
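As background for the entropy criterion, the following sketch computes the Shannon entropy of a pixel sequence, i.e., the number of bits per pixel needed to code its gray-level distribution. It is a generic illustration, not the paper's CAST-128 or RSA implementation.

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Bits per pixel needed to code the gray-level distribution."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# a flat image (a single gray level) carries zero entropy
flat = shannon_entropy([128] * 100)          # 0 bits
# pixels dispersed uniformly over all 256 gray levels reach the 8-bit maximum
uniform = shannon_entropy(list(range(256)))  # 8 bits
```

A well-encrypted frame should look close to the uniform case: its pixel histogram flattens and the entropy approaches 8 bits per byte.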
In this paper, deterministic and stochastic models are proposed to study the interaction of the coronavirus (COVID-19) with host cells inside the human body. In the deterministic model, the value of the basic reproduction number determines the persistence or extinction of COVID-19. If R0 < 1, one infected cell transmits the virus to fewer than one cell on average, and as a result the person carrying the coronavirus gets rid of the disease. If R0 > 1, the infected cells will be able to infect all cells that contain ACE receptors. The stochastic model proves that if the noise intensities are sufficiently large, ultimate disease extinction may occur even though R0 > 1; these facts are also confirmed by computer simulation.
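The threshold role of R0 can be illustrated with a generic target-cell-limited within-host model (a standard textbook form, not necessarily the paper's exact equations): target cells T are produced at rate lam and die at rate d, infection occurs at rate beta*T*V, infected cells I die at rate delta, and virions V are produced at rate p and cleared at rate c, giving R0 = beta*lam*p/(d*delta*c). All parameter values below are illustrative assumptions.

```python
def simulate(beta, lam=10.0, d=0.1, delta=0.5, p=2.0, c=3.0, days=200, dt=0.01):
    """Euler integration of a generic within-host model; returns final virus load."""
    T, I, V = lam / d, 0.0, 1.0  # infection-free cell level, one initial virion
    for _ in range(int(days / dt)):
        dT = lam - d * T - beta * T * V   # target-cell turnover and infection
        dI = beta * T * V - delta * I     # infected-cell balance
        dV = p * I - c * V                # virion production and clearance
        T += dT * dt
        I += dI * dt
        V += dV * dt
    return V

def r0(beta, lam=10.0, d=0.1, delta=0.5, p=2.0, c=3.0):
    """Basic reproduction number of this toy model."""
    return beta * lam * p / (d * delta * c)

v_low = simulate(0.00375)   # R0 = 0.5: virus dies out
v_high = simulate(0.0375)   # R0 = 5.0: infection takes hold
```

With R0 below one the virus load decays to zero; above one it settles near a positive endemic level, matching the dichotomy stated in the abstract for the deterministic case.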
In recent decades, tremendous success has been achieved in the advancement of chemical admixtures for Portland cement concrete. Most efforts have concentrated on improving the properties of concrete and studying the factors that influence these properties. Compressive strength is a valuable property and invariably a vital element of structural design; high early strength development in particular can provide benefits in concrete production, such as reducing construction time and labor and saving formwork and energy. Like most properties of concrete, it is influenced by several factors, including the water-cement ratio, the cement type, and the curing method employed.
Because of acce
Classical principal component analysis is sensitive to outliers because the components are calculated from the eigenvalues and eigenvectors of the non-robust correlation or covariance matrix, which yields incorrect results when the data contain outlying values. To treat this problem, we resort to robust methods; many such methods exist, and some of them will be discussed.
The robust estimators include the direct robust estimation of the eigenvalues by using the eigenvectors, without relying on robust estimators of the variance and covariance matrices. Also the analysis of the princ
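The outlier sensitivity of classical PCA can be demonstrated with a small sketch: the components come from the eigen-decomposition of the ordinary (non-robust) correlation matrix, and a single extreme point visibly distorts the leading eigenvalue. The simulated data and the outlier location are illustrative assumptions, not the paper's data.

```python
import numpy as np

def pca_from_corr(X):
    """Classical PCA: eigen-decomposition of the non-robust correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)    # eigh returns ascending eigenvalues
    return vals[::-1], vecs[:, ::-1]  # reorder: largest component first

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.9 * x + 0.1 * rng.normal(size=200)  # strongly correlated pair
X = np.column_stack([x, y])

clean_vals, _ = pca_from_corr(X)

# one extreme point against the correlation structure shrinks the first eigenvalue
X_out = np.vstack([X, [8.0, -8.0]])
out_vals, _ = pca_from_corr(X_out)
```

For two variables the eigenvalues always sum to 2 (the trace of the correlation matrix), so the drop in the first eigenvalue after adding the outlier directly measures how much the estimated component structure was corrupted.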
The processes of model estimation and significant-variable selection are crucial in semi-parametric modeling. At the beginning of the modeling process there are often many explanatory variables, and to avoid losing any explanatory element that may be important, the selection of the significant variables becomes necessary. Variable selection thus serves to simplify the model, aiding both explanation and prediction. In this research, some semi-parametric methods (LASSO-MAVE, MAVE, and the proposed method, adaptive LASSO-MAVE) are used for variable selection and estimation of the semi-parametric single index model (SSIM) at the same time.
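The selection mechanism behind LASSO-type methods can be sketched with the soft-thresholding (proximal) operator, which shrinks coefficients toward zero and sets small ones exactly to zero; the adaptive variant rescales the penalty by coefficient-dependent weights. The numbers below are illustrative, not the paper's SSIM estimates.

```python
def soft_threshold(z, lam):
    """LASSO proximal operator: shrink z by lam, zeroing anything inside [-lam, lam]."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# coefficients below the penalty level are dropped (variable selection),
# larger ones survive but are shrunk
coefs = [2.5, -0.3, 0.05, -1.7]
selected = [soft_threshold(b, 0.5) for b in coefs]  # middle two become exactly 0
```

Setting a coefficient exactly to zero is what removes the corresponding explanatory variable from the model, which is why LASSO-type penalties perform estimation and variable selection at the same time.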
The effect of micro- and nano-silica particles (silica SiO2 (100 μm) and fused silica (12 nm)) on some mechanical properties of epoxy resin (Young's modulus and flexural strength) was investigated. The micro- and nano-composites were prepared using a three-step process with different volume fractions of the micro- and nano-particles (1, 2, 3, 4, 5, 7, 10, 15, and 20 vol.%). The flexural strength and Young's modulus of the nano-composites increased at low volume fractions (maximum enhancement at 4 vol.%), whereas at higher volume fractions both decreased; even so, the mechanical properties remain enhanced relative to neat epoxy resin. The flexural strength decreases with increasing volume fraction of micro-particles.
Researchers need to understand the differences between parametric and nonparametric regression models, how each works with the available information about the relationship between the response and the explanatory variables, and the distribution of the random errors. This paper proposes a new nonparametric kernel function and employs it with the Nadaraya-Watson kernel estimator method alongside the Gaussian kernel function. The proposed kernel function (AMS) is then compared to the Gaussian kernel and the traditional parametric method, ordinary least squares (OLS). The objective of this study is to examine the effectiveness of nonparametric regression and identify the best-performing model when employing the Nadaraya-Watson kernel estimator.
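A minimal sketch of the Nadaraya-Watson estimator with the Gaussian kernel follows; the toy data and bandwidth are illustrative assumptions, and the proposed AMS kernel itself is not reproduced here.

```python
import math

def gaussian_kernel(u):
    """Standard Gaussian kernel K(u)."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def nadaraya_watson(x0, xs, ys, h, kernel=gaussian_kernel):
    """m_hat(x0) = sum_i K((x0 - x_i)/h) * y_i / sum_i K((x0 - x_i)/h)."""
    weights = [kernel((x0 - xi) / h) for xi in xs]
    return sum(w * yi for w, yi in zip(weights, ys)) / sum(weights)

# toy regression data on a smooth curve
xs = [i / 10 for i in range(50)]
ys = [math.sin(x) for x in xs]
est = nadaraya_watson(2.0, xs, ys, h=0.3)  # local weighted average near x = 2
```

The estimator is simply a kernel-weighted local average of the responses; swapping `gaussian_kernel` for another kernel function, as the paper does with the proposed AMS kernel, changes only the weighting scheme.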