The generalized multivariate transmuted Bessel distribution belongs to the family of symmetric heavy-tailed probability distributions. It is a mixed continuous distribution, obtained by mixing the multivariate Gaussian mixture distribution with the generalized inverse normal distribution. On this basis, the paper studies a multiple compact regression model whose random error follows a generalized multivariate transmuted Bessel distribution. Assuming the shape parameters are known, the parameters of the model are estimated by the maximum likelihood method and by a Bayesian approach based on non-informative prior information; in addition, the Bayes factor is used as a criterion for testing hypotheses. The bandwidth parameter is selected by a Gaussian (normal-reference) rule, the kernel function is chosen from the Gaussian and quartic kernels, and the model parameters are estimated under a quadratic loss function. The researchers concluded that the posterior probability distribution of the model parameters is a multivariate t distribution. Applying the findings to real data, with the jaundice percentage in the blood as the response variable, red blood cell volume and red blood cell sedimentation as parametric explanatory variables, and white and red cell counts as nonparametric explanatory variables, the researchers found that as the shape parameters increase, the mean squared error criteria of the parameter estimates and of the variance parameter decrease.
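To make the bandwidth and kernel choices concrete, here is a minimal Python sketch. It assumes the "Gaussian rule" in the abstract refers to the normal-reference (Silverman-type) rule of thumb; the constant 1.06 and the function names are assumptions for illustration, not the paper's exact specification:

```python
import numpy as np

def gaussian_kernel(u):
    """Gaussian kernel: K(u) = exp(-u^2 / 2) / sqrt(2*pi)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def quartic_kernel(u):
    """Quartic (biweight) kernel, supported on |u| <= 1."""
    return np.where(np.abs(u) <= 1, (15 / 16) * (1 - u**2) ** 2, 0.0)

def rule_of_thumb_bandwidth(x):
    """Normal-reference rule of thumb: h = 1.06 * s * n^(-1/5),
    assumed here as the 'Gaussian rule' the abstract mentions."""
    return 1.06 * np.std(x, ddof=1) * len(x) ** (-1 / 5)
```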
Traumatic spinal cord injury is a serious neurological disorder. Patients experience a plethora of symptoms attributable to the nerve fiber tracts that are compromised, including limb weakness, sensory impairment, and truncal instability, as well as a variety of autonomic abnormalities. This article discusses how machine learning classification can be used to characterize the initial impairment and subsequent recovery of electromyography signals in a non-human primate model of traumatic spinal cord injury. The ultimate objective is to identify potential treatments for traumatic spinal cord injury. This work focuses specifically on finding a suitable classifier that differentiates between two distinct experimental
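The abstract does not name the classifiers evaluated; the following sketch only illustrates the general workflow, with a support vector machine on placeholder EMG features standing in for the article's actual data and models:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical EMG feature matrix: one row per trial (e.g., RMS amplitude,
# mean frequency per channel); labels mark the two experimental conditions.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))        # placeholder features, not real data
y = rng.integers(0, 2, size=120)     # placeholder condition labels

# Standardize features, then fit an RBF-kernel SVM; score by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```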
Multicollinearity is one of the most common problems in regression analysis, referring to strong internal correlation among the explanatory variables. It appears especially in economics and applied research. Multicollinearity has a negative effect on the regression model, such as inflated variance and unstable parameter estimates when the ordinary least squares (OLS) method is used; therefore, other methods were used to estimate the parameters of the negative binomial model, including the ridge regression estimator and the Liu-type estimator. The negative binomial regression model is a nonlinear
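For illustration, a minimal sketch of the ridge and Liu-type estimators in the plain linear-model setting. Applying them to the negative binomial model, as the abstract does, would replace X'X with the weighted information matrix from the iteratively reweighted likelihood; that step is omitted here:

```python
import numpy as np

def ridge_estimate(X, y, k):
    """Ridge estimator: beta_hat(k) = (X'X + k*I)^(-1) X'y.
    The constant k > 0 shrinks the estimates and tames the variance
    inflation that multicollinearity causes under OLS (k = 0)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def liu_estimate(X, y, d):
    """Liu-type estimator: beta_hat(d) = (X'X + I)^(-1) (X'y + d * beta_OLS),
    which interpolates between ridge-like shrinkage (d = 0) and OLS (d = 1)."""
    p = X.shape[1]
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * beta_ols)
```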
A mixture model is used to model data that come from more than one component. In recent years, it has become an effective tool for drawing inferences about the complex data we might come across in real life. Moreover, it can serve as a powerful confirmatory tool for classifying observations based on similarities among them. In this paper, several mixture regression-based methods were conducted under the assumption that the data come from a finite number of components. These methods were compared according to their results in estimating the component parameters. Observation membership was also inferred and assessed for each method. The results showed that the flexible mixture model outperformed the others.
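As an illustration of the general idea behind mixture regression (not the specific flexible mixture model the paper evaluates), here is a minimal EM sketch for a two-component mixture of linear regressions with Gaussian errors:

```python
import numpy as np
from scipy.stats import norm

def em_mixture_regression(X, y, n_iter=200):
    """EM for a two-component mixture of linear regressions.
    Returns coefficients, error SDs, mixing weights, and the posterior
    membership probabilities used to classify observations."""
    n, p = X.shape
    rng = np.random.default_rng(1)
    beta = rng.normal(size=(2, p))
    sigma = np.array([y.std(), y.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability each point belongs to each component.
        dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k])
                         for k in range(2)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares within each component.
        for k in range(2):
            w = r[:, k]
            Xw = X * w[:, None]
            beta[k] = np.linalg.solve(X.T @ Xw, Xw.T @ y)
            sigma[k] = np.sqrt(np.sum(w * (y - X @ beta[k]) ** 2) / w.sum())
        pi = r.mean(axis=0)
    return beta, sigma, pi, r
```

Observation membership is then read off as the component with the largest posterior probability in `r`.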
The use of parametric models and their associated estimation methods requires many preliminary conditions to be met for those models to represent the population under study adequately; this prompted researchers to search for more flexible alternatives, namely nonparametric models.
In this manuscript, the so-called Nadaraya-Watson estimator was compared in two cases (fixed versus variable bandwidth) through simulation with different models and sample sizes. The simulation experiments showed that for the first and second models, the Nadaraya-Watson estimator with fixed bandwidth was preferred for
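A minimal sketch of the Nadaraya-Watson estimator covering both cases compared in the manuscript; the Gaussian kernel here is an assumption for illustration:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson estimator:
    m_hat(x) = sum_i K((x - x_i)/h_i) * y_i / sum_i K((x - x_i)/h_i).
    `h` may be a scalar (fixed bandwidth) or an array of length len(x)
    (variable, per-observation bandwidth); broadcasting handles both."""
    u = (x_grid[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2)  # Gaussian kernel (normalizing constants cancel)
    return (K * y).sum(axis=1) / K.sum(axis=1)

# Fixed bandwidth: nadaraya_watson(grid, x, y, 0.5)
# Variable bandwidth: nadaraya_watson(grid, x, y, h_per_point)
```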
In line with the advancement of hardware technology and increasing consumer demand for new functionalities and innovations, software applications have grown tremendously in size over the last decade. This sudden increase in size has a profound impact as far as testing is concerned: more and more unwanted interactions among software components, hardware, and the operating system are to be expected, increasing the possibility of faults. To address this issue, many useful interaction-based testing techniques (termed t-way strategies) have been developed in the literature. As an effort to promote awareness and encourage their usage, this chapter surveys the current state of the art and reviews the state of practice in t-way testing.
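To illustrate the core idea behind t-way strategies, here is a toy greedy sketch for the pairwise case (t = 2). Production strategies such as IPOG or AETG use far more efficient candidate generation than the exhaustive scan below, which is only viable for small parameter spaces:

```python
from itertools import combinations, product

def greedy_pairwise(parameters):
    """One-test-at-a-time greedy sketch of 2-way (pairwise) testing:
    repeatedly keep the candidate test covering the most still-uncovered
    parameter-value pairs until every pair is covered."""
    n = len(parameters)
    uncovered = {((i, vi), (j, vj))
                 for i, j in combinations(range(n), 2)
                 for vi in parameters[i] for vj in parameters[j]}
    suite = []
    while uncovered:
        best, best_new = None, -1
        for candidate in product(*parameters):  # exhaustive: toy-sized only
            new = {((i, candidate[i]), (j, candidate[j]))
                   for i, j in combinations(range(n), 2)} & uncovered
            if len(new) > best_new:
                best, best_new = candidate, len(new)
        suite.append(best)
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(n), 2)}
    return suite

# e.g., three parameters with 2, 2, and 3 values:
print(greedy_pairwise([[0, 1], ["a", "b"], ["x", "y", "z"]]))
```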
A nonparametric kernel method with bootstrap resampling was used to estimate confidence intervals for the system failure function of log-normally distributed data: the failure times of the machines in the spinning department of the weaving company in Wasit Governorate. The failure function was also estimated parametrically using the maximum likelihood estimator (MLE). The parametric and nonparametric methods were compared using the mean squared error (MSE) criterion. The bootstrap-based nonparametric method proved more efficient than the parametric one, and its curve estimate was more realistic and appropriate for the re
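A minimal sketch of the bootstrap step, using the empirical survival (reliability) function as a stand-in for the kernel-based estimate the study actually uses; the percentile method and the function names are assumptions:

```python
import numpy as np

def bootstrap_survival_ci(times, t_grid, B=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap pointwise confidence limits for the
    survival/reliability function R(t) = P(T > t)."""
    rng = np.random.default_rng(seed)
    est = np.empty((B, len(t_grid)))
    for b in range(B):
        resample = rng.choice(times, size=len(times))  # sample with replacement
        est[b] = [(resample > t).mean() for t in t_grid]
    lo = np.quantile(est, alpha / 2, axis=0)
    hi = np.quantile(est, 1 - alpha / 2, axis=0)
    return lo, hi
```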
The common normal distribution was transformed through the Kummer Beta generator to the Kummer Beta Generalized Normal Distribution (KBGND). The distribution parameters and hazard function were then estimated using the MLE method, and these estimates were improved by employing a genetic algorithm. Simulation was used, assuming a number of models and different sample sizes. The main finding was that the maximum likelihood (MLE) method is the best at estimating the parameters of the KBGND according to the Mean Squared Error (MSE) and Integrated Mean Squared Error (IMSE) criteria in estimating the hazard function. While the pr
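The KBGND density involves the confluent hypergeometric (Kummer) function and is omitted here. The sketch below only shows the estimation pattern, with a plain normal likelihood standing in for the KBGND and scipy's differential evolution standing in for the genetic algorithm (both substitutions are assumptions):

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize
from scipy.stats import norm

data = np.random.default_rng(2).normal(5.0, 2.0, size=200)  # placeholder sample

def neg_loglik(theta):
    """Negative log-likelihood; Normal(mu, sigma) stands in for the
    KBGND density, whose Kummer-function form is omitted here."""
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    return -np.sum(norm.logpdf(data, mu, sigma))

# Evolutionary global search (the role the genetic algorithm plays in the
# abstract), followed by a local refinement of the MLE.
rough = differential_evolution(neg_loglik, bounds=[(-10, 10), (1e-3, 10)], seed=0)
mle = minimize(neg_loglik, rough.x, method="Nelder-Mead")
print(mle.x)
```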
We propose a novel strategy to optimize the test suite required for testing both hardware and software in a production line. The strategy is based on two processes: a Quality Signing Process and a Quality Verification Process. Unlike earlier work, the proposed strategy integrates black-box and white-box techniques to derive an optimal test suite during the Quality Signing Process; the generated test suite then significantly improves the Quality Verification Process. Considering both processes, the novelty of the proposed strategy is that the optimization and reduction of the test suite is performed by selecting only mutant-killing test cases from cumulative t-way test cases.
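A toy sketch of the reduction idea: given which mutants each t-way test case kills (the kill matrix below is hypothetical), greedily keep only the tests that kill mutants no already-selected test kills:

```python
def select_killing_tests(kill_matrix):
    """Greedy test-suite reduction: kill_matrix[t] is the set of mutant
    ids that test case t kills; repeatedly keep the test killing the
    most not-yet-killed mutants, dropping tests that add nothing."""
    remaining = set().union(*kill_matrix.values())
    selected = []
    while remaining:
        best = max(kill_matrix, key=lambda t: len(kill_matrix[t] & remaining))
        gained = kill_matrix[best] & remaining
        if not gained:
            break  # leftover mutants are not killed by any test
        selected.append(best)
        remaining -= gained
    return selected

# e.g., four t-way test cases and the mutants each one kills:
print(select_killing_tests({"t1": {1, 2}, "t2": {2, 3}, "t3": {4}, "t4": {1}}))
```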