In this research, we study the inverse Gompertz (IG) distribution and estimate its survival function. The survival function was estimated using three methods (the maximum likelihood, least squares, and percentile estimators), and the best estimation method was selected. The least-squares method was found to be the best for estimating the survival function, because it has the lowest integrated mean squared error (IMSE) for all sample sizes.
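As a hedged illustration of the least-squares approach, the sketch below assumes a common parameterization of the IG distribution, F(x; a, b) = exp(-(a/b)(e^(b/x) - 1)), and minimizes the squared distance between the fitted CDF and the plotting positions i/(n+1); the parameter names, starting values, and simulated data are illustrative and may differ from the paper's notation.

```python
# Hedged sketch: least-squares estimation of the inverse Gompertz (IG) survival
# function, assuming the common parameterization
#   F(x; a, b) = exp(-(a/b) * (exp(b/x) - 1)),  S(x) = 1 - F(x),  x > 0.
# Parameter names a, b and the simulated data below are illustrative.
import numpy as np
from scipy.optimize import minimize

def ig_cdf(x, a, b):
    return np.exp(-(a / b) * (np.exp(b / x) - 1.0))

def ig_survival(x, a, b):
    return 1.0 - ig_cdf(x, a, b)

def ig_sample(n, a, b, rng):
    # Inverse-transform sampling: solve F(x) = u for x.
    u = rng.uniform(size=n)
    return b / np.log(1.0 - (b / a) * np.log(u))

def lse_fit(x):
    # Least-squares estimator: minimize sum_i (F(x_(i)) - i/(n+1))^2.
    x = np.sort(x)
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1.0)

    def loss(theta):
        a, b = np.exp(theta)           # keep parameters positive
        return np.sum((ig_cdf(x, a, b) - p) ** 2)

    res = minimize(loss, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(0)
data = ig_sample(200, a=1.5, b=0.8, rng=rng)
a_hat, b_hat = lse_fit(data)
print("LSE:", a_hat, b_hat, "S(1) =", ig_survival(1.0, a_hat, b_hat))
```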
In data transmission, a change in a single bit of the received data may lead to misunderstanding or even to disaster. Each bit of the transmitted information has high priority, especially information such as the receiver's address. Detecting every single-bit change is therefore a key issue in the field of data transmission.
The ordinary single-parity detection method can detect an odd number of errors efficiently but fails when the number of errors is even. Other detection methods, such as two-dimensional parity and the checksum, showed better results but still failed to cope with an increasing number of errors.
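To make that baseline concrete, here is a minimal sketch (illustrative data and function names) showing how a single even-parity bit is fooled by two bit flips, while row/column (two-dimensional) parity still detects them.

```python
# Hedged sketch of the baseline detectors discussed above: single parity
# misses an even number of bit flips, while 2D (row/column) parity catches
# many of them. All names and the example data are illustrative.
import numpy as np

def single_parity(bits):
    # Even parity bit over the whole block.
    return int(np.sum(bits) % 2)

def parity_2d(block):
    # Row and column parity bits for a 2D arrangement of the data.
    rows = block.sum(axis=1) % 2
    cols = block.sum(axis=0) % 2
    return rows, cols

rng = np.random.default_rng(1)
block = rng.integers(0, 2, size=(4, 8))          # 32 data bits
p = single_parity(block)
rows, cols = parity_2d(block)

# Flip an even number of bits (2) in the same row: single parity is fooled.
received = block.copy()
received[0, 1] ^= 1
received[0, 5] ^= 1

print("single parity detects it:",
      single_parity(received) != p)              # False: even error count
r2, c2 = parity_2d(received)
print("2D parity detects it:",
      not (np.array_equal(r2, rows) and np.array_equal(c2, cols)))  # True: two column parities differ
```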
Two novel methods were suggested to detect binary bit-change errors when transmitting data over a noisy medium. Those methods were: the 2D-Checksum me
In this work, the modified Lyapunov-Schmidt reduction is used to find a nonlinear Ritz approximation of the Fredholm functional defined by the nonhomogeneous Camassa-Holm and Benjamin-Bona-Mahony equations. We introduce the modified Lyapunov-Schmidt reduction for nonhomogeneous problems when the dimension of the null space equals two. The nonlinear Ritz approximation for the nonhomogeneous Camassa-Holm equation is found as a function of codimension twenty-four.
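The reduction itself is not reproduced in the abstract; as a hedged reminder of the general scheme it builds on (with generic symbols $f$, $g$, $A$, $P$ that are not taken from the paper), consider an equation $f(u,\lambda) = g$ whose linearization $A = D_u f(u_0,\lambda_0)$ is Fredholm, and split the domain and target as $E = \ker A \oplus W$ and $F = \operatorname{Im} A \oplus N$, with $P$ the projection onto $\operatorname{Im} A$. Writing $u = v + w$ with $v \in \ker A$ and $w \in W$, the equation splits into
\[
P\,f(v + w, \lambda) = P g, \qquad (I - P)\,f(v + w, \lambda) = (I - P) g .
\]
The first equation is solved for $w = w(v,\lambda)$ by the implicit function theorem, and substituting into the second yields the finite-dimensional key (bifurcation) equation on $\ker A$; when the null space is two-dimensional, as in the setting above, the Ritz approximation reduces to a key function of two amplitude variables.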
Conditional logistic regression is often used to study the relationship between event outcomes and specific prognostic factors, in order to apply logistic regression and utilize its predictive capabilities in environmental studies. This research seeks to demonstrate a novel approach to implementing conditional logistic regression in environmental research through inference methods based on longitudinal data. Statistical analysis of longitudinal data requires methods that properly account for the within-subject correlation of the response measurements; if this correlation is ignored, inferences such as statistical tests and confidence intervals can be largely invalid.
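As a hedged illustration of the modelling idea, the sketch below fits a conditional logistic regression with subject-level strata on simulated longitudinal data, so that the within-subject dependence is conditioned out; the variable names and data are invented, and the statsmodels interface (ConditionalLogit with a groups argument) is assumed to be available in recent versions.

```python
# Hedged sketch: conditional logistic regression with subject-level strata,
# so within-subject dependence is conditioned out. Variable names and the
# simulated data are illustrative; the ConditionalLogit interface is assumed.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(2)
n_subjects, n_visits = 100, 4
subject = np.repeat(np.arange(n_subjects), n_visits)
exposure = rng.normal(size=n_subjects * n_visits)            # e.g. a pollutant level
frailty = np.repeat(rng.normal(size=n_subjects), n_visits)   # subject-specific effect
logit = 0.8 * exposure + frailty
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

df = pd.DataFrame({"y": y, "exposure": exposure, "subject": subject})

# Conditioning on per-subject strata removes the subject-specific intercepts.
model = ConditionalLogit(df["y"], df[["exposure"]], groups=df["subject"])
result = model.fit()
print(result.summary())
```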
The assignment model is a mathematical model that expresses a real problem facing factories and companies, one tied to guaranteeing their activity: making the appropriate decision to obtain the best allocation of jobs or workers to machines, in order to raise efficiency or profit to the highest possible level, or to reduce cost or time as much as possible. In this research, the labeling method was used to solve a fuzzy assignment problem on real data approved by the Diwaniya tire factory. The data included two factors, efficiency and cost, and the problem was solved manually through a number of iterations until the optimal solution was reached.
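The labeling iterations themselves are not shown in the abstract; as a hedged illustration of the same allocation task, the sketch below defuzzifies triangular fuzzy costs by their centroid and solves the resulting crisp assignment with the Hungarian algorithm from scipy, a standard solver rather than the paper's labeling method, and all costs are invented.

```python
# Hedged sketch of a fuzzy assignment problem: triangular fuzzy costs are
# defuzzified (centroid) and the crisp assignment is solved with the Hungarian
# algorithm via scipy. The paper itself uses a labeling method on real factory
# data; the numbers below are purely illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Triangular fuzzy costs (low, mode, high) for assigning 3 jobs to 3 machines.
fuzzy_cost = np.array([
    [(4, 6, 8), (7, 9, 12), (5, 6, 7)],
    [(3, 5, 6), (6, 8, 10), (4, 7, 9)],
    [(8, 9, 11), (2, 4, 5), (6, 8, 12)],
], dtype=float)

# Centroid defuzzification of a triangular number (a, b, c) -> (a + b + c) / 3.
crisp_cost = fuzzy_cost.mean(axis=2)

rows, cols = linear_sum_assignment(crisp_cost)
print("assignment:", list(zip(rows, cols)))
print("total (defuzzified) cost:", crisp_cost[rows, cols].sum())
```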
The physical behavior of the energy distribution function (EDF) of the reactant particles, as it depends on the gas (fuel) temperature, is completely described by a physical model covering the global formulas that control the EDF profile. Results on the energy distribution of the reactant system indicate a standard EDF that reaches a steady-state shape and in turn fixes the optimum selected temperature.
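The abstract does not reproduce the model's formulas; purely as an illustration of how an energy distribution profile depends on the fuel temperature, the sketch below plots the standard Maxwellian form f(E) = 2*sqrt(E/pi)*(kT)^(-3/2)*exp(-E/kT), which is an assumption here and not necessarily the paper's model; the temperatures and units are illustrative.

```python
# Hedged sketch: plots the standard Maxwellian energy distribution function
#   f(E) = 2 * sqrt(E/pi) * (kT)^(-3/2) * exp(-E / kT)
# for a few fuel temperatures, to illustrate how the EDF profile shifts with T.
# The Maxwellian form, the temperatures, and the units are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def maxwell_edf(E, kT):
    return 2.0 * np.sqrt(E / np.pi) * kT ** (-1.5) * np.exp(-E / kT)

E = np.linspace(1e-3, 100.0, 500)          # particle energy (keV, illustrative)
for kT in (5.0, 10.0, 20.0):               # illustrative temperatures
    plt.plot(E, maxwell_edf(E, kT), label=f"kT = {kT} keV")

plt.xlabel("particle energy E (keV)")
plt.ylabel("f(E)  (1/keV)")
plt.legend()
plt.show()
```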
In recent years, researchers' attention to semi-parametric regression models has increased, because the parametric and non-parametric regression models can be combined into a single model that is able to deal with the curse of dimensionality in non-parametric models, which arises as the number of explanatory variables involved in the analysis increases and thereby decreases the accuracy of the estimation. This type of model also has the advantage of flexibility in applied work compared to parametric models, which must comply with certain conditions such as knowledge of the distribution of the errors, or the parametric models may
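As a concrete illustration of the semi-parametric idea (not the estimator used in the paper, which the truncated abstract does not specify), the sketch below fits a partially linear model y = x*beta + m(z) + eps by Robinson's double-residual approach with a simple Nadaraya-Watson smoother; the bandwidth, simulated data, and function names are illustrative.

```python
# Hedged sketch of one common semi-parametric form, the partially linear model
#   y = x * beta + m(z) + eps,
# estimated with Robinson's double-residual idea: smooth y and x on z, then run
# OLS on the residuals. The smoother, bandwidth, and data are illustrative
# choices, not the paper's method.
import numpy as np

def nw_smooth(z, target, bandwidth=0.2):
    # Nadaraya-Watson kernel regression of `target` on z (Gaussian kernel).
    d = (z[:, None] - z[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)
    return w @ target / w.sum(axis=1)

rng = np.random.default_rng(3)
n = 400
z = rng.uniform(0, 1, n)
x = rng.normal(size=n) + z                 # parametric covariate, correlated with z
m = np.sin(2 * np.pi * z)                  # unknown smooth function
y = 1.5 * x + m + rng.normal(scale=0.3, size=n)

# Double residuals: remove the part of y and x explained nonparametrically by z.
ry = y - nw_smooth(z, y)
rx = x - nw_smooth(z, x)
beta_hat = np.sum(rx * ry) / np.sum(rx ** 2)
print("estimated beta:", beta_hat)          # should be close to 1.5
```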
This research aims to study dimension-reduction methods that overcome the curse-of-dimensionality problem, which traditional methods fail to handle by providing a good estimate of the parameters, so the problem must be dealt with directly. Two approaches were used to handle high-dimensional data: the non-classical sliced inverse regression (SIR) method together with the proposed weighted standard SIR (WSIR) method, and principal component analysis (PCA), the general method used for dimension reduction. SIR and PCA are based on forming linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear
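For readers unfamiliar with SIR, the sketch below implements the classical version (standardize X, slice the response, and eigendecompose the weighted covariance of the slice means); the proposed WSIR weighting is not reproduced, and the simulated data and function names are illustrative.

```python
# Hedged sketch of classical sliced inverse regression (SIR): standardize X,
# slice the response, average X within each slice, and take the leading
# eigenvectors of the weighted covariance of the slice means. The proposed
# WSIR weighting from the paper is not reproduced here; data are simulated.
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=1):
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Standardize: Z = (X - mu) @ cov^{-1/2}.
    evals, evecs = np.linalg.eigh(cov)
    cov_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ cov_inv_sqrt

    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        zbar = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(zbar, zbar)   # weighted slice-mean covariance

    w, v = np.linalg.eigh(M)
    # Back-transform the leading eigenvectors to the original X scale.
    return cov_inv_sqrt @ v[:, ::-1][:, :n_directions]

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 6))
y = (X @ np.array([1.0, -1.0, 0, 0, 0, 0])) ** 3 + rng.normal(scale=0.5, size=500)
print(sir_directions(X, y).ravel())   # roughly proportional to (1, -1, 0, 0, 0, 0)
```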
In mixture experiments, the response variable depends on the proportions of the components of the mixture. In this research, we compare the Scheffé model with the Kronecker model for mixture experiments, especially when the experimental region is restricted.
Because mixture experiments suffer from high correlation and multicollinearity among the explanatory variables, which affects the computation of the Fisher information matrix of the regression model, we used the generalized inverse and the stepwise regression procedure to estimate the parameters of the mixture model.
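As a concrete illustration of the estimation step, the sketch below fits a quadratic Scheffé mixture model with the Moore-Penrose generalized inverse, which tolerates the collinearity induced by the mixture constraint x1 + x2 + x3 = 1; the design points and responses are invented for illustration, and the stepwise-regression selection is not reproduced.

```python
# Hedged sketch: fitting a quadratic Scheffé mixture model
#   E[y] = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j,   with x1 + x2 + x3 = 1,
# via the Moore-Penrose generalized inverse, which handles the collinearity
# induced by the mixture constraint. Design points and responses are
# illustrative; the stepwise selection step of the paper is not reproduced.
import numpy as np
from itertools import combinations

def scheffe_design(X):
    # Columns: x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept in Scheffé form).
    cross = np.column_stack([X[:, i] * X[:, j]
                             for i, j in combinations(range(X.shape[1]), 2)])
    return np.hstack([X, cross])

# Simplex-lattice style design points (proportions sum to 1).
X = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
], dtype=float)
y = np.array([10.2, 8.7, 12.1, 11.5, 13.0, 9.9, 11.8])   # illustrative responses

D = scheffe_design(X)
beta = np.linalg.pinv(D) @ y            # generalized-inverse least squares
print("coefficients:", beta)
print("fitted values:", D @ beta)
```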