Mixed-effects conditional logistic regression is well suited to studying qualitative differences in longitudinal pollution data and their implications for heterogeneous subgroups. This study demonstrates that conditional logistic regression is a robust evaluation method for environmental research, through an analysis of environmental pollution as a function of oil production and environmental factors. Theoretically, the primary objective of model selection in this research is to identify the candidate model that is optimal for the conditional design: one that achieves generalizability, goodness of fit, and parsimony while balancing bias and variance. In practice, however, it is more realistic to capture the most significant parameters of the research design through the best-fitting candidate model. Simulation studies show that mixed-effects conditional logistic regression is more accurate for pollution studies, while fixed-effects conditional logistic regression can yield flawed conclusions, because the mixed-effects model provides detailed insight into clusters that the fixed-effects model largely overlooks.
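The conditional-likelihood idea behind these models can be illustrated with a small sketch. For 1:1 matched pairs, the conditional likelihood of a (fixed-effects) conditional logistic model reduces to a logistic model on within-pair covariate differences; the mixed-effects version adds cluster-level random effects on top of this building block. The data, effect size, and Newton-Raphson fitter below are illustrative assumptions, not the study's actual design:

```python
import math, random

def fit_conditional_logit(pairs, iters=25):
    """Newton-Raphson MLE of beta for 1:1 matched case-control pairs.
    The conditional likelihood depends only on the within-pair
    covariate difference d = x_case - x_control."""
    beta = 0.0
    for _ in range(iters):
        grad = hess = 0.0
        for x_case, x_ctrl in pairs:
            d = x_case - x_ctrl
            z = max(min(beta * d, 30.0), -30.0)   # clamp to avoid overflow
            p = 1.0 / (1.0 + math.exp(-z))
            grad += d * (1.0 - p)                 # score contribution
            hess += d * d * p * (1.0 - p)         # observed information
        beta += grad / hess
    return beta

# Simulated matched pairs with a hypothetical exposure effect beta = 1.5
random.seed(1)
true_beta, pairs = 1.5, []
for _ in range(500):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    p_case_a = 1.0 / (1.0 + math.exp(-true_beta * (a - b)))
    pairs.append((a, b) if random.random() < p_case_a else (b, a))

beta_hat = fit_conditional_logit(pairs)  # close to 1.5
```

The estimate recovers the simulated effect because conditioning on the matched set eliminates the pair-specific nuisance intercepts, which is exactly what makes the conditional design attractive.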
Use of game-theory models in determining profit-maximizing policies for Pepsi-Cola and Coca-Cola in Baghdad province
Because of the importance of game theory, especially the theory of oligopoly, in studying the reality of competition among companies, governments, and other actors, the researcher linked oligopoly theories to econometrics so as to cover all the policies companies use, these theories having previously been based on price and quantity only. The researcher applied the theories to data taken from Pepsi-Cola and Coca-Cola in Baghdad. Solution steps were stated for the proposed models, and the solutions were found to be equilibrium points for the two companies according to the princi…
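The equilibrium notion referred to here can be illustrated on a toy two-strategy pricing game; the payoff matrix below is hypothetical and does not come from the Pepsi-Cola/Coca-Cola data:

```python
# payoffs[i][j] = (profit_firm1, profit_firm2) when firm 1 plays row i
# and firm 2 plays column j; 0 = "hold price", 1 = "cut price".
payoffs = [[(3, 3), (1, 4)],
           [(4, 1), (2, 2)]]   # a prisoner's-dilemma-style price war

def pure_nash(payoffs):
    """Return all pure-strategy Nash equilibria of a bimatrix game:
    cells where neither player gains by deviating unilaterally."""
    n, m = len(payoffs), len(payoffs[0])
    eq = []
    for i in range(n):
        for j in range(m):
            a, b = payoffs[i][j]
            best_row = all(payoffs[k][j][0] <= a for k in range(n))
            best_col = all(payoffs[i][l][1] <= b for l in range(m))
            if best_row and best_col:
                eq.append((i, j))
    return eq

equilibria = pure_nash(payoffs)  # [(1, 1)]
```

In this matrix, mutual price-cutting (1, 1) is the unique pure-strategy equilibrium even though both firms would earn more at (0, 0), which is the classic tension oligopoly theory studies.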
Nonlinear time series analysis is one of the most complex problems, especially for the nonlinear autoregressive model with exogenous variable (NARX). Model identification and determination of the correct orders are considered the most important problems in time series analysis. In this paper, we propose a spline estimation method for model identification and then use three criteria for determining the correct orders. The proposed method estimates additive splines for model identification, and the order determination relies on the additive property to avoid the curse of dimensionality. The proposed method is nonparametric, and the simulation results give a…
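A simplified stand-in for the order-determination step: fit linear ARX(p, q) models by least squares over a grid of candidate orders and pick the orders minimising an information criterion (here BIC; the paper's spline-based additive estimator and its three criteria are not reproduced). All data below are simulated:

```python
import math, random

def solve(A, b):
    """Tiny Gauss-Jordan solver for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_arx(y, x, p, q):
    """Least-squares linear ARX(p, q):
    y_t = sum a_i y_{t-i} + sum b_j x_{t-j} + e_t.  Returns (theta, BIC)."""
    start = max(p, q)
    rows, targets = [], []
    for t in range(start, len(y)):
        rows.append([y[t - i] for i in range(1, p + 1)] +
                    [x[t - j] for j in range(1, q + 1)])
        targets.append(y[t])
    k = p + q
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * tt for r, tt in zip(rows, targets)) for i in range(k)]
    theta = solve(A, b)
    sse = sum((tt - sum(c * v for c, v in zip(theta, r))) ** 2
              for r, tt in zip(rows, targets))
    n = len(rows)
    bic = n * math.log(sse / n) + k * math.log(n)
    return theta, bic

# Simulated ARX(2, 1) data: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + 0.8 x_{t-1} + e_t
random.seed(0)
x = [random.gauss(0, 1) for _ in range(300)]
y = [0.0, 0.0]
for t in range(2, 300):
    y.append(0.5 * y[t - 1] - 0.3 * y[t - 2] + 0.8 * x[t - 1]
             + random.gauss(0, 0.1))

best = min(((p, q) for p in (1, 2, 3) for q in (1, 2)),
           key=lambda pq: fit_arx(y, x, *pq)[1])
```

With a consistent criterion such as BIC, the selected orders converge to the true (2, 1) as the sample grows; an underfitting candidate like (1, 1) is penalised by a much larger residual sum of squares.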
Researchers have shown increased interest in recent years in determining the optimum sample size that yields sufficient accuracy of estimation and high-precision parameters when evaluating a large number of tests in the field of diagnosis at the same time. In this research, two methods were used to determine the optimum sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is then estimated at each method's sample size in high-dimensional data using artificial intelligence, namely an artificial neural network (ANN), as it gives a high-precision estimate commensurate with the data.
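The Bennett-inequality route to a sample size can be sketched directly: for i.i.d. observations with variance σ² and a range bound b on |X − μ|, choose the smallest n whose two-sided Bennett tail bound on the sample-mean error ε falls below the significance level α. The numerical inputs below are illustrative, not the study's values:

```python
import math

def bennett_bound(n, eps, sigma2, b):
    """Two-sided Bennett tail bound P(|mean - mu| >= eps) for n i.i.d.
    variables with variance sigma2 and |X - mu| <= b:
    2 * exp(-(n * sigma2 / b^2) * h(b * eps / sigma2)),
    where h(u) = (1 + u) * ln(1 + u) - u."""
    h = lambda u: (1 + u) * math.log(1 + u) - u
    return 2 * math.exp(-(n * sigma2 / b ** 2) * h(b * eps / sigma2))

def sample_size(eps, alpha, sigma2, b):
    """Smallest n whose Bennett bound is at most alpha (the bound is
    monotone decreasing in n, so a linear scan suffices)."""
    n = 1
    while bennett_bound(n, eps, sigma2, b) > alpha:
        n += 1
    return n

# Illustrative: eps = 0.05, alpha = 0.05, sigma^2 = 0.25, b = 1
n_star = sample_size(0.05, 0.05, 0.25, 1.0)
```

Because Bennett's inequality uses the variance as well as the range, the resulting n is typically smaller than the corresponding Hoeffding-based size when σ² is well below b².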
The logistic regression model is one of the oldest and most common regression models, known as a statistical method for describing and estimating the relationship between a dependent random variable and explanatory random variables. Several methods are used to estimate this model, including the bootstrap method, which relies on the principle of sampling with replacement: each resample consists of (n) elements drawn randomly with replacement from the (N) original observations. It is a computational method used to quantify the accuracy of estimated statistics, and for this reason it was used here to find more accurate estimates. The ma…
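A minimal sketch of the bootstrap applied to a one-covariate logistic model: refit the model on resamples drawn with replacement and use the spread of the refitted coefficients as the accuracy measure. The data and the intercept-free model are illustrative assumptions, not the study's actual specification:

```python
import math, random

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)        # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(xs, ys, iters=25):
    """Newton-Raphson MLE for P(y=1 | x) = sigmoid(beta * x)."""
    beta = 0.0
    for _ in range(iters):
        g = sum(x * (y - sigmoid(beta * x)) for x, y in zip(xs, ys))
        h = sum(x * x * sigmoid(beta * x) * (1 - sigmoid(beta * x))
                for x in xs)
        beta = max(min(beta + g / h, 10.0), -10.0)  # guard vs. separation
    return beta

random.seed(2)
n, true_beta = 200, 1.0
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [1 if random.random() < sigmoid(true_beta * x) else 0 for x in xs]
beta_hat = fit_logit(xs, ys)

# Bootstrap: resample (x, y) pairs with replacement and refit each time
boots = []
for _ in range(200):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(fit_logit([xs[i] for i in idx], [ys[i] for i in idx]))
mean_b = sum(boots) / len(boots)
se = (sum((b - mean_b) ** 2 for b in boots) / (len(boots) - 1)) ** 0.5
```

The bootstrap standard error `se` estimates the sampling variability of `beta_hat` without any closed-form variance formula, which is exactly what makes the method attractive for assessing accuracy.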
Many dynamic processes in the sciences are described by models of differential equations. These models explain the change in the behavior of the studied process over time by linking that behavior to its derivatives, and they often contain constant and time-varying parameters that differ according to the nature of the process under study. Here we estimate the constant and time-varying parameters sequentially in several stages. In the first stage, the state variables and their derivatives are estimated by penalized splines (P-splines). In the second stage, pseudo least squares is used to estimate the constant parameters. For the third stage, the rem…
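The two-stage idea can be sketched on the toy model dy/dt = −θy. Stage 1 below substitutes a simple moving-average smoother and finite differences for the penalized-spline fit; stage 2 is ordinary least squares for the constant parameter θ. All values are simulated and the smoother is a stand-in, not the paper's P-spline method:

```python
import math, random

# Simulated noisy observations of y(t) = exp(-theta * t), theta = 0.7
random.seed(3)
theta_true, dt = 0.7, 0.05
t = [i * dt for i in range(100)]
y_obs = [math.exp(-theta_true * ti) + random.gauss(0, 0.01) for ti in t]

# Stage 1 (stand-in for P-splines): moving-average smoothing, then
# centered finite differences for the derivative estimate.
def smooth(y, w=5):
    half = w // 2
    return [sum(y[max(0, i - half):i + half + 1]) /
            len(y[max(0, i - half):i + half + 1]) for i in range(len(y))]

ys = smooth(y_obs)
dy = [(ys[i + 1] - ys[i - 1]) / (2 * dt) for i in range(1, len(ys) - 1)]

# Stage 2: least squares for dy/dt = -theta * y
# => theta = -<dy, y> / <y, y>
yc = ys[1:-1]
theta_hat = -sum(d * v for d, v in zip(dy, yc)) / sum(v * v for v in yc)
```

Because the state and its derivative are estimated first, the parameter stage never has to solve the differential equation numerically, which is the main computational advantage of this sequential scheme.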
This paper proposes a new method for functional nonparametric regression analysis with conditional expectation in the case where the covariates are functional, with Principal Component Analysis utilized to de-correlate the multivariate response variables. It uses the Nadaraya-Watson (k-nearest neighbour, KNN) estimator for prediction with different types of semi-metrics (based on the second derivative and on Functional Principal Component Analysis (FPCA)) for measuring the closeness between curves. Root mean square error is used to evaluate the model, which is then compared with the independent-response method. The R program is used for analysing the data. Then, when the cov…
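A sketch of the Nadaraya-Watson/KNN estimator for functional covariates, using a plain L2 semi-metric between discretised curves in place of the derivative- and FPCA-based semi-metrics; the curves and scalar response below are simulated, not the paper's data:

```python
import math, random

def l2_dist(f, g):
    """Simple L2 semi-metric between two discretised curves."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, g)))

def knn_nw_predict(curves, responses, new_curve, k=5):
    """Nadaraya-Watson estimator with a KNN-adaptive bandwidth: weight
    the responses of the k nearest curves by a triangular kernel
    scaled to the k-th nearest distance."""
    d = sorted((l2_dist(c, new_curve), y) for c, y in zip(curves, responses))
    h = d[k - 1][0] or 1e-12                    # adaptive bandwidth
    num = sum((1 - di / h) * y for di, y in d[:k] if di < h)
    den = sum((1 - di / h) for di, _ in d[:k] if di < h)
    return num / den if den > 0 else d[0][1]

# Simulated functional data: sine curves with random amplitude a,
# and a hypothetical scalar response y = a^2.
random.seed(4)
grid = [i / 20 for i in range(21)]
curves, responses = [], []
for _ in range(100):
    a = random.uniform(0.5, 2.0)
    curves.append([a * math.sin(2 * math.pi * g) for g in grid])
    responses.append(a ** 2)

new_curve = [1.3 * math.sin(2 * math.pi * g) for g in grid]
pred = knn_nw_predict(curves, responses, new_curve)  # near 1.3^2 = 1.69
```

Swapping `l2_dist` for a derivative-based or FPCA-based semi-metric changes only the notion of closeness between curves; the kernel-weighting logic stays the same.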
The bootstrap is an important resampling technique that has received researchers' attention recently. The presence of outliers in the original data set may cause serious problems for the classical bootstrap when the percentage of outliers in a resample is higher than in the original data. Many methods have been proposed to overcome this problem, such as the Dynamic Robust Bootstrap for LTS (DRBLTS) and the Weighted Bootstrap with Probability (WBP). This paper examines the accuracy of parameter estimation by comparing the results of both methods. The bias, MSE, and RMSE are considered; the accuracy criterion is based on the RMSE value, since the method that provides the smaller RMSE value is considered the more accurate.
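The RMSE-based accuracy criterion can be illustrated with a small Monte-Carlo comparison of two ordinary estimators (sample mean vs. sample median of normal data; not the DRBLTS or WBP estimators themselves):

```python
import random, statistics

def mc_metrics(estimator, true_val, reps=2000, n=30):
    """Monte-Carlo bias, MSE and RMSE of an estimator of the mean of
    N(true_val, 1) samples of size n."""
    ests = [estimator([random.gauss(true_val, 1) for _ in range(n)])
            for _ in range(reps)]
    bias = sum(ests) / reps - true_val
    mse = sum((e - true_val) ** 2 for e in ests) / reps
    return bias, mse, mse ** 0.5

random.seed(5)
bias_mean, mse_mean, rmse_mean = mc_metrics(statistics.mean, 5.0)
bias_med, mse_med, rmse_med = mc_metrics(statistics.median, 5.0)
# For clean normal data the mean wins on RMSE; with heavy outliers the
# ranking can reverse, which is the motivation for robust bootstraps.
```

The comparison logic is the same as in the paper: whichever estimator yields the smaller RMSE over repeated sampling is judged the more accurate.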
The paper initiates and explores the concept of an extended metric known as the Strong Altering JS-metric, a stronger version of the Altering JS-metric. The interrelation of the Strong Altering JS-metric with the b-metric and the dislocated metric is analyzed, and some examples are provided. Certain fixed-point theorems for expansive self-mappings in the setting of complete Strong Altering JS-metric spaces are also discussed.
Research Summary:
Seeking happiness has been among the priorities of mankind from the beginning of creation and will remain so until the end of this world; even in the next life one seeks happiness. The difference is that a person can work in this world to obtain it, whereas in the next life he receives what he earned in this world. Among these means are practical deeds that a person performs while intending to draw close to God Almighty; they lead him to attain his desired perfection and to reach his goals and objectives, namely happiness in this life, and ultimate happiness after the soul separates from the body, on the day of judgment. Amon…
The prediction of time series for time-related phenomena, in particular with autoregressive integrated moving average (ARIMA) models, is one of the important topics in time series analysis in applied statistics. Its importance lies in the basic stages of structural analysis and modeling and in the conditions that must hold for the stochastic process. This paper deals with two prediction methods: the first is a special case of the autoregressive integrated moving average model, namely ARIMA(0,1,1), which reduces to the random walk model when its parameter equals zero; the second is the exponentially weighted moving average (EWMA). They were applied to the monthly traff…
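The two forecasting methods can be sketched side by side on a simulated ARIMA(0,1,1) series: the EWMA recursion with smoothing weight λ = 1 − θ is the optimal one-step forecast for that model, while the random-walk forecast (the λ = 1 limit) simply repeats the last observation. The series below is simulated, not the monthly data of the study:

```python
import random

def ewma_forecast(y, lam=0.3):
    """One-step-ahead EWMA forecasts: f_t = lam * y_{t-1} + (1-lam) * f_{t-1}.
    Equivalent to the optimal forecast of an ARIMA(0,1,1) process."""
    f = [y[0]]
    for t in range(1, len(y)):
        f.append(lam * y[t - 1] + (1 - lam) * f[-1])
    return f

# Simulated ARIMA(0,1,1): y_t - y_{t-1} = e_t - theta * e_{t-1}
random.seed(6)
theta = 0.7
e = [random.gauss(0, 1) for _ in range(500)]
y = [0.0]
for t in range(1, 500):
    y.append(y[-1] + e[t] - theta * e[t - 1])

f_ewma = ewma_forecast(y, lam=1 - theta)   # lambda = 1 - theta is optimal
f_rw = [y[0]] + y[:-1]                     # random walk: repeat last value

def mse(f):
    return sum((a - b) ** 2 for a, b in zip(f[1:], y[1:])) / (len(y) - 1)

mse_ewma, mse_rw = mse(f_ewma), mse(f_rw)
```

On such a series the EWMA forecast error variance approaches the innovation variance, while the random-walk forecast pays an extra (1 + θ²) factor, so the EWMA wins whenever the MA parameter is nonzero.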