Mixed-effects conditional logistic regression is well suited to studying qualitative differences in longitudinal pollution data and their implications for heterogeneous subgroups. This study argues that conditional logistic regression is a robust evaluation method for environmental studies, through an analysis of environmental pollution as a function of oil production and environmental factors. Theoretically, the primary objective of model selection in this research is to identify the candidate model that is optimal for the conditional design: one that achieves generalizability, goodness of fit, and parsimony, and balances bias against variance. In practice, however, it is more realistic to capture the most significant parameters of the research design through the best-fitting candidate model. Simulation studies demonstrate that mixed-effects conditional logistic regression is more accurate for pollution studies, while fixed-effects conditional logistic regression can generate flawed conclusions, because the mixed-effects model provides detailed insight into clusters that the fixed-effects model largely overlooks.
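To illustrate the conditional-likelihood idea underlying this abstract, the sketch below simulates 1:1 matched strata with stratum-specific nuisance intercepts and maximizes the conditional likelihood, which eliminates those intercepts. This is a minimal fixed-effects conditional logit on synthetic data, not the paper's mixed-effects model or its pollution data; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_pairs, beta_true = 2000, 1.0

# Each stratum (matched pair) has its own nuisance intercept alpha_i.
alpha = rng.normal(0.0, 2.0, n_pairs)
x = rng.normal(size=(n_pairs, 2))
p = 1.0 / (1.0 + np.exp(-(alpha[:, None] + beta_true * x)))
y = rng.binomial(1, p)

# The conditional likelihood uses only discordant strata (y sums to 1),
# which removes alpha_i from the likelihood entirely.
disc = y.sum(axis=1) == 1
d = np.where(y[disc, 0] == 1,
             x[disc, 0] - x[disc, 1],
             x[disc, 1] - x[disc, 0])      # case-minus-control covariate

# Negative conditional log-likelihood: sum log(1 + exp(-beta * d)),
# written with logaddexp for numerical stability.
nll = lambda b: np.sum(np.logaddexp(0.0, -b[0] * d))
beta_hat = minimize(nll, x0=[0.0]).x[0]
```

The recovered `beta_hat` is close to the true within-stratum effect even though the stratum intercepts were never estimated, which is the property the abstract's model-comparison argument relies on.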
The two scale parameters of the Exponential-Rayleigh (ER) distribution were estimated by the maximum likelihood estimation method (MLE) for progressively censored data. Estimates were obtained from real COVID-19 data provided by the Iraqi Ministry of Health and Environment, AL-Karkh General Hospital. A chi-square test was then used to determine whether the sample followed the ER distribution. A nonlinear membership function (s-function) was employed to obtain fuzzy numbers for the parameter estimators, and a ranking function was then used to transform the fuzzy numbers into crisp numbers. Finally, mean square error (MSE) was used to compare the resulting survival function estimates.
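A minimal MLE sketch for a two-parameter ER-type distribution, under the common assumption of an additive exponential-plus-Rayleigh hazard h(t) = θ + λt, so S(t) = exp(−θt − λt²/2); the paper's exact parameterization and its progressive-censoring scheme are not reproduced here, and the data are simulated, not the COVID-19 sample.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
theta_true, lam_true, n = 0.5, 0.2, 5000

# Inverse-transform sampling: cumulative hazard H(T) ~ Exp(1), and
# H(t) = theta*t + lam*t^2/2 solves as a quadratic in t.
e = rng.exponential(size=n)
t = (-theta_true + np.sqrt(theta_true**2 + 2.0 * lam_true * e)) / lam_true

def nll(params):
    # Density f(t) = (theta + lam*t) * exp(-theta*t - lam*t^2/2).
    th, la = params
    return -np.sum(np.log(th + la * t) - th * t - la * t**2 / 2.0)

fit = minimize(nll, x0=[1.0, 1.0], bounds=[(1e-6, None)] * 2)
theta_hat, lam_hat = fit.x
```

With complete (uncensored) samples of this size the estimates land close to the generating values; under progressive censoring the likelihood gains survival-function terms for the removed units.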
This research compared two methods for estimating the four parameters of the compound exponential Weibull-Poisson distribution: the maximum likelihood method and the Downhill Simplex algorithm. Two data cases were considered: the first assumed the original, uncontaminated data, while the second assumed data contamination. Simulation experiments were conducted for different sample sizes, initial parameter values, and levels of contamination. The Downhill Simplex algorithm was found to be the better method for estimating the parameters, the probability function, and the reliability function of the compound distribution in both the natural and the contaminated data cases.
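The comparison described above can be sketched with scipy: the Downhill Simplex method is available as Nelder-Mead, set against a derivative-based optimizer on the same likelihood. Since the four-parameter compound density is not given in the abstract, a plain two-parameter Weibull stands in as an illustrative likelihood; all values are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
k_true, lam_true = 1.5, 2.0
data = lam_true * rng.weibull(k_true, 3000)   # Weibull stand-in sample

def nll(p):
    # Negative Weibull log-likelihood; guard against invalid parameters.
    k, lam = p
    if k <= 0 or lam <= 0:
        return np.inf
    z = data / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z**k)

simplex = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")   # Downhill Simplex
gradient = minimize(nll, x0=[1.0, 1.0], method="L-BFGS-B",
                    bounds=[(1e-6, None)] * 2)                  # derivative-based MLE
```

Both routes converge to essentially the same maximum here; the simplex method's advantage in the paper's setting is that it needs no derivatives, which matters when contamination makes the likelihood surface irregular.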
In this research, we derive the Bayesian formulas and the expected-Bayesian (E-Bayesian) estimation for the product system of Atlas Company. The units of the system were examined with the help of the company's technical staff, using real data provided by the company that manufactures the system. These data record the failed units in each drawn sample, out of the total number of units manufactured by the company. We calculate the range for each estimator using the maximum likelihood estimator. We find that the E-Bayesian estimator outperforms the Bayesian estimator across the different partial samples drawn from the product system after it was checked by the technical staff.
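The E-Bayesian idea can be shown in a few lines: take the ordinary Bayes estimator of a failure probability and average it over a hyperprior on the prior's hyperparameter. The setup below (x failures in n units, Beta(1, b) prior, b uniform on (1, c)) is a standard illustrative choice, not necessarily the paper's.

```python
import numpy as np
from scipy.integrate import quad

n, x, c = 50, 3, 4   # hypothetical sample: 3 failed units out of 50; c bounds the hyperprior

def bayes(b):
    # Posterior mean of the failure probability under a Beta(1, b) prior.
    return (x + 1) / (n + 1 + b)

# E-Bayesian estimator: the Bayes estimator averaged over b ~ Uniform(1, c).
e_bayes = quad(bayes, 1, c)[0] / (c - 1)

# The same integral in closed form, as a consistency check.
closed = (x + 1) / (c - 1) * np.log((n + 1 + c) / (n + 2))
```

Averaging over the hyperparameter is what gives the E-Bayesian estimator its extra robustness relative to a single Bayes estimator tied to one fixed prior.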
In this paper, we discuss the performance of Bayesian computational approaches for estimating the parameters of a logistic regression model. Markov Chain Monte Carlo (MCMC) algorithms were the base estimation procedure. We present two algorithms, Random Walk Metropolis (RWM) and Hamiltonian Monte Carlo (HMC), and apply both approaches to a real data set.
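Of the two samplers named above, RWM is the simpler, and a self-contained sketch fits in a few lines: propose a Gaussian step, accept with probability min(1, posterior ratio). The data here are simulated and the prior is flat; the paper's real data set and its HMC implementation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 1500, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

def log_post(b):
    # Flat prior, so the log-posterior equals the logistic log-likelihood.
    eta = X @ b
    return np.sum(y * eta - np.logaddexp(0.0, eta))

beta, lp = np.zeros(2), log_post(np.zeros(2))
draws = []
for _ in range(6000):
    prop = beta + 0.1 * rng.normal(size=2)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        beta, lp = prop, lp_prop
    draws.append(beta.copy())

post_mean = np.mean(draws[3000:], axis=0)         # discard burn-in half
```

HMC replaces the blind Gaussian proposal with gradient-guided leapfrog trajectories, which is why it typically mixes faster on the same posterior.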
Business organizations have faced many challenges in recent times, the most important of which is information technology, because it is widespread and easy to use. Its use has led to an unprecedented increase in the amount of data that business organizations deal with. The amount of data available through the internet is a problem that many parties seek solutions for: why is so much of it available there at random? Many forecasts predicted that by 2017 the number of devices connected to the internet would be roughly three times the population of the Earth, and in 2015 more than one and a half billion gigabytes of data were transferred every minute globally. Thus, so-called data mining emerged as a response to this problem.
A main feature of the economies of many countries is a general tendency to redefine the role of the public sector in economic activity and to encourage the private sector to invest in public projects, especially in countries actually moving toward a market economy.
Increased economic development efforts have failed to achieve stronger economic growth in many countries, especially developing socialist countries. According to researchers, this led in one way or another to corrective reforms in their economies, one of which was the transformation of public companies into shareholding companies, which contributes resources and expertise to the public sector.
Researchers have shown increased interest in recent years in determining the optimum sample size needed to obtain sufficient accuracy of estimation and high-precision parameters, so that a large number of tests in the field of diagnosis can be evaluated at the same time. In this research, two methods were used to determine the optimum sample size for estimating the parameters of high-dimensional data: the Bennett inequality method and the regression method. The nonlinear logistic regression model is estimated at the sample size given by each method using an artificial neural network (ANN), which gives a high-precision estimate commensurate with the data.
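The Bennett-inequality route to a sample size can be made concrete. Assuming the standard two-sided Bennett bound for i.i.d. observations with |Xᵢ − μ| ≤ b and variance σ² (the paper's exact bound is not given in the abstract), the smallest n guaranteeing P(|X̄ − μ| ≥ ε) ≤ α is obtained by inverting the bound; all numeric inputs below are illustrative.

```python
import math

def bennett_sample_size(sigma2, b, eps, alpha):
    """Smallest n such that Bennett's inequality guarantees
    P(|mean - mu| >= eps) <= alpha, given |X_i - mu| <= b and Var(X_i) = sigma2."""
    # Bennett's function h(u) = (1+u)ln(1+u) - u controls the tail exponent.
    h = lambda u: (1 + u) * math.log(1 + u) - u
    return math.ceil(b**2 * math.log(2 / alpha) / (sigma2 * h(b * eps / sigma2)))

# Example: bounded-by-1 observations with variance 0.25, tolerance 0.05 at 95%.
n = bennett_sample_size(sigma2=0.25, b=1.0, eps=0.05, alpha=0.05)
```

Because Bennett's bound uses the variance, it yields smaller sample sizes than Hoeffding's variance-free bound whenever σ² is well below its maximum b².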
Maintenance and replacement is one of the techniques of operations research concerned with the failures experienced by many production lines, which consist of sets of machines and equipment that are exposed to failure or work stoppages over their lifetime. This makes it necessary to reduce the downtime of these machines, either by conducting maintenance at intervals or by replacing one part of a machine, or one of the machines, in the production line. This research studies the failures that occur in some parts of one of the machines of the General Company for Vege
Inventory, or inventories, are stocks of goods held for future use or sale. The demand for a product is the number of units that will need to be removed from inventory for use or sale during a specific period. If demand for future periods can be predicted with considerable precision, it is reasonable to use an inventory rule that assumes all predictions will always be completely accurate. This is the case in which we say that demand is deterministic.
The timing of an order can be periodic (placing an order every fixed number of days) or perpetual (placing an order whenever the inventory declines to a preset reorder level).
In this research we discuss how to formulate inventory models under these assumptions.
... Show More(Use of models of game theory in determining the policies to maximize profits for the Pepsi Cola and Coca-Cola in the province of Baghdad)
Due to the importance of game theory, and especially theories of oligopoly, in studying the reality of competition among companies, governments, and others, the researcher linked theories of oligopoly to econometrics so as to include all the policies used by companies, since these theories had previously been based on price and quantity only. The researcher applied these theories to data taken from Pepsi Cola and Coca-Cola in Baghdad. Solution steps were stated for the proposed models, and the solutions were found to be equilibrium points for the two companies according to the stated principles.
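The equilibrium points mentioned above can be illustrated with a tiny bimatrix game: each firm chooses "low price" or "high price", and a pure-strategy Nash equilibrium is a cell where each payoff is a best response to the other's choice. The payoff numbers below are entirely hypothetical, not estimates from the two companies' data.

```python
import numpy as np

# Hypothetical profit bimatrix: strategy 0 = low price, 1 = high price.
A = np.array([[3, 6],    # row firm's payoffs
              [1, 5]])
B = np.array([[3, 1],    # column firm's payoffs
              [6, 5]])

# A cell (i, j) is a pure Nash equilibrium when i is a best response to j
# (A[i, j] maximal in column j) and j is a best response to i
# (B[i, j] maximal in row i).
equilibria = [(i, j) for i in range(2) for j in range(2)
              if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max()]
```

With these payoffs the unique equilibrium is mutual low pricing, even though both firms would earn more at mutual high pricing: the prisoner's-dilemma structure typical of duopoly price competition.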