Abstract
The wavelet shrinkage estimator is an attractive technique for estimating nonparametric regression functions, but it is very sensitive to correlation in the errors. In this research, a low-degree polynomial model was used to address the boundary problem in wavelet shrinkage, together with flexible (level-by-level) threshold values for the case of correlated errors: these treat the coefficients at each resolution level separately, unlike global threshold values that treat all levels simultaneously, such as the VisuShrink, False Discovery Rate, Improved Thresholding, and SureShrink methods. The study was conducted on real monthly data on juvenile theft-crime rates in Iraq, specifically in Baghdad governorate, together with the associated risk ratios, for the years 2008-2018, with a sample size of 128. The study also showed an increase in the rate of juvenile theft crimes in recent years.
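The level-by-level thresholding idea described above can be sketched in a few lines. The following is an illustration, not the paper's procedure: it uses a hand-rolled Haar transform on synthetic data (function names such as `level_dependent_shrink` are ours), estimating a separate MAD-based threshold from each detail level instead of one universal threshold for all levels.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation (coarse) part
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail part
    return a, d

def haar_idwt(a, d):
    """Invert one Haar level."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def level_dependent_shrink(x, levels=3):
    """Shrink each detail level with its own threshold, estimated from
    that level's coefficients via the robust MAD scale, rather than one
    universal threshold applied to all levels at once."""
    details, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        sigma = np.median(np.abs(d)) / 0.6745      # noise scale at this level
        thr = sigma * np.sqrt(2 * np.log(d.size))  # level-specific threshold
        details.append(soft(d, thr))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(0)
n = 128                                 # same sample size as the study
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(n)
denoised = level_dependent_shrink(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

On smooth signals the shrunk reconstruction has a markedly lower mean squared error than the noisy input, which is the basic appeal of the method.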
In this research, the empirical Bayes method is used to estimate the affiliation parameter in clinical trials, and the result is compared with the moment estimate of this parameter using Monte Carlo simulation. We assumed that the distribution of the observations is binomial while the unknown random parameter follows a beta distribution. We conclude that the empirical Bayes estimator of the random affiliation parameter is more efficient in terms of mean squared error (MSE), across the different sample sizes considered.
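The beta-binomial empirical Bayes scheme described here can be sketched as follows. All data below are synthetic stand-ins (the true hyperparameters `a_true`, `b_true` are our choices, not the paper's), and the moment matching is a rough illustration that ignores a small binomial-variance correction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "clinical trials": k trials with n patients each; the
# success probability p_i of trial i is drawn from a Beta(a, b) prior.
a_true, b_true = 2.0, 5.0
k, n = 200, 20
p = rng.beta(a_true, b_true, k)
x = rng.binomial(n, p)

# Step 1: moment estimates of the beta hyperparameters from the
# observed proportions (rough moment matching, for illustration).
phat = x / n
m, v = phat.mean(), phat.var(ddof=1)
s = m * (1.0 - m) / v - 1.0      # implied prior "sample size" a + b
a_mom, b_mom = m * s, (1.0 - m) * s

# Step 2: the empirical Bayes estimate of each p_i is the posterior
# mean under the estimated prior; it shrinks x/n toward the overall
# mean m, borrowing strength across trials.
p_eb = (x + a_mom) / (n + a_mom + b_mom)

mse_mle = np.mean((phat - p) ** 2)
mse_eb = np.mean((p_eb - p) ** 2)
print(mse_mle, mse_eb)
```

The shrinkage estimator typically attains a smaller MSE than the raw proportions, which mirrors the efficiency conclusion stated in the abstract.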
Survival analysis is the analysis of data in the form of times from a time origin until the occurrence of an end event. In medical research, the time origin is the date on which the individual or patient is registered in a study, such as a clinical trial comparing two or more treatments, and the endpoint is the death of the patient or the individual's loss to follow-up. The data resulting from this process are called survival times. If the end event is not death, the resulting data are called time-to-event data. That is, survival analysis comprises the statistical steps and procedures for analyzing data whose variable of interest is the time until an event occurs.
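The canonical starting point for such data is the Kaplan-Meier estimator, which handles the censoring (loss to follow-up) mentioned above. A minimal sketch on a tiny made-up dataset, not from any study cited here:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function from possibly
    right-censored time-to-event data. events[i] = 1 if the endpoint
    (e.g. death) was observed, 0 if the individual was censored."""
    order = np.argsort(times)
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    uniq = np.unique(times[events == 1])          # distinct event times
    n_at_risk = np.array([(times >= t).sum() for t in uniq])
    n_events = np.array([((times == t) & (events == 1)).sum() for t in uniq])
    surv = np.cumprod(1 - n_events / n_at_risk)   # product-limit formula
    return uniq, surv

# Toy data: 8 individuals, two censored (event = 0).
t = [2, 3, 3, 5, 7, 8, 8, 9]
e = [1, 1, 0, 1, 1, 0, 1, 1]
times_u, S = kaplan_meier(t, e)
print(times_u, S)  # S drops only at observed event times
```

At the first event time (t = 2) one of eight subjects fails, so the curve steps down to 7/8; censored subjects leave the risk set without producing a step.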
In this paper, the maximum likelihood estimates of the parameters of the two-parameter Weibull distribution are studied, as well as the White estimators and the Bain & Antle estimators, together with the Bayes estimator of the scale parameter. Simulation procedures are used to obtain the estimators and to compare them using MSE. An application is also carried out on data for 20 patients suffering from headache disease.
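As background for the ML part, the two-parameter Weibull likelihood can be profiled down to a single equation in the shape parameter. A sketch on simulated data (the fixed-point-with-damping solver is one common choice, not necessarily the paper's):

```python
import numpy as np

def weibull_mle(x, iters=500):
    """ML estimates (shape k, scale lam) for the two-parameter Weibull.
    The profile-likelihood equation for the shape,
        1/k = sum(x^k ln x) / sum(x^k) - mean(ln x),
    is solved by damped fixed-point iteration; the scale then follows
    in closed form as lam = (mean(x^k))^(1/k)."""
    logx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        k_new = 1.0 / ((xk * logx).sum() / xk.sum() - logx.mean())
        k = 0.5 * (k + k_new)          # damping for stable convergence
    lam = ((x ** k).mean()) ** (1.0 / k)
    return k, lam

rng = np.random.default_rng(4)
sample = 2.0 * rng.weibull(1.5, size=2000)   # true shape 1.5, scale 2.0
k_hat, lam_hat = weibull_mle(sample)
print(k_hat, lam_hat)
```

With a large simulated sample the estimates land close to the generating values, which is the kind of check a simulation comparison of estimators relies on.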
Abstract
Humans, under the pressures of normal life, are exposed to several types of heart disease resulting from different factors. Therefore, in order to determine whether a case ends in death or not, the data are modeled using a binary logistic regression model.
This research uses one of the most important nonlinear regression models, widely applied in statistical modeling, to study heart disease: the binary logistic regression model. The parameters of this model are then estimated using statistical estimation methods, where further problems appear in estimating its parameters.
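Fitting a binary logistic model is usually done by Newton-Raphson (iteratively reweighted least squares). The sketch below uses synthetic data as a stand-in for the heart-disease records, which are not reproduced in the abstract; the function name `fit_logistic` and the coefficient values are ours.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Binary logistic regression fitted by Newton-Raphson (IRLS).
    Returns the coefficient vector with the intercept first."""
    n, p = X.shape
    Xb = np.column_stack([np.ones(n), X])        # prepend intercept column
    beta = np.zeros(p + 1)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(Xb @ beta)))  # P(y = 1 | x)
        W = mu * (1.0 - mu)                      # IRLS weights
        grad = Xb.T @ (y - mu)                   # score vector
        H = Xb.T @ (Xb * W[:, None])             # information matrix
        beta = beta + np.linalg.solve(H, grad)   # Newton step
    return beta

# Synthetic illustration: one risk factor, binary death outcome.
rng = np.random.default_rng(2)
x = rng.normal(size=(500, 1))
eta = -0.5 + 1.2 * x[:, 0]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
beta_hat = fit_logistic(x, y)
print(beta_hat)
```

The recovered coefficients approximate the generating values (-0.5, 1.2); with many predictors or separated classes the Newton step needs safeguards, which is one of the estimation problems the abstract alludes to.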
The measured raw-water quality data of the Tigris River were statistically analyzed to relate the salinity value to selected raw-water quality parameters. The analyzed data were collected for the period from 2015 to 2021 from five water treatment plants (WTPs) located along the Tigris River in Baghdad: Al-Karkh, Al-Karama, Al-Qadisiya, Al-Dora, and Al-Wihda. The selected parameters are total dissolved solids (TDS), electrical conductivity (EC), pH, and temperature. The main objective of this research is to develop a mathematical model, using SPSS software, to predict the value of salinity along the river, in addition to examining the effect of electrical conductivity.
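A model of the form the study describes is a multiple linear regression of salinity (TDS) on EC, pH, and temperature. The sketch below fits such a model by least squares on synthetic data: the coefficients (including the common TDS ≈ 0.64·EC rule of thumb) and value ranges are our illustrative assumptions, not the Tigris measurements or the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
ec = rng.uniform(600, 1400, n)     # electrical conductivity, uS/cm
ph = rng.uniform(7.0, 8.5, n)
temp = rng.uniform(10, 35, n)      # temperature, deg C

# Synthetic "true" relation plus measurement noise (mg/L).
tds = 0.64 * ec - 20.0 * ph + 5.0 * temp + rng.normal(0, 15, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), ec, ph, temp])
coef, *_ = np.linalg.lstsq(X, tds, rcond=None)
print(coef)  # [intercept, EC, pH, temperature] coefficients
```

With adequate data the fitted EC coefficient dominates, reflecting the close physical link between conductivity and dissolved solids that motivates using EC as a salinity predictor.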
Variable selection in Poisson regression with high-dimensional data has been widely studied in recent years. In this paper we propose using a penalty function known as the Atan penalty. The resulting Atan estimator was compared with the Lasso and adaptive Lasso. A simulation study and an application show that the Atan estimator has an advantage in both coefficient estimation and variable selection.
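To make the comparison concrete, here is a sketch of the Lasso comparator for Poisson regression, fitted by proximal gradient descent (ISTA) on synthetic data; the Atan estimator would replace only the soft-thresholding step with the Atan penalty's own proximal operator. Function names and tuning values are our illustrative choices.

```python
import numpy as np

def poisson_lasso(X, y, lam=0.05, step=0.3, iters=3000):
    """Lasso-penalized Poisson regression by proximal gradient (ISTA):
    a gradient step on the Poisson negative log-likelihood followed by
    soft-thresholding, which sets small coefficients exactly to zero
    and so performs variable selection."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = np.exp(X @ beta)                          # Poisson mean
        beta = beta - step * (X.T @ (mu - y)) / n      # gradient step
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta

rng = np.random.default_rng(3)
n, p = 500, 10
X = rng.normal(scale=0.5, size=(n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[1] = 0.8, -0.6     # only two active variables
y = rng.poisson(np.exp(X @ beta_true))
beta_hat = poisson_lasso(X, y)
print(beta_hat)
```

The two active coefficients are recovered (with the familiar Lasso shrinkage bias toward zero) while most inactive coefficients are set exactly to zero; folded-concave penalties such as Atan aim to keep the selection while reducing that bias.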
The purpose of building a linear regression model is to describe the real linear relation between each explanatory variable in the model and the dependent variable, on the basis that the dependent variable is a linear function of the explanatory variables, so that the model can be used for prediction and control. This purpose cannot be achieved without obtaining significant, stable, and reasonable estimators of the model's parameters, specifically the regression coefficients. The researcher found that the criterion he had suggested, "RUF", is accurate and sufficient to accomplish that purpose when multicollinearity exists, provided that an adequate model satisfying the standard assumptions on the error term can be assigned.
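The "RUF" criterion itself is not specified in the abstract. As a standard companion diagnostic for the multicollinearity it targets, a variance inflation factor (VIF) check can be sketched as follows (all data synthetic):

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2
    comes from regressing column j on the remaining columns. Values
    well above 10 conventionally signal harmful multicollinearity."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(Z)), Z])    # intercept for R^2
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - (y - Z @ coef).var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(6)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)
v = vif(np.column_stack([x1, x2, x3]))
print(v)
```

The two near-duplicate regressors show extreme VIFs while the independent one stays near 1, the situation in which coefficient estimates become unstable and a selection criterion is needed.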
The present study has three objectives: 1) to investigate the prevalence of complex nominals in economic discourse as represented in the selected business news texts, 2) to shed some light on the most common translation errors made by second-year students in the Department of Translation when rendering complex nominals into Arabic, and 3) to detect the possible causes behind such translation errors and suggest some translation tips which may help students of translation find the most suitable translation equivalent. The present study is based on an empirical survey in which a selective analysis of some economic texts represented in business news texts is made. A corpus of 159 complex nominals was selected from seven business news texts.
Abstract
In this research we estimated the survival function for data affected by the disturbances and confusion of the Iraq Household Socio-Economic Survey (IHSES II, 2012), using data from five-year age groups that follow the Generalized Gamma (GG) distribution. Two methods were used for estimation and fitting: the Principle of Maximum Entropy (POME) and a bootstrap method with a nonparametric kernel smoothing function, adopted to overcome the mathematical problems posed by the integrals in this distribution, in particular the incomplete gamma function, along with the traditional Maximum Likelihood (ML) method.
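The kernel-smoothing ingredient can be illustrated with a Gaussian-kernel survival estimator on synthetic, uncensored times; this is a simple stand-in for the paper's bootstrap/kernel procedure, not a reproduction of it, and the bandwidth rule is the usual Silverman rule of thumb.

```python
import numpy as np
from math import erf, sqrt

def kernel_survival(times, grid, h=None):
    """Kernel-smoothed survival estimator:
    S(t) = 1 - mean_i Phi((t - t_i) / h), with Phi the Gaussian CDF,
    i.e. a smoothed empirical survival function (uncensored case,
    for illustration only)."""
    times = np.asarray(times, float)
    if h is None:
        # Silverman's rule-of-thumb bandwidth
        h = 1.06 * times.std(ddof=1) * times.size ** (-0.2)
    z = (grid[:, None] - times[None, :]) / h
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    return 1.0 - Phi.mean(axis=1)

rng = np.random.default_rng(7)
times = rng.exponential(scale=2.0, size=200)  # synthetic survival times
grid = np.linspace(0.0, 10.0, 51)
surv = kernel_survival(times, grid)
print(surv[0], surv[-1])
```

Unlike the step-shaped empirical survival function, the kernel version is smooth and monotonically decreasing, which is convenient when comparing it against a fitted parametric (e.g. Generalized Gamma) survival curve.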
FG Mohammed, HM Al-Dabbas, Science International, 2018