The Dwiridj valley drainage basin lies east of Iraq. In this study, two models were applied to the valley's three basins, using remote sensing and GIS techniques, to obtain more accurate estimates of runoff volume, peak discharge, and time to peak. Application of both models showed that the maximum discharge of the Dwiridj valley is about 1052 m³/s according to the SCS-CN equation and about 1370.2 m³/s by the GIUH approach. Comparing these against the field results of the Department of Dams and Reservoirs, which estimated the valley's peak discharge at 1280 m³/s, shows that the GIUH approach is the more accurate of the two, and that the SCS-CN equation is not as accurate as the GIUH approach.
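The SCS-CN step mentioned above follows the standard USDA curve-number form. As a rough illustration only (the rainfall depth and curve number below are made-up values, not the study's data):

```python
# Sketch of the standard SCS-CN direct-runoff calculation.
# The rainfall depth and curve number are illustrative, not the study's inputs.

def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0      # potential maximum retention S (mm)
    ia = ia_ratio * s             # initial abstraction, conventionally 0.2 * S
    if p_mm <= ia:
        return 0.0                # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(p_mm=60.0, cn=85.0)
print(round(q, 2))                # runoff depth in mm
```

The runoff depth is then multiplied by basin area and routed through a unit hydrograph to give peak discharge; the GIUH approach derives that unit hydrograph from the basin's geomorphology instead.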
In this study, we focus on random-coefficient estimation for the general regression and Swamy models of panel data. Using this type of data offers a better chance of obtaining improved estimators and indicators. Entropy methods were used to estimate the random coefficients of the general regression and Swamy panel-data models in two ways: the first is the dual maximum entropy method and the second is the generalized maximum entropy method. A comparison between them was carried out by simulation to choose the optimal method.
The results were compared using mean squared error (MSE) and mean absolute percentage error (MAPE) for different cases of correlation values.
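As a toy illustration of the generalized-maximum-entropy idea (in the spirit of Golan, Judge and Miller, and not the paper's own setup), a regression coefficient can be written as a probability mixture over support points and the errors likewise, then the joint entropy maximized subject to the data constraints. The sample size, supports, and noise level below are all assumptions:

```python
# Minimal GME sketch for one regression coefficient: beta = z'p, e_i = v'w_i,
# maximize entropy of (p, w) subject to y_i = x_i * z'p + v'w_i.
# Supports, sample size and noise are illustrative, not the study's.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, beta_true = 20, 2.0
x = rng.uniform(0.5, 1.5, n)
y = beta_true * x + rng.normal(0.0, 0.3, n)

z = np.array([-10.0, 0.0, 10.0])     # support points for the coefficient
v = np.array([-3.0, 0.0, 3.0])       # support points for each error term

def unpack(t):
    return t[:3], t[3:].reshape(n, 3)   # p: coef probs, w: error probs

def neg_entropy(t):
    t = np.clip(t, 1e-12, 1.0)
    return float(np.sum(t * np.log(t)))

cons = [{"type": "eq", "fun": lambda t: unpack(t)[0].sum() - 1.0}]
for i in range(n):
    # each observation's error probabilities sum to one
    cons.append({"type": "eq", "fun": lambda t, i=i: unpack(t)[1][i].sum() - 1.0})
    # data consistency constraint: y_i = x_i * (z'p) + v'w_i
    cons.append({"type": "eq", "fun": lambda t, i=i:
                 x[i] * (z @ unpack(t)[0]) + v @ unpack(t)[1][i] - y[i]})

t0 = np.full(3 + 3 * n, 1.0 / 3.0)    # start from uniform probabilities
res = minimize(neg_entropy, t0, method="SLSQP", constraints=cons,
               bounds=[(1e-12, 1.0)] * t0.size, options={"maxiter": 500})
beta_gme = float(z @ unpack(res.x)[0])
print(beta_gme)
```

The dual formulation mentioned in the abstract solves the same problem over the Lagrange multipliers instead, which is usually far cheaper.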
Mixture experiments are experiments in which the response variable depends on the proportions of the components of the mixture. In this research we compare the Scheffé model with the Kronecker model for mixture experiments, especially when the experimental region is restricted.
Because mixture experiments suffer from high correlation and multicollinearity among the explanatory variables, which affects the computation of the Fisher information matrix of the regression model, we used the generalized inverse and the stepwise regression procedure to estimate the parameters of the mixture model.
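The generalized-inverse step can be sketched as follows: because the mixture proportions sum to one, adding an intercept makes the design matrix rank-deficient, and the Moore-Penrose pseudoinverse still returns a (minimum-norm) least-squares solution. The data below are illustrative, not the paper's:

```python
# Sketch: fitting a Scheffe-type linear mixture model with the Moore-Penrose
# generalized inverse.  The intercept column is exactly collinear with the
# proportions (they sum to 1), so X'X is singular, but np.linalg.pinv still
# yields a least-squares fit.  Data are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.dirichlet([2.0, 2.0, 2.0], size=n)   # mixture proportions, rows sum to 1
beta_true = np.array([5.0, 3.0, 1.0])
y = x @ beta_true + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), x])         # rank-deficient design
beta_hat = np.linalg.pinv(X) @ y             # minimum-norm LS solution

fitted = X @ beta_hat
print(np.round(beta_hat, 3))
```

The individual coefficients are not unique under rank deficiency (only the fitted values are), which is why stepwise selection or reparameterization is often combined with this approach.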
The aim of this research is to use a robust technique based on trimming, since maximum likelihood (ML) analysis often fails when the studied phenomenon contains outliers: the MLE loses its desirable properties because of the bad influence of the outliers. To address this problem, new statistical methods have been developed that are not affected by outliers; such methods possess robustness or resistance. The maximum trimmed likelihood (MTL) estimator is therefore a good alternative for achieving more acceptable and comparable results, and weights can be used to increase the efficiency and strength of the resulting estimates via the maximum weighted trimmed likelihood (MWTL). In order to perform t…
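A minimal sketch of the trimming idea for a normal sample: keep the h observations with the highest likelihood under the current fit and re-fit, iterating (a concentration step, as in LTS/MCD). The data, contamination, and trimming fraction here are illustrative assumptions, not the paper's:

```python
# Maximum-trimmed-likelihood sketch for a normal location/scale model.
# Plain MLE (the sample mean) is dragged by outliers; the trimmed fit is not.
import numpy as np

rng = np.random.default_rng(42)
clean = rng.normal(10.0, 1.0, 90)
outliers = rng.normal(40.0, 1.0, 10)     # gross outliers
y = np.concatenate([clean, outliers])

def mtl_normal(y, h, n_iter=20):
    mu, sigma = np.median(y), np.std(y)  # robust-ish starting point
    for _ in range(n_iter):
        # for a normal model, the h highest-likelihood points are the
        # h points closest to the current centre
        keep = np.argsort(np.abs(y - mu))[:h]
        mu, sigma = y[keep].mean(), y[keep].std()
    return mu, sigma

mu_ml = y.mean()                          # plain MLE, pulled toward 40
mu_mtl, _ = mtl_normal(y, h=int(0.8 * len(y)))
print(round(mu_ml, 2), round(mu_mtl, 2))
```

The MWTL variant would replace the hard keep/drop decision with observation weights, recovering some efficiency when points near the trimming boundary are only mildly suspect.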
The Weibull distribution is considered one of the most widely applied distributions in real life. It resembles the normal distribution in its range of applications and can be applied in many fields, such as industrial engineering (to represent replacement and manufacturing times) and weather forecasting, as well as in reliability studies and survival analysis in the medical and communication engineering fields.
In this paper, the scale parameter of the Weibull distribution is estimated using a Bayesian method based on the Jeffreys prior as a first method, which is then enhanced by improving the Jeffreys prior and used as a second method…
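As a sketch of the first method only (the paper's "improved Jeffreys" variant is not reproduced here): with the shape parameter k treated as known and writing eta = theta**k, the Jeffreys prior 1/eta gives an Inverse-Gamma(n, T) posterior with T = sum(x_i**k), whose mean is T/(n-1). The data below are simulated, not the paper's:

```python
# Bayes estimator of the Weibull scale under the Jeffreys prior, shape known.
import numpy as np

rng = np.random.default_rng(7)
k, theta_true, n = 2.0, 3.0, 200
x = theta_true * rng.weibull(k, size=n)   # Weibull(shape=k, scale=theta_true)

T = np.sum(x ** k)
eta_bayes = T / (n - 1)                   # posterior mean of theta**k
theta_bayes = eta_bayes ** (1.0 / k)

theta_mle = (T / n) ** (1.0 / k)          # classical MLE, for comparison
print(round(theta_bayes, 3), round(theta_mle, 3))
```

For large n the two estimators nearly coincide; the prior choice matters most in small samples, which is presumably where the improved prior of the second method pays off.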
The 3-parameter Weibull distribution is used as a failure model, since this distribution is appropriate when the failure rate is somewhat high at the start of operation and decreases with increasing time.
On the practical side, a comparison was made between shrinkage and maximum likelihood estimators for the parameter and the reliability function, using simulation. We conclude that the shrinkage estimators for the parameters are better than the maximum likelihood estimators, but the maximum likelihood estimator for the reliability function is better, according to the statistical measures MAPE and MSE for different sample sizes.
Note: ns = small sample; nm = medium sample.
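The shrinkage-versus-MLE comparison can be sketched by simulation with the same MSE and MAPE criteria. The setup below is simplified to the Weibull scale with known shape, and the prior guess and shrinkage weight are assumptions of ours, not the paper's:

```python
# Simulation comparison of a shrinkage estimator vs the MLE, scored by MSE
# and MAPE.  Shrinkage pulls the MLE toward a prior guess theta_prior.
import numpy as np

rng = np.random.default_rng(3)
k, theta_true, theta_prior = 2.0, 3.0, 2.8   # prior guess close to the truth
n, reps = 10, 2000                           # "ns": a small sample size

mle, shrink = [], []
for _ in range(reps):
    x = theta_true * rng.weibull(k, size=n)
    t_mle = np.mean(x ** k) ** (1.0 / k)     # MLE of the scale, shape known
    w = 0.5                                  # fixed shrinkage weight (assumed)
    mle.append(t_mle)
    shrink.append(w * theta_prior + (1.0 - w) * t_mle)

mle, shrink = np.array(mle), np.array(shrink)
mse = lambda est: np.mean((est - theta_true) ** 2)
mape = lambda est: np.mean(np.abs(est - theta_true) / theta_true)
print(mse(mle), mse(shrink))   # shrinkage wins when the prior guess is good
```

This reproduces the qualitative finding: with a small sample and a reasonable prior guess, shrinkage trades a little bias for a large variance reduction.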
A condensed study was carried out to compare ordinary estimators, in particular the maximum likelihood estimator and a robust estimator, for estimating the parameters of the mixed model of order one, namely the ARMA(1,1) model.
A simulation study was performed for several variants of the model, using small, moderate, and large sample sizes, and some new results were obtained. MAPE was used as the statistical criterion for comparison.
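The flavor of the ML-versus-robust comparison can be sketched in a simplified setting (the AR part of the model only, with additive outliers; this simplification is ours, not the paper's). Conditional least squares stands in for ML, and an LTS-style trimmed fit stands in for the robust estimator:

```python
# AR(1) coefficient: conditional least squares (ML for Gaussian errors) vs a
# robust trimmed fit, on a series contaminated with additive outliers.
import numpy as np

rng = np.random.default_rng(11)
phi, n = 0.7, 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

y_out = y.copy()
y_out[rng.choice(np.arange(1, n - 1), 10, replace=False)] += 15.0

def ml_slope(y):
    """Conditional least squares of y_t on y_{t-1}."""
    a, b = y[:-1], y[1:]
    return float(np.sum(a * b) / np.sum(a * a))

def trimmed_slope(y, frac=0.9, iters=20):
    """LTS-style trimming: refit on the points with smallest residuals."""
    a, b = y[:-1], y[1:]
    slope = float(np.median(b / a))          # robust initial value
    h = int(frac * len(a))
    for _ in range(iters):
        keep = np.argsort(np.abs(b - slope * a))[:h]
        slope = float(np.sum(a[keep] * b[keep]) / np.sum(a[keep] ** 2))
    return slope

print(ml_slope(y_out), trimmed_slope(y_out))
```

On contaminated data the least-squares estimate is badly attenuated by the outlier leverage points, while the trimmed fit stays near the true coefficient, which is the pattern MAPE would pick up across replications.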
The aim of the thesis is to estimate hard-to-reach and inaccessible population groups. It is a field study to estimate the number of drug users among males aged 15-60 in the Baghdad governorate.
Given the absence of data approved by government institutions, as well as the difficulty of estimating the number of these people through a traditional survey, in which respondents would have to report on themselves or, in some cases, their family members, the Network Scale-Up Method (NSUM) was used to face these challenges. It is based mainly on asking respondents how many drug users they know in their personal networks.
Based on this principle, a statistical questionnaire was designed to…
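The basic NSUM estimator scales the proportion of hidden-group members in respondents' networks up to the whole population: N * (sum of hidden-group alters reported) / (sum of personal network sizes). All numbers below are made-up illustrations, not the thesis's survey data:

```python
# Minimal network scale-up (NSUM) sketch on simulated survey responses.
import numpy as np

rng = np.random.default_rng(5)
N = 4_000_000                       # assumed size of the male 15-60 population
n_resp = 500                        # respondents
c = rng.poisson(150, n_resp)        # respondents' known network sizes
true_prevalence = 0.02
m = rng.binomial(c, true_prevalence)  # drug users each respondent knows

hidden_estimate = N * m.sum() / c.sum()
print(int(hidden_estimate))         # close to N * true_prevalence
```

In practice the network sizes c are themselves estimated (e.g. from "known population" questions), and transmission and barrier effects bias the counts, which is why the questionnaire design matters so much.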
There is a lack of statistical research studying phenomena with p exogenous input variables that contribute causally to a time series, yielding q output variables as a result, a conceptual framework similar to classical linear regression, which studies the relationship between a dependent variable and explanatory variables. This highlights the importance of providing a full analysis of this kind of phenomenon, here applied to consumer price inflation in Iraq. Several variables with a direct influence on the phenomenon were taken and analyzed, after treating the problem of outliers in the observations by the EM approach and expanding the sample size (n = 36) to…
The transfer function model is one of the basic concepts in time-series analysis and is used in the case of multivariate time series. The design of this model depends on the data available in the time series and on other information in the series, so the representation of the transfer function model depends on the representation of the data. In this research, the transfer function has been estimated using nonparametric methods, represented by two approaches, local linear regression and the cubic smoothing spline, and a semiparametric method, represented by the semiparametric single-index model, with four proposals. The goal of this research is to compare the performance of the above-mentioned methods…
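One of the nonparametric methods named above, local linear regression, can be sketched as follows; the data and bandwidth are illustrative, not the paper's series:

```python
# Local linear regression with a Gaussian kernel: at each evaluation point x0,
# fit a weighted straight line and take its intercept as the fitted value.
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
y = np.sin(x) + rng.normal(0.0, 0.2, 200)

def local_linear(x0, x, y, h=0.5):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)    # weighted least squares
    return beta[0]                                # fitted value at x0

grid = np.linspace(0.5, 2.0 * np.pi - 0.5, 50)
fit = np.array([local_linear(g, x, y) for g in grid])
err = float(np.max(np.abs(fit - np.sin(grid))))
print(err)                                        # worst-case fitting error
```

The smoothing spline and single-index alternatives differ mainly in how they control smoothness: a global roughness penalty for the spline, and a fitted projection direction for the single-index model.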
This research aims to review the importance of estimating the nonparametric regression function using the so-called canonical kernel, which depends on rescaling the smoothing parameter and plays a large and important role in kernel estimation, giving a sound amount of smoothing.
The importance of this method is demonstrated by applying these concepts to real data on the international exchange rate of the U.S. dollar against the Japanese yen for the period from January 2007 to March 2010. The results show that the nonparametric estimator with the Gaussian kernel is preferred over the other nonparametric and parametric regression estimators…
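The canonical-kernel rescaling mentioned above makes bandwidths comparable across kernels: each kernel's bandwidth is expressed in "canonical units" via the factor delta0 = (R(K) / mu2(K)^2)^(1/5), where R(K) is the kernel's roughness and mu2(K) its second moment. The factors computed below are the standard values for the Gaussian and Epanechnikov kernels:

```python
# Canonical bandwidth factors (Marron-Nolan style rescaling): the same
# canonical bandwidth h translates to h * delta0(K) for each kernel K.
import numpy as np

def delta0(roughness, mu2):
    """Canonical bandwidth factor (R(K) / mu2(K)^2) ** (1/5)."""
    return (roughness / mu2 ** 2) ** 0.2

delta_gauss = delta0(1.0 / (2.0 * np.sqrt(np.pi)), 1.0)   # ~0.7764
delta_epan = delta0(3.0 / 5.0, 1.0 / 5.0)                 # ~1.7188

h_canonical = 0.3           # one bandwidth in canonical units ...
print(h_canonical * delta_gauss, h_canonical * delta_epan)  # ... per kernel
```

This is what lets the paper compare the "amount of smoothing" fairly when ranking the Gaussian kernel against other kernels on the exchange-rate data.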