In this research, kernel (nonparametric density) estimators were relied upon in estimating the binary (two-response) logistic regression model. The Nadaraya-Watson method was compared with the local scoring algorithm, and the optimal smoothing parameter (bandwidth) λ was estimated by cross-validation and generalized cross-validation. The optimal bandwidth λ has a clear effect on the estimation process and plays a key role in smoothing the curve so that it approaches the true curve. The goal of using the kernel estimator is to adjust the observations so that estimators with properties close to those of the true parameters can be obtained. Based on medical data for patients with chronic lymphocytic leukemia, and using the Gaussian kernel function together with the mean squared error (MSE) comparison criterion, the Nadaraya-Watson method was found to be the best because it obtained the lowest value of this criterion.
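As a minimal illustration of the estimator described above (not the authors' code), the sketch below implements a Nadaraya-Watson smoother with a Gaussian kernel and chooses the bandwidth λ by leave-one-out cross-validation; the data arrays are hypothetical placeholders, not the leukemia data used in the paper.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def nadaraya_watson(x0, x, y, lam):
    """Nadaraya-Watson estimate of E[y | x = x0] with bandwidth lam."""
    w = gaussian_kernel((x0 - x) / lam)
    return np.sum(w * y) / np.sum(w)

def loo_cv_score(x, y, lam):
    """Leave-one-out cross-validation criterion (mean squared error)."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        y_hat = nadaraya_watson(x[i], x[mask], y[mask], lam)
        errs.append((y[i] - y_hat) ** 2)
    return np.mean(errs)

# Hypothetical binary-response data (e.g. an outcome indicator vs. one marker).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 80)
y = (rng.uniform(size=80) < 1 / (1 + np.exp(-(x - 5)))).astype(float)

# Select the bandwidth that minimises the cross-validation criterion.
grid = np.linspace(0.2, 3.0, 30)
lam_opt = grid[np.argmin([loo_cv_score(x, y, lam) for lam in grid])]
print("optimal bandwidth:", lam_opt)
```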
Model estimation and the selection of significant variables are crucial steps in semi-parametric modeling. At the beginning of the modeling process there are often many explanatory variables, and to avoid losing any explanatory variable that may be important, the selection of significant variables becomes necessary. Variable selection is therefore intended not only to simplify the model and its interpretation but also to improve prediction. In this research, several semi-parametric methods (LASSO-MAVE, MAVE, and the proposed Adaptive LASSO-MAVE method) were used to select variables and estimate the semi-parametric single-index model (SSIM) at the same time.
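The full (Adaptive) LASSO-MAVE procedure is not reproduced here; as a hedged sketch of the adaptive-LASSO weighting idea alone, the code below applies an adaptive LASSO to a linear index by rescaling each covariate with weights built from an initial ridge fit. The data, penalty values, and weight construction are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(1)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 1.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Step 1: an initial consistent estimate (ridge) to build the adaptive weights.
beta_init = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(beta_init) + 1e-6)      # adaptive weights, larger for weak signals

# Step 2: solve the adaptive LASSO by rescaling columns, then back-transform.
X_scaled = X / w                          # column j divided by w_j
lasso = Lasso(alpha=0.05).fit(X_scaled, y)
beta_alasso = lasso.coef_ / w

selected = np.flatnonzero(np.abs(beta_alasso) > 1e-8)
print("selected variables:", selected)
```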
Classical principal component analysis is sensitive to outliers because it is computed from the eigenvalues (characteristic values) and eigenvectors of a non-robust correlation or covariance matrix, which yields incorrect results when the data contain outlying values. To treat this problem, we resort to robust methods, of which there are many; some of them will be discussed here.
The robust estimators considered include direct robust estimation of the eigenvalues by using the eigenvectors, without relying on robust estimators of the variance-covariance matrix, as well as the analysis of the principal components …
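One common way to obtain robust principal components, offered here only as an illustrative sketch and not as the specific estimators studied in the paper, is to replace the classical covariance matrix with a robust one (the Minimum Covariance Determinant) before extracting the eigenvalues and eigenvectors:

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0, 0], np.diag([3.0, 1.0, 0.5]), size=100)
X[:5] += 15.0                                  # a few gross outliers

# Classical PCA: eigen-decomposition of the ordinary covariance matrix.
vals_c, vecs_c = np.linalg.eigh(np.cov(X, rowvar=False))

# Robust PCA: eigen-decomposition of the MCD covariance matrix.
mcd = MinCovDet(random_state=0).fit(X)
vals_r, vecs_r = np.linalg.eigh(mcd.covariance_)

print("classical eigenvalues:", np.sort(vals_c)[::-1])
print("robust eigenvalues:   ", np.sort(vals_r)[::-1])
```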
Chemical pollution is a very important issue that people suffer from, and it often affects the health of society and the health of future generations. Consequently, it must be studied in order to discover suitable models and find descriptions that predict its behavior in the coming years. Chemical pollution data in Iraq have a great scope and manifold sources and kinds, which makes them Big Data that need to be studied using novel statistical methods. The research focuses on using a proposed nonparametric procedure (NP method) to develop an OCMT test procedure for estimating the parameters of a linear regression model with a large volume of data (Big Data) comprising many indicators associated with chemical …
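The paper's proposed NP/OCMT procedure itself is not reproduced here; as a rough sketch of the one-covariate-at-a-time multiple-testing idea it builds on, the code below regresses the response on each candidate covariate separately and retains those whose t-statistic exceeds a multiple-testing threshold. The data, threshold, and exponent are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p = 500, 20
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

alpha, delta = 0.05, 1.0
# Critical value grows slowly with the number of candidate covariates.
c_p = stats.norm.ppf(1 - alpha / (2 * p**delta))

selected = []
for j in range(p):
    xj = np.column_stack([np.ones(n), X[:, j]])
    beta, *_ = np.linalg.lstsq(xj, y, rcond=None)
    resid = y - xj @ beta
    sigma2 = resid @ resid / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(xj.T @ xj)[1, 1])
    if abs(beta[1] / se) > c_p:
        selected.append(j)

print("covariates retained by one-at-a-time screening:", selected)
```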
The aim of this research is to use a robust technique based on trimming, since maximum likelihood (ML) analysis often fails when there are outliers in the studied phenomenon and the maximum likelihood estimator (MLE) loses its advantages because of the bad influence caused by the outliers. To address this problem, new statistical methods have been developed that are not affected by the outliers; these methods possess robustness, or resistance. The maximum trimmed likelihood (MTL) is therefore a good alternative for obtaining more acceptable and comparable results, and weights can be used to increase the efficiency of the resulting estimates and the strength of the estimation using the maximum weighted trimmed likelihood (MWTL). In order to perform the …
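As a simplified sketch of the trimming idea only (not the paper's MTL/MWTL estimators), the code below iteratively fits a normal regression, keeps the h observations with the largest likelihood contributions, and refits on that subset; the data and trimming fraction are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)
y[:10] += 25.0                                  # contaminate with outliers

h = int(0.8 * n)                                # number of observations retained
keep = np.arange(n)                             # start from the full sample
for _ in range(20):
    X = np.column_stack([np.ones(len(keep)), x[keep]])
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    resid_all = y - (beta[0] + beta[1] * x)
    sigma = np.std(resid_all[keep], ddof=2)
    # Log-likelihood contribution of every observation under the current fit.
    ll = norm.logpdf(resid_all, scale=sigma)
    new_keep = np.argsort(ll)[-h:]              # keep the h most likely points
    if set(new_keep) == set(keep):
        break
    keep = new_keep

print("trimmed-likelihood estimates (intercept, slope):", beta)
```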
The objective of this research is to shed light on the most important treatments of the problem of missing values in time series data and their influence on simple linear regression. The research deals with the effect of missing values in the independent variable only. This was carried out by introducing a missing value into a time series that was originally complete and testing the influence of that missing value on the simple regression analysis of data from an experiment on the effect of the quantity of consumed ration on broiler weight over 15 weeks. The results showed that the missing value had no significant effect, as the model estimated after the value was removed remained consistent and statistically significant. The results also …
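A minimal sketch of this kind of check, with synthetic weekly data standing in for the ration/broiler-weight experiment (which is not available here): fit the simple regression on the complete series and on the series with one observation deleted, then compare the coefficients and their significance.

```python
import numpy as np
from scipy import stats

# Hypothetical 15-week series: x = consumed ration, y = broiler weight.
weeks = np.arange(1, 16)
x = 10 + 2.0 * weeks + np.random.default_rng(5).normal(scale=1.0, size=15)
y = 0.5 + 0.9 * x + np.random.default_rng(6).normal(scale=0.8, size=15)

def fit(xv, yv):
    slope, intercept, r, p, se = stats.linregress(xv, yv)
    return slope, intercept, p

full = fit(x, y)
# Simplest treatment of a missing x-value: drop that week's (x, y) pair.
missing = fit(np.delete(x, 7), np.delete(y, 7))

print("complete data  (slope, intercept, p-value):", full)
print("one value lost (slope, intercept, p-value):", missing)
```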
Analysis of variance (ANOVA) is one of the most widely used methods in statistics for analyzing the behavior of one variable relative to another. The data were collected from a sample of 65 adult males who were nonsmokers, light smokers, or heavy smokers. The aim of this study is to analyze the effect of cigarette smoking on the high-density lipoprotein cholesterol (HDL-C) level and to determine whether smoking causes a reduction in this level, using the completely randomized design (CRD) and the Kruskal-Wallis method. The results showed that the assumptions of the one-way ANOVA are not satisfied for the original data but are satisfied after a log transformation. From the results, a significantly …
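A hedged sketch of the analysis route described (the HDL-C measurements themselves are not reproduced; the group arrays below are placeholders): check the one-way ANOVA assumptions, apply a log transformation, and compare the parametric F-test with the Kruskal-Wallis test.

```python
import numpy as np
from scipy import stats

# Placeholder HDL-C measurements (mg/dL) for the three smoking groups.
rng = np.random.default_rng(7)
nonsmokers    = rng.lognormal(mean=4.0, sigma=0.20, size=25)
light_smokers = rng.lognormal(mean=3.9, sigma=0.25, size=20)
heavy_smokers = rng.lognormal(mean=3.8, sigma=0.30, size=20)
groups = [nonsmokers, light_smokers, heavy_smokers]

# Assumption checks: normality within groups and homogeneity of variances.
print("Shapiro p-values:", [stats.shapiro(g).pvalue for g in groups])
print("Levene p-value:  ", stats.levene(*groups).pvalue)

# One-way ANOVA (CRD) on the log-transformed data, where the assumptions hold better.
log_groups = [np.log(g) for g in groups]
print("ANOVA on log data:", stats.f_oneway(*log_groups))

# Distribution-free alternative on the original scale.
print("Kruskal-Wallis:   ", stats.kruskal(*groups))
```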
Regression testing is a crucial phase in the software development lifecycle that ensures that new changes or updates to the software system do not introduce defects or adversely affect existing functionality. However, as software systems grow in complexity, the number of test cases in the regression suite can become large, which results in more testing time and resource consumption. In addition, the presence of redundant and faulty test cases may reduce the efficiency of the regression testing process. Therefore, this paper presents a new hybrid framework to exclude similar and faulty test cases in regression testing (ETCPM) that utilizes automated code analysis techniques and historical test execution data to …
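The ETCPM framework itself is not shown here; as an illustrative sketch of the two filtering ideas it combines, the code below removes test cases that are near-duplicates (Jaccard similarity of the code lines they cover) and test cases whose historical failure record marks them as faulty. All names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    covered_lines: set      # code lines/branches the test executes
    runs: int               # historical executions
    false_failures: int     # failures later attributed to the test, not the code

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def reduce_suite(suite, sim_threshold=0.9, faulty_threshold=0.3):
    kept = []
    for tc in suite:
        # Exclude faulty tests based on historical execution data.
        if tc.runs and tc.false_failures / tc.runs > faulty_threshold:
            continue
        # Exclude tests too similar to one already kept.
        if any(jaccard(tc.covered_lines, k.covered_lines) >= sim_threshold for k in kept):
            continue
        kept.append(tc)
    return kept

suite = [
    TestCase("test_login_ok",   {1, 2, 3, 4}, runs=50, false_failures=1),
    TestCase("test_login_copy", {1, 2, 3, 4}, runs=40, false_failures=0),   # redundant
    TestCase("test_logout",     {7, 8, 9},    runs=50, false_failures=30),  # faulty
]
print([t.name for t in reduce_suite(suite)])
```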
The current study aims to compare the estimates of the Rasch model's parameters for missing and complete data under various ways of processing the missing data. To achieve this aim, the researcher followed these steps: preparing the Philip Carter test of spatial ability, which consists of 20 items, administered to a group of 250 sixth scientific stage students in the Baghdad Education directorates of Al-Rusafa (1st, 2nd and 3rd) for the academic year 2018-2019. The researcher then relied on the one-parameter (Rasch) model to analyze the data and used the Bilog-mg3 program to check the hypotheses and the fit of the data to the model. In addition …
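A minimal sketch of the one-parameter (Rasch) model underlying the analysis, assuming a small hypothetical response matrix (the Philip Carter test responses are not available here): the item response probability and a joint maximum-likelihood fit of person abilities and item difficulties.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def rasch_prob(theta, b):
    """P(correct) under the Rasch (1PL) model: ability theta, item difficulty b."""
    return expit(theta[:, None] - b[None, :])

# Hypothetical response matrix: 30 persons x 5 items, 1 = correct, 0 = incorrect.
rng = np.random.default_rng(8)
theta_true = rng.normal(size=30)
b_true = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
U = (rng.uniform(size=(30, 5)) < rasch_prob(theta_true, b_true)).astype(float)

def neg_log_lik(params):
    theta, b = params[:30], params[30:]
    p = rasch_prob(theta, b)
    return -np.sum(U * np.log(p) + (1 - U) * np.log(1 - p))

res = minimize(neg_log_lik, np.zeros(35), method="L-BFGS-B")
b_hat = res.x[30:] - res.x[30:].mean()     # centre difficulties for identifiability
print("estimated item difficulties:", np.round(b_hat, 2))
```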