This research deals with a shrinkage method for principal components similar to the one used in multiple regression, the Least Absolute Shrinkage and Selection Operator (LASSO). The goal here is to build uncorrelated linear combinations from only a subset of the explanatory variables, which may suffer from a multicollinearity problem, instead of taking the whole number (K) of them. The shrinkage forces some coefficients to equal zero by placing a restriction on them through a "tuning parameter" (t), which balances the amounts of bias and variance on one side while keeping the percentage of variance explained by the components at an acceptable level on the other. This is shown by the MSE criterion in the regression case and by the percentage of explained variance in the principal components case.
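A minimal sketch of the idea using scikit-learn's SparsePCA, an L1-penalized variant of PCA in which the penalty weight alpha plays the role the tuning parameter (t) plays above; the data and parameter values are illustrative assumptions, not those of the research:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
# Illustrative data: correlated explanatory variables (multicollinearity
# by construction), not the study's data.
z = rng.normal(size=(200, 3))
X = np.column_stack([z[:, 0], z[:, 0] + 0.05 * rng.normal(size=200),
                     z[:, 1], z[:, 1] + 0.05 * rng.normal(size=200),
                     z[:, 2]])

# Classic PCA: every variable loads on every component.
pca = PCA(n_components=2).fit(X)

# L1-penalized PCA: the penalty (alpha, standing in for t) forces
# some loadings exactly to zero, so each component uses only a
# subset of the variables.
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

print("dense loadings:\n", pca.components_.round(2))
print("sparse loadings:\n", spca.components_.round(2))
```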
Classic principal components analysis is sensitive to outliers, because the components are calculated from the characteristic values (eigenvalues) and characteristic vectors (eigenvectors) of a non-robust correlation or variance-covariance matrix, which yields incorrect results when the data contain outlying values. To treat this problem, we resort to robust methods; many such methods exist, and some of them are touched on here.
The robust estimators include direct robust estimation of the characteristic values by using the characteristic vectors, without relying on robust estimators of the variance and covariance matrices. Also the analysis of the principal…
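One common route to robust components, sketched below, is to replace the classical covariance matrix with a robust estimate such as the Minimum Covariance Determinant and then take its eigenvalues and eigenvectors; this illustrates the robust-versus-classical contrast, and is not necessarily the direct estimator described above:

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
# Illustrative data: clean correlated sample plus injected outliers.
X = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=200)
X[:10] = rng.normal(0, 10, size=(10, 2))

# Classical PCA: eigendecomposition of the ordinary covariance matrix,
# which the outliers distort badly.
evals_c, evecs_c = np.linalg.eigh(np.cov(X, rowvar=False))

# Robust PCA: eigendecomposition of the MCD covariance estimate,
# which downweights the outlying rows.
mcd = MinCovDet(random_state=0).fit(X)
evals_r, evecs_r = np.linalg.eigh(mcd.covariance_)

print("classical eigenvalues:", evals_c.round(2))
print("robust eigenvalues:   ", evals_r.round(2))
```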
This research aims to study dimension-reduction methods that overcome the curse of dimensionality, which arises when traditional methods fail to provide good estimates of the parameters, so the problem must be dealt with directly. Two methods were used to handle high-dimensional data. The first is the non-classical Sliced Inverse Regression (SIR) method together with the proposed Weighted Standard SIR (WSIR) method; the second is Principal Components Analysis (PCA), the general method used for reducing dimensions. Both SIR and PCA are based on forming linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear…
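A minimal sketch of plain SIR (not the proposed WSIR): standardize X, slice the observations by the order of y, and extract the leading eigenvectors of the covariance of the slice means. The slice count, link function, and data are illustrative assumptions:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Basic Sliced Inverse Regression: e.d.r. directions from the
    covariance of slice-wise means of the whitened X."""
    n, p = X.shape
    # Whiten X with the sample covariance: Z = (X - mean) @ L,
    # where L @ L.T = inv(cov), so cov(Z) = I.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = Xc @ L
    # Slice the observations by the order of y.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Weighted covariance of the slice means of Z.
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    evals, evecs = np.linalg.eigh(M)
    # Map the leading eigenvectors back to the original X scale.
    dirs = L @ evecs[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
b = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
y = (X @ b) ** 3 + 0.1 * rng.normal(size=500)
# Recovered direction ~ proportional to (1, 2, 0, 0, 0), up to sign.
print(sir_directions(X, y).ravel().round(2))
```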
The idea of carrying out research on incomplete data came from the circumstances of our dear country and the horrors of war, which caused many important data to go missing in all aspects of economic, natural, health, and scientific life. The reasons for missingness differ: some lie outside the will of those concerned, while some follow from their will and are planned, because of cost, risk, or a lack of means of inspection. The missing data in this study were processed using Principal Component Analysis and self-organizing map methods, using simulation. Variables of child health, and variables affecting children's health, were taken into account: breastfeed…
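The abstract does not specify which PCA-based imputation variant was used, so the following is a sketch of one standard scheme, iterative low-rank reconstruction: fill the missing cells with column means, then alternately refit a rank-k PCA approximation and overwrite the missing cells with the reconstruction:

```python
import numpy as np

def pca_impute(X, n_components=2, n_iter=50):
    """Iterative PCA imputation: start from column-mean fills, then
    repeatedly overwrite the missing cells with a rank-k SVD
    reconstruction of the current matrix."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.nonzero(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        X[miss] = recon[miss]
    return X

# Illustrative simulated data (rank-2 signal plus noise), not the
# study's child-health variables.
rng = np.random.default_rng(3)
latent = rng.normal(size=(100, 2))
full = latent @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(100, 5))
X = full.copy()
X[rng.random(X.shape) < 0.1] = np.nan      # 10% missing at random
print("mean abs error at missing cells:",
      np.abs(pca_impute(X) - full)[np.isnan(X)].mean().round(3))
```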
The tax system, like any other system, is a set of elements and parts that complement one another, interrelate, and interact to achieve specific goals. It is a natural reflection of the economic, social, and political conditions prevailing in society; therefore, the objectives of tax policy are formulated in line with the objectives of economic policy in general, which means that any change in economic policy clearly affects fiscal policy measures, and tax policy in particular.
The research problem centred on the impact of foreign direct investment on the Iraqi tax system: studying foreign direct investment and the role it plays in developing and improving the economic reality, and its implications…
Phenomena often suffer from disturbances in their data as well as difficulty of formulation, especially when the response is unclear or when a large number of essential differences plague the experimental units from which the data were taken. Hence the need arose for an estimation method with implicit rating of these experimental units, using discrimination or by creating blocks for each item of the experimental units, in the hope of controlling their responses and making them more homogeneous. Because of developments in the field of computers, and on the principle of the integration of sciences, it has been found that modern algorithms used in the field of computer science, such as the genetic algorithm or ant colony…
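A minimal sketch of the kind of algorithm referred to: a simple genetic algorithm that assigns experimental units to blocks so as to minimize total within-block variance. The fitness function, operators, and parameter values are illustrative assumptions, not those of the research:

```python
import numpy as np

rng = np.random.default_rng(4)
units = rng.normal(size=40)            # one measurement per experimental unit
N_BLOCKS, POP, GENS = 4, 60, 200

def cost(assign):
    # Total within-block variance: low cost = homogeneous blocks.
    return sum(units[assign == b].var() for b in range(N_BLOCKS)
               if np.any(assign == b))

# Each individual is a block assignment for all 40 units.
pop = rng.integers(0, N_BLOCKS, size=(POP, units.size))
for _ in range(GENS):
    costs = np.array([cost(ind) for ind in pop])
    # Tournament selection: keep the better of random pairs.
    i, j = rng.integers(0, POP, size=(2, POP))
    parents = np.where((costs[i] < costs[j])[:, None], pop[i], pop[j])
    # One-point crossover between consecutive parents.
    cut = rng.integers(1, units.size, size=POP)
    mask = np.arange(units.size) < cut[:, None]
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # Mutation: reassign a few genes to random blocks.
    mut = rng.random(children.shape) < 0.02
    children[mut] = rng.integers(0, N_BLOCKS, size=mut.sum())
    pop = children

best = pop[np.argmin([cost(ind) for ind in pop])]
print("best within-block variance:", round(cost(best), 4))
```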
The bi-level programming problem is to minimize or maximize an objective function while another objective function lies within the constraints. This problem has received a great deal of attention in the programming community due to the proliferation of applications and the use of evolutionary algorithms in addressing it. Two non-linear bi-level programming methods are used in this paper. The goal is to reach the optimal solution through simulation, using the Monte Carlo method with different small and large sample sizes. The research found that the branch-and-bound algorithm was preferable for solving the non-linear bi-level programming problem, because its results were better.
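A minimal Monte Carlo sketch on a toy non-linear bi-level problem (the problem and sample size are illustrative assumptions, not those of the paper): the leader's decision is sampled at random, and for each sample the follower's inner problem is solved numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)

def follower(x):
    # Inner (lower-level) problem: the follower minimizes its own
    # objective given the leader's decision x.
    res = minimize_scalar(lambda y: (y - x) ** 2 + np.sin(y),
                          bounds=(-2.0, 2.0), method="bounded")
    return res.x

def leader_objective(x):
    y = follower(x)                    # follower reacts optimally
    return (x - 1.0) ** 2 + x * y      # leader's outer objective

# Monte Carlo over the leader's feasible set [-2, 2].
samples = rng.uniform(-2.0, 2.0, size=5000)
values = np.array([leader_objective(x) for x in samples])
best = samples[np.argmin(values)]
print(f"approx leader optimum: x = {best:.3f}, F = {values.min():.3f}")
```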
In this research we try to shed light on one of the methods of estimating the structural parameters of linear simultaneous equation models, a method that provides us with consistent estimates which sometimes differ from those obtained by the other traditional methods under the general formula of the K-CLASS estimators. This method is known as the Limited Information Maximum Likelihood (LIML) method, or the Least Variance Ratio (LVR) method…
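A minimal sketch of LIML estimation of a single structural equation, assuming the third-party linearmodels package and a simulated simultaneous system (the data and true coefficients are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IVLIML   # third-party: pip install linearmodels

rng = np.random.default_rng(6)
n = 500
z = rng.normal(size=(n, 2))                   # instruments
u = rng.normal(size=n)                        # structural error
# x is endogenous: it depends on the same error u that enters y.
x_endog = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n)
y = 2.0 + 1.5 * x_endog + u                   # true slope: 1.5

data = pd.DataFrame({"y": y, "x": x_endog, "z1": z[:, 0], "z2": z[:, 1]})
data["const"] = 1.0

# LIML, equivalently the least-variance-ratio (LVR) estimator.
res = IVLIML(data.y, data[["const"]], data[["x"]],
             data[["z1", "z2"]]).fit()
print(res.params)   # should be close to (2.0, 1.5)
```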
The Principal Components and Partial Least Squares methods can be regarded as very important methods in regression analysis, whe…
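A minimal sketch contrasting the two methods with scikit-learn: principal components regression forms components without looking at y, while partial least squares chooses components that covary with y. The data and component counts are illustrative assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))
# Make columns 5..9 nearly copies of columns 0..4 (collinearity).
X[:, 5:] = X[:, :5] + 0.05 * rng.normal(size=(200, 5))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)

# Principal Components Regression: unsupervised components, then OLS.
pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)

# Partial Least Squares: components chosen for covariance with y.
pls = PLSRegression(n_components=3).fit(X, y)

print("PCR R^2:", round(pcr.score(X, y), 3))
print("PLS R^2:", round(pls.score(X, y), 3))
```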
In this paper, the parameters and the reliability function of the Transmuted Power Function (TPF) distribution are estimated using several estimation methods, with a proposed new technique, for the White, percentile, least squares, weighted least squares, and modified moment methods. Simulation was used to generate random data following the TPF distribution in three experiments (E1, E2, E3) over real values of the parameters, with sample sizes (n = 10, 25, 50, 100), (N = 1000) replicated samples, and reliability times (0 < t < 1). Comparisons were made between the results obtained from the estimators using the mean square error (MSE). The results showed that the…
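A minimal sketch of one of the listed methods, least squares on the empirical CDF, assuming the usual quadratic rank transmutation form of the TPF distribution, F(x) = (1 + λ)x^α - λx^(2α) on (0, 1) with |λ| ≤ 1 (the true parameter values below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def tpf_cdf(x, alpha, lam):
    # Transmuted power function CDF on (0, 1):
    # F(x) = (1 + lam) * x**alpha - lam * x**(2*alpha), |lam| <= 1.
    g = x ** alpha
    return (1 + lam) * g - lam * g ** 2

def tpf_sample(n, alpha, lam, rng):
    # Inverse-transform sampling: solve the quadratic in G = x**alpha
    # (lam != 0 assumed here).
    u = rng.random(n)
    g = ((1 + lam) - np.sqrt((1 + lam) ** 2 - 4 * lam * u)) / (2 * lam)
    return g ** (1 / alpha)

def ls_estimate(x):
    # Least squares against the empirical CDF:
    # minimize sum_i (F(x_(i)) - i/(n+1))^2 over (alpha, lam).
    x = np.sort(x)
    p = np.arange(1, len(x) + 1) / (len(x) + 1)
    obj = lambda th: np.sum((tpf_cdf(x, th[0], th[1]) - p) ** 2)
    return minimize(obj, x0=[1.0, 0.1], bounds=[(0.01, 20), (-1, 1)]).x

rng = np.random.default_rng(8)
x = tpf_sample(100, alpha=2.0, lam=0.5, rng=rng)
print(ls_estimate(x).round(3))   # ~ (2.0, 0.5)
```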
The logistic regression model is one of the non-linear models that aims at obtaining highly efficient estimates; it also gives the researcher an idea of the effect of the explanatory variable on the binary response variable…
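A minimal sketch fitting a logistic regression with statsmodels and reading the effect of the explanatory variable on the binary response as an odds ratio (the data are simulated as an illustrative assumption):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
x = rng.normal(size=300)
# True model: log-odds of the binary response = -0.5 + 1.2 * x.
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))
y = rng.binomial(1, p)

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params.round(2))                          # ~ (-0.5, 1.2)
# exp(coefficient): multiplicative change in the odds per unit of x.
print("odds ratio per unit x:", np.exp(fit.params[1]).round(2))
```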