In this research, a nonparametric technique is presented for estimating the time-varying coefficient functions of balanced longitudinal data, characterized by observations obtained from n independent subjects, each measured repeatedly at a set of m fixed time points. Although measurements are independent across subjects, they are typically correlated within each subject. The applied technique is the local linear kernel (LLPK) technique. To avoid the problems of dimensionality and heavy computation, the two-step method is used to estimate the coefficient functions with this technique. Since the two-step method relies on ordinary least squares (OLS) estimation, which is sensitive to non-normality in the data and contamination of the errors, robust methods such as LAD and M estimation are proposed to strengthen the two-step method against non-normality and error contamination. Simulation experiments were performed to verify the performance of the classical and robust methods for the local linear kernel (LLPK) technique using two criteria, for different sample sizes and levels of variability; a sketch of the smoother is given below.
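As a hedged illustration of the LLPK smoother named above (not the authors' exact two-step procedure), the following Python sketch fits a weighted straight line at each target point with a Gaussian kernel; the data, bandwidth, and function names are hypothetical:

```python
import numpy as np

def local_linear(t_grid, t_obs, y_obs, h):
    """Local linear kernel (LLPK) estimate of a coefficient function.

    For each target point t0, fit a weighted straight line to the data
    with Gaussian kernel weights K((t - t0)/h); the fitted intercept is
    the estimate at t0.
    """
    est = np.empty(len(t_grid))
    for i, t0 in enumerate(t_grid):
        u = (t_obs - t0) / h
        w = np.exp(-0.5 * u**2)                      # Gaussian kernel weights
        X = np.column_stack([np.ones_like(t_obs), t_obs - t0])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_obs)
        est[i] = beta[0]                             # intercept = value at t0
    return est

# toy balanced longitudinal data: n subjects, m time points each
rng = np.random.default_rng(0)
n, m = 30, 10
t = np.tile(np.linspace(0, 1, m), n)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, n * m)
fhat = local_linear(np.linspace(0, 1, 50), t, y, h=0.1)
```

A robust variant in the spirit of the LAD or M approach would replace the weighted least-squares solve with an iteratively reweighted fit that downweights large residuals.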
Antioxidant status imbalance and inflammatory processes are cooperating events involved in type 2 diabetes mellitus. This study aimed to investigate superoxide dismutase as a potential biomarker of antioxidant imbalance, and matrix metalloproteinase-9 and interleukin-18 as biomarkers of inflammation, in serum; to estimate the effects of the confounding factors gender and age; and finally to measure the relations among the biomarkers of interest.
This case-control study included 50 patients, and 45 healthy age- and gender-matched subjects were also enrolled as a control group. The focused …
This paper deals with Bayesian estimation of the parameters of the Gamma distribution under a generalized weighted loss function, based on Gamma and Exponential priors for the shape and scale parameters, respectively. Moment and maximum likelihood estimators and Lindley's approximation have been used in the Bayesian estimation. Based on the Monte Carlo simulation method, these estimators are compared in terms of their mean squared errors (MSEs); a sketch of such a comparison follows.
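As a rough sketch of how the moment and maximum likelihood estimators could be compared by Monte Carlo MSE (the Bayesian and Lindley's-approximation steps are omitted, and the true parameter values, sample size, and replication count are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha_true, beta_true = 2.0, 1.5      # assumed shape and scale
n, reps = 50, 1000

mom, mle = [], []
for _ in range(reps):
    x = rng.gamma(alpha_true, beta_true, size=n)
    # moment estimators: alpha = xbar^2 / s^2, beta = s^2 / xbar
    xbar, s2 = x.mean(), x.var(ddof=1)
    mom.append((xbar**2 / s2, s2 / xbar))
    # maximum likelihood via scipy, location fixed at 0
    a_hat, _, scale_hat = stats.gamma.fit(x, floc=0)
    mle.append((a_hat, scale_hat))

mom, mle = np.array(mom), np.array(mle)
mse = lambda est, truth: np.mean((est - np.array(truth))**2, axis=0)
print("MSE moment:", mse(mom, [alpha_true, beta_true]))
print("MSE MLE:   ", mse(mle, [alpha_true, beta_true]))
```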
Copula modeling is widely used in modern statistics. The boundary bias problem is one of the problems faced in nonparametric estimation, and kernel estimators are the most common nonparametric estimators. In this paper, the copula density function is estimated using the probit-transformation nonparametric method in order to remove the boundary bias problem that kernel estimators suffer from. A simulation study compares three nonparametric methods for estimating the copula density function, and we propose a new method that outperforms the others, across five types of copulas, different sample sizes, different levels of correlation between the copula variables, and different function parameters. The …
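A minimal sketch of the probit-transformation idea, assuming Gaussian-copula pseudo-observations and scipy's Gaussian KDE (not necessarily the exact estimator or bandwidth rule used in the paper):

```python
import numpy as np
from scipy import stats

def copula_density_probit(u, v, eval_u, eval_v):
    """Probit-transformation kernel estimate of a copula density.

    Pseudo-observations (u, v) in (0,1)^2 are mapped to R^2 with the
    normal quantile function, a Gaussian KDE g is fitted there, and the
    estimate is carried back by the change-of-variables formula
      c(u,v) = g(Phi^-1(u), Phi^-1(v)) / (phi(Phi^-1(u)) phi(Phi^-1(v))),
    which removes the boundary bias of a plain kernel estimator.
    """
    s, t = stats.norm.ppf(u), stats.norm.ppf(v)
    kde = stats.gaussian_kde(np.vstack([s, t]))
    es, et = stats.norm.ppf(eval_u), stats.norm.ppf(eval_v)
    g = kde(np.vstack([es, et]))
    return g / (stats.norm.pdf(es) * stats.norm.pdf(et))

# toy pseudo-observations from a Gaussian copula with rho = 0.5
rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], size=500)
u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])
print(copula_density_probit(u, v, np.array([0.3]), np.array([0.7])))
```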
The collapse of a vapor bubble condensing in an immiscible liquid is investigated for n-pentane and n-hexane vapors condensing in cold water, and for n-pentane in two different compositions of a glycerin-water mixture. The rise velocity and the drag coefficient of the two-phase bubble are measured.
Crystalline silicon (c-Si) has low optical absorption due to its high surface reflection of incident light. Nanotexturing of c-Si to produce black silicon (b-Si) offers a promising solution. In this work, the effect of H2O2 concentration on the surface morphology and optical properties of b-Si fabricated by two-step silver-assisted wet chemical etching (Ag-based two-step MACE) for potential photovoltaic (PV) applications is presented. The method involves a 30 s deposition of silver nanoparticles (Ag NPs) in an aqueous AgNO3:HF (5:6) solution and an optimized etch in HF:H2O2:DI H2O solution under 0.62 M, 1.85 M, 2.47 M, and 3.7 M concentrations of H2O2 …
Cluster analysis (clustering) is mainly concerned with dividing a set of data elements into clusters. The paper applies this method to group comparable government agencies, with the aim of classifying them and understanding how close they are to each other in terms of administrative and financial corruption, by means of five variables representing the prevalent administrative and financial corruption in state institutions. Cluster analysis is applied to each of these variables to understand the extent to which the agencies are close to one another in each of the cases related to administrative and financial corruption; a clustering sketch on data of this shape follows.
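As an illustrative sketch only, with hypothetical data standing in for the agencies and the five corruption-indicator variables (the paper's clustering algorithm and cluster count are not specified here), Ward's hierarchical clustering could look like this:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# hypothetical matrix: rows = government agencies, columns = five
# corruption-indicator variables, standardized before clustering
rng = np.random.default_rng(3)
X = rng.normal(size=(12, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Ward's hierarchical clustering on Euclidean distances
Z = linkage(X, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")   # cut into 3 clusters
print(labels)                                     # cluster id per agency
```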
In some cases, researchers need to know the causal effect of a treatment in order to judge how strongly the treatment affects the sample, and hence whether to continue the treatment or stop it because it is of no use. The local weighted least squares method was used to estimate the parameters of the fuzzy regression discontinuity model, and the local polynomial method was used to select the bandwidth. Data were generated with sample sizes (75, 100, 125, 150) and 1000 replications. An experiment was conducted at the Innovation Institute for remedial lessons in 2021 on 72 students participating in the institute, and the data were collected. Those who received the treatment had an increase in their score after …
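A minimal sketch of local weighted least squares for a sharp (not fuzzy) regression discontinuity, with a triangular kernel, toy data, and an assumed fixed bandwidth rather than the paper's local polynomial bandwidth selector; the fuzzy variant would additionally rescale the outcome jump by the jump in treatment take-up:

```python
import numpy as np

def rd_local_linear(x, y, cutoff, h):
    """Sharp RD effect by local linear weighted least squares.

    Fit a weighted line separately on each side of the cutoff using a
    triangular kernel with bandwidth h; the treatment effect is the
    jump between the two fitted values at the cutoff.
    """
    def side_fit(mask):
        xs, ys = x[mask], y[mask]
        w = np.maximum(0, 1 - np.abs(xs - cutoff) / h)  # triangular kernel
        X = np.column_stack([np.ones_like(xs), xs - cutoff])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys)
        return beta[0]                                   # fit at the cutoff
    return side_fit(x >= cutoff) - side_fit(x < cutoff)

# toy data with a true jump of 2 at cutoff 0
rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 150)
y = 1 + 0.5 * x + 2 * (x >= 0) + rng.normal(0, 0.3, 150)
print(rd_local_linear(x, y, cutoff=0.0, h=0.4))
```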
Mixture experiment models suffer from high correlation and linear multicollinearity between the explanatory variables, because the unit-sum constraint and the interaction terms in the model strengthen the relationships among those variables; this is shown by the variance inflation factor (VIF). L-pseudo components are used to reduce the correlation between the components of the mixture.
To estimate the parameters of the mixture model, we used methods that introduce bias to reduce variance, namely the Ridge Regression method and the Least Absolute Shrinkage and Selection Operator (LASSO) method, a…
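As a hedged sketch of the two shrinkage estimators on mixture-style data (the simulated data, VIF computation, and penalty values are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

# hypothetical mixture data: three components of a four-component
# Dirichlet mixture, so the unit-sum constraint makes the columns
# strongly (but not perfectly) correlated
rng = np.random.default_rng(5)
X = rng.dirichlet([2, 2, 2, 2], size=100)[:, :3]
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(0, 0.1, 100)

def vif(X):
    """VIF of each column: 1 / (1 - R^2) from regressing it on the rest."""
    out = []
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        r2 = 1 - (X[:, j] - A @ beta).var() / X[:, j].var()
        out.append(1 / (1 - r2))
    return np.array(out)

print("VIF:  ", vif(X))
print("ridge:", Ridge(alpha=1.0).fit(X, y).coef_)    # L2 shrinkage
print("lasso:", Lasso(alpha=0.01).fit(X, y).coef_)   # L1 shrinkage/selection
```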
Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, SVM is widely used, selecting an optimal hyperplane that separates two classes. SVM has very good accuracy and is extremely robust compared with some other classification methods such as logistic regression, random forest, k-nearest neighbors, and the naïve model. However, working with large datasets can cause problems such as long training times and inefficient results. In this paper, the SVM is modified by using a stochastic gradient descent process. The modified method, stochastic gradient descent SVM (SGD-SVM), is checked using two simulated datasets. Since the classification of different ca…
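A minimal sketch of the SGD-SVM idea, training a linear SVM by stochastic gradient descent on the regularized hinge loss; the toy data and hyperparameters are assumptions, not the paper's experimental settings:

```python
import numpy as np

def sgd_svm(X, y, lam=0.01, epochs=20, lr=0.1):
    """Linear SVM trained by SGD on the regularized hinge loss
    lam/2 ||w||^2 + mean(max(0, 1 - y (Xw + b))), with y in {-1, +1}.
    """
    rng = np.random.default_rng(0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):            # one sample per step
            margin = y[i] * (X[i] @ w + b)
            grad_w = lam * w - (y[i] * X[i] if margin < 1 else 0)
            grad_b = -y[i] if margin < 1 else 0.0
            w -= lr * grad_w
            b -= lr * grad_b
    return w, b

# toy linearly separable two-class data
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = sgd_svm(X, y)
print("accuracy:", np.mean(np.sign(X @ w + b) == y))
```

Processing one randomly chosen sample per update is what makes the method scale to large datasets, at the cost of noisier convergence than the batch quadratic-programming solution.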