Solar photovoltaic (PV) power generation is being adopted worldwide because of its environmentally clean characteristics, and PV power systems are therefore growing rapidly in both the number and scale of their applications. Despite their robustness, PV arrays remain susceptible to certain faults. A reliable supply requires economic returns, the safety of equipment and personnel, and accurate tools for fault identification, diagnosis, and interruption. Undetected arc faults, in particular, pose serious fire hazards to commercial, residential, and utility-scale PV systems, so detecting them at an early stage is crucial for the secure and dependable delivery of electricity. In this paper, a detailed review of modern approaches for the identification of DC arc faults in PV systems is presented. In addition, a thorough comparison is made between various DC arc-fault models, their characteristics, and the approaches used to identify the faults.
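Many of the detection approaches covered by such reviews exploit the broadband high-frequency noise that a series DC arc superimposes on the string current. The sketch below is a minimal, hypothetical illustration of that general idea, not any specific method from the paper: it computes the energy of the current signal in an assumed detection band and flags an arc when the energy exceeds a threshold; the sampling rate, band limits, and threshold are illustrative assumptions.

```python
import numpy as np

def arc_band_energy(current, fs=250_000, band=(40_000, 100_000)):
    """Energy of the PV string current in a high-frequency band.

    Series DC arcs inject broadband noise; a rise in energy above the
    inverter switching harmonics is a commonly used indicator.
    fs and band are illustrative assumptions, not values from the paper.
    """
    spectrum = np.abs(np.fft.rfft(current - current.mean())) ** 2
    freqs = np.fft.rfftfreq(len(current), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum() / len(current)

def arc_suspected(current, threshold=1e-3, **kwargs):
    """Flag a possible arc when the band energy exceeds a tuned threshold."""
    return arc_band_energy(current, **kwargs) > threshold
```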
Modeling a photovoltaic module requires calculating the basic parameters that govern its current-voltage characteristic curves, which are not provided by the manufacturer. For monocrystalline silicon modules the shunt resistance is generally high, so it is neglected in this model. In this study, three methods are presented for the four-parameter model: an explicit simplified method based on an analytical solution, a slope method based on manufacturer data, and an iterative method based on numerical resolution. The results obtained with these methods were compared with experimentally measured data. The iterative method was more accurate than the other two but also more complex. The average deviation of
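For context, the four-parameter (single-diode, shunt resistance neglected) model expresses the module current implicitly as I = I_ph - I_0 (exp((V + I R_s)/(a V_t)) - 1). The sketch below solves this equation with Newton's method, the kind of numerical resolution an iterative approach relies on; the parameter values are placeholders, not values estimated in the study.

```python
import numpy as np

def module_current(V, I_ph, I_0, R_s, a, V_t, tol=1e-9, max_iter=100):
    """Solve the implicit four-parameter diode equation for I at voltage V.

    Model: I = I_ph - I_0 * (exp((V + I*R_s) / (a*V_t)) - 1), R_sh neglected.
    Newton iteration on f(I) = I_ph - I_0*(exp(...) - 1) - I = 0.
    """
    I = I_ph  # the photocurrent is a reasonable initial guess
    for _ in range(max_iter):
        e = np.exp((V + I * R_s) / (a * V_t))
        f = I_ph - I_0 * (e - 1.0) - I
        df = -I_0 * e * R_s / (a * V_t) - 1.0
        step = f / df
        I -= step
        if abs(step) < tol:
            break
    return I

# Illustrative parameter values only (not from the paper):
print(module_current(V=30.0, I_ph=8.2, I_0=1e-9, R_s=0.35, a=1.3, V_t=1.54))
```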
In this paper, estimation of the reliability of a multi-component stress-strength model R(s,k) is considered, where stress and strength are independent random variables following the Exponentiated Weibull Distribution (EWD) with known first shape parameter θ and unknown second shape parameter α, which is estimated using different methods. Comparisons among the proposed estimators were made through a Monte Carlo simulation technique based on the mean squared error (MSE) criterion.
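As a rough illustration of such a simulation setup (a sketch under assumed parameter values, not the paper's design), the multi-component reliability R(s,k), the probability that at least s of k strength variables exceed a common stress, can be approximated by Monte Carlo sampling from the EWD via its inverse CDF, taking F(x) = [1 - exp(-x^θ)]^α with unit scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def rewd(n, theta, alpha):
    """Draw n samples from the Exponentiated Weibull distribution
    with CDF F(x) = (1 - exp(-x**theta))**alpha (scale taken as 1)."""
    u = rng.uniform(size=n)
    return (-np.log(1.0 - u ** (1.0 / alpha))) ** (1.0 / theta)

def reliability_sk(s, k, theta, alpha_strength, alpha_stress, n_sim=100_000):
    """Monte Carlo estimate of R(s,k) = P(at least s of k strengths > stress)."""
    strengths = rewd(n_sim * k, theta, alpha_strength).reshape(n_sim, k)
    stress = rewd(n_sim, theta, alpha_stress)
    exceed = (strengths > stress[:, None]).sum(axis=1)
    return (exceed >= s).mean()

# Illustrative values only:
print(reliability_sk(s=2, k=4, theta=2.0, alpha_strength=3.0, alpha_stress=1.5))
```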
In this study, different methods were used to estimate the location and scale parameters of the extreme value distribution, such as maximum likelihood estimation (MLE), the method of moments (ME), and percentile-based approximation estimators known as the White method, since the extreme value distribution is one of the exponential-type distributions. Ordinary least squares estimation (OLS), weighted least squares estimation (WLS), ridge regression estimation (Rig), and adjusted ridge regression estimation (ARig) were also used. Two parameters for expected value to the percentile as estimation for distribution f
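For the Gumbel (type I extreme value) distribution, for example, the method-of-moments estimators have closed forms (scale = sqrt(6) s / pi, location = mean - γ · scale, with γ the Euler-Mascheroni constant), while MLE requires numerical optimization. The sketch below contrasts the two on simulated data; it illustrates the standard estimators, not the paper's code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = stats.gumbel_r.rvs(loc=10.0, scale=2.0, size=500, random_state=rng)

# Method of moments for the Gumbel distribution:
scale_mom = np.sqrt(6.0) * x.std(ddof=1) / np.pi
loc_mom = x.mean() - np.euler_gamma * scale_mom

# Maximum likelihood (numerical optimization) via scipy:
loc_mle, scale_mle = stats.gumbel_r.fit(x)

print("MoM:", loc_mom, scale_mom)
print("MLE:", loc_mle, scale_mle)
```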
The question of the existence of God Almighty, His creation of the universes and all the diverse beings within them, and faith in Him and in His lordship and divinity, is a delicate, highly important, and serious issue. It has occupied human thought in ancient and modern times, and will continue to occupy it until God inherits the earth and all who are on it. Many complex issues of thought, behavior, and ethics have arisen from the belief of many communities in the existence of the Almighty, a belief that governed their minds, shaped their convictions, and kept their thoughts from slipping into error. When they looked at the wonders of creatures and the subtleties of existence, they reflected on the systems of planetary and astronomical motion. His existence was denied by ath
In this paper, the weighted residual methods, namely the Collocation Method (CM), the Least Squares Method (LSM), and the Galerkin Method (GM), are used to solve the thin film flow (TFF) equation. The weighted residual methods were implemented to obtain an approximate solution to the TFF equation, and the accuracy of the results is checked by computing the maximum error remainder functions (MER). Moreover, the outcomes were compared with the fourth-order Runge-Kutta method (RK4), and good agreement was achieved. All computations were carried out using the computer algebra system Mathematica®10.
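As a small illustration of the collocation variant of the weighted residual idea (applied here to a simple linear boundary value problem, not to the TFF equation itself), one chooses a trial solution that satisfies the boundary conditions, forms the residual of the differential equation, and forces it to vanish at selected collocation points.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# Example problem (illustrative, not the TFF equation):
#   y'' + y + x = 0,  y(0) = y(1) = 0
# Trial function that satisfies both boundary conditions:
y_trial = x * (1 - x) * (c1 + c2 * x)

# Residual of the differential equation for the trial function
residual = sp.diff(y_trial, x, 2) + y_trial + x

# Collocation: force the residual to vanish at two interior points
eqs = [residual.subs(x, pt) for pt in (sp.Rational(1, 3), sp.Rational(2, 3))]
coeffs = sp.solve(eqs, [c1, c2])

y_approx = sp.simplify(y_trial.subs(coeffs))
y_exact = sp.sin(x) / sp.sin(1) - x   # exact solution, for comparison
print(y_approx)
print(float((y_approx - y_exact).subs(x, 0.5)))  # pointwise error at x = 0.5
```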
Classification of imbalanced data is an important issue. Many classification algorithms have been developed, such as Back Propagation (BP) neural networks, decision trees, and Bayesian networks, and they have been applied in many fields. These algorithms suffer from the problem of imbalanced data, where some classes contain far more instances than others. Imbalanced data result in poor performance and a bias toward the majority class at the expense of the others. In this paper, we propose three techniques based on over-sampling (O.S.) for processing an imbalanced dataset, redistributing it, and converting it into a balanced one. These techniques are the Improved Synthetic Minority Over-Sampling Technique (Improved SMOTE), Border
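The baseline these techniques build on, standard SMOTE, creates synthetic minority samples by interpolating between a minority instance and one of its nearest minority-class neighbors. The following is a minimal sketch of that baseline only, not of the improved variants proposed in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_minority, n_synthetic, k=5, seed=0):
    """Generate n_synthetic samples by interpolating each chosen minority
    point toward one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_minority)
    _, idx = nn.kneighbors(X_minority)          # idx[:, 0] is the point itself
    samples = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_minority))
        j = idx[i, rng.integers(1, k + 1)]      # a random one of the k neighbors
        gap = rng.random()
        samples.append(X_minority[i] + gap * (X_minority[j] - X_minority[i]))
    return np.vstack(samples)

# Usage: oversample a toy minority class of 20 points in 2-D
X_min = np.random.default_rng(1).normal(size=(20, 2))
X_new = smote(X_min, n_synthetic=80)
print(X_new.shape)   # (80, 2)
```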
Mixture experiments suffer from high correlation and linear multicollinearity among the explanatory variables, because of the unit-sum constraint on the components and the interaction terms in the model, which strengthen the relationships among the explanatory variables; this is reflected in the variance inflation factor (VIF). L-pseudo components are used to reduce the dependence among the components of the mixture.
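The VIF diagnostic mentioned here is computed for each explanatory variable as 1/(1 - R²), where R² comes from regressing that variable on the remaining ones. The sketch below uses the statsmodels helper for this on synthetic correlated predictors, purely as an illustration rather than the paper's mixture data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Illustrative data: three correlated predictors (not the paper's mixture data)
rng = np.random.default_rng(2)
z = rng.normal(size=200)
X = pd.DataFrame({
    "x1": z + rng.normal(scale=0.3, size=200),
    "x2": z + rng.normal(scale=0.3, size=200),
    "x3": rng.normal(size=200),
})
X.insert(0, "const", 1.0)  # statsmodels expects an explicit intercept column

# VIF_j = 1 / (1 - R_j^2), from regressing x_j on the remaining columns
for j, name in enumerate(X.columns[1:], start=1):
    print(name, variance_inflation_factor(X.values, j))
```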
To estimate the parameters of the mixture model, our research uses methods that accept a small increase in bias in exchange for reduced variance, such as the Ridge Regression method and the Least Absolute Shrinkage and Selection Operator (LASSO) method a
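Both shrinkage estimators mentioned here are standard; the sketch below fits them to illustrative correlated data simply to show how the penalty parameter alpha trades bias for variance. It is not the paper's estimation procedure, and the data and alpha values are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

# Illustrative correlated predictors and a response (not the paper's data)
rng = np.random.default_rng(3)
z = rng.normal(size=(200, 1))
X = np.hstack([z + 0.3 * rng.normal(size=(200, 1)) for _ in range(3)])
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(scale=0.5, size=200)

# Larger alpha => stronger shrinkage (more bias, less variance)
ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.05).fit(X, y)

print("Ridge coefficients:", ridge.coef_)
print("LASSO coefficients:", lasso.coef_)
```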