The purpose of this paper is to model and forecast the volatility of white oil prices over the period 2012-2019 using GARCH-class models. After showing that the squared returns of white oil exhibit significant long memory in volatility, fractionally integrated GARCH (FIGARCH) models are estimated and used to forecast the mean and volatility of the return series by quasi-maximum likelihood (QML) as the traditional benchmark, while the competing approach applies machine learning through Support Vector Regression (SVR). The best-fitting model among the candidates was selected according to the lowest Akaike and Schwarz information criteria, the significance of the estimated parameters, the absence of serial correlation and ARCH effects in the residuals, and the highest log-likelihood. The SVR-FIGARCH models outperformed the FIGARCH models with normal and Student's t distributions: the SVR-FIGARCH model was statistically significant, and the SVM technique improved forecast accuracy. Finally, we evaluate the forecasting performance of the various volatility models and select the best-fitting model for each series according to three forecasting accuracy measures: RMSE, MAE, and MAPE.
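For illustration only, a minimal Python sketch of such a comparison (not the authors' code; the synthetic returns, lag count, and SVR hyperparameters are assumptions) using the arch and scikit-learn packages:

```python
# Sketch: FIGARCH-vs-SVR volatility comparison on synthetic daily returns
# (illustrative data; the paper uses white-oil returns for 2012-2019).
import numpy as np
import pandas as pd
from arch import arch_model
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_t(df=5, size=2000))   # heavy-tailed stand-in

# FIGARCH(1,d,1) with Student's t errors, fitted by quasi-maximum likelihood.
res = arch_model(returns, vol="FIGARCH", p=1, q=1, dist="t").fit(disp="off")
sigma2_fig = res.conditional_volatility.to_numpy() ** 2

# SVR competitor: predict squared returns from their own recent lags.
lags = 5
r2 = (returns ** 2).to_numpy()
X = np.column_stack([r2[i:r2.size - lags + i] for i in range(lags)])
y = r2[lags:]
sigma2_svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y).predict(X)

# In-sample accuracy against the squared-return proxy of realized variance.
for name, pred in (("FIGARCH", sigma2_fig[lags:]), ("SVR", sigma2_svr)):
    print(name,
          "RMSE:", round(float(np.sqrt(mean_squared_error(y, pred))), 4),
          "MAE :", round(float(mean_absolute_error(y, pred)), 4))
```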
A simple analytical method was used in the present work for the simultaneous quantification of Ciprofloxacin and Isoniazid in pharmaceutical preparations. UV-Visible spectrophotometry was applied to quantify these compounds in pure and mixed solutions using the first-order derivative method. The method relies on first-derivative spectrophotometry with zero-crossing, peak-to-baseline, peak-to-peak, and peak-area measurements. Good linearity was shown over the concentration ranges of 2 to 24 µg∙mL⁻¹ for Ciprofloxacin and 2 to 22 µg∙mL⁻¹ for Isoniazid in the mixture, with correlation coefficients of 0.9990 and 0.9989, respectively, in the peak-area mode. The limits of detection (LOD) and limits of quantification (LOQ) were
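A minimal sketch of the derivative-spectrophotometry idea, assuming synthetic Gaussian absorption bands in place of the real spectra (band centres, widths, and absorptivities below are invented for illustration); Savitzky-Golay differentiation stands in for the instrument software:

```python
# Sketch: first-derivative spectrophotometry with a zero-crossing readout.
# Synthetic Gaussian bands stand in for the real Ciprofloxacin/Isoniazid spectra.
import numpy as np
from scipy.signal import savgol_filter

wl = np.linspace(200.0, 350.0, 601)            # wavelength grid, nm
step = wl[1] - wl[0]

def band(amp, centre, width):                  # Gaussian absorption band
    return amp * np.exp(-((wl - centre) / width) ** 2)

iso = band(0.8, 263.0, 10.0)                   # fixed Isoniazid contribution
zc = np.argmin(np.abs(wl - 263.0))             # Isoniazid's d1 crosses zero here

# At the zero-crossing, the mixture's first derivative responds only to
# Ciprofloxacin, and its amplitude grows linearly with concentration:
for c in (2.0, 12.0, 24.0):                    # µg/mL, illustrative
    mixture = band(0.04 * c, 277.0, 12.0) + iso
    d1 = savgol_filter(mixture, window_length=11, polyorder=3,
                       deriv=1, delta=step)
    print(f"{c:5.1f} µg/mL -> d1 amplitude {d1[zc]:.5f}")
```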
One of the direct causes of the 2008 global financial crisis was the decline or collapse in the liquidity of large financial institutions, which was reflected in the investments of a considerable number of institutions and individuals.
This study aims, throughout its three sections, to explain the disclosure level of the financial institutions affected by the financial crisis with respect to the liquidity information presented in the statement of cash flows, in terms of timeliness and completeness.
The study reached an important conclusion: the companies of the research sample disclosed, in a timely and complete manner, all of the accounting information related to liquidity as well as that related to the results of operations and financial position. The more
In this study, we compare the autoregressive approximation methods (Yule-Walker equations, least squares, forward-backward least squares, and Burg's geometric and harmonic methods) to determine the optimal approximation to time series generated from a first-order non-invertible moving-average process and from a fractionally integrated noise process with several values of d (d = 0.15, 0.25, 0.35, 0.45), for small, medium, and large sample sizes in both processes. We rely on the figure-of-merit function proposed by Shibata (1980) to determine the theoretically optimal order according to min
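As a rough sketch of such a comparison (assumed tooling, not the study's own software), the following simulates a non-invertible MA(1) series and fits AR approximations by the Yule-Walker and Burg methods via statsmodels:

```python
# Sketch: AR approximations of a non-invertible MA(1) process.
import numpy as np
from statsmodels.regression.linear_model import yule_walker, burg

rng = np.random.default_rng(0)
n, theta = 500, 1.0                      # theta = 1 -> non-invertible MA(1)
e = rng.standard_normal(n + 1)
x = e[1:] + theta * e[:-1]               # x_t = e_t + theta * e_{t-1}

order = 10                               # AR order of the approximation
rho_yw, sigma_yw = yule_walker(x, order=order, method="mle")
rho_burg, sigma2_burg = burg(x, order=order)

print("Yule-Walker AR coefficients:", np.round(rho_yw, 3))
print("Burg AR coefficients:       ", np.round(rho_burg, 3))
```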
Different MLP architectures of artificial neural networks were trained by backpropagation and used to analyze Landsat TM images. Two training approaches were applied: an ordinary approach (one hidden layer, M-H1-L, or two hidden layers, M-H1-H2-L) and a one-against-all strategy (one hidden layer, (M-H1-1)xL, or two hidden layers, (M-H1-H2-1)xL). Classification accuracy of up to 90% was achieved using the one-against-all strategy with the two-hidden-layer architecture. The performance of the one-against-all approach is slightly better than that of the ordinary approach.
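A minimal modern sketch of the one-against-all idea (scikit-learn is an assumed stand-in for the study's own networks; the synthetic data and layer sizes are illustrative): L binary networks, each separating one class from the rest, against a single multi-class network:

```python
# Sketch: ordinary multi-class MLP vs. one-against-all binary MLPs.
# X holds pixel feature vectors (e.g., M TM bands); y holds class labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Ordinary approach: one network with L output units (M-H1-H2-L).
ordinary = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                         random_state=0).fit(Xtr, ytr)

# One-against-all: L networks with one output unit each ((M-H1-H2-1)xL).
ova = OneVsRestClassifier(
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
).fit(Xtr, ytr)

print("ordinary accuracy:       ", ordinary.score(Xte, yte))
print("one-against-all accuracy:", ova.score(Xte, yte))
```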
There continues to be a need for an in-situ sensor system to monitor the engine oil of internal combustion engines. Engine oil needs to be monitored for contaminants and depletion of additives. While various sensor systems have been designed and evaluated, there is still a need to develop and evaluate new sensing technologies. This study evaluated Terahertz time-domain spectroscopy (THz-TDS) for the identification and estimation of the glycol contamination of automotive engine oil. Glycol contamination is a result of a gasket or seal leak allowing coolant to enter an engine and mix with the engine oil. An engine oil intended for use in both diesel and gasoline engines was obtained. Fresh engine oil samples were contaminated with fou
In this paper, we introduce three robust fuzzy estimators of a location parameter, based on Buckley's approach, in the presence of outliers. These estimates were compared using the variance of fuzzy numbers as a criterion; all of them were better than Buckley's estimate. Among them, the fuzzy median was best for small and medium sample sizes, while for large sample sizes the fuzzy trimmed mean was best.
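As a loose illustration only, a crisp (non-fuzzy) analogue rather than Buckley's fuzzy construction, the robustness that motivates the median and trimmed mean can be seen in a few lines of Python (scipy.stats.trim_mean is an assumed stand-in for the fuzzy versions):

```python
# Sketch: median and trimmed mean as robust location estimators
# (crisp analogues; the paper's estimators operate on fuzzy numbers).
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=1.0, size=50)
sample[:3] = [60.0, 75.0, 90.0]                  # inject outliers

print("mean        :", sample.mean())            # pulled away by outliers
print("median      :", np.median(sample))        # robust
print("trimmed mean:", trim_mean(sample, 0.1))   # trims 10% from each tail
```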
The Gumbel distribution has been treated with great care by researchers and statisticians. There are traditional methods for estimating its two parameters, namely maximum likelihood, the method of moments, and, more recently, the resampling method known as the jackknife. However, these methods involve some mathematical difficulties when solved analytically. Accordingly, there are other non-traditional methods, such as the nearest-neighbors principle used in computer science, particularly in artificial intelligence algorithms including the genetic algorithm and the artificial neural network algorithm, which may be classified as meta-heuristic methods. Moreover, this nearest-neighbors principle has useful statistical features
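As a quick worked illustration of the two classical estimators (a sketch, not the paper's procedure; the sample is simulated), maximum likelihood and method-of-moments fits of the Gumbel distribution can be written as:

```python
# Sketch: Gumbel parameter estimation by ML and by the method of moments.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(2)
data = gumbel_r.rvs(loc=5.0, scale=2.0, size=1000, random_state=rng)

# Maximum likelihood (numerical optimisation inside scipy):
loc_ml, scale_ml = gumbel_r.fit(data)

# Method of moments: mean = loc + gamma*scale, var = (pi*scale)^2 / 6,
# where gamma ~ 0.5772 is the Euler-Mascheroni constant.
gamma = 0.57721566490153286
scale_mm = np.sqrt(6.0 * data.var(ddof=1)) / np.pi
loc_mm = data.mean() - gamma * scale_mm

print(f"ML : loc={loc_ml:.3f}, scale={scale_ml:.3f}")
print(f"MoM: loc={loc_mm:.3f}, scale={scale_mm:.3f}")
```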
A new approach to baud time (or baud rate) estimation for a random binary signal is presented. This approach utilizes the spectrum of the signal after nonlinear processing, so that the estimation error can be reduced simply by increasing the number of processed samples instead of increasing the sampling rate. The spectrum of the new signal is shown to give an accurate estimate of the baud time when there is no a priori information or any restrictive assumptions. The performance of the estimator for random binary square waves perturbed by white Gaussian noise and ISI is evaluated and compared with that of the conventional zero-crossing-detector estimator.
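One common choice of nonlinearity, shown here purely as a sketch (the paper's exact processing is not reproduced; all parameters are illustrative), is to square the derivative of the band-limited waveform so that a spectral line appears at the baud rate:

```python
# Sketch: baud-rate estimation from the spectrum of a nonlinearly
# processed random binary (NRZ) waveform. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
fs, baud, n_sym = 10_000.0, 400.0, 2_000      # sample rate, true baud, symbols
sps = int(fs / baud)                          # samples per symbol
x = np.repeat(2 * rng.integers(0, 2, n_sym) - 1, sps).astype(float)
x += 0.1 * rng.standard_normal(x.size)        # white Gaussian noise

# Band-limit the waveform, then square its derivative: transitions become
# positive pulses on a grid of period 1/baud, creating a spectral line there.
x = np.convolve(x, np.ones(8) / 8, mode="same")
y = np.diff(x) ** 2

Y = np.abs(np.fft.rfft(y - y.mean()))
freqs = np.fft.rfftfreq(y.size, d=1.0 / fs)
est = freqs[1:][np.argmax(Y[1:])]             # strongest non-DC line
print(f"estimated baud rate: {est:.1f} Hz (true: {baud:.0f} Hz)")
```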