The purpose of this paper is to model and forecast white oil volatility over the period 2012-2019 using GARCH-class volatility models. After showing that the squared returns of white oil exhibit significant long memory in volatility, fractional GARCH models are estimated on the return series, and the mean and volatility are forecast by quasi-maximum likelihood (QML) as the traditional method, while the competing approach applies machine learning in the form of Support Vector Regression (SVR). The most appropriate forecasting model among the candidates was selected on the basis of the lowest Akaike and Schwarz information criterion values, significant parameters, residuals free of serial correlation and ARCH effects, and the highest log-likelihood. The hybrid SVR-FIGARCH models outperformed FIGARCH models with normal and Student's t distributions: the SVR-FIGARCH model was statistically significant, with improved accuracy obtained through the SVM technique. Finally, we evaluate the forecasting performance of the various volatility models and choose the best-fitting model to forecast the volatility of each series according to three forecasting accuracy measures: RMSE, MAE, and MAPE.
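As a rough illustration of the pipeline this abstract describes (not the authors' code), the sketch below fits FIGARCH by QML with normal and Student's t errors using the `arch` package, adds a simple SVR stage on lagged squared returns in the spirit of the hybrid SVR-FIGARCH idea, and scores forecasts with RMSE/MAE/MAPE. The file name, column name, and train/test split are hypothetical placeholders.

```python
# Minimal sketch of the FIGARCH / SVR comparison described above.
# Assumption: white_oil.csv holds a dated "return" column (hypothetical file).
import numpy as np
import pandas as pd
from arch import arch_model          # FIGARCH estimation by QML
from sklearn.svm import SVR          # machine-learning competitor

returns = pd.read_csv("white_oil.csv", index_col=0, parse_dates=True)["return"]
train = returns[:-250]

# 1) FIGARCH(1,d,1) fitted by quasi-maximum likelihood, normal vs Student's t;
#    selection uses AIC/BIC and log-likelihood as in the abstract.
for dist in ("normal", "t"):
    am = arch_model(train, mean="Constant", vol="FIGARCH", p=1, q=1, dist=dist)
    res = am.fit(disp="off")
    print(dist, "AIC:", res.aic, "BIC:", res.bic, "logL:", res.loglikelihood)

# 2) A simple SVR stage: predict next-day squared returns from five lags.
sq = returns ** 2
X = np.column_stack([sq.shift(k) for k in (1, 2, 3, 4, 5)])[5:]
y = sq.values[5:]
svr = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X[:-250], y[:-250])
pred = svr.predict(X[-250:])

# 3) Forecast-accuracy measures used in the paper. Note: this naive MAPE can
#    blow up when squared returns are near zero.
err = y[-250:] - pred
rmse = np.sqrt(np.mean(err ** 2))
mae = np.mean(np.abs(err))
mape = np.mean(np.abs(err / y[-250:])) * 100
print(f"RMSE={rmse:.6f}  MAE={mae:.6f}  MAPE={mape:.2f}%")
```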
This paper presents a comparative study of different oil production enhancement scenarios in the Saadi tight oil reservoir, located in the Halfaya Iraqi oil field. The reservoir exhibits poor petrophysical characteristics, including medium pore size, low permeability (approaching zero in some areas), and high porosity of up to 25%. Previous stimulation techniques such as acid fracturing and matrix acidizing have yielded low oil production in this reservoir. Therefore, the feasibility of hydraulic fracturing stimulation and/or horizontal well drilling scenarios was assessed to increase the production rate. While horizontal drilling and hydraulic fracturing can improve well performance, they come at high cost, often accounting for up to …
Mishrif Formation is the main reservoir in the Amara Oil Field. It is divided into three units (MA, TZ1, and MB12). A geological model is essential for building the reservoir model, and it was constructed with Petrel 2009. The FZI method was used to determine the porosity-permeability relationship from core data and to assign permeability values to the uncored intervals of the Mishrif Formation. A reservoir simulation model was then built using Eclipse 100. In this model, production history matching was carried out against production data for wells AM1 and AM4 from 2001 to 2015. Four prediction cases were proposed for the future performance of the Mishrif reservoir over ten years, extending from June 2015 to June 2025. The comparison has been made …
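For readers unfamiliar with the FZI step, a minimal sketch of the standard Flow Zone Indicator relations (the conventional Amaefule-style formulation) is shown below; the core values are hypothetical and the constants are the textbook ones, not values from this study.

```python
# Sketch of the FZI (Flow Zone Indicator) porosity-permeability workflow.
# Core porosity/permeability values here are hypothetical.
import numpy as np

phi = np.array([0.18, 0.22, 0.25])    # core porosity, fraction
k = np.array([0.5, 2.0, 8.0])         # core permeability, mD

rqi = 0.0314 * np.sqrt(k / phi)       # Reservoir Quality Index, microns
phi_z = phi / (1.0 - phi)             # normalized porosity
fzi = rqi / phi_z                     # Flow Zone Indicator

# Once an average FZI is assigned to a hydraulic flow unit, permeability in
# uncored intervals follows from log-derived porosity alone:
def k_from_fzi(phi_log, fzi_unit):
    return 1014.24 * fzi_unit**2 * phi_log**3 / (1.0 - phi_log)**2

print("FZI per sample:", fzi)
print("predicted k at phi=0.20:", k_from_fzi(0.20, fzi.mean()), "mD")
```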
Finding orthogonal matrices of different sizes is complex and important because such matrices are used in applications like image processing and communications (e.g., CDMA and OFDM). In this paper we introduce a new method for constructing orthogonal matrices by taking tensor products of two or more orthogonal matrices with real and imaginary entries, and we apply it to image and communication-signal processing. The resulting matrices are orthogonal as well, and processing with the new method is very easy compared with classical methods that rely on basic proofs. The results on communication signals and images are acceptable, but further research is needed.
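A minimal NumPy sketch of the underlying property: the Kronecker (tensor) product of orthogonal or unitary matrices is again orthogonal or unitary. The building blocks below are illustrative choices, not the matrices used in the paper.

```python
# Sketch: the tensor (Kronecker) product of orthogonal/unitary matrices is
# again orthogonal/unitary, which is the property the method above relies on.
import numpy as np

# Two small unitary building blocks: a real rotation and a complex matrix.
theta = 0.7
Q1 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
Q2 = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)

Q = np.kron(Q1, Q2)                             # 4x4 unitary matrix
print(np.allclose(Q.conj().T @ Q, np.eye(4)))   # True: columns are orthonormal
```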
In this research, robust non-parametric methods were used to estimate the semi-parametric regression model, and these methods were then compared using the MSE criterion across different sample sizes, variance levels, contamination rates, and three different models. The methods are S-LLS (S-estimation with local linear smoothing), M-LLS (M-estimation with local linear smoothing), S-NW (S-estimation with Nadaraya-Watson smoothing), and M-NW (M-estimation with Nadaraya-Watson smoothing).
The results for the first model showed that the S-LLS method was the best in the case of large sample sizes, while for small sample sizes the …
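To make the estimators concrete, here is a compact sketch of one of the four methods, M-NW (M-estimation combined with Nadaraya-Watson smoothing), using Huber-type reweighting. The simulated data, contamination scheme, and bandwidth are illustrative assumptions, not the paper's settings.

```python
# Sketch of Nadaraya-Watson smoothing and a robust (Huber-weighted) variant,
# in the spirit of the M-NW estimator compared above. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 100)
y[::10] += 3.0                                  # contamination (outliers)

def nw(x0, x, y, h, w=None):
    """NW estimate at x0, Gaussian kernel, optional robustness weights."""
    k = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    if w is not None:
        k = k * w
    return np.sum(k * y) / np.sum(k)

def m_nw(x, y, h, c=1.345, iters=5):
    """M-NW: iteratively downweight large residuals via Huber weights."""
    w = np.ones_like(y)
    for _ in range(iters):
        fit = np.array([nw(xi, x, y, h, w) for xi in x])
        r = y - fit
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / (s * c)
        w = np.where(u <= 1, 1.0, 1.0 / u)          # Huber weight function
    return fit

fit_nw = np.array([nw(xi, x, y, h=0.05) for xi in x])
fit_mnw = m_nw(x, y, h=0.05)
truth = np.sin(2 * np.pi * x)
print("MSE NW:  ", np.mean((fit_nw - truth) ** 2))
print("MSE M-NW:", np.mean((fit_mnw - truth) ** 2))   # robust fit resists outliers
```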
A database is an organized collection of data, arranged and distributed in a way that allows the client to access the stored data easily and conveniently. In the era of big data, however, traditional data-analytics methods may not be able to manage and process such large volumes of data. To develop an efficient way of handling big data, this work studies the use of the MapReduce technique to process big data distributed on the cloud. The approach was evaluated on a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear improvement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r…
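The paper's exact MapReduce job is not given, but a minimal Hadoop Streaming-style mapper/reducer pair of the kind such a pipeline uses might look like the sketch below; the `channel,value` input format and the HDFS paths are hypothetical.

```python
# Sketch of a Hadoop Streaming-style MapReduce job over EEG records.
# Assumed (hypothetical) input: one "channel,value" pair per line.
# Local test: python eeg_job.py map < data.csv | sort | python eeg_job.py reduce
# On Hadoop Streaming, mapper and reducer run as separate stages, e.g.:
#   hadoop jar hadoop-streaming.jar -mapper "eeg_job.py map" \
#       -reducer "eeg_job.py reduce" -input /eeg/raw -output /eeg/means
import sys

def mapper():
    # Emit "channel<TAB>value" so the framework groups values by channel.
    for line in sys.stdin:
        channel, value = line.strip().split(",")
        print(f"{channel}\t{value}")

def reducer():
    # Input arrives sorted by key; average each channel's values.
    current, total, count = None, 0.0, 0
    for line in sys.stdin:
        key, value = line.strip().split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total / count}")
            current, total, count = key, 0.0, 0
        total += float(value)
        count += 1
    if current is not None:
        print(f"{current}\t{total / count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```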
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance, yet many applications have only small or inadequate datasets for training DL frameworks. Manual labeling is usually needed to provide labeled data, which typically requires human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is usually fed a significant amount of labeled data so that it can learn representations automatically; ultimately, more data generally produces a better DL model, although performance is also application-dependent. This issue is the main barrier for …
With the increasing demand for remote sensing approaches such as aerial photography, satellite imagery, and LiDAR in archaeological applications, there are still few studies assessing the differences between remote sensing methods in extracting new archaeological finds. This work therefore aims to critically compare two types of fine-scale remotely sensed data: LiDAR and Structure from Motion (SfM) photogrammetry derived from an Unmanned Aerial Vehicle (UAV). To achieve this, aerial imagery and airborne LiDAR datasets of Chun Castle were acquired, processed, analyzed, and interpreted. Chun Castle is one of the most remarkable ancient sites in Cornwall (Southwest England) and had not previously been surveyed and explored …
One of the most important intellectual issues to receive attention is modernity, which has occupied scholars in every era. Modern poetry, in particular, has received special attention in Iraq and the Arab countries. It is not strange that the concept of modernity is linked to history, or that history is among its most important dimensions, because Arabic poetry is historical, moving within an area of the past that is still active in its language and literary imagery. When some poets found that changing and modernizing poetry had become a necessity of evolution and one of the fundamentals of modernization, the cultural environment did not respond to the poets' desire and their impulse toward modern poetry …
The present study aims to present a proposed realistic and comprehensive cyber strategy for the Communications Directorate for the next five years (2022-2026), based on the extent to which cybersecurity measures are applied and documented in the Directorate and on the scientific bases for formulating the strategy. The study is significant in that it provides an accurate diagnosis of the Directorate's cyber capabilities: the strengths and weaknesses of its internal environment, and the opportunities and threats surrounding it in the external environment. This diagnosis is based on an assessment of the state of cybersecurity according to the Global Cybersecurity Index, which provides a strong basis for building the Directorate's strategic direction …
This paper deals with the prediction of random spatial data involving two kinds of properties: the first called primary variables and the second called secondary variables. The method used in the prediction process for this type of data is the co-kriging technique. It is usually applied when the primary variable to be predicted has been measured at only a few locations (because of the cost or difficulty of obtaining it), compared with the secondary variable, for which many measurements are available and which is highly correlated with the primary variable, as was the …
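As a concrete illustration of the co-kriging idea (few primary samples supplemented by many correlated secondary samples), here is a small simple co-kriging sketch in one dimension; the locations, values, and covariance models are assumed for illustration only and are not taken from the paper.

```python
# Sketch of simple co-kriging: predict a sparsely sampled primary variable z1
# at a target location using its own samples plus a densely sampled,
# correlated secondary variable z2. Exponential direct and cross covariances
# with a common range are assumed (a valid coregionalization model).
import numpy as np

def cov(h, sill, rng_=0.4):
    return sill * np.exp(-np.abs(h) / rng_)    # exponential covariance model

# Hypothetical 1-D sample locations and (zero-mean residual) values.
x1, z1 = np.array([0.1, 0.9]), np.array([1.2, -0.4])                 # primary (few)
x2 = np.array([0.0, 0.3, 0.5, 0.7, 1.0])                             # secondary (many)
z2 = np.array([1.0, 0.6, 0.1, -0.2, -0.5])
x0 = 0.45                                                            # target location

locs = np.concatenate([x1, x2])
kinds = np.array([0] * len(x1) + [1] * len(x2))    # 0 = primary, 1 = secondary
sills = np.array([[1.0, 0.7],                      # [C11, C12]
                  [0.7, 0.8]])                     # [C21, C22]

h = np.abs(locs[:, None] - locs[None, :])
C = cov(h, sills[kinds[:, None], kinds[None, :]])     # data covariance matrix
c0 = cov(np.abs(locs - x0), sills[kinds, 0])          # covariance with primary at x0

weights = np.linalg.solve(C, c0)                      # simple co-kriging weights
z_hat = weights @ np.concatenate([z1, z2])
print("co-kriging prediction at x0:", z_hat)
```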