This research studies panel data models with mixed random parameters, which contain two types of parameters: the first random and the other fixed. The random parameters arise from differences in the marginal propensities across the cross-sections, while the fixed parameters arise from differences in the fixed intercepts; the random errors of each cross-section exhibit heteroscedasticity as well as first-order serial correlation. The main objective of this research is to use estimation methods that are efficient for panel data in the case of small samples. To achieve this goal, the feasible generalized least squares (FGLS) method and the mean group (MG) method were used, the efficiency of the resulting estimators was compared under mixed random parameters, and the method giving the more efficient estimator was chosen. The methods were applied to real data on per capita electricity consumption (Y) for five countries, representing the cross-sections (N = 5), over nine years (T = 9), so the number of observations is n = 45; the explanatory variables are the consumer price index (X1) and per capita GDP (X2). To evaluate the performance of the FGLS and MG estimators of the general model, the mean absolute percentage error (MAPE) was used to compare the efficiency of the estimators. The results showed that the mean group (MG) method estimates the parameters better than the FGLS method, and MG also proved to be the best method for estimating the sub-parameters of each cross-section (country).
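As a minimal sketch of the two ingredients named above, the mean-group estimator (average the per-unit OLS coefficients across cross-sections) and the MAPE comparison criterion can be illustrated on simulated data. The panel below is hypothetical (the abstract's real electricity data are not reproduced here), but it mirrors the stated dimensions N = 5, T = 9 with two regressors plus an intercept:

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares via the normal equations
    return np.linalg.solve(X.T @ X, X.T @ y)

def mean_group(Xs, ys):
    """Mean-group (MG) estimator: average the per-unit OLS coefficients."""
    betas = np.array([ols(X, y) for X, y in zip(Xs, ys)])
    return betas.mean(axis=0), betas

def mape(actual, fitted):
    """Mean absolute percentage error, in percent (lower is better)."""
    actual, fitted = np.asarray(actual, float), np.asarray(fitted, float)
    return 100.0 * np.mean(np.abs((actual - fitted) / actual))

# Simulated panel: N = 5 units, T = 9 periods, intercept + 2 regressors,
# with unit-specific (random) slopes scattered around a common mean.
rng = np.random.default_rng(0)
N, T = 5, 9
true_mean = np.array([1.0, 0.5, -0.3])
Xs, ys = [], []
for _ in range(N):
    X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
    beta_i = true_mean + rng.normal(scale=0.1, size=3)  # heterogeneous slopes
    ys.append(X @ beta_i + rng.normal(scale=0.2, size=T))
    Xs.append(X)

beta_mg, per_unit = mean_group(Xs, ys)
fitted = np.concatenate([X @ beta_mg for X in Xs])
print(beta_mg)                              # should land near true_mean
print(mape(np.concatenate(ys), fitted))
```

In the paper's comparison, the same MAPE function would be evaluated on the FGLS fit as well, and the estimator with the smaller value preferred.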
The research took up the spatial autoregressive model (SAR) and the spatial error model (SEM) in an attempt to provide practical evidence of the importance of spatial analysis, with a particular focus on spatial regression models, both of which incorporate spatial dependence, whose presence or absence can be tested using Moran's test. Ignoring this dependence may lead to the loss of important information about the phenomenon under research, which is ultimately reflected in the power of the statistical estimation, as these models are the link between ordinary regression models and time-series models. Spatial analysis had
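The Moran test mentioned above is based on Moran's I statistic. A minimal sketch follows, using a small hypothetical weights matrix (four regions in a line with rook contiguity); it is not the study's actual data, only an illustration of the statistic that the test is built on:

```python
import numpy as np

def morans_i(x, W):
    """Moran's I statistic for spatial autocorrelation.
    x: values at n locations; W: n x n spatial-weights matrix, zero diagonal.
    I = (n / S0) * (z' W z) / (z' z), where z are deviations from the mean."""
    x = np.asarray(x, float)
    z = x - x.mean()
    n, s0 = len(x), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# Hypothetical example: 4 regions on a line, binary contiguity weights
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
clustered   = np.array([10.0, 9.0, 1.0, 2.0])  # similar neighbours
alternating = np.array([10.0, 1.0, 9.0, 2.0])  # dissimilar neighbours
print(morans_i(clustered, W))    # positive -> spatial dependence
print(morans_i(alternating, W))  # negative -> spatial repulsion
```

A significantly non-zero I is what motivates moving from ordinary regression to the SAR or SEM specification; the formal test also requires the statistic's expectation and variance under the null, omitted here for brevity.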
Abstract
The methods of Principal Components and Partial Least Squares can be regarded as very important methods in regression analysis, whe
The aesthetic content of data visualization is one of the contemporary areas through which data scientists and designers have been able to link data to humans. Even after successful attempts to model data visualization were reached, it was not clear how aesthetic content contributed, as an input, to humanizing these models. The goal of the current research is therefore to use the descriptive analytical approach to identify the aesthetic contents of data visualization, which the researchers interpreted through pragmatic philosophy and Kantian philosophy, and to analyze a sample of data visualization models to reveal their aesthetic entry points and explain how to humanize them. The two researchers reached seve
The issue of image captioning, which comprises automatic text generation to describe an image's visual information, has become feasible with the developments in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach uses pre-trained Convolutional Neural Network (CNN) models combined with Long Short-Term Memory (LSTM) networks to generate image captions. The process includes two stages: the first entails training the CNN-LSTM models using baseline hyper-parameters, and the second encompasses training the CNN-LSTM models by optimizing and adjusting the hyper-parameters of
The aim of this research is to use a robust technique based on trimming, as maximum likelihood (ML) analysis often fails when the studied phenomenon contains outliers: the maximum likelihood estimator (MLE) loses its advantages because of the bad influence of the outliers. To address this problem, new statistical methods have been developed that are not affected by the outliers; these methods have robustness, or resistance. The maximum trimmed likelihood (MTL) is therefore a good alternative for achieving more acceptable and comparable results, and weights can be used to increase the efficiency of the resulting estimates and the strength of the estimation, using the maximum weighted trimmed likelihood (MWTL). In order to perform t
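The trimming idea described above can be sketched for the simplest case, a normal location model: keep only the h observations with the highest likelihood (smallest absolute residuals) and re-fit the MLE on that subset, iterating until the subset stabilises. The data and the concentration-step scheme below are illustrative assumptions, not the paper's actual algorithm or data (and the weighted MWTL variant would simply replace the subset mean with a weighted mean):

```python
import numpy as np

def max_trimmed_likelihood_mean(x, h, n_iter=50):
    """Location estimate by maximum trimmed likelihood under a normal model.
    Keep the h observations with the smallest residuals (highest likelihood)
    and re-fit the MLE (the mean) on them, iterating until convergence."""
    x = np.asarray(x, float)
    mu = np.median(x)  # robust starting value
    for _ in range(n_iter):
        keep = np.argsort(np.abs(x - mu))[:h]  # h most likely points
        new_mu = x[keep].mean()                # normal MLE on the subset
        if np.isclose(new_mu, mu):
            break
        mu = new_mu
    return mu

data = np.array([4.8, 5.1, 5.0, 4.9, 5.2, 25.0, 30.0])  # two outliers
print(np.mean(data))                           # pulled toward the outliers
print(max_trimmed_likelihood_mean(data, h=5))  # -> 5.0
```

The ordinary mean is dragged to about 11.4 by the two outliers, while the trimmed-likelihood estimate recovers the bulk of the data, which is exactly the robustness property the abstract appeals to.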
The gravity method is a measurement of relatively noticeable variations in the Earth's gravitational field caused by lateral variations in rock density. In the current research, a new technique is applied to the previous Bouguer map of gravity surveys (conducted from 1940–1950) of the last century, by selecting certain areas in the south-western desert of Iraqi territory within the administrative boundaries of the provinces of Najaf and Anbar. Based on the theory of gravity inversion, where gravity values can be related to density-contrast variations with depth, gravity data inversion can be utilized to calculate density and velocity models from four selected depth slices: 9.63 km, 1.1 km, 0.682 km, and 0.407 km.
The purpose of this paper is to model and forecast white oil prices during the period (2012-2019) using GARCH-class volatility models. After showing that the squared returns of white oil have significant long memory in the volatility, fractional GARCH models are estimated for the return series, and the mean and volatility are forecast by quasi-maximum likelihood (QML) as the traditional method, while the competing approach uses machine learning with Support Vector Regression (SVR). The results identified the most appropriate model among the candidates for forecasting the volatility, based on the lowest values of the Akaike information criterion and the Schwarz information criterion, with the parameters also required to be significant. In addition, the residuals
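The model-selection rule stated above (lowest Akaike and Schwarz information criteria) reduces to two simple formulas. The log-likelihoods and parameter counts below are invented placeholders, not the paper's estimates; the sketch only shows how the criteria rank two candidate volatility models:

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * loglik

def sic(loglik, k, n):
    """Schwarz (Bayesian) information criterion: k ln n - 2 ln L."""
    return k * np.log(n) - 2 * loglik

# Hypothetical fitted models: (name, maximised log-likelihood, #parameters)
models = [("GARCH(1,1)", -812.4, 4),
          ("FIGARCH(1,d,1)", -805.1, 5)]
n = 500  # hypothetical sample size
for name, ll, k in models:
    print(name, aic(ll, k), sic(ll, k, n))
# The model with the smaller AIC/SIC (here the second) would be retained,
# provided its parameters are also statistically significant.
```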
An efficient combination of the Adomian decomposition iterative technique coupled with the Laplace transformation to solve the non-linear random integro-differential equation (NRIDE) is introduced in a novel way to obtain an accurate analytical solution. This technique is an elegant combination of the Laplace transform and the Adomian polynomials. The suggested method converts the differential equation into iterative algebraic equations, thus reducing the processing and analytical work, and it solves the problem of calculating the Adomian polynomials. The method's efficiency was investigated using some numerical instances, and the findings demonstrate that it is easier to use than many other numerical procedures. It has also been established that (LT
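As a hedged sketch of the Laplace–Adomian scheme (the abstract does not give the exact equation treated, so the model below is an assumed generic Volterra form $u'(t) = f(t) + \int_0^t k(t-s)\,N(u(s))\,ds$, $u(0) = \alpha$, with a convolution kernel):

```latex
% Apply the Laplace transform and the convolution theorem,
% p\,U(p) - \alpha = F(p) + K(p)\,\mathcal{L}\{N(u)\}, and solve for U(p):
U(p) = \frac{\alpha}{p} + \frac{F(p)}{p} + \frac{K(p)}{p}\,\mathcal{L}\{N(u)\}

% Decompose the solution and the nonlinearity into Adomian series:
u = \sum_{n=0}^{\infty} u_n, \qquad
N(u) = \sum_{n=0}^{\infty} A_n, \qquad
A_n = \frac{1}{n!}\,\frac{d^n}{d\lambda^n}
      \Big[\, N\Big(\textstyle\sum_{k=0}^{\infty} \lambda^k u_k\Big) \Big]_{\lambda=0}

% Matching terms gives the iterative (algebraic) recursion:
u_0(t) = \mathcal{L}^{-1}\!\Big[\frac{\alpha + F(p)}{p}\Big], \qquad
u_{n+1}(t) = \mathcal{L}^{-1}\!\Big[\frac{K(p)}{p}\,\mathcal{L}\{A_n\}\Big]
```

For example, for the quadratic nonlinearity $N(u) = u^2$ the first Adomian polynomials are $A_0 = u_0^2$ and $A_1 = 2u_0 u_1$, which is the sense in which the method replaces the differential problem with iterative algebraic equations.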
Abstract
In this research we built a mathematical model of the transportation problem for data of the General Company for Grain under an environment of variable demand and situations in which the required supply quantities cannot be determined, as a result of economic and commercial reasons. The flow of grain was also restricted to a level specified by the decision makers, to ensure that the reserve stock held for emergency situations facing the company does not fall through shortage or non-arrival of grain to the silos; the capacities of the tankers were also taken into consideration, and the grain flows were restricted to avoid shortages and lack of processing capability. A function has been adopted
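A transportation model with the features described above (supply and demand constraints plus decision-maker caps on route flows) can be sketched as a small linear program. The costs, quantities, and the 2-silo-by-3-destination layout below are hypothetical stand-ins, not the company's data:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-source x 3-destination transportation problem:
# minimise total shipping cost subject to supply limits, exact demand,
# and a per-route flow cap (modelling the restricted grain flows).
cost   = np.array([[4.0, 6.0, 9.0],
                   [5.0, 3.0, 7.0]])
supply = np.array([120.0, 150.0])
demand = np.array([80.0, 90.0, 100.0])
cap    = 95.0  # decision-maker cap on any single route

m, n = cost.shape
# Equalities: each demand met exactly; inequalities: supply not exceeded.
A_eq = np.zeros((n, m * n)); b_eq = demand
A_ub = np.zeros((m, m * n)); b_ub = supply
for i in range(m):
    for j in range(n):
        A_eq[j, i * n + j] = 1.0  # shipments into destination j
        A_ub[i, i * n + j] = 1.0  # shipments out of source i

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0.0, cap))
x = res.x.reshape(m, n)
print(res.fun)  # minimum total cost -> 1370.0
print(x)        # optimal shipment plan
```

The reserve-stock restriction in the abstract would enter as additional lower bounds on the quantities retained at the silos; it is omitted here to keep the sketch minimal.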