In this paper we apply frequentist and Bayesian approaches to the linear regression model to predict future unemployment rates in Iraq. For the frequentist approach, parameters are estimated using the ordinary least squares method; for the Bayesian approach, the Markov chain Monte Carlo (MCMC) method is used. All calculations are carried out in R. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), were used to compare the performance of the estimates. The analysis showed that the Bayesian linear regression model performs better and can be used as an alternative to the frequentist approach. The results indicate that unemployment rates will continue to increase over the next two decades.
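As a hedged illustration only (the original analysis was done in R, and the data, variable names, and the simple random-walk Metropolis sampler below are assumptions, not the authors' code), fitting the same linear model by OLS and by MCMC, then comparing with RMSE and MAD, might be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical yearly unemployment-rate series (illustrative, not the paper's data)
t = np.arange(20, dtype=float)
y = 5.0 + 0.3 * t + rng.normal(0, 0.5, t.size)
X = np.column_stack([np.ones_like(t), t])

# Frequentist: ordinary least squares
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Bayesian: random-walk Metropolis with flat priors on (intercept, slope)
def log_lik(beta, sigma=0.5):
    r = y - X @ beta
    return -0.5 * np.sum((r / sigma) ** 2)

beta = beta_ols.copy()
draws = []
for _ in range(5000):
    prop = beta + rng.normal(0, 0.05, 2)
    if np.log(rng.random()) < log_lik(prop) - log_lik(beta):
        beta = prop
    draws.append(beta.copy())
beta_bayes = np.mean(draws[1000:], axis=0)  # posterior mean after burn-in

# The paper's comparison criteria: RMSE and MAD of the fitted residuals
def rmse(b):
    return np.sqrt(np.mean((y - X @ b) ** 2))

def mad(b):
    r = y - X @ b
    return np.median(np.abs(r - np.median(r)))
```

With flat priors and a Gaussian likelihood the posterior mean should track the OLS estimate closely; the Bayesian route pays off when informative priors or full predictive distributions are wanted.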
The research presents a proposed method to compare, or determine, the linear equivalence of key-streams produced by linear or nonlinear key-stream generators.
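The linear equivalence (linear complexity) of a binary key-stream is classically measured with the Berlekamp–Massey algorithm; the abstract does not name its method, so the following is an assumed illustrative sketch, not the paper's proposal:

```python
def berlekamp_massey(bits):
    """Return the linear complexity (length of the shortest LFSR)
    that generates the given binary sequence, over GF(2)."""
    n = len(bits)
    c = [0] * n  # current connection polynomial
    b = [0] * n  # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between predicted and observed bit
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            p = i - m
            for j in range(n - p):
                c[j + p] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Two periods of the m-sequence from the 3-stage LFSR x^3 + x + 1
seq = [1, 0, 0, 1, 0, 1, 1] * 2
complexity = berlekamp_massey(seq)
```

A key-stream generated by a pure LFSR has linear complexity equal to its register length (3 here), while a well-designed nonlinear generator should exhibit a much larger value.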
In this article we study a single stochastic process model for evaluating asset pricing and stock returns: a Lévy model based on a Brownian subordinator, namely the Normal Inverse Gaussian (NIG) process. The article aims to estimate the parameters of this model using two methods (MME and MLE), and then to employ those parameter estimates in the study of stock returns and the evaluation of asset pricing for both the United Bank and the Bank of the North, whose data were taken from the Iraq Stock Exchange. The results showed a preference for MLE over MME based on the average square error comparison criterion.
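As a hedged sketch of the MLE step (the simulated returns and parameter values below are assumptions, not the banks' Iraq Stock Exchange data), SciPy's built-in NIG distribution can be fitted by maximum likelihood:

```python
import numpy as np
from scipy.stats import norminvgauss

# Simulated daily log-returns from a NIG law (illustrative only)
returns = norminvgauss.rvs(a=2.0, b=0.0, loc=0.0, scale=0.01,
                           size=1000, random_state=1)

# Maximum-likelihood estimation of the NIG shape, skew, location and scale
a_hat, b_hat, loc_hat, scale_hat = norminvgauss.fit(returns)
```

The fitted parameters can then be compared against moment-based (MME) estimates, e.g. by the average squared error of each estimator across simulated samples.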
Segmented regression consists of several segments separated by change points, reflecting the heterogeneity that arises from separating the segments within the research sample. This research is concerned with estimating the location of the change point between segments and estimating the model parameters, proposing a robust estimation method and comparing it with other methods used in segmented regression. One of the traditional methods (the Muggeo method) is used to find the maximum likelihood estimator iteratively for both the model and the change point. Moreover, a robust estimation method (the IRW method) is used, which depends on the robust M-estimator technique.
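To make the change-point estimation concrete, here is a hedged sketch using a simple grid search over candidate break points with a continuous broken-line design matrix (an assumed simplification, not the Muggeo iteration or the IRW method from the research):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated two-segment data with a true change point at x = 5 (illustrative)
x = np.linspace(0, 10, 200)
y = np.where(x < 5, 1 + 0.5 * x, 1 + 0.5 * 5 + 2.0 * (x - 5))
y = y + rng.normal(0, 0.2, x.size)

def sse_at(psi):
    # Continuous broken-line model: intercept, slope, extra slope after psi
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - psi, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Profile the change point over a grid and keep the SSE minimizer
grid = np.linspace(1, 9, 161)
psi_hat = grid[np.argmin([sse_at(p) for p in grid])]
```

Muggeo's method refines such an estimate iteratively via a linearization of the break-point term, and a robust variant would replace the least-squares fit inside `sse_at` with an M-estimator.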
In this research we consider one of the most important and widely used models, the linear mixed model, which is commonly applied to the analysis of longitudinal data characterized by repeated measures. The linear mixed model is estimated using two methods (parametric and nonparametric), which are used to estimate the conditional mean and the marginal mean. A comparison among a number of models is made to obtain the best model to represent mean wind speed in Iraq. The application concerns 8 meteorological stations in Iraq, selected randomly, from which monthly wind-speed data over ten years were taken and averaged over each month of the corresponding year.
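A minimal hedged sketch of such a model (the simulated panel below, with a fixed monthly trend and a random intercept per station, is an assumption for illustration, not the meteorological data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
stations = np.repeat(np.arange(8), 24)            # 8 stations, 24 monthly means each
month = np.tile(np.arange(24), 8).astype(float)
station_effect = rng.normal(0, 0.3, 8)[stations]  # random station intercepts
speed = 4.0 + 0.05 * month + station_effect + rng.normal(0, 0.2, stations.size)
data = pd.DataFrame({"station": stations, "month": month, "speed": speed})

# Linear mixed model: fixed month trend, random intercept for each station
fit = smf.mixedlm("speed ~ month", data, groups=data["station"]).fit()
```

The fixed-effect part gives the marginal (population-averaged) mean, while adding each station's predicted random intercept yields the conditional mean for that station, mirroring the two targets discussed in the abstract.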
The main focus of this research is to examine the Travelling Salesman Problem (TSP) and the methods used to solve it. The TSP is one of the combinatorial optimization problems that has received wide publicity and attention from researchers, owing to its simple formulation, its important applications, and its connections to other combinatorial problems. It is based on finding the optimal path through a known number of cities, where the salesman visits each city exactly once before returning to the city of departure. In this research, the (FMOLP) algorithm is employed as one of the best methods to solve the TSP, and the algorithm is applied in conjunction with it.
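The problem statement above (visit each city once, return to the departure city, minimize tour length) can be sketched with a brute-force baseline; the coordinates are invented for illustration and this exhaustive search is an assumed baseline, not the FMOLP algorithm of the research:

```python
import itertools
import math

# Hypothetical city coordinates (illustrative only)
cities = [(0, 0), (1, 3), (4, 1), (2, 2), (3, 4)]

def tour_length(order):
    # Closed tour: the salesman returns to the starting city at the end
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exhaustive search over permutations, fixing city 0 as the departure city
best = min(itertools.permutations(range(1, len(cities))),
           key=lambda p: tour_length((0,) + p))
best_len = tour_length((0,) + best)
```

Exhaustive search is only feasible for a handful of cities ((n-1)! tours), which is precisely why heuristic and mathematical-programming approaches such as FMOLP are studied.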
The support vector machine (SVM) is a type of supervised learning model that can be used for classification or regression depending on the dataset. SVM classifies data points by determining the best hyperplane between two or more groups. Working with enormous datasets, however, can lead to a variety of issues, including poor accuracy and long computation times. In this research, SVM is extended by applying several kernel transformations: linear, polynomial, radial basis, and multi-layer kernels. The non-linear SVM classification model is illustrated and summarized in an algorithm using the kernel trick. The proposed method is examined using three simulated datasets with different sample sizes.
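A hedged sketch of comparing the four kernels with scikit-learn's `SVC` follows; the simulated two-class dataset is an assumption rather than the paper's data, and the "multi-layer" kernel is taken here to be the sigmoid (multilayer-perceptron) kernel:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Simulated binary classification data (illustrative only)
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

scores = {}
for kernel in ["linear", "poly", "rbf", "sigmoid"]:  # sigmoid ≈ multi-layer kernel
    clf = SVC(kernel=kernel).fit(Xtr, ytr)
    scores[kernel] = clf.score(Xte, yte)  # test-set accuracy per kernel
```

Repeating this over datasets of different sample sizes reproduces the style of comparison the abstract describes.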
In this paper the modified trapezoidal rule is presented for solving linear Volterra integral equations (V.I.E.) of the second kind, and we observe that this procedure is effective in solving such equations. Two examples are given, with comparison tables, to verify the validity of the procedure.
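For concreteness, a standard trapezoidal discretization of a second-kind Volterra equation u(x) = f(x) + ∫_a^x K(x,t) u(t) dt can be sketched as below (a generic trapezoidal scheme under an assumed test problem, not necessarily the paper's modified rule):

```python
import numpy as np

def volterra2_trapezoid(f, K, a, b, n):
    """Solve u(x) = f(x) + integral_a^x K(x,t) u(t) dt on [a, b]
    with n trapezoidal subintervals, stepping forward in x."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    u = np.empty(n + 1)
    u[0] = f(x[0])  # the integral vanishes at x = a
    for i in range(1, n + 1):
        s = 0.5 * K(x[i], x[0]) * u[0] + sum(K(x[i], x[j]) * u[j]
                                             for j in range(1, i))
        # u[i] appears on both sides with weight h*K/2; solve for it directly
        u[i] = (f(x[i]) + h * s) / (1.0 - 0.5 * h * K(x[i], x[i]))
    return x, u

# Test problem: u(x) = 1 + integral_0^x u(t) dt, with exact solution u(x) = e^x
x, u = volterra2_trapezoid(lambda s: 1.0, lambda s, t: 1.0, 0.0, 1.0, 100)
err = np.max(np.abs(u - np.exp(x)))
```

The scheme is second-order accurate, so halving the step h should reduce the maximum error by roughly a factor of four, which is the kind of behaviour the paper's comparison tables would exhibit.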
Corpus linguistics is a methodology for studying language through corpus-based research. It differs from the traditional (prescriptive) approach to studying a language in its insistence on the systematic study of authentic examples of language in use (the descriptive approach). A "corpus" is a large body of machine-readable, systematically collected, naturally occurring linguistic data, either written texts or transcriptions of recorded speech, which can be used as a starting point for linguistic description or as a means of verifying hypotheses about a language. In the past decade, interest has grown tremendously in the use of language corpora for language education, and corpora have been employed in language pedagogy in a variety of ways.