The current paper proposes a new estimator for the parameters of the linear regression model under Big Data conditions. The diversity of Big Data variables raises many challenges of interest to researchers seeking new methods for estimating the parameters of the linear regression model. The data, collected by the Central Statistical Organization of Iraq, concern child labor in Iraq, a critical phenomenon from which both society and education suffer and which affects the future of the next generation. Two methods have been selected to estimate the parameters of the linear regression model, one of which is One Covariate at a Time Multiple Testing (OCMT). Moreover, the Euclidean distance has been used as a criterion for comparing the methods.
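Since OCMT amounts to a one-at-a-time screening step followed by least squares on the retained covariates, a minimal Python sketch of that workflow is given below. The function names, the significance level, and the Bonferroni-style critical value are illustrative assumptions, not necessarily the paper's exact specification.

```python
import numpy as np
from scipy import stats

def ocmt_select(y, X, alpha=0.05):
    """Keep covariates whose one-at-a-time t-statistic exceeds a
    Bonferroni-style critical value (illustrative threshold)."""
    n, p = X.shape
    crit = stats.norm.ppf(1.0 - alpha / (2.0 * p))
    keep = []
    for j in range(p):
        xj = np.column_stack([np.ones(n), X[:, j]])   # intercept + one covariate
        beta, *_ = np.linalg.lstsq(xj, y, rcond=None)
        resid = y - xj @ beta
        s2 = resid @ resid / (n - 2)
        se = np.sqrt(s2 * np.linalg.inv(xj.T @ xj)[1, 1])
        if abs(beta[1] / se) > crit:
            keep.append(j)
    return keep

def ols_on_selected(y, X, idx):
    """Ordinary least squares on the intercept plus the selected covariates."""
    Z = np.column_stack([np.ones(len(y)), X[:, idx]])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef
```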
One of the results of globalization is the value-added tax (VAT), so it is important to consider its results and its impact on the tax system.
The Iraqi economy, having witnessed an acute transition period, is still in need of better care to support it, especially from the tax system.
The research is concerned with the VAT and its details; over five chapters, all the related problems are presented. Yet the new system does not keep pace with modern developments, and this is one of the defects of the VAT in Iraq.
The history of the VAT was also reviewed and studied.
As a conclusion, the VAT is one of the m
Inventory is considered one of the important matters in many companies, as it represents about 50% of total invested capital, together with intense pressure to reduce total costs combined with other types of uncertainty (fuzziness). Therefore, in this research we present an economic system for total quantities (economic production quantity) to reach the optimal fuzzy lot size (FEOQ) when all parameters are uncertain, where they are converted into a single interval, and then obtaining the lot
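As a rough illustration of a fuzzified order-quantity calculation, the sketch below assumes triangular fuzzy demand, setup cost, and holding cost, defuzzified by their centroids before the classical square-root EOQ formula is applied; the membership functions and defuzzification rule used in the paper may differ.

```python
import math

def centroid(tri):
    """Centroid defuzzification of a triangular fuzzy number (low, mid, high)."""
    a, b, c = tri
    return (a + b + c) / 3.0

def fuzzy_eoq(demand_tri, setup_cost_tri, holding_cost_tri):
    """Classical EOQ applied to defuzzified triangular parameters."""
    D = centroid(demand_tri)         # demand per period
    K = centroid(setup_cost_tri)     # setup / ordering cost per cycle
    h = centroid(holding_cost_tri)   # holding cost per unit per period
    return math.sqrt(2.0 * D * K / h)

# Example with made-up triangular estimates
q_star = fuzzy_eoq((900, 1000, 1200), (45, 50, 60), (4, 5, 6))
```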
This study discusses a biased estimator of the negative binomial regression model known as the Liu estimator. This estimator was used to reduce variance and overcome the problem of multicollinearity among the explanatory variables. Other estimators, such as the ridge regression and maximum likelihood estimators, were also used. This research aims at theoretical comparisons between the new estimator (Liu estimator) and the estimators
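A minimal sketch of Liu-type shrinkage applied to a fitted negative binomial regression is shown below, using the standard form beta_Liu = (X'WX + I)^(-1)(X'WX + dI) beta_ML with IRLS working weights evaluated at the maximum likelihood fit; the log link, the known dispersion, and the variable names are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def liu_negbin(X, beta_ml, dispersion, d=0.5):
    """Liu-type shrinkage of a negative binomial ML estimate (log link).

    beta_Liu = (X'WX + I)^(-1) (X'WX + d I) beta_ML, where W holds the IRLS
    working weights w_i = mu_i / (1 + dispersion * mu_i) at the ML fit and
    d in (0, 1) is the Liu biasing parameter."""
    mu = np.exp(X @ beta_ml)                      # fitted means at the ML estimate
    W = np.diag(mu / (1.0 + dispersion * mu))     # working weights
    XtWX = X.T @ W @ X
    p = X.shape[1]
    return np.linalg.solve(XtWX + np.eye(p), (XtWX + d * np.eye(p)) @ beta_ml)
```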
There is a lack of statistical research studying (p) exogenous input variables that contribute, as a cause, to a time series phenomenon and yield (q) output variables as a result, in a framework conceptually similar to the classical linear regression that relates a dependent variable to explanatory variables. This highlights the importance of providing a full analysis of such an important phenomenon, namely consumer price inflation in Iraq. Several influential variables with a direct connection to the phenomenon were taken and analyzed after treating the problem of outliers in the observations by the (EM) approach, and expanding the sample size (n=36) to
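For readers unfamiliar with input-output time series models, the sketch below fits a small ARMAX-type model with exogenous regressors via statsmodels; the orders, the simulated data, and the reuse of a sample size of 36 are illustrative assumptions rather than the specification analyzed in the research.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 36                                   # small sample, as in the study
X = rng.normal(size=(n, 2))              # two exogenous input series
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

# ARMA(1, 1) errors with exogenous regressors; the orders are illustrative.
model = SARIMAX(y, exog=X, order=(1, 0, 1))
result = model.fit(disp=False)
print(result.summary())
```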
Relationships between related parties are a normal feature of trading and business processes. Entities may perform parts of their activities through subsidiaries, joint ventures, and associates. In these cases, the entity has the ability to influence the financial and operating policies of the investee through control, joint control, or significant influence, so knowledge of transactions, outstanding balances (including commitments), and relationships with related parties could affect how users of the financial statements evaluate the entity's operations, including their assessment of the risks and opportunities facing the entity. Hence the research derives its importance from the importance of the availability
This article aims to explore the importance of estimating a semiparametric regression function, where we suggest a new estimator alongside other combined estimators and then compare them using a simulation technique. The simulation results show that the suggested estimator is the best for the first and second models, whereas for the third model the Burman and Chaudhuri (B&C) estimator is best.
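One common semiparametric specification is the partially linear model y = X*beta + g(z) + e; the sketch below estimates beta by Robinson-type double residuals with a Nadaraya-Watson smoother, purely as an illustration of the setting. The bandwidth, kernel, and helper names are assumptions; the estimators actually compared in the article are not reproduced here.

```python
import numpy as np

def nw_smooth(z, v, h=0.3):
    """Nadaraya-Watson estimate of E[v | z] with a Gaussian kernel (v is 1-D)."""
    d = (z[:, None] - z[None, :]) / h
    K = np.exp(-0.5 * d ** 2)
    return K @ v / K.sum(axis=1)

def partially_linear_beta(y, X, z, h=0.3):
    """Robinson-type estimate of beta in y = X beta + g(z) + e."""
    ry = y - nw_smooth(z, y, h)                    # residual of y after smoothing on z
    rX = X - np.column_stack([nw_smooth(z, X[:, j], h) for j in range(X.shape[1])])
    beta, *_ = np.linalg.lstsq(rX, ry, rcond=None)
    return beta
```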
Support vector machines (SVMs) are supervised learning models that analyze data for classification or regression. For classification, SVM is widely used by selecting an optimal hyperplane that separates two classes. SVM has very good accuracy and is extremely robust compared with some other classification methods such as logistic regression, random forest, k-nearest neighbors, and the naïve model. However, working with large datasets can cause many problems, such as long running times and inefficient results. In this paper, the SVM has been modified by using a stochastic gradient descent process. The modified method, stochastic gradient descent SVM (SGD-SVM), was checked using two simulated datasets. Since the classification of different ca
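Training a linear SVM with the hinge loss by stochastic gradient descent is what scikit-learn exposes as SGDClassifier, so a minimal sketch along those lines is given below; the simulated dataset and hyperparameters are illustrative, not the settings used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Simulated two-class data standing in for a large dataset
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Linear SVM (hinge loss) fitted by stochastic gradient descent
clf = SGDClassifier(loss="hinge", alpha=1e-4, max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```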
In this research, we deal with the study of the non-homogeneous Poisson process, one of the most important statistical topics contributing to scientific development, as it concerns accidents that occur in reality and are modeled as Poisson processes, because the occurrence of such an accident is related to time, whether the rate changes over time or remains stable. Our research clarifies the non-homogeneous Poisson process and uses one of its models, the exponentiated Weibull model with three parameters (α, β, σ), as a function to estimate the time rate of occurrence of earthquakes in Erbil Governorate, as the governorate is adjacent to two countr
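A minimal sketch of a non-homogeneous Poisson process whose intensity is derived from an exponentiated Weibull form with parameters (α, β, σ), simulated by Lewis-Shedler thinning, is given below; the scaling of the intensity and the assumption α, β ≥ 1 (so the intensity stays bounded near zero) are illustrative choices, not the paper's exact mean-value function.

```python
import numpy as np

def ew_intensity(t, alpha, beta, sigma, scale=10.0):
    """Illustrative intensity: scale * d/dt of the exponentiated Weibull CDF
    F(t) = (1 - exp(-(t/sigma)**beta))**alpha (assumes alpha, beta >= 1)."""
    t = np.asarray(t, dtype=float)
    inner = np.exp(-(t / sigma) ** beta)
    return (scale * alpha * (1.0 - inner) ** (alpha - 1) * inner
            * beta * t ** (beta - 1) / sigma ** beta)

def simulate_nhpp(T, alpha, beta, sigma, scale=10.0, seed=0):
    """Simulate event times on (0, T] by Lewis-Shedler thinning;
    the intensity bound is approximated on a fine grid."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(1e-6, T, 1000)
    lam_max = ew_intensity(grid, alpha, beta, sigma, scale).max()
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(times)
        if rng.uniform() < ew_intensity(t, alpha, beta, sigma, scale) / lam_max:
            times.append(t)

# Example: events over a 20-unit horizon with made-up parameters
events = simulate_nhpp(T=20.0, alpha=1.5, beta=1.2, sigma=8.0)
```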
The purpose of this paper is to apply robustness in linear programming (LP) to eliminate the problem of uncertainty in the constraint parameters and to find the robust optimal solution that maximizes the profits of the General Productive Company of Vegetable Oils for the year 2019. This is done by modifying a mathematical linear programming model in which some parameters have uncertain values, and processing it using the robust counterpart of linear programming to obtain results that are robust to the random changes occurring in the uncertain values of the problem, assuming these values belong to the uncertainty set and selecting the values that cause the worst results, and to depend on buil
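For box (interval) uncertainty and non-negative decision variables, the robust counterpart of a profit-maximization LP reduces to an ordinary LP in which each uncertain coefficient is replaced by its worst-case upper bound; the sketch below illustrates this with scipy.optimize.linprog on made-up numbers, not the company's data.

```python
import numpy as np
from scipy.optimize import linprog

profit = np.array([30.0, 45.0])          # profit per unit of each product
A_nom = np.array([[2.0, 3.0],            # nominal resource usage per unit
                  [1.0, 2.0]])
A_dev = np.array([[0.2, 0.3],            # maximum deviations (uncertainty set)
                  [0.1, 0.2]])
b = np.array([100.0, 60.0])              # available resources

# Worst case for x >= 0: every uncertain coefficient at its upper bound
A_robust = A_nom + A_dev
res = linprog(-profit, A_ub=A_robust, b_ub=b,
              bounds=[(0, None)] * 2, method="highs")
print("robust optimal plan:", res.x, "max profit:", -res.fun)
```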