Use of (K-Means) for clustering in Data Mining with application

Great scientific progress has led to a widespread growth of information. As information accumulates in large databases, it becomes important to revise and compile this vast amount of data in order to extract the hidden information, or to classify the data according to their relations with each other, so that they can be exploited for technical purposes.

Data mining (DM) is well suited to this task, hence the interest in studying the (K-Means) algorithm for clustering data. The algorithm is applied in practice so that the effect of changing the sample size (n) and the number of clusters (K) on the clustering process can be observed.
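
A minimal sketch of the kind of experiment described above, assuming synthetic data and scikit-learn's KMeans; the sample sizes, cluster counts, and quality measures are illustrative choices, not the paper's.

```python
# Illustrative sketch (not the paper's data): observe how the sample size (n)
# and the number of clusters (K) affect the K-Means clustering result.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

for n in (100, 500, 2000):                 # varying sample size n
    X, _ = make_blobs(n_samples=n, centers=4, cluster_std=1.2, random_state=0)
    for k in (2, 3, 4, 6):                 # varying number of clusters K
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        score = silhouette_score(X, km.labels_)
        print(f"n={n:5d}  K={k}  inertia={km.inertia_:10.1f}  silhouette={score:.3f}")
```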

Publication Date
Sat Jun 01 2019
Journal Name
Journal Of Economics And Administrative Sciences
Comparison of some methods for estimating the parameters of the binary logistic regression model using the genetic algorithm with practical application

Abstract

People suffer, under the pressures of normal life, from exposure to several types of heart disease resulting from different factors. Therefore, in order to determine whether a case ends in death or not, the data are modeled using the binary logistic regression model.

In this research, one of the most important nonlinear regression models, widely used in statistical modeling applications, is applied to heart disease data: the binary logistic regression model. The parameters of this model are then estimated using statistical estimation methods; a further problem appears in estimating its parameters, as well as when the number…
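
A hedged sketch of the idea of estimating binary logistic regression coefficients with a genetic algorithm, on simulated data; the population size, crossover, and mutation settings are illustrative assumptions, not the estimators compared in the paper.

```python
# Illustrative sketch: fit a binary logistic regression by maximizing the
# log-likelihood with a simple genetic algorithm (simulated data, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])    # design matrix with intercept
true_beta = np.array([-0.5, 1.0, -2.0, 0.8])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

def log_lik(beta):
    eta = X @ beta
    return np.sum(y * eta - np.log1p(np.exp(eta)))            # Bernoulli log-likelihood

pop = rng.normal(scale=2.0, size=(60, p + 1))                 # population of candidate coefficient vectors
for gen in range(300):
    fitness = np.array([log_lik(b) for b in pop])
    elite = pop[np.argsort(fitness)[-20:]]                    # keep the 20 fittest candidates
    parents = elite[rng.integers(0, 20, size=(40, 2))]        # random parent pairs from the elite
    w = rng.uniform(size=(40, 1))
    children = w * parents[:, 0] + (1 - w) * parents[:, 1]    # blend crossover
    children += rng.normal(scale=0.1, size=children.shape)    # Gaussian mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([log_lik(b) for b in pop])]
print("GA estimate:", np.round(best, 3))
print("true beta:  ", true_beta)
```
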
Publication Date
Fri Mar 01 2013
Journal Name
Journal Of Economics And Administrative Sciences
Robust Two-Step Estimation and Approximation Local Polynomial Kernel For Time-Varying Coefficient Model With Balance Longitudinal Data

In this research, a nonparametric technique is presented to estimate the time-varying coefficient functions for balanced longitudinal data, which are characterized by observations obtained from (n) independent subjects, each of which is measured repeatedly at a set of (m) specific time points. Although the measurements are independent between different subjects, they are mostly correlated within each subject. The applied technique is the local linear kernel (LLPK) technique. To avoid the problems of dimensionality and heavy computation, a two-step method is used to estimate the coefficient functions by means of the former technique. Since the two-…
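
A small sketch of a local linear kernel smoother for a single time-varying coefficient on simulated balanced longitudinal data; the Epanechnikov kernel, bandwidth, and data are illustrative assumptions, not the paper's two-step estimator in full.

```python
# Illustrative sketch (simulated data): local linear kernel estimation of a
# time-varying coefficient beta(t) from balanced longitudinal data (n subjects, m times).
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 20
t = np.linspace(0, 1, m)                         # common time points (balanced design)

def beta(s):                                     # true time-varying coefficient
    return np.sin(2 * np.pi * s)

x = rng.normal(size=(n, m))                      # covariate for n subjects at m times
y = beta(t) * x + 0.3 * rng.normal(size=(n, m))  # responses

def llk_beta(t0, h=0.1):
    """Local linear estimate of beta(t0), Epanechnikov kernel with bandwidth h."""
    u = (t - t0) / h
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)        # weight per time point
    W = np.sqrt(np.repeat(w, n))                                  # same weight for every subject at that time
    D = np.column_stack([x.T.ravel(), (x * (t - t0)).T.ravel()])  # local linear design
    Y = y.T.ravel()
    coef, *_ = np.linalg.lstsq(D * W[:, None], Y * W, rcond=None)
    return coef[0]

grid = np.linspace(0.05, 0.95, 10)
print("estimated:", np.round([llk_beta(s) for s in grid], 2))
print("true:     ", np.round(beta(grid), 2))
```
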
Publication Date
Mon Jun 01 2015
Journal Name
Journal Of Economics And Administrative Sciences
Constructing fuzzy linear programming model with practical application

This paper deals with constructing a fuzzy linear programming model with an application to the fuel products of the Dura refinery, which consist of seven products that have a direct effect on daily consumption. After building the model, which consists of an objective function representing the selling prices of the products, fuzzy production constraints, fuzzy demand constraints, and production requirement constraints, the (WIN QSB) program was used to find the optimal solution.
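
A minimal sketch of one standard way to turn a fuzzy LP into a crisp one (the max-lambda formulation), solved with scipy rather than WIN QSB; the two hypothetical products, prices, capacities, and tolerances are invented, not the refinery's seven-product data.

```python
# Illustrative sketch (made-up numbers, not the refinery's data): max-lambda
# reformulation of a fuzzy LP, solved with scipy instead of WIN QSB.
from scipy.optimize import linprog

# Two hypothetical products with selling prices 5 and 4; decision vector is (x1, x2, lam).
# Fuzzy profit goal: profit >= 100, tolerated down to 80   -> profit >= 80 + 20*lam
# Fuzzy capacity:    2*x1 + 3*x2 <= 60, stretchable to 70  -> ... <= 70 - 10*lam
# Fuzzy demand:      x1 >= 12, tolerated down to 8         -> x1 >= 8 + 4*lam
c = [0, 0, -1]                                  # maximize lambda  ==  minimize -lambda
A_ub = [[-5, -4, 20],                           # -5x1 - 4x2 + 20*lam <= -80
        [ 2,  3, 10],                           #  2x1 + 3x2 + 10*lam <=  70
        [-1,  0,  4]]                           # -x1        +  4*lam <=  -8
b_ub = [-80, 70, -8]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x
print(f"x1={x1:.2f}, x2={x2:.2f}, satisfaction lambda={lam:.2f}")
```

Here lambda measures the common degree to which the fuzzy profit goal and the fuzzy constraints are satisfied.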

Publication Date
Fri Aug 01 2014
Journal Name
Journal Of Economics And Administrative Sciences
Efficiency Measurement Model for Postgraduate Programs and Undergraduate Programs by Using Data Envelopment Analysis

Measuring the efficiency of postgraduate and undergraduate programs is one of the essential elements in the educational process. In this study, the colleges of Baghdad University and data for the academic year (2011-2012) have been chosen to measure the relative efficiencies of postgraduate and undergraduate programs in terms of their inputs and outputs. A relevant method for analysing these data is Data Envelopment Analysis (DEA). The effect of academic staff on the numbers of enrolled students and alumni in the postgraduate and undergraduate programs is the main focus of the study.
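
A compact sketch of the input-oriented CCR DEA model behind such an efficiency measurement, solved as one LP per unit with scipy; the four hypothetical colleges and their staff/enrolment/alumni figures are invented stand-ins for the study's data.

```python
# Illustrative sketch (invented data): input-oriented CCR efficiency scores
# via linear programming, one LP per decision-making unit (DMU).
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (e.g. colleges); input: academic staff; outputs: enrolled, alumni
X = np.array([[30.], [45.], [25.], [60.]])                   # inputs
Y = np.array([[300, 40], [380, 55], [260, 30], [500, 80]])   # outputs

n, m = X.shape                                        # number of DMUs, number of inputs
s = Y.shape[1]                                        # number of outputs
for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta; variables = (theta, lambdas)
    A_in = np.c_[-X[o], X.T]                          # sum_j lam_j * x_j <= theta * x_o
    A_out = np.c_[np.zeros((s, 1)), -Y.T]             # sum_j lam_j * y_j >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(f"DMU {o}: efficiency = {res.x[0]:.3f}")
```

Units with a score of 1 lie on the efficient frontier; scores below 1 indicate how far inputs could be contracted while keeping the observed outputs.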

 

Publication Date
Sun Jan 01 2023
Journal Name
Journal Of Intelligent Systems
A study on predicting crime rates through machine learning and data mining using text
Abstract
Crime is a threat to any nation's security administration and jurisdiction. Therefore, crime analysis becomes increasingly important, because it assigns the time and place of a crime based on the collected spatial and temporal data. However, old techniques, such as paperwork, investigative judges, and statistical analysis, are not efficient enough to predict the accurate time and location where the crime took place. But when machine learning and data mining methods were deployed in crime analysis, crime analysis and prediction accuracy increased dramatically. In this study, various types of criminal analysis and prediction using several machine learning and data mining techniques, based on…
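
A toy sketch of the text-based prediction the survey discusses, assuming a TF-IDF representation with a Naive Bayes classifier from scikit-learn; the report snippets and labels are invented for illustration.

```python
# Toy sketch (invented examples): classify crime report text into categories
# with TF-IDF features and a Naive Bayes classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reports = ["vehicle stolen from parking lot overnight",
           "apartment broken into, jewellery missing",
           "victim assaulted outside the bar at midnight",
           "car taken without owner consent near the market"]
labels = ["theft", "burglary", "assault", "theft"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(reports, labels)
print(model.predict(["laptop stolen from a parked car"]))
```
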
Publication Date
Wed Apr 01 2015
Journal Name
Journal Of Economics And Administrative Sciences
Multi-objectives probabilistic Aggregate production planning with practical application

In this research, a multi-objective stochastic aggregate production planning model has been built for General Al-Mansour Company data, with stochastic demand under a changing market and an uncertain environment, with the aim of drawing up strong production plans. The analysis derives insights on management issues: regular and extra labour costs, the costs of maintaining inventories, and the choice of a good policy under medium and optimistic scenarios. The stochastic model adopts two objective functions, a total cost function (the core) and an income function, and the random-priority form is compared with fixed forms with a single objective function. The results showed that the two-phase model with…
Publication Date
Thu Aug 01 2019
Journal Name
Journal Of Economics And Administrative Sciences
Aggregate production planning using linear programming with practical application

Abstract:

The study aims at building a mathematical model for aggregate production planning for the Baghdad soft drinks company. The study is based on a set of aggregate planning strategies (control of working hours, and a storage-level control strategy) for the purpose of exploiting the available resources and productive capacities in an optimal manner and minimizing production costs, using the (Matlab) program. The most important finding of the research is the importance of exploiting the production capacity available in the months when demand is less than the available capacity, to cover the subsequent months when demand exceeds the available capacity, and of minimizing the use of overtime…
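
A small sketch of the kind of aggregate production planning LP the abstract describes (regular production, overtime, and storage carried between months), written with scipy rather than Matlab; the demand, capacities, and unit costs are illustrative, not the company's.

```python
# Illustrative sketch (made-up demand/costs, scipy instead of Matlab): aggregate
# production planning over T months with regular time, overtime and storage.
import numpy as np
from scipy.optimize import linprog

T = 4
demand = np.array([90, 120, 150, 110])
cap_reg, cap_ot = 100, 30            # monthly capacity: regular hours, overtime hours
c_reg, c_ot, c_inv = 5.0, 8.0, 1.0   # unit costs: regular, overtime, holding

# variable layout: [P_1..P_T, O_1..O_T, I_1..I_T]
c = np.r_[np.full(T, c_reg), np.full(T, c_ot), np.full(T, c_inv)]

# inventory balance: I_t - I_{t-1} - P_t - O_t = -demand_t   (with I_0 = 0)
A_eq = np.zeros((T, 3 * T))
b_eq = -demand.astype(float)
for t in range(T):
    A_eq[t, t] = -1             # -P_t
    A_eq[t, T + t] = -1         # -O_t
    A_eq[t, 2 * T + t] = 1      # +I_t
    if t > 0:
        A_eq[t, 2 * T + t - 1] = -1   # -I_{t-1}

bounds = [(0, cap_reg)] * T + [(0, cap_ot)] * T + [(0, None)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
P, O, I = res.x[:T], res.x[T:2 * T], res.x[2 * T:]
print("regular:", P.round(1), "overtime:", O.round(1), "inventory:", I.round(1))
print("total cost:", round(res.fun, 1))
```
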
Publication Date
Sun Feb 25 2024
Journal Name
Baghdad Science Journal
Optimizing Blockchain Consensus: Incorporating Trust Value in the Practical Byzantine Fault Tolerance Algorithm with Boneh-Lynn-Shacham Aggregate Signature

The consensus algorithm is the core mechanism of blockchain and is used to ensure data consistency among blockchain nodes. The PBFT (Practical Byzantine Fault Tolerance) consensus algorithm is widely used in consortium chains because it is resistant to Byzantine faults. However, the present PBFT still has issues with master-node selection, which is random, and with complicated communication. This study proposes an enhanced consensus technique, IBFT, based on node trust values and the BLS (Boneh-Lynn-Shacham) aggregate signature. In IBFT, multi-level indicators are used to calculate the trust value of each node, and some nodes are selected to take part in network consensus as a result of this calculation. The master node is chosen…
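
A simplified sketch of the trust-value idea only, with no networking and no actual BLS signatures: multi-level indicators are combined into a trust score, the highest-trust nodes form the consensus set, and the top-ranked node acts as master. The indicators and weights are illustrative assumptions, not the paper's exact scheme.

```python
# Simplified sketch (illustrative weights, no BLS cryptography): combine
# multi-level indicators into a node trust value and pick consensus/master nodes.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    uptime: float          # fraction of time online             (0..1)
    valid_votes: float     # share of historically correct votes (0..1)
    latency_score: float   # 1 = fast, 0 = slow                  (0..1)

def trust(node, w=(0.4, 0.4, 0.2)):
    """Weighted multi-level indicator score; the weights are an assumption."""
    return w[0] * node.uptime + w[1] * node.valid_votes + w[2] * node.latency_score

nodes = [Node("n1", 0.99, 0.97, 0.80), Node("n2", 0.90, 0.99, 0.60),
         Node("n3", 0.70, 0.80, 0.90), Node("n4", 0.99, 0.60, 0.70),
         Node("n5", 0.95, 0.95, 0.95)]

ranked = sorted(nodes, key=trust, reverse=True)
consensus_set = ranked[:4]           # e.g. tolerate f=1 fault with 3f+1 = 4 nodes
master = consensus_set[0]            # highest trust value becomes master, not a random pick
print("consensus nodes:", [n.name for n in consensus_set], "| master:", master.name)
```
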
Publication Date
Sun Oct 01 2017
Journal Name
Journal Of Economics And Administrative Sciences
Statistical testing of mediation variables in structural equation models with practical application

Abstract:
This study examines one method for estimating and testing the parameters of mediating variables in a structural equation model (SEM), namely the causal steps method, in order to identify the variables that have indirect effects by estimating and testing the mediation parameters in this way. The method is then applied to the Iraq Women Integrated Social and Health Survey (I-WISH) for the year 2011, from the Ministry of Planning - Central Statistical Organization, to identify whether the variables have a mediation effect in the model, using the causal steps method with the AMOS program V.23. The independent variable X represents the phenomenon studied (the cultural case of the…
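
A brief sketch of the causal steps logic on simulated data, using OLS regressions from statsmodels rather than AMOS and the I-WISH survey: mediation is supported when X predicts M, M predicts Y controlling for X, and the direct effect of X is smaller than the total effect.

```python
# Illustrative sketch (simulated data, OLS instead of AMOS): the causal steps
# check for a mediating variable M between X and Y.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(size=n)             # X -> M
Y = 0.5 * M + 0.1 * X + rng.normal(size=n)   # M -> Y plus a small direct effect of X

step1 = sm.OLS(M, sm.add_constant(X)).fit()                        # X must predict M
step2 = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit()  # M must predict Y given X
step3 = sm.OLS(Y, sm.add_constant(X)).fit()                        # total effect of X

print("a  (X->M):        ", round(step1.params[1], 3))
print("b  (M->Y | X):    ", round(step2.params[2], 3))
print("c  (total X->Y):  ", round(step3.params[1], 3))
print("c' (direct X->Y): ", round(step2.params[1], 3))
```

The indirect effect is the product a*b, and the drop from c to c' indicates how much of the total effect passes through the mediator.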
Publication Date
Mon Dec 05 2022
Journal Name
Baghdad Science Journal
K-Nearest Neighbor Method with Principal Component Analysis for Functional Nonparametric Regression

This paper proposes a new method to study functional nonparametric regression data analysis with conditional expectation in the case where the covariates are functional, and Principal Component Analysis is utilized to de-correlate the multivariate response variables. It utilizes the formula of the Nadaraya-Watson estimator (K-Nearest Neighbour (KNN)) for prediction, with different types of semi-metrics (based on the second derivative and on Functional Principal Component Analysis (FPCA)) for measuring the closeness between curves. Root mean square error is used for the implementation of this model, which is then compared to the independent response method. The R program is used for analysing the data. Then, when the cov…
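
A compact sketch of the KNN functional-regression idea with an FPCA-based semi-metric on simulated curves, using uniform neighbour weights as a simplification of the Nadaraya-Watson form; the number of components, neighbours, and data are illustrative assumptions.

```python
# Illustrative sketch (simulated curves): KNN functional regression where the
# distance between curves is computed in the space of their first FPCA scores.
import numpy as np

rng = np.random.default_rng(3)
n, m = 150, 50
t = np.linspace(0, 1, m)
a = rng.uniform(-1, 1, size=n)
curves = a[:, None] * np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(n, m))  # functional covariates
y = 2 * a + 0.1 * rng.normal(size=n)                                          # scalar response

# FPCA via SVD of the centred curves; keep the first q components as scores
q = 2
centred = curves - curves.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt[:q].T                          # FPCA scores used by the semi-metric

def knn_predict(new_curve, k=10):
    new_score = (new_curve - curves.mean(axis=0)) @ Vt[:q].T
    d = np.linalg.norm(scores - new_score, axis=1)   # semi-metric: distance between score vectors
    return y[np.argsort(d)[:k]].mean()               # average response of the k nearest curves

test = 0.5 * np.sin(2 * np.pi * t)
print("prediction:", round(knn_predict(test), 3), "(true value is about 1.0)")
```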