Comparison of some artificial neural networks for graduate students

Artificial Neural Networks (ANNs) are among the important statistical methods widely used across a range of applications in various fields. An ANN simulates the work of the human brain in receiving a signal, processing the data in a cell, and passing it on to the next cell. It is a system consisting of a number of linked modules (layers): input, hidden, and output. A comparison was made between three types of neural networks: the Feed Forward Neural Network (FFNN), the Back Propagation network (BPL), and the Recurrent Neural Network (RNN). The study found that the lowest false prediction rate was achieved by the recurrent network architecture, using data on graduate students at the College of Administration and Economics, University of Baghdad, for the period from the 2014-2015 to the 2017-2018 academic year. The variables used in the research are the student's success, age, gender, job, type of study (higher diploma, master's, doctorate), specialization (statistics, economics, accounting, industry management, administrative management, and public administration), and acceptance channel. The variables found to most affect the success of graduate students are the type of study, age, and job.
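The false-prediction-rate criterion used in the comparison can be illustrated with a minimal sketch. The student dataset is not public, so the features and labels below are synthetic placeholders, and the model is a single logistic unit trained by gradient descent rather than any of the three architectures compared in the paper:

```python
import math, random

def sigmoid(z):
    if z < -60.0:          # guard against math.exp overflow
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def false_prediction_rate(w, b, samples):
    """Proportion of misclassified observations -- the comparison criterion."""
    wrong = sum(1 for x, y in samples
                if (sigmoid(w[0]*x[0] + w[1]*x[1] + b) >= 0.5) != (y == 1))
    return wrong / len(samples)

random.seed(0)
# Synthetic, linearly separable stand-in for the student data:
# "success" (1) when the two placeholder features sum above 1.
samples = []
for _ in range(200):
    x = [random.random(), random.random()]
    samples.append((x, 1 if x[0] + x[1] > 1.0 else 0))

w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):              # stochastic gradient descent epochs
    for x, y in samples:
        p = sigmoid(w[0]*x[0] + w[1]*x[1] + b)
        g = p - y                  # dLoss/dz for the logistic unit
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b    -= lr * g

rate = false_prediction_rate(w, b, samples)
print(rate)
```

With a full FFNN, BPL, or RNN in place of the logistic unit, the same `false_prediction_rate` function would serve as the comparison criterion.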

Publication Date
Mon Jun 17 2019
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Dynamic Channel Assignment Using Neural Networks

This paper presents a proposed neural network algorithm to solve the shortest path problem (SPP) for communication routing. The solution extends the traditional recurrent Hopfield architecture, finding the optimal route for any request by choosing single- and multi-link paths for node-to-node traffic so as to minimize the loss. The suggested neural network algorithm was implemented on a 20-node example network. The results show clear convergence, with 95% valid convergence (about 361 optimal routes out of 380 pairs). Computation performance is also reported, at the expense of slightly worse results.
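The paper's Hopfield formulation is not reproduced here; as a classical point of reference for the same shortest path problem, Dijkstra's algorithm on a small example graph (node names and link costs invented; the paper uses a 20-node network) looks like:

```python
import heapq

def dijkstra(adj, src):
    """Classical single-source shortest paths; adj maps a node to a
    list of (neighbor, link cost) pairs."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue               # stale queue entry
        for v, c in adj[u]:
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Invented 4-node example graph.
adj = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
dist = dijkstra(adj, "A")
print(dist)   # shortest costs from A
```

A neural formulation trades this exact algorithm for an energy-minimizing network, which is what the abstract's 95% valid-convergence figure measures against the optimal routes.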

Publication Date
Sat Dec 02 2017
Journal Name
Al-khwarizmi Engineering Journal
Direction Finding Using GHA Neural Networks

This paper adapts a neural network to the estimation of the direction of arrival (DOA). It uses an unsupervised adaptive neural network with the GHA algorithm to extract the principal components, which in turn are used by the Capon method to estimate the DOA. Through the PCA neural network we take the signal subspace only and use it in the Capon method (i.e., we ignore the noise subspace and retain the signal subspace only).
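A minimal sketch of the GHA idea, restricted to the first principal component (where it reduces to Oja's rule): samples lying along one direction are fed to a single linear unit whose weight vector converges to the dominant signal-subspace direction. The data, learning rate, and iteration counts below are illustrative assumptions, not the paper's setup:

```python
import math, random

random.seed(1)
# Noiseless samples along direction (2, 1): the principal component the
# unsupervised update should recover (the signal subspace fed to Capon).
samples = []
for _ in range(500):
    t = random.uniform(-1.0, 1.0)
    samples.append((2.0 * t, 1.0 * t))

w = [0.5, 0.1]                  # small initial weight vector
eta = 0.05                      # learning rate (illustrative)
for _ in range(20):             # epochs
    for x in samples:
        y = w[0]*x[0] + w[1]*x[1]
        # Oja's rule -- the GHA update for the first component:
        w[0] += eta * y * (x[0] - y * w[0])
        w[1] += eta * y * (x[1] - y * w[1])

norm = math.hypot(w[0], w[1])           # converges to a unit vector
direction = (w[0] / norm, w[1] / norm)  # ~ (2, 1) / sqrt(5)
print(norm, direction)
```

The full GHA (Sanger's rule) extracts further components by deflating each unit's input with the components already found.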

Publication Date
Mon Dec 01 2008
Journal Name
Journal Of Economics And Administrative Sciences
Neural Networks as a Discriminant Purposes

Discrimination between groups is one of the common procedures because of its ability to analyze many practical phenomena, and several methods can be used for this purpose, such as linear and quadratic discriminant functions. Recently, neural networks have been used as a tool to distinguish between groups.

In this paper, simulation is used to compare neural networks with the classical methods for classifying observations into the groups to which they belong, in the case where some variables do not follow the normal distribution. We use the proportion of misclassified observations out of all observations as the criterion of comparison.
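The comparison criterion, the proportion of misclassified observations, can be sketched as follows; the nearest-centroid rule and the two tiny one-dimensional groups are illustrative stand-ins for the paper's simulated data and classifiers:

```python
def misclassification_rate(predict, samples):
    """Proportion of misclassified observations out of all observations."""
    wrong = sum(1 for x, g in samples if predict(x) != g)
    return wrong / len(samples)

# Two tiny one-dimensional groups (invented values).
group_a = [1.0, 1.2, 0.8, 1.1]
group_b = [3.0, 2.9, 3.2, 2.7]
centroid_a = sum(group_a) / len(group_a)
centroid_b = sum(group_b) / len(group_b)

# Nearest-centroid rule: a minimal linear-discriminant-style classifier.
def predict(x):
    return "A" if abs(x - centroid_a) < abs(x - centroid_b) else "B"

samples = [(x, "A") for x in group_a] + [(x, "B") for x in group_b]
rate = misclassification_rate(predict, samples)
print(rate)    # 0.0: these groups are perfectly separated
```

The same `misclassification_rate` would be applied unchanged to a neural network's predictions, which is what makes it a fair common criterion.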

Publication Date
Wed Mar 10 2021
Journal Name
Baghdad Science Journal
On Training Of Feed Forward Neural Networks

In this paper we describe several different training algorithms for feed forward neural networks (FFNN). All of these algorithms use the gradient of the performance function (the energy function) to determine how to adjust the weights so that the performance function is minimized, with the back propagation algorithm used to increase the speed of training. The algorithms differ in their computations, and thus in the form of the search direction and in their storage requirements; however, none of them has global properties suited to all problems.
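A toy illustration of the shared structure of these algorithms: each minimizes a performance function by stepping along a search direction built from its gradient. The quadratic energy function and the two update rules below (plain gradient descent and a momentum variant) are illustrative, not the specific algorithms surveyed in the paper:

```python
def energy(w):                # performance ("energy") function to minimize
    return (w - 3.0) ** 2

def grad(w):                  # its gradient
    return 2.0 * (w - 3.0)

# Plain gradient descent: search direction is the negative gradient.
w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)

# Momentum variant: the search direction mixes the current negative
# gradient with the previous step -- a different computation, with the
# extra storage cost of keeping the previous step v.
u, v = 0.0, 0.0
for _ in range(100):
    v = 0.9 * v - 0.1 * grad(u)
    u += v

print(w, u)    # both approach the minimizer 3.0
```

On a quadratic both converge; neither has global properties that make it best for every problem, which is the survey's point.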

Publication Date
Tue Oct 20 2020
Journal Name
Ibn Al-haitham Journal For Pure And Applied Sciences
Comparison of Artificial Neural Network and Box- Jenkins Models to Predict the Number of Patients with Hypertension in Kalar

Artificial Neural Network (ANN) is widely used in many complex applications. An artificial neural network is an intelligent statistical technique resembling the characteristics of the human neural network. The prediction of time series is among the important topics in the statistical sciences, helping administrations plan and make accurate decisions. The aim of this study is therefore to analyze the monthly hypertension cases in Kalar for the period (January 2011 - June 2018) by applying an autoregressive integrated moving average (ARIMA) model and artificial neural networks, and to choose the best and most efficient model for patients with hypertension in Kalar through a comparison between neural networks and Box-Jenkins models.
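As a sketch of the Box-Jenkins side of such a comparison, the snippet below fits the simplest member of that family, an AR(1) model, by least squares; the monthly series is invented, since the Kalar hypertension data is not reproduced in the abstract:

```python
# Illustrative monthly counts (placeholder for the Kalar series).
series = [10, 12, 11, 13, 12, 14, 13, 15, 14, 16]

# Least-squares estimate of 'a' in the AR(1) model y_t = a * y_{t-1} + e_t,
# the simplest Box-Jenkins-family model.
num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
a = num / den

forecast = a * series[-1]      # one-step-ahead prediction
print(a, forecast)
```

In the study's comparison, forecasts like this from the fitted ARIMA model are scored against the neural network's forecasts on the same held-out months.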

Publication Date
Wed Jan 13 2016
Journal Name
University Of Baghdad
Employ Mathematical Model and Neural Networks for Determining Rate Environmental Contamination

Publication Date
Mon Jun 30 2008
Journal Name
Iraqi Journal Of Science
On the Greedy Ridge Function Neural Networks for Approximation Multidimensional Functions

The aim of this paper is to approximate multidimensional functions f∈C(R^s) by developing a new type of feedforward neural network (FFNN) which we call the greedy ridge function neural network (GRGFNN). We also introduce a modification to the greedy algorithm used to train greedy ridge function neural networks. An error bound is introduced in Sobolev space. Finally, a comparison is made between the three algorithms (the modified greedy algorithm, the backpropagation algorithm, and the result in [1]).
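The greedy idea can be sketched as follows: at each step, choose from a dictionary of ridge functions (functions of a one-dimensional projection of the input) the one that best reduces the current residual, and subtract its least-squares fit. The target function, the tanh ridge form, and the small fixed dictionary are illustrative assumptions, not the paper's construction:

```python
import math

def f(x, y):                       # target function to approximate
    return math.sin(x + y)

def ridge(a, b, c, x, y):          # ridge function: nonlinearity of a projection
    return math.tanh(a * x + b * y + c)

pts = [(i / 4.0, j / 4.0) for i in range(5) for j in range(5)]
target = [f(x, y) for x, y in pts]

# Small fixed dictionary of candidate directions/biases (an assumption;
# the paper's greedy step selects candidates more systematically).
cands = [(a, b, c) for a in (-1.0, 0.5, 1.0, 2.0)
                   for b in (-1.0, 0.5, 1.0, 2.0)
                   for c in (-0.5, 0.0, 0.5)]

residual = target[:]
terms = []
for _ in range(4):                 # greedy iterations: fit the residual
    best = None
    for a, b, c in cands:
        g = [ridge(a, b, c, x, y) for x, y in pts]
        gg = sum(v * v for v in g) or 1e-12
        coef = sum(gi * ri for gi, ri in zip(g, residual)) / gg
        err = sum((ri - coef * gi) ** 2 for gi, ri in zip(g, residual))
        if best is None or err < best[0]:
            best = (err, a, b, c, coef, g)
    _, a, b, c, coef, g = best
    residual = [ri - coef * gi for gi, ri in zip(g, residual)]
    terms.append((a, b, c, coef))

mse_before = sum(t * t for t in target) / len(target)
mse_after = sum(r * r for r in residual) / len(residual)
print(mse_before, mse_after)       # the greedy fit reduces the error
```

Each selected term corresponds to one hidden unit of the resulting network, so the greedy loop grows the network one neuron at a time instead of training all weights jointly by backpropagation.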

Publication Date
Sun Sep 30 2012
Journal Name
Iraqi Journal Of Chemical And Petroleum Engineering
Development of PVT Correlation for Iraqi Crude Oils Using Artificial Neural Network

Several correlations have been proposed for bubble point pressure; however, they could not predict it accurately over a wide range of operating conditions. This study presents an Artificial Neural Network (ANN) model for predicting the bubble point pressure, especially for oil fields in Iraq. The most influential parameters were used as the input layer to the network: reservoir temperature, oil gravity, solution gas-oil ratio, and gas relative density. The model was developed using 104 real data points collected from Iraqi reservoirs. The data was divided into two groups: the first was used to train the ANN model, and the second was used to test the model and evaluate its accuracy and trend stability.
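A sketch of the network structure the abstract describes: a feedforward net mapping the four inputs to a predicted bubble point pressure. The hidden-layer size, tanh activation, and weights below are placeholders (untrained); the actual model was fitted to the 104 Iraqi data points:

```python
import math, random

random.seed(2)

def forward(inputs, W1, b1, W2, b2):
    """One-hidden-layer feedforward pass: 4 inputs -> tanh units -> 1 output."""
    hidden = [math.tanh(sum(w * i for w, i in zip(row, inputs)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

n_hidden = 5                       # hidden-layer size is a placeholder
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(n_hidden)]
b1 = [random.uniform(-1, 1) for _ in range(n_hidden)]
W2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
b2 = 0.0

# Input order mirrors the abstract: reservoir temperature, oil gravity,
# solution gas-oil ratio, gas relative density (placeholder values; the
# weights above are untrained, so the output is not a real prediction).
pb = forward([180.0, 32.0, 500.0, 0.85], W1, b1, W2, b2)
print(pb)
```

In practice the inputs would be scaled before the forward pass, and the weights fitted on the training split and checked on the test split, as the abstract describes.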

Publication Date
Wed Jan 01 2020
Journal Name
International Journal Of Computational Intelligence Systems
Evolutionary Feature Optimization for Plant Leaf Disease Detection by Deep Neural Networks

Publication Date
Fri Jun 01 2007
Journal Name
Journal Of Al-nahrain University Science
ON THE GREEDY RADIAL BASIS FUNCTION NEURAL NETWORKS FOR APPROXIMATION MULTIDIMENSIONAL FUNCTIONS

The aim of this paper is to approximate multidimensional functions by using the type of feedforward neural network (FFNN) called the greedy radial basis function neural network (GRBFNN). We also introduce a modification to the greedy algorithm used to train greedy radial basis function neural networks. An error bound is introduced in Sobolev space. Finally, a comparison is made between the three algorithms (the modified greedy algorithm, the backpropagation algorithm, and the result published in [16]).
