Accounting Mining Data Using Neural Networks (Case study)

Business organizations have faced many challenges in recent times, the most important of which is information technology, because it is widespread and easy to use. Its use has increased the amount of data that business organizations deal with to an unprecedented degree. The amount of data available through the internet, and the random, unstructured way in which it accumulates there, is a problem that many parties are seeking to solve. Forecasts suggested that by 2017 the number of devices connected to the internet would be roughly three times the population of the Earth, and that in 2015 more than one and a half billion gigabytes of data were transferred every minute globally. Out of this grew data mining, a technique that aims to extract knowledge from huge amounts of data and that rests on mathematical algorithms derived from many sciences, such as statistics, mathematics, logic, learning theory, artificial intelligence, expert systems, pattern recognition, and other fields considered smart and non-traditional.

The research problem is that the steady increase in the amount of data, together with the emergence of many new areas that require different data in the contemporary environment of business organizations, leaves information systems unable to meet the needs of today's organizations. This applies in particular to accounting information systems, since they are the main system in business organizations. These systems were designed to meet specific needs, which today makes it impossible for them to satisfy the varied requirements of the contemporary business environment or to handle the amount of data generated by information technologies.

The research proposes two main hypotheses. First, adopting accounting data mining provides data that the accounting information system was previously unable to provide and shortens the time and effort required to obtain it. Second, adopting accounting data mining allows artificial intelligence methods to be applied in processing that data, providing useful information for rationalizing decisions.

The research reaches a number of conclusions, including that the steady increase in the amount of data in general, and of accounting data in particular, makes working within traditional frameworks very difficult and wastes time and effort when extracting information. In addition, the emergence of many new variables resulting from changes in the work environment calls for technical tools flexible enough to deal with them. Moreover, data mining tools are able to derive relationships from existing databases that were not available before.

The research presents a number of recommendations, the most important of which is the need to adopt the model presented by the research, a Multilayer Perceptron network available within the SPSS program, which makes it easy to use this network to rationalize the decision of selecting the projects to be implemented in the provincial councils.
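To make the recommendation concrete, the sketch below trains a multilayer perceptron of the kind the research recommends, using scikit-learn rather than SPSS; the project attributes, labels, and network size are hypothetical placeholders, not values from the study.

```python
# Illustrative sketch only: a multilayer perceptron for a project-selection
# decision, analogous in spirit to the SPSS network the research recommends.
# The feature names and data here are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical project attributes: estimated cost, expected benefit, duration
X = rng.normal(size=(200, 3))
# Hypothetical label: 1 = project selected for implementation, 0 = rejected
y = (X[:, 1] - 0.5 * X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# A single hidden layer, as in a basic multilayer perceptron
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("held-out accuracy:", model.score(scaler.transform(X_test), y_test))
```

In a real setting the inputs would be the attributes of candidate projects held by the provincial councils, and the trained network would score new proposals to support the selection decision.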

Publication Date: Mon Jan 28 2019
Journal Name: Soft Computing
Bio-inspired multi-objective algorithms for connected set K-covers problem in wireless sensor networks

Publication Date: Sat Aug 01 2015
Journal Name: 2015 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB)
Granular computing approach for the design of medical data classification systems

Publication Date: Sun Jun 01 2025
Journal Name: Al-Khwarizmi Engineering Journal
Recent Tools of Software-Defined Networking Traffic Generation and Data Collection

Software-defined networks (SDN) have proven superior in addressing ordinary network problems such as scalability, agility, and security. This advantage of SDN stems from the separation of the control plane from the data plane. Although there are many papers and studies focusing on SDN management, monitoring, control, and QoS improvement, few of them describe what they use to generate traffic and measure network performance. The literature also lacks comparisons between the tools and …

Publication Date: Sat Mar 01 2008
Journal Name: Iraqi Journal of Physics
Comparison between Different Data Image Compression Techniques Applied on SAR Images

In this paper, an image compression technique based on the zonal transform method is presented. The DCT, Walsh, and Hadamard transform techniques are also implemented. These different transforms are applied to SAR images using different block sizes, and the effects of implementing them are investigated. The main shortcoming associated with this radar imaging system is the presence of speckle noise, which affects the compression results.
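As a rough sketch of the block-transform idea, and not the authors' implementation, the code below applies a 2-D DCT to 8x8 blocks of a synthetic speckled image and keeps only a zonal subset of low-frequency coefficients; the block size, zone size, and test image are assumptions for illustration.

```python
# Illustrative sketch of zonal block-transform compression (not the paper's code).
# A 2-D DCT is applied to 8x8 blocks and only low-frequency coefficients are kept.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def zonal_compress(image, block=8, keep=4):
    """Keep only the top-left `keep` x `keep` DCT coefficients of each block."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            coeffs = dct2(image[i:i + block, j:j + block].astype(float))
            zone = np.zeros_like(coeffs)
            zone[:keep, :keep] = coeffs[:keep, :keep]   # zonal mask
            out[i:i + block, j:j + block] = idct2(zone)
    return out

# Hypothetical SAR-like test image (speckle modelled as multiplicative noise)
rng = np.random.default_rng(1)
img = rng.gamma(shape=4.0, scale=32.0, size=(128, 128))
rec = zonal_compress(img)
print("RMSE after zonal compression:", np.sqrt(np.mean((img - rec) ** 2)))
```

Keeping a 4x4 zone of an 8x8 block corresponds to retaining one quarter of the coefficients; the reconstruction error reflects both the discarded detail and the speckle noise mentioned in the abstract.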

Publication Date: Sun Feb 10 2019
Journal Name: Journal of the College of Education for Women
IMPLEMENTATION OF THE SKIP LIST DATA STRUCTURE WITH ITS UPDATE OPERATIONS

A skip list data structure is essentially a simulation of a binary search tree. Skip list algorithms are simpler, faster, and use less space. Conceptually, the data structure uses parallel sorted linked lists. Searching in a skip list is more involved than searching in a regular sorted linked list, because a skip list is a two-dimensional data structure implemented as a two-dimensional network of nodes with four pointers each. The search, insert, and delete operations take expected time of up to O(log n). The skip list can be modified to implement the order-statistic operations RANK and SEARCH BY RANK while maintaining the same expected time. Keywords: skip list, parallel linked list, randomized algorithm, rank.
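The following is a minimal sketch of the skip list idea described above, not the paper's implementation: each node carries an array of forward pointers, node levels are chosen randomly, and search and insert run in expected O(log n) time.

```python
# Minimal skip list sketch (illustrative): randomized levels give
# expected O(log n) search and insert.
import random

MAX_LEVEL = 16
P = 0.5  # probability of promoting a node one level up

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)

class SkipList:
    def __init__(self):
        self.head = Node(None, MAX_LEVEL)
        self.level = 0

    def _random_level(self):
        lvl = 0
        while random.random() < P and lvl < MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.head
        # Drop down level by level, moving right while the next key is smaller.
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.head] * (MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

sl = SkipList()
for k in [3, 7, 1, 9, 5]:
    sl.insert(k)
print(sl.search(7), sl.search(4))  # True False
```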

Publication Date: Mon Feb 18 2019
Journal Name: Iraqi Journal of Physics
Data visualization and distinct features extraction of the comet Ison 2013

The intensity distribution of the comet ISON C/2013 is studied by taking its histogram. This distribution reveals four distinct regions related to the background, tail, coma, and nucleus. A one-dimensional temperature distribution fit is achieved using two mathematical equations related to the coordinates of the comet's center. The quiver plot of the gradient of the comet image shows very clearly that the arrows point toward the maximum intensity of the comet.
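A small illustrative sketch of this kind of analysis, applied to a synthetic comet-like image rather than the ISON data: the intensity histogram and the gradient quiver plot are computed with numpy and matplotlib.

```python
# Illustrative sketch of the histogram and gradient-quiver analysis described
# above, applied to a synthetic comet-like image (not the ISON observations).
import numpy as np
import matplotlib.pyplot as plt

# Synthetic image: a bright Gaussian "nucleus/coma" over a faint noisy background
y, x = np.mgrid[0:128, 0:128]
image = (200.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 10.0 ** 2))
         + 10.0 + np.random.default_rng(2).normal(0, 2, size=(128, 128)))

# Histogram of intensities: in a real comet image the background, tail, coma and
# nucleus would appear as separate modes or shoulders.
counts, bins = np.histogram(image, bins=64)

# Gradient field; in a quiver plot the arrows point toward the intensity maximum.
gy, gx = np.gradient(image)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.stairs(counts, bins)
ax1.set(title="Intensity histogram", xlabel="intensity", ylabel="pixel count")
step = 8  # thin the field so the arrows remain readable
ax2.quiver(x[::step, ::step], y[::step, ::step],
           gx[::step, ::step], gy[::step, ::step])
ax2.set(title="Gradient quiver", xlabel="x", ylabel="y")
plt.tight_layout()
plt.show()
```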

Publication Date: Wed Jun 01 2022
Journal Name: Bulletin of Electrical Engineering and Informatics
Proposed model for data protection in information systems of government institutions

Information systems and data exchange between government institutions are growing rapidly around the world, and with them the threats to information within government departments are growing as well. In recent years, research into the development and construction of secure information systems in government institutions seems to have been very effective. Based on information-system principles, this study proposes a model for providing and evaluating security for all departments of government institutions. The requirements of any information system begin with the organization's surroundings and objectives. Most prior techniques did not take into account the organizational component on which the information system runs, despite the relevance of …

Publication Date: Mon Jun 19 2023
Journal Name: Journal of Engineering
A Multi-variables Multi-sites Model for Forecasting Hydrological Data Series

A multivariate, multisite hydrological data forecasting model was derived and checked using a case study. The philosophy is to use the cross-variable correlations, cross-site correlations, and time-lag correlations simultaneously. The case study involves two variables and three sites: the variables are monthly rainfall and evaporation, and the sites are Sulaimania, Dokan, and Darbandikhan. The model form is similar to the first-order autoregressive model, but in matrix form. A matrix for the different relative correlations mentioned above and another for their relative residuals were derived and used as the model parameters. A mathematical filter was used on both matrices to obtain their elements. The application of this model indicates …
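A minimal numerical sketch of a first-order autoregressive model written in matrix form, as described above; the state layout, parameter matrices, and data are hypothetical placeholders rather than the values fitted in the paper.

```python
# Sketch of a multivariate, multisite AR(1)-type forecast: X_t = A @ X_{t-1} + B @ e_t.
# With 2 variables at 3 sites the state vector has 6 elements. The matrices A and B
# are hypothetical placeholders for the correlation/residual matrices in the paper.
import numpy as np

rng = np.random.default_rng(3)
n = 6  # 2 variables (rainfall, evaporation) x 3 sites (Sulaimania, Dokan, Darbandikhan)

A = 0.5 * np.eye(n) + 0.05 * rng.normal(size=(n, n))   # lag-1 correlation structure
B = np.linalg.cholesky(np.eye(n) * 0.1)                # residual (noise) structure

def forecast(x_prev, steps=3):
    """Iterate the matrix AR(1) recursion a few months ahead."""
    x = x_prev.copy()
    path = []
    for _ in range(steps):
        x = A @ x + B @ rng.normal(size=n)
        path.append(x.copy())
    return np.array(path)

x0 = rng.normal(size=n)          # standardized current-month observations
print(forecast(x0))
```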

Publication Date: Sat Nov 02 2013
Journal Name: International Journal of Computer Applications
Mixed Transforms Generated by Tensor Product and Applied in Data Processing

Finding orthogonal matrices of different sizes is complex and important because such matrices can be used in applications such as image processing and communications (e.g., CDMA and OFDM). In this paper we introduce a new method for finding orthogonal matrices by taking tensor products of two or more orthogonal matrices of real and imaginary numbers, and we apply it to image and communication-signal processing. The output matrices are also orthogonal, and processing with the new method is much easier than with classical methods that rely on basic proofs. The results for communication signals and images are normal and acceptable, but further research is needed.
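A small sketch of the tensor (Kronecker) product construction of orthogonal matrices; the particular factors used here, a Hadamard matrix and a normalized DFT matrix, are illustrative assumptions rather than the matrices used in the paper.

```python
# Sketch: the Kronecker (tensor) product of two orthogonal/unitary matrices is
# itself orthogonal/unitary. The factors chosen here are illustrative.
import numpy as np

# 2x2 Hadamard (real, orthogonal) and 4x4 normalized DFT (complex, unitary)
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
F4 = np.fft.fft(np.eye(4)) / 2.0   # DFT matrix, scaled so F4 @ F4^H = I

M = np.kron(H2, F4)                # 8x8 mixed transform

# Verify unitarity: M @ M^H should equal the identity
print(np.allclose(M @ M.conj().T, np.eye(8)))  # True

# Apply the mixed transform to a signal and invert it
x = np.random.default_rng(4).normal(size=8)
y = M @ x
print(np.allclose(M.conj().T @ y, x))          # True: the transform is invertible
```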

Publication Date: Sun Mar 15 2020
Journal Name: Journal of the College of Education for Women
Data-Driven Approach for Teaching Arabic as a Foreign Language: Egypt

Corpus linguistics is a methodology for studying language through corpus-based research. It differs from the traditional (prescriptive) approach to studying a language in its insistence on the systematic study of authentic examples of language in use (a descriptive approach). A "corpus" is a large, structured body of machine-readable, naturally occurring linguistic data, either written texts or transcriptions of recorded speech, which can be used as a starting point for linguistic description or as a means of verifying hypotheses about a language. In the past decade, interest has grown tremendously in the use of language corpora for language education. The ways in which corpora have been employed in language pedagogy …
