Keywords: financial fraud, audit risks, inherent risk, detection risk, data mining.

Abstract

The study seeks to apply a data mining technique, logistic regression, to inherent risk, using financial ratios as a technical analysis tool and then applying indicators of financial fraud. Major scandals involving exposed companies and failed audits have shocked the community and undermined the integrity of the auditor; the cause is financial fraud practiced by companies and the auditor's failure to discover it. This fraud is an intentional act, committed by management or staff, that aims at personal gain and harms the interests of others. All frauds are carried out in the presence of motives and factors that help the fraudster commit them, namely opportunity, motivation, justification, and ability, and fraudsters are often experienced in concealing the fraud, usually because the internal control system of the economic unit is weak. The detection of financial fraud therefore depends on the availability of indications that point to it.

The research reached the following main conclusions:

1- Identifying and assessing audit risks, both when planning the audit and when determining the audit procedures that respond to those risks, plays an important and essential role in detecting errors and fraud.

2- Although a fraudster can conceal fraud, it can still be detected through a set of techniques that provide indications of where it lies.

The most important recommendations:

1- Audit staff need to be developed and trained, and introduced to the definition, causes, and forms of fraud, in order to put a fraud-investigation strategy in place.

2- The researchers recommend examining the causes of financial fraud, in its various forms, whenever it is discovered, as this can lead the auditor to identify the characteristics of the person who committed it.

3- The researchers recommend using a data mining technique, logistic regression, to test and analyze financial ratios for inherent risk obtained from banks' financial statements, in order to detect financial fraud from the indicator variables contained in those ratios, as sketched below.
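
A minimal sketch of this recommendation follows, using scikit-learn's logistic regression on a table of financial ratios; the file name, the ratio columns, and the fraud label are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: logistic regression on financial ratios as fraud indicators.
# Column names and the CSV file are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Hypothetical dataset of financial ratios with a binary fraud label.
df = pd.read_csv("bank_financial_ratios.csv")
X = df[["liquidity_ratio", "leverage_ratio", "asset_turnover", "profit_margin"]]
y = df["fraud"]  # 1 = fraud indicators present, 0 = otherwise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Coefficients indicate how each ratio shifts the log-odds of fraud.
print(dict(zip(X.columns, model.coef_[0])))
print(classification_report(y_test, model.predict(X_test)))
```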

Publication Date
Wed Sep 30 2020
Journal Name
مجلة كلية الادارة والاقتصاد للدراسات الاقتصادية والادارية والمالية - بابل
The Impact of Core Capability on Enhancing Strategic Performance and Achieving Competitive Advantage in the Industrial Sector: An Applied Study on a Sample of Industrial-Sector Companies

Publication Date
Wed Oct 17 2018
Journal Name
Journal Of Economics And Administrative Sciences
The Role of Marketing Intelligence in Promoting New Product Policies: A Sample Survey of Workers in a Number of Mineral Water Plants in Dohuk Governorate

Marketing intelligence is one of the important methods of collecting information about competitors' products and about changes in customers' tastes and needs, which contributes to determining the policies to be followed in product development.

The research problem concerns the extent to which the companies in question have appropriate and effective mechanisms for developing their products, and the nature of the relationship between the components of marketing intelligence and new product development policies. The importance of the research lies in obtaining the important and necessary information needed to make the appropriate decision on the development of the new product…

Publication Date
Sun Jan 01 2023
Journal Name
Petroleum And Coal
Analyzing Production Data Using a Combination of Empirical Methods and Advanced Analytical Techniques

Publication Date
Wed Jun 28 2023
Journal Name
Internet Technology Letters
The Blockchain for Healthcare 4.0 Apply in Standard Secure Medical Data Processing Architecture

Cloud-based Electronic Health Records (EHRs) have seen a substantial increase in usage in recent years, especially for remote patient monitoring. Researchers are interested in investigating the use of Healthcare 4.0 in smart cities. This involves using Internet of Things (IoT) devices and cloud computing to remotely access medical processes. Healthcare 4.0 focuses on the systematic gathering, merging, transmission, sharing, and retention of medical information at regular intervals. Protecting the confidential and private information of patients presents several challenges in terms of thwarting illegal intrusion by hackers. Therefore, it is essential to prioritize the protection of patient medical data that is stored, accessed, and shared…

Publication Date
Thu Jun 01 2023
Journal Name
Bulletin Of Electrical Engineering And Informatics
A missing data imputation method based on salp swarm algorithm for diabetes disease

Most medical datasets suffer from missing data, due to the expense of some tests or to human error while recording them. This issue affects the performance of machine learning models because the values of some features are missing, so specific methods are needed for imputing these missing data. In this research, the salp swarm algorithm (SSA) is used for generating and imputing the missing values in the Pima Indian diabetes disease (PIDD) dataset; the proposed algorithm is called ISSA. The obtained results showed that the classification performance of three different classifiers, namely support vector machine (SVM), K-nearest neighbour (KNN), and Naïve Bayes…
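
For orientation, the sketch below shows a baseline imputation-plus-classification pipeline on a Pima-style dataset. A simple mean imputer stands in for the paper's SSA-based method (ISSA), whose details are not reproduced here, and the file name and zero-as-missing convention are assumptions.

```python
# Baseline sketch: impute missing values, then classify, on a Pima-style dataset.
# A mean imputer stands in for the paper's SSA-based imputation (ISSA);
# the file name and the zero-as-missing convention are assumptions.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("pima_diabetes.csv")               # hypothetical local copy of PIDD
X = df.drop(columns="Outcome").replace(0, np.nan)   # zeros treated as missing values
y = df["Outcome"]

model = make_pipeline(
    SimpleImputer(strategy="mean"),   # placeholder for SSA-based imputation
    StandardScaler(),
    KNeighborsClassifier(n_neighbors=5),
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```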

Publication Date
Sun Apr 30 2023
Journal Name
Iraqi Geological Journal
Evaluating Machine Learning Techniques for Carbonate Formation Permeability Prediction Using Well Log Data

Machine learning offers significant advantages for many difficult problems in the oil and gas industry, especially when it comes to resolving complex challenges in reservoir characterization. Permeability is one of the most difficult petrophysical parameters to predict using conventional logging techniques. Clarifications of the workflow methodology are presented alongside comprehensive models in this study. The purpose of this study is to provide a more robust technique for predicting permeability; previous studies on the Bazirgan field have attempted to do so, but their estimates have been vague and the methods they give are obsolete and do not adequately address the real permeability computation. To…
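
As a rough illustration of this kind of workflow, the sketch below trains a generic regressor to predict permeability from common well-log curves. The curve mnemonics, the file name, and the choice of a random forest are assumptions, not the specific techniques evaluated in the study.

```python
# Sketch: predict permeability from well-log curves with a generic regressor.
# Curve names (GR, RHOB, NPHI, DT) and the CSV file are hypothetical; a random
# forest stands in for whichever techniques the study actually evaluates.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

logs = pd.read_csv("well_logs.csv")            # hypothetical well-log export
X = logs[["GR", "RHOB", "NPHI", "DT"]]         # gamma ray, density, neutron, sonic
y = logs["PERM"]                               # core-measured permeability (mD)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out samples:", r2_score(y_test, rf.predict(X_test)))
```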

Publication Date
Sat Aug 01 2015
Journal Name
Journal Of Engineering
Analytical Approach for Load Capacity of Large Diameter Bored Piles Using Field Data

An analytical approach based on field data was used to determine the strength capacity of large-diameter bored piles. The deformations and settlements were also evaluated for both vertical and lateral loading. The analytical predictions were compared with field data obtained from a prototype test pile used at the Tharthar-Tigris canal bridge and were found to be in acceptable agreement, with a deviation of 12%.

Following ASTM standard D1143M-07e1 (2010), a test schedule of five loading cycles was proposed for vertical loads, together with a series of cyclic loads to simulate horizontal loading. The load test results and analytical data of 1.95…

Publication Date
Sun Mar 01 2015
Journal Name
Journal Of Engineering
Multi-Sites Multi-Variables Forecasting Model for Hydrological Data using Genetic Algorithm Modeling

A two-time-step stochastic multi-variable, multi-site hydrological data forecasting model was developed and verified using a case study. The philosophy of this model is to use the cross-variable correlations, cross-site correlations, and two-step time-lag correlations simultaneously to estimate the model parameters, which are then modified using the mutation process of a genetic algorithm optimization model. The objective function to be minimized is the Akaike test value. The case study involves four variables and three sites. The variables are monthly air temperature, humidity, precipitation, and evaporation; the sites are Sulaimania, Chwarta, and Penjwin, located in northern Iraq. The model performance was…
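
As a pointer to how the objective function works, the sketch below computes the Akaike information criterion for a simple lag-one regression on a synthetic series; the data and model form are assumptions, and the genetic-algorithm mutation step used to adjust the parameters is not reproduced.

```python
# Sketch: Akaike information criterion (AIC) as a model-selection objective.
# A lag-one autoregression on synthetic monthly data stands in for the paper's
# multi-site, multi-variable model; the GA mutation step is omitted.
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=120))            # hypothetical monthly series

X = np.column_stack([np.ones(119), series[:-1]])    # intercept + one-month lag
y = series[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

n, k = len(y), X.shape[1]
sigma2 = residuals @ residuals / n
aic = n * np.log(sigma2) + 2 * k                    # Gaussian-likelihood AIC (up to a constant)
print("AIC:", aic)
```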

Publication Date
Fri Jun 01 2007
Journal Name
Al-khwarizmi Engineering Journal
Correlation for fitting multicomponent vapor-liquid equilibria data and prediction of azeotropic behavior

Correlation equations expressing the boiling temperature as a direct function of liquid composition have been tested successfully and applied to predict the azeotropic behavior of multicomponent mixtures and the kind of azeotrope (minimum, maximum, or saddle type) using a modified correlation of the Gibbs-Konovalov theorem. The binary and ternary azeotropic points have also been determined experimentally by graphical determination on the basis of experimental binary and ternary vapor-liquid equilibrium data.
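
To show what a correlation of this form can look like, the sketch below fits a simple polynomial for the boiling temperature of a binary mixture as a function of liquid composition and locates an interior minimum (a minimum-boiling azeotrope); the data points and the polynomial form are placeholders, not the modified Gibbs-Konovalov correlation used in the study.

```python
# Sketch: fit boiling temperature T_b as a direct function of liquid mole
# fraction x1 for a binary mixture. The data are synthetic placeholders and a
# cubic polynomial stands in for the correlation form used in the study.
import numpy as np

x1 = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])          # liquid mole fraction of component 1
tb = np.array([80.1, 74.5, 70.8, 69.3, 70.0, 72.4])    # hypothetical boiling points, deg C

coeffs = np.polyfit(x1, tb, deg=3)
tb_fit = np.poly1d(coeffs)

# A minimum-boiling azeotrope shows up as an interior minimum of the fitted curve.
grid = np.linspace(0.0, 1.0, 501)
x_min = grid[np.argmin(tb_fit(grid))]
print(f"fitted minimum near x1 = {x_min:.2f}, T_b = {tb_fit(x_min):.1f} C")
```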

In this study, isobaric vapor-liquid equilibrium data for two ternary systems, "1-Propanol – Hexane – Benzene" and its binaries "1-Propanol –…

Publication Date
Sat Dec 30 2023
Journal Name
Journal Of Economics And Administrative Sciences
The Cluster Analysis by Using Nonparametric Cubic B-Spline Modeling for Longitudinal Data

Longitudinal data are becoming increasingly common, especially in the medical and economic fields, and various methods have been developed to analyze this type of data.

In this research, the focus was on compiling and analyzing these data, since cluster analysis plays an important role in identifying and grouping co-expressed sub-profiles over time, which are then modelled with the nonparametric smoothing cubic B-spline. This model provides continuous first and second derivatives, resulting in a smoother curve with fewer abrupt changes in slope; it is also more flexible and can pick up more complex patterns and fluctuations in the data.
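
As a small illustration of the smoothing step, the sketch below fits a cubic smoothing spline, which has continuous first and second derivatives, to one synthetic longitudinal profile; the data and smoothing factor are assumptions, and the clustering step itself is not shown.

```python
# Sketch: smooth one longitudinal profile with a cubic (k=3) smoothing spline,
# which has continuous first and second derivatives. The profile is synthetic
# and the smoothing factor is an arbitrary choice; clustering is not shown.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
t = np.arange(0, 24)                            # e.g. 24 monthly measurements
y = np.sin(t / 4.0) + rng.normal(scale=0.15, size=t.size)

spline = UnivariateSpline(t, y, k=3, s=0.5)     # cubic smoothing spline
smooth = spline(t)
curvature = spline.derivative(n=2)(t)           # continuous second derivative

print("max abs residual:", np.max(np.abs(y - smooth)))
print("max abs curvature:", np.max(np.abs(curvature)))
```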

The longitudinal balanced data profile was compiled into subgroup…
