This study aims to demonstrate the role of artificial intelligence and metaverse techniques, mainly logistic regression, in reducing earnings management in Iraqi private banks. Artificial intelligence approaches have shown the capability to detect irregularities in financial statements and mitigate the practice of earnings management. In contrast, many privately owned banks in Iraq historically relied on manual, pen-and-paper processes for recording and posting financial information in their accounting records. However, the banking sector in Iraq has undergone technological advancements, leading to the automation of most banking operations. Conventional audit techniques have become outdated owing to demands for data accuracy, cost savings, and speed of business completion, so manually auditing a large volume of financial data is no longer sufficient. The metaverse is a novel technological advancement that seeks to fundamentally transform corporate operations and interpersonal interactions, with implications for auditing and accounting practices, particularly concerning a company's operational and financial aspects. Economic units have begun to switch from traditional methods of registration and posting to software-based financial operations in order to limit earnings management. This research therefore proposes applying a data mining technique, namely logistic regression, to reduce earnings management in a sample of (11) Iraqi private banks. Accounting ratios were computed and then analysed with logistic regression to detect earnings management.
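As a rough illustration of the kind of technique the abstract describes (not the study's actual model), the sketch below fits a logistic regression by plain gradient descent to flag bank-year observations from accounting ratios. The ratio names, data, and labels are invented for demonstration only.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient logistic regression (bias + one weight per ratio)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))
            err = p - yi                       # gradient of the log-loss
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    """Probability that an observation is flagged as earnings management."""
    return sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))

# Hypothetical ratios per bank-year: [accruals / total assets, receivables growth]
X = [[0.02, 0.05], [0.03, 0.04], [0.15, 0.30], [0.18, 0.25],
     [0.01, 0.02], [0.20, 0.35]]
y = [0, 0, 1, 1, 0, 1]  # 1 = flagged as earnings management (illustrative labels)

w, b = fit_logistic(X, y)
print(predict(w, b, [0.17, 0.28]))  # high ratios: probability near 1
```

In practice one would use a vetted implementation and ratios grounded in the accrual-based earnings-management literature; this sketch only shows the mechanics.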
The aim of the research is to examine multiple intelligence test item selection, based on Howard Gardner's MI model, using the Generalized Partial Credit Model. The researcher adopted a multiple intelligences scale based on Gardner's model, consisting of (102) items across eight sub-scales. The sample consisted of (550) students from Baghdad universities (the University of Technology, Al-Mustansiriyah University, and the Iraqi University) for the academic year (2019/2020). The assumptions of item response theory were verified (unidimensionality, local independence, the item characteristic curve, the speed factor, and application), and the data were analysed according to the Generalized Partial Credit Model …
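For context, the Generalized Partial Credit Model gives the probability of each score category of a polytomous item from the examinee's latent trait, the item's discrimination, and its step difficulties. The sketch below uses hypothetical parameter values, not the study's estimates.

```python
import math

def gpcm_probs(theta, a, steps):
    """Generalized Partial Credit Model category probabilities for one item.
    theta: latent trait; a: discrimination; steps: step difficulties b_1..b_m
    (score category 0 has no step, so its cumulative logit is 0)."""
    z = [0.0]                       # z_0 = 0
    for b in steps:                 # z_k = sum_{v<=k} a * (theta - b_v)
        z.append(z[-1] + a * (theta - b))
    denom = sum(math.exp(v) for v in z)
    return [math.exp(v) / denom for v in z]

# Hypothetical item: discrimination 1.2, three step difficulties (four categories)
probs = gpcm_probs(theta=0.5, a=1.2, steps=[-1.0, 0.0, 1.0])
print(probs)  # probabilities over the four score categories, summing to 1
```

Higher values of theta shift probability mass toward the higher score categories, which is the property item-selection procedures exploit.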
The importance of this research lies in shedding light on the concept of techno-strategy for information management, a vital topic that reflects responsiveness to change in all areas of life; such change necessitates continual updating so that organisations can achieve their strategic goals and enhance their technological advantage. The research problem examined the role of the information technology system (ITS) in enhancing risk management in the general directorates for sports and school activity from the viewpoint of their department heads. The research aimed to determine the relationship between information techno-strategy and risk management, and the contribution ratios of information techno-strategy to risk management, from the viewpoint of heads of departments.
The automatic estimation of speaker characteristics such as height, age, and gender has various applications in forensics, surveillance, customer service, and human-robot interaction, and these applications are often required to produce a response promptly. This work proposes a novel approach to speaker profiling that combines filter bank initializations, such as continuous wavelet and gammatone filter banks, with one-dimensional (1D) convolutional neural networks (CNNs) and residual blocks. The proposed end-to-end model goes from the raw waveform to an estimated height, age, and gender of the speaker, learning speaker representations directly from the audio signal without relying on handcrafted and pre-computed acoustic features.
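The first layer of such an end-to-end model is a 1D convolution applied directly to the raw waveform. The minimal sketch below shows that operation on a synthetic signal with a hand-picked smoothing kernel; a real system would stack many layers of gammatone- or wavelet-initialised filters and learn them during training.

```python
import math

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation), as in a CNN's first layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

# Synthetic 'waveform': a 50 Hz sine sampled at 1 kHz for 200 samples
signal = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(200)]
# A crude smoothing filter standing in for one initialised/learned kernel
kernel = [0.25, 0.5, 0.25]
features = conv1d(signal, kernel, stride=2)
print(len(features))  # 99 feature values: (200 - 3 + 1) positions, stride 2
```

Stride and kernel length control how aggressively the time axis is downsampled before the residual blocks.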
The assessment of data quality from different sources is a key challenge in supporting effective geospatial data integration and promoting collaboration in mapping projects. This paper presents a methodology for assessing positional and shape quality for authoritative large-scale data, such as Ordnance Survey (OS) UK data and General Directorate for Survey (GDS) Iraq data, and Volunteered Geographic Information (VGI), such as OpenStreetMap (OSM) data, with the intention of assessing possible integration. It is based on the measurement of discrepancies among the datasets, addressing positional accuracy and shape fidelity, using standard procedures and also directional statistics. Line feature comparison has been undertaken …
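As a simplified sketch of the two kinds of discrepancy measurement mentioned (not the paper's actual procedure), the snippet below computes a mean positional offset between corresponding vertices of two digitisations of the same line, and a basic directional statistic (the circular mean of segment bearings) for each. The coordinates are invented.

```python
import math

def mean_positional_offset(line_a, line_b):
    """Mean Euclidean distance between corresponding vertices of two lines
    (assumes equal vertex counts; a simplification of buffer-based methods)."""
    dists = [math.hypot(xa - xb, ya - yb)
             for (xa, ya), (xb, yb) in zip(line_a, line_b)]
    return sum(dists) / len(dists)

def mean_direction(line):
    """Circular (vector) mean of segment bearings, in degrees -- a simple
    directional statistic for comparing line orientation between datasets."""
    sin_sum = cos_sum = 0.0
    for (x0, y0), (x1, y1) in zip(line, line[1:]):
        ang = math.atan2(y1 - y0, x1 - x0)
        sin_sum += math.sin(ang)
        cos_sum += math.cos(ang)
    return math.degrees(math.atan2(sin_sum, cos_sum))

# Hypothetical road centreline as digitised in two datasets (metres)
os_line = [(0.0, 0.0), (10.0, 0.0), (20.0, 1.0)]
osm_line = [(0.5, 0.4), (10.5, 0.6), (20.2, 1.5)]
print(mean_positional_offset(os_line, osm_line))   # sub-metre offset here
print(mean_direction(os_line), mean_direction(osm_line))
```

A small positional offset together with close mean directions would support the feasibility of integrating the two datasets for that feature.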
This research proposes the application of the dragonfly and fruit fly algorithms to enhance estimates generated by the Fama-MacBeth model and compares their performance in this context for the first time. To improve the dragonfly algorithm's effectiveness, three parameter tuning approaches are investigated: manual parameter tuning (MPT), adaptive tuning by methodology (ATY), and a novel technique called adaptive tuning by performance (APT). Additionally, the study evaluates estimation performance using kernel weighted regression (KWR) and explores how the dragonfly and fruit fly algorithms can be employed to enhance KWR. All methods are tested using data from the Iraq Stock Exchange, based on the Fama-French three-factor model.
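For orientation, kernel weighted regression in its simplest (Nadaraya-Watson) form predicts at a point as a kernel-weighted average of observed responses. The sketch below uses a Gaussian kernel and invented (beta, excess-return) pairs; the study's actual KWR setup, bandwidths, and metaheuristic tuning are not reproduced here.

```python
import math

def kwr_predict(x0, xs, ys, bandwidth=1.0):
    """Nadaraya-Watson kernel weighted regression with a Gaussian kernel."""
    weights = [math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Invented (beta, excess-return) pairs standing in for Fama-MacBeth inputs
xs = [0.5, 0.8, 1.0, 1.2, 1.5]
ys = [0.02, 0.03, 0.04, 0.05, 0.06]
print(kwr_predict(1.1, xs, ys, bandwidth=0.3))  # between neighbouring ys
```

The bandwidth is the natural target for an optimiser such as the dragonfly or fruit fly algorithm: it controls the bias-variance trade-off of the fit.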
Drag reduction (DR) techniques are used to improve flow by conserving flow energy. Applications of DR include oil pipelines, oil well operations, and flood water disposal, and many techniques for drag reduction are in use; one of these is microbubbles. In this work, drag reduction is achieved by pumping small air bubbles into the transported fluid: gasoil is used as the liquid transported in the pipeline, with air injected as microbubbles. This study shows that the maximum value of drag reduction obtained is 25.11%.
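The drag-reduction percentage is conventionally computed from the pressure drop across a test section measured with and without microbubble injection. The pressure values below are hypothetical, chosen only to reproduce a 25.11% figure like the study's maximum.

```python
def drag_reduction_percent(dp_without, dp_with):
    """DR% = (dP_without_bubbles - dP_with_bubbles) / dP_without_bubbles * 100."""
    return (dp_without - dp_with) / dp_without * 100.0

# Hypothetical pressure drops (e.g. in Pa) across the same test section
print(round(drag_reduction_percent(100.0, 74.89), 2))  # -> 25.11
```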
This study explores the challenges faced by Artificial Intelligence (AI) systems in generating image captions, a task that requires effective integration of computer vision and natural language processing techniques. A comparative analysis is presented between traditional approaches (such as retrieval-based methods and linguistic templates) and modern deep learning approaches (such as encoder-decoder models, attention mechanisms, and transformers). Theoretical results show that modern models perform better in accuracy and in the ability to generate more complex descriptions, while traditional methods excel in speed and simplicity. The paper proposes a hybrid framework that combines the advantages of both approaches, where conventional methods produce …
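As a sketch of the traditional, retrieval-based branch such a hybrid might use (the deep-learning components are omitted), the snippet below reuses the caption of the database image whose feature vector is most similar to the query's under cosine similarity. The feature vectors and captions are invented placeholders for precomputed image descriptors.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_caption(query_features, database):
    """Retrieval-based captioning: reuse the caption of the most similar image."""
    best = max(database, key=lambda item: cosine_similarity(query_features, item[0]))
    return best[1]

# Invented (feature vector, caption) pairs standing in for a caption database
database = [
    ([0.9, 0.1, 0.0], "a dog running on grass"),
    ([0.1, 0.9, 0.1], "a red car on a street"),
    ([0.0, 0.2, 0.9], "a plate of food on a table"),
]
print(retrieve_caption([0.85, 0.15, 0.05], database))  # nearest: the dog image
```

Such retrieval is fast and simple, which is exactly the advantage the comparative analysis attributes to traditional methods; a hybrid could pass the retrieved caption to a neural model for refinement.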