The current study aims to examine the level of problems faced by university students in distance learning and to identify differences in these problems by availability of internet services, gender, college, GPA, interactions, academic cohort, and family economic status. The study sample consisted of 3172 students (57.3% female). The researchers developed a 32-item questionnaire measuring distance learning problems in four areas: psychological (9 items), academic (10 items), technological (7 items), and study environment (6 items). Responses were scored on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Means, standard deviations, and Multivariate Analysis of Variance (MANOVA) were used to analyze the data. The findings showed that students faced high levels of psychological and academic problems and moderate levels of technological and study-environment problems. The findings also indicated statistically significant differences in the levels of all problems based on the availability of internet services. In addition, students in scientific colleges reported higher levels of academic problems, and females reported higher levels of study-environment problems. Statistically significant differences also appeared in all types of problems based on academic cohort and family economic status.
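As a rough illustration of the analysis pipeline this abstract describes (item means and standard deviations per problem domain, then a MANOVA across the grouping variables), the sketch below uses hypothetical column names and a hypothetical data file; it is not the authors' actual analysis script.

```python
# Minimal sketch of the descriptive and MANOVA analysis described above.
# Column names and the CSV file are hypothetical placeholders.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("distance_learning_survey.csv")

# Mean item scores per problem domain (1 = strongly disagree ... 5 = strongly agree)
domains = ["psychological", "academic", "technological", "study_environment"]
print(df[domains].agg(["mean", "std"]))

# MANOVA: do the four problem scores differ by internet availability, gender, college?
model = MANOVA.from_formula(
    "psychological + academic + technological + study_environment ~ "
    "internet_availability + gender + college",
    data=df,
)
print(model.mv_test())
```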
The research seeks to clarify the financial and accounting problems arising from entering into contractual arrangements with terms of more than 20 years. The research problem is the lack of clarity in the foundations and procedures for recognizing the oil costs and additional costs borne by foreign investing companies, which weakens their credibility and reflects negatively on the measurement and accounting disclosure in the financial reports prepared by oil companies. The research aims to lay down sound procedures for measuring and classifying the oil costs and additional costs paid to foreign companies, and for recognizing and recording them in th
In this paper, we design a fuzzy neural network to solve a fuzzy singularly perturbed Volterra integro-differential equation using a high-performance training algorithm, the Levenberg-Marquardt algorithm (trainlm), with the hyperbolic tangent as the sigmoidal activation function of the hidden units. A fuzzy trial solution to the fuzzy singularly perturbed Volterra integro-differential equation is written as the sum of two components: the first component satisfies the fuzzy conditions but contains no fuzzy adjustable parameters, while the second component is a feed-forward fuzzy neural network with fuzzy adjustable parameters. The proposed method is compared with the analytical solutions. We find that the proposed meth
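To make the trial-solution idea concrete, the sketch below solves a crisp (non-fuzzy) toy singularly perturbed Volterra integro-differential equation with a neural-form trial solution y(x) = y0 + x·N(x, p) and a Levenberg-Marquardt least-squares fit; the toy equation, network size, and the omission of the fuzzy arithmetic are all simplifying assumptions, not the paper's formulation.

```python
# Crisp sketch of the neural-form trial solution idea: y_trial(x) = y0 + x*N(x, p),
# where N is a small tanh network and p is fitted by a Levenberg-Marquardt solve.
# The toy equation eps*y' + y = int_0^x y dt + cos(x), y(0) = 1, is an assumption.
import numpy as np
from scipy.optimize import least_squares

eps, y0 = 0.1, 1.0
xs = np.linspace(0.0, 1.0, 41)

def net(x, p):
    w, b, v = p[:5], p[5:10], p[10:15]
    z = np.tanh(np.outer(x, w) + b)           # hidden layer: 5 tanh units
    return z @ v, ((1.0 - z**2) * w) @ v      # N(x) and dN/dx

def residual(p):
    n, dn = net(xs, p)
    y = y0 + xs * n                           # trial solution satisfies y(0) = y0 exactly
    dy = n + xs * dn
    # cumulative trapezoid approximation of the Volterra integral int_0^x y(t) dt
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(xs))))
    g = np.cos(xs)                            # hypothetical source term
    return eps * dy + y - integral - g        # equation residual at the grid points

p_opt = least_squares(residual, np.random.default_rng(0).normal(size=15), method="lm").x
print("max residual:", np.abs(residual(p_opt)).max())
```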
In this paper, the necessary optimality conditions are studied and derived for a new class of problems involving the sum of two Caputo-Katugampola fractional derivatives of orders (α, ρ) and (β, ρ) with fixed final boundary conditions. In the second part of the study, the approximation of the left Caputo-Katugampola fractional derivative is obtained using shifted Chebyshev polynomials, and the Clenshaw-Curtis formula is used to approximate the integral over [-1, 1]. Further, the critical points are found using the Rayleigh-Ritz method. The obtained approximation of the left Caputo-Katugampola fractional derivative was added to the algorithm applied to the illustrative example so that we obtained the approximate results for the stat
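As a small, self-contained illustration of one ingredient mentioned above, the sketch below implements Clenshaw-Curtis quadrature on [-1, 1] (following the standard construction) and checks it against an exact integral; the shifted Chebyshev approximation of the fractional derivative and the Rayleigh-Ritz step are not reproduced here.

```python
# Clenshaw-Curtis nodes and weights on [-1, 1], standard construction (n+1 points).
import numpy as np

def clencurt(n):
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)                          # Chebyshev (extreme) points
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2.0 * k * theta[1:-1]) / (4.0 * k**2 - 1)
        v -= np.cos(n * theta[1:-1]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2.0 * k * theta[1:-1]) / (4.0 * k**2 - 1)
    w[1:-1] = 2.0 * v / n
    return x, w

x, w = clencurt(16)
# quadrature of exp(x) over [-1, 1] versus the exact value e - 1/e
print(w @ np.exp(x), np.exp(1) - np.exp(-1))
```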
X-ray diffractometers deliver high-quality diffraction data while being easy to use and adaptable to various applications. When X-ray photons strike electrons in a material, the incident photons are scattered in directions different from that of the incident beam. If the scattered beam does not change in wavelength, the scattering is elastic; the scattered waves can then interfere constructively and produce diffraction peaks. When the incident beam transfers some of its energy to the electrons, the scattered beam's wavelength differs from that of the incident beam, giving inelastic scattering, which leads to destructive interference and zero diffracted intensity. In this study, the modified size-strain plot method was used to examin
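For orientation, the sketch below applies the conventional size-strain plot relation (d·β·cosθ)² = (Kλ/D)·(d²·β·cosθ) + (ε/2)² to invented peak data; the peak positions, widths, and the use of the unmodified relation (rather than the paper's modified variant) are assumptions.

```python
# Hypothetical size-strain plot (SSP) analysis of XRD peak broadening: a straight-line
# fit whose slope gives the crystallite size D and whose intercept gives the microstrain.
import numpy as np

K, lam = 0.75, 0.15406                                         # shape factor; Cu K-alpha (nm)
two_theta = np.array([31.8, 34.4, 36.3, 47.5, 56.6])           # degrees (invented peaks)
beta = np.deg2rad(np.array([0.28, 0.26, 0.30, 0.33, 0.35]))    # FWHM in radians (invented)
theta = np.deg2rad(two_theta / 2)
d = lam / (2 * np.sin(theta))                                   # Bragg spacing, nm

x = (d**2) * beta * np.cos(theta)
y = (d * beta * np.cos(theta))**2
slope, intercept = np.polyfit(x, y, 1)

D = K * lam / slope                 # crystallite size (nm), from the slope K*lam/D
strain = 2 * np.sqrt(intercept)     # microstrain, from intercept = (strain/2)^2
print(f"D = {D:.1f} nm, strain = {strain:.4f}")
```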
Background/Objectives: The purpose of the current research is to propose a modified image representation framework for Content-Based Image Retrieval (CBIR) based on a gray-scale input image, Zernike Moments (ZMs) properties, Local Binary Pattern (LBP), Y Color Space, Slantlet Transform (SLT), and Discrete Wavelet Transform (DWT). Methods/Statistical analysis: This study surveyed and analysed three standard datasets, WANG V1.0, WANG V2.0, and Caltech 101. The Caltech 101 set contains images of objects belonging to 101 classes, with approximately 40-800 images per category. The framework suggested in the study seeks to describe and operationalize the CBIR system through an automated attribute extraction system premised on CN
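A minimal sketch of the kind of hand-crafted feature extraction such a CBIR pipeline combines (gray-scale conversion, an LBP texture histogram, and a one-level DWT) is shown below; the image path, sub-band statistics, and concatenation scheme are illustrative assumptions rather than the paper's exact feature set.

```python
# Sketch of low-level feature extraction for CBIR: gray-scale conversion, a uniform
# LBP histogram, and simple statistics of one-level Haar DWT sub-bands.
import numpy as np
import pywt
from skimage import io, color
from skimage.feature import local_binary_pattern

img = io.imread("query.jpg")            # hypothetical query image
gray = color.rgb2gray(img)

# Uniform LBP texture histogram (P=8 neighbours gives 10 uniform codes)
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# One-level 2-D Haar DWT: keep mean and standard deviation of each sub-band
cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")
dwt_feats = [f(band) for band in (cA, cH, cV, cD) for f in (np.mean, np.std)]

feature_vector = np.concatenate([lbp_hist, dwt_feats])
print(feature_vector.shape)             # query and database images are compared on this vector
```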
In education, exams are used to assess students’ acquired knowledge; however, the manual assessment of exams consumes a great deal of teachers’ time and effort. In addition, educational institutions recently leaned toward distance education and e-learning due to the Coronavirus pandemic, so they needed to conduct exams electronically, which requires an automated assessment system. Although it is relatively easy to develop an automated assessment system for objective questions, subjective questions require answers comprised of free text and are harder to assess automatically, since grading them requires semantically comparing the students’ answers with the correct ones. In this paper, we present an automatic short answer grading metho
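As a hedged sketch of grading a short answer by semantic similarity to a reference answer, the example below uses a pretrained sentence-embedding model and a simple linear mapping from cosine similarity to marks; the model name, the scoring rule, and the example texts are assumptions, not the paper's configuration.

```python
# Semantic short-answer grading sketch: embed the reference and student answers and
# map their cosine similarity to a mark.  Model name and mark scale are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
student = "Plants use sunlight to make glucose, storing the energy chemically."

emb = model.encode([reference, student], convert_to_tensor=True)
similarity = util.cos_sim(emb[0], emb[1]).item()      # cosine similarity in [-1, 1]

max_marks = 5
grade = round(max(similarity, 0.0) * max_marks, 1)    # simple linear mapping to marks
print(f"similarity = {similarity:.2f}, grade = {grade}/{max_marks}")
```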
Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes, and it concerns celebrities and ordinary people alike because such fakes are easy to manufacture. Deepfakes are hard to recognize by people and by current approaches, especially high-quality ones. As a defense against Deepfake techniques, various methods to detect Deepfakes in images have been suggested. Most of them have limitations, such as working with only one face in an image, or requiring the face to be frontal with both eyes and the mouth visible, depending on which part of the face the method analyzes. Moreover, few studies focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this asp
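The sketch below shows one typical pre-processing step of the kind whose effect on accuracy such frameworks examine: detect every face in a frame, crop it, resize it to a fixed input size, and scale pixel values; the detector, input size, and file name are assumptions.

```python
# Hypothetical pre-processing step before a Deepfake classifier: detect all faces,
# crop, resize to a fixed input size, and normalize pixel values to [0, 1].
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("frame.jpg")                       # hypothetical video frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = []
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    crop = cv2.resize(img[y:y + h, x:x + w], (224, 224))   # fixed classifier input size
    faces.append(crop.astype(np.float32) / 255.0)

print(f"{len(faces)} face crop(s) ready for the Deepfake classifier")
```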
Analyzing X-ray and computed tomography (CT) scan images using a convolutional neural network (CNN) is a very interesting subject, especially after the coronavirus disease 2019 (COVID-19) pandemic. In this paper, a study is made on CT scan images of 423 patients from Al-Kadhimiya (Madenat Al Emammain Al Kadhmain) hospital in Baghdad, Iraq, to diagnose whether they have COVID-19 using a CNN. The total data being tested comprise 15000 CT scan images chosen in a specific way to give a correct diagnosis. The activation function used in this research is a wavelet function, which differs from the activation functions commonly used in CNNs. The convolutional wavelet neural network (CWNN) model proposed in this paper is compared with regular convol
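To illustrate the core idea of replacing the usual CNN activation with a wavelet function, the sketch below defines a small network whose activations use the Mexican-hat wavelet; the layer sizes, input resolution, and the particular wavelet are assumptions rather than the paper's CWNN architecture.

```python
# Minimal sketch of a convolutional network with a wavelet activation function
# (Mexican-hat wavelet), in the spirit of a CWNN.  Architecture details are assumed.
import torch
import torch.nn as nn

def mexican_hat(x):
    # psi(x) = (1 - x^2) * exp(-x^2 / 2), applied element-wise as an activation
    return (1.0 - x**2) * torch.exp(-0.5 * x**2)

class CWNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(32 * 56 * 56, n_classes)   # assumes 224x224 gray-scale CT slices

    def forward(self, x):
        x = self.pool(mexican_hat(self.conv1(x)))
        x = self.pool(mexican_hat(self.conv2(x)))
        return self.fc(x.flatten(1))

logits = CWNN()(torch.randn(4, 1, 224, 224))   # batch of 4 CT slices
print(logits.shape)                             # torch.Size([4, 2])
```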
Rumors are typically described as remarks whose truth value is unknown. A rumor on social media has the potential to spread erroneous information to a large group of individuals, and such false information influences decision-making across societies. In online social media, where enormous amounts of information are distributed over a large network of sources with unverified authority, detecting rumors is critical. This research proposes that rumor detection be done using Natural Language Processing (NLP) tools as well as six distinct Machine Learning (ML) methods (Naive Bayes (NB), Random Forest (RF), K-Nearest Neighbor (KNN), Logistic Regression (LR), Stochastic Gradient Descent (SGD), and Decision Tree (
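A minimal sketch of the classical pipeline described above, TF-IDF text features fed to the six scikit-learn classifiers named in the abstract, is given below; the toy posts and labels are purely illustrative.

```python
# TF-IDF features plus the six classical classifiers mentioned above, on a toy dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

posts = ["Breaking: celebrity X arrested, share now!!!",
         "Official statement released by the ministry today.",
         "They don't want you to know this secret cure!",
         "Weather service issues verified storm warning."]
labels = [1, 0, 1, 0]                                  # 1 = rumor, 0 = non-rumor

classifiers = [MultinomialNB(), RandomForestClassifier(), KNeighborsClassifier(n_neighbors=1),
               LogisticRegression(), SGDClassifier(), DecisionTreeClassifier()]

for clf in classifiers:
    model = make_pipeline(TfidfVectorizer(), clf).fit(posts, labels)
    print(type(clf).__name__, model.predict(["Share this shocking hidden truth!"]))
```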
A substantial portion of today’s multimedia data exists in the form of unstructured text. However, the unstructured nature of text poses a significant challenge in meeting users’ information requirements. Text classification (TC) has been extensively employed in text mining to facilitate multimedia data processing. However, accurately categorizing texts becomes challenging due to the increasing presence of non-informative features within the corpus. Several reviews on TC, encompassing various feature selection (FS) approaches to eliminate non-informative features, have been previously published. However, these reviews do not adequately cover the recently explored approaches to TC problem-solving that utilize FS, such as optimization techniques.
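As a small example of filter-based feature selection for text classification, the sketch below scores TF-IDF features with the chi-square statistic and keeps only the top-k terms before training a classifier; the dataset, categories, and k are illustrative assumptions (the review above covers a much broader range of FS approaches, including optimization techniques).

```python
# Filter-based feature selection for TC: keep the 500 terms with the highest
# chi-square scores before training a Naive Bayes classifier.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    SelectKBest(chi2, k=500),          # drop non-informative features, keep 500 terms
    MultinomialNB(),
).fit(data.data, data.target)

print(model.predict(["the rocket engines ignited before the launch"]))
```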