Active learning is a teaching method in which students actively participate in activities, exercises, and projects within a rich and diverse educational environment. The teacher encourages students to take responsibility for their own learning under scientific and pedagogical supervision and motivates them to achieve ambitious educational goals focused on developing an integrated personality for today's students and tomorrow's leaders. The research seeks to understand the impact of two proposed strategies based on active learning on the academic performance of first intermediate class students in the computer subject and on their social intelligence. The research sample was intentionally selected and consisted of 99 students. The first experimental group comprised 33 students from division (B), who were taught according to the first proposed strategy; the second experimental group, from division (A), also consisted of 33 students and was taught according to the second proposed strategy; and the control group, made up of 33 students from division (C), was taught using the usual method. Two tools were prepared: an achievement test with 40 items and a measure of social intelligence consisting of 20 items. The results indicated that the experimental groups taught with the first and second proposed strategies based on active learning outperformed the control group. Accordingly, several conclusions, recommendations, and proposals were made.
Organizations adopt a number of procedures and instructions in their fields of activity in order to develop their resources and direct their energies toward serving their entrepreneurial orientations. This calls for preparing a range of mechanisms to mitigate the strictness and complexity of those procedures. Ambiguous and overly complex procedures dissipate organizational energy, which in turn dampens aspirations, weakens enthusiasm within these organizations, and impedes continuous innovation, so that opportunities are lost and risks come to be surrendered to and treated as insurmountable obstacles.
ABSTRACT Background: Soft liner materials have become important in dental prosthetic treatment. They are applied to the surface of dentures to achieve a more even force distribution, reduce localized pressure, and improve denture retention by engaging undercuts. The aim of the study is to evaluate the effect of different surface treatments, air abrasion with Al2O3 and CO2 laser treatment, on improving the shear bond strength of the denture liner to the acrylic denture base material. Materials and methods: Thirty specimens of heat-cured acrylic denture base material (high-impact acrylic) and heat-cured soft liner (Vertex, Netherlands) were prepared for this study. They were designed and divided according to the type of the s
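As a quick illustration of the outcome measure in this kind of bond test, shear bond strength is typically reported as the peak debonding load from the testing machine divided by the bonded cross-sectional area. The sketch below is generic; the specimen dimensions and load are hypothetical, not values from this study.

```python
# Minimal sketch: shear bond strength from a universal-testing-machine reading.
# Dividing load in newtons by area in square millimetres gives megapascals.
def shear_bond_strength_mpa(peak_load_n: float, bonded_area_mm2: float) -> float:
    return peak_load_n / bonded_area_mm2   # N/mm^2 == MPa

# Hypothetical example: a 10 mm x 10 mm bonded area failing at 180 N gives 1.8 MPa.
print(shear_bond_strength_mpa(180.0, 10.0 * 10.0))
```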
In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method of concealing data within various carrier objects such as text, can be proposed to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures. Arabic text steganography harn
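To make the idea of Arabic text steganography concrete, the sketch below shows one well-known technique from this family, kashida (tatweel) insertion, where an elongation character after a connectable letter encodes a hidden bit. This is only an illustrative example under simplified assumptions (a reduced set of connectable letters), not the specific scheme proposed in the paper.

```python
# Illustrative kashida-based Arabic text steganography (not the paper's exact scheme).
KASHIDA = "\u0640"  # Arabic tatweel (kashida) character

# Simplified set of letters after which a kashida may be inserted.
CONNECTABLE = set("بتثجحخسشصضطظعغفقكلمنهي")

def embed_bits(cover: str, bits: str) -> str:
    """Insert a kashida after a connectable letter to encode '1'; skip it for '0'."""
    out, i = [], 0
    for ch in cover:
        out.append(ch)
        if i < len(bits) and ch in CONNECTABLE:
            if bits[i] == "1":
                out.append(KASHIDA)
            i += 1
    if i < len(bits):
        raise ValueError("cover text too short for the payload")
    return "".join(out)

def extract_bits(stego: str, n_bits: int) -> str:
    """Recover bits by checking whether each connectable letter is followed by a kashida."""
    bits, chars = [], list(stego)
    for j, ch in enumerate(chars):
        if ch == KASHIDA:
            continue
        if len(bits) < n_bits and ch in CONNECTABLE:
            bits.append("1" if j + 1 < len(chars) and chars[j + 1] == KASHIDA else "0")
    return "".join(bits)

if __name__ == "__main__":
    cover = "العلم نور والجهل ظلام"
    payload = "1011"
    stego = embed_bits(cover, payload)
    assert extract_bits(stego, len(payload)) == payload
```

Kashida insertion preserves the visual appearance and meaning of the cover text, which is one reason Arabic script is attractive for this purpose.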
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) is used to classify the reduced moments between two classes. The proposed method's performance is tested and compared with two other methods using two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method surpasses the accuracy of the other methods. The proposed method's best accuracies on the two datasets are 95.6% and 99.5%, respectively. Finally, from the results, it
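A rough sketch of this pipeline shape is given below: polynomial-moment features, a feature-reduction step, and an SVM evaluated with an 80/20 split and 5-fold cross-validation. The specific orthogonal polynomial basis and sparse filter used in the paper are not detailed here, so Legendre moments and a univariate feature selector stand in for them, and the data are synthetic placeholders.

```python
# Hedged sketch of the described pipeline, assuming Legendre moments and
# SelectKBest as stand-ins for the paper's OP moments and sparse filter.
import numpy as np
from numpy.polynomial import legendre
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def legendre_moments(signal, order=20):
    """Project one EEG epoch onto the first `order` Legendre polynomials."""
    t = np.linspace(-1.0, 1.0, signal.size)
    return np.array([np.trapz(signal * legendre.Legendre.basis(k)(t), t)
                     for k in range(order)])

# Synthetic placeholder data: 200 epochs, 512 samples each, two classes.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 512))
y = rng.integers(0, 2, 200)

X = np.vstack([legendre_moments(epoch) for epoch in X_raw])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
scores = cross_val_score(clf, X_train, y_train, cv=5)          # 5-fold CV on the 80% split
test_acc = clf.fit(X_train, y_train).score(X_test, y_test)     # held-out 20% accuracy
print(scores.mean(), test_acc)
```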
The fingerprint is the most widely used biometric feature for person identification and verification. The fingerprint is easy to understand compared with other existing biometric types such as voice and face, and it can achieve a very high recognition rate for human recognition. In this paper, a geometric rotation transform is applied to the fingerprint image to obtain a new level of features that represent the finger characteristics and can be used for personal identification; local features are used for their ability to reflect the statistical behavior of fingerprint variation within the fingerprint image. The proposed fingerprint system contains three main stages: (i) preprocessing, (ii) feature extraction, and (iii) matching. The preprocessi
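The three-stage idea can be sketched as follows: normalize the image, compute local statistics over rotated copies of it, and match by nearest distance to enrolled templates. The block size, rotation angles, and mean/std statistics below are illustrative stand-ins, since the paper's exact rotation transform and local features are not specified here.

```python
# Hedged sketch of a preprocess / local-feature / matching fingerprint pipeline.
import numpy as np
from scipy import ndimage

def preprocess(img):
    """Normalize a grayscale fingerprint image to zero mean and unit variance."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

def local_features(img, block=16, angles=(0, 45, 90)):
    """Concatenate block-wise mean/std statistics of rotated versions of the image."""
    feats = []
    for angle in angles:
        rot = ndimage.rotate(img, angle, reshape=False, mode="nearest")
        h, w = rot.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                patch = rot[i:i + block, j:j + block]
                feats.extend([patch.mean(), patch.std()])
    return np.array(feats)

def match(query_feat, gallery_feats):
    """Return the index of the gallery template closest to the query (Euclidean distance)."""
    dists = [np.linalg.norm(query_feat - g) for g in gallery_feats]
    return int(np.argmin(dists))

# Toy usage with random "fingerprints"; a real system would enroll actual images.
rng = np.random.default_rng(1)
gallery = [local_features(preprocess(rng.random((128, 128)))) for _ in range(5)]
query = gallery[2] + rng.normal(0, 0.01, gallery[2].shape)
print(match(query, gallery))  # expected: 2
```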
Background: Obesity is increasingly common in modern societies and constitutes a significant public health problem associated with an increased risk of cardiovascular diseases.
Objective: This study aims to determine the agreement between actual and perceived body image in the general population.
Methods: A descriptive cross-sectional study was conducted with a sample size of 300. The data were collected from eight major populated areas of the Northern district of Karachi, Sindh, over a period of six months (10th January 2020 to 21st June 2020). The Figure Rating Scale (FRS) questionnaire was applied to collect the demographic data and perceptions about body weight. Body mass index (BMI) was used for assessing
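As a small worked example of the actual-versus-perceived comparison, BMI computed from measured weight and height can be mapped to a weight category and compared with the FRS-derived perceived category. The cutoffs below are the standard WHO values; the study's exact categorization scheme may differ, and the participant values are hypothetical.

```python
# Minimal sketch: BMI-based (actual) category versus a perceived category.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

def bmi_category(value: float) -> str:
    """Standard WHO cutoffs (assumed here; the study's scheme may differ)."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

# Hypothetical participant: 82 kg at 1.70 m, self-perceived as "normal" on the FRS.
actual = bmi_category(bmi(82, 1.70))   # "overweight" (BMI ~ 28.4)
perceived = "normal"
print(actual, actual == perceived)     # disagreement between actual and perceived
```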
There are various human biometrics in use nowadays; one of the most important of these is the face. Many techniques have been suggested for face recognition, but they still face a variety of challenges when recognizing faces in images captured in uncontrolled environments and in real-life applications. Some of these challenges are pose variation, occlusion, facial expression, illumination and lighting conditions, and image quality. New techniques are being developed continuously. In this paper, the singular value decomposition is used to extract the feature matrix for face recognition and classification. The input color image is converted into a grayscale image and then transformed into a local ternary pattern before splitting the image into
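The feature-extraction idea can be sketched as: grayscale image, a local ternary pattern (LTP) transform, then singular values from the SVD as the feature vector. The LTP threshold, neighborhood encoding, and feature count below are illustrative assumptions, not the paper's exact settings, and the block-splitting step is omitted.

```python
# Hedged sketch: simple LTP transform followed by SVD-based features.
import numpy as np

def local_ternary_pattern(img, t=5):
    """Encode each interior pixel by comparing its 8 neighbors against the center +/- t."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    for k, (di, dj) in enumerate(offsets):
        neighbor = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        ternary = np.where(neighbor > center + t, 1, np.where(neighbor < center - t, -1, 0))
        codes += (ternary + 1) * (3 ** k)   # map {-1, 0, 1} to a base-3 digit
    return codes

def svd_features(img, k=20):
    """Return the k largest singular values of the (LTP-transformed) image."""
    s = np.linalg.svd(img.astype(np.float64), compute_uv=False)
    return s[:k]

# Toy usage on a random grayscale "face"; the resulting vectors could be fed
# to any standard classifier for recognition.
rng = np.random.default_rng(2)
face = rng.integers(0, 256, (64, 64)).astype(np.float64)
features = svd_features(local_ternary_pattern(face))
print(features.shape)  # (20,)
```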
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages. The first stage preprocesses the data and the second stage performs feature extraction, based on the Discrete Wavelet Transform (DWT). The third stage performs classification and is based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved a high classification accuracy rate, with 99.1% for the MADBase database and 99.9% for the MNIST database.
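The DWT feature-extraction stage can be illustrated as below: a 2-D wavelet decomposition of a 28x28 digit image (MNIST-sized) whose approximation coefficients are flattened into a feature vector. The wavelet choice and number of levels are assumptions for illustration, and the paper's spiking-neural-network classifier is not reproduced here.

```python
# Sketch of the DWT feature-extraction stage only (classifier omitted).
import numpy as np
import pywt

def dwt_features(digit_image, wavelet="haar", levels=2):
    """Apply `levels` rounds of 2-D DWT and return the flattened approximation band."""
    coeffs = digit_image.astype(np.float64)
    for _ in range(levels):
        coeffs, _details = pywt.dwt2(coeffs, wavelet)
    return coeffs.ravel()

# Toy usage on a random 28x28 "digit": two levels of Haar DWT reduce
# 784 pixels to a 7x7 = 49-dimensional feature vector.
rng = np.random.default_rng(3)
img = rng.random((28, 28))
print(dwt_features(img).shape)  # (49,)
```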
This paper proposes a completion that allows fracturing four zones in a single trip in the well called "Y" (for confidentiality reasons) of the field named "X" (for confidentiality reasons). The steps to design a well completion for multiple fracturing are first to select the best completion method, then the required equipment and the materials it is made of. After that, the completion schematic must be drawn, using Power Draw in this case, and the summary installation procedures explained. The data used to design the completion are the well trajectory, the reservoir data (including temperature, pressure, and fluid properties), and the production and injection strategy. The results suggest that multi-stage hydraulic fracturing can
This study employs wavelet transforms to address the issue of boundary effects. Additionally, it utilizes probit transform techniques, which are based on probit functions, to estimate the copula density function. This estimation depends on the empirical distribution function of the variables, and the density is estimated within a transformed domain. Recent research indicates that the early implementations of this strategy may have been more efficient. Nevertheless, in this work, we implemented two novel methodologies utilizing the probit transform and the wavelet transform. We then evaluated and contrasted these methodologies using three criteria: root mean square error (RMSE), Akaike information criterion (AIC), and log
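The probit-transformation idea can be sketched as follows: pseudo-observations from the empirical distribution functions are mapped to the probit (standard normal) scale, a density is estimated on that unbounded scale, and the estimate is mapped back to the unit square by dividing by the Jacobian of the probit map. In the sketch a Gaussian kernel density estimator stands in for the paper's wavelet-based estimator on the transformed domain.

```python
# Hedged sketch of probit-transformation copula density estimation,
# with a Gaussian KDE standing in for the wavelet-based estimator.
import numpy as np
from scipy import stats

def pseudo_observations(x):
    """Empirical-CDF ranks scaled into (0, 1)."""
    n = len(x)
    return stats.rankdata(x) / (n + 1)

def copula_density(u, v, data_x, data_y):
    """Estimate the copula density c(u, v) via density estimation on the probit scale."""
    s = stats.norm.ppf(pseudo_observations(data_x))   # probit-transformed pseudo-observations
    t = stats.norm.ppf(pseudo_observations(data_y))
    kde = stats.gaussian_kde(np.vstack([s, t]))
    zu, zv = stats.norm.ppf(u), stats.norm.ppf(v)
    # Back-transform: divide by the Jacobian of the probit map.
    return kde([zu, zv])[0] / (stats.norm.pdf(zu) * stats.norm.pdf(zv))

# Toy usage on correlated Gaussian data.
rng = np.random.default_rng(4)
x = rng.standard_normal(500)
y = 0.7 * x + 0.5 * rng.standard_normal(500)
print(copula_density(0.5, 0.5, x, y))
```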