The current research aims to reveal the strength and direction of the relationship between the formal thinking and learning methods of Kindergarten department students. To achieve this objective, the researcher developed a formal thinking scale based on the theory of Inhelder and Piaget (1958), consisting of (25) items in the form of declarative phrases derived from an analysis of formal thinking skills, each built around a professional situation with which students are expected to interact in a professional way. The research sample consisted of (100) female students selected randomly and divided into four groups according to academic stage. The results revealed that the level of formal thinking of the main sample is moderate. The sample was distributed among learning methods in different percentages, in the following descending order: convergent 30%, adaptive 27%, divergent 24%, absorptive 19%. There is a significant difference across academic levels in favour of the fourth stage, and a weak negative correlation between the two variables. The research concludes with a set of recommendations, including holding training workshops for teachers on the importance of detecting students' preferred learning methods.
In this paper, we investigate the automatic recognition of emotion in text. We perform experiments with a new classification method based on the PPM character-based text compression scheme. These experiments involve both coarse-grained classification (whether a text is emotional or not) and fine-grained classification such as recognising Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method significantly outperforms traditional word-based text classification methods. The results show that the PPM compression-based classification method is able to distinguish between emotional and non-emotional text with high accuracy, between texts invo…
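A minimal sketch of the compression-based classification idea is given below. It uses Python's zlib compressor as a stand-in for PPM (the compressor the paper actually uses), and the class corpora are toy placeholders: a document is assigned to the class whose corpus it compresses best against.

```python
import zlib

def compressed_size(text: str) -> int:
    """Length in bytes of the zlib-compressed text."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

def classify_by_compression(document: str, class_corpora: dict) -> str:
    """Assign the class whose corpus compresses the document best.

    The score is the extra bytes needed to compress (corpus + document)
    over compressing the corpus alone, an approximation of the
    cross-entropy idea behind PPM-based classification.
    """
    scores = {}
    for label, corpus in class_corpora.items():
        extra = compressed_size(corpus + " " + document) - compressed_size(corpus)
        scores[label] = extra
    return min(scores, key=scores.get)

# Toy corpora (placeholders, not the paper's datasets)
corpora = {
    "emotional": "I am so happy and thrilled! This is terrifying and I feel sad...",
    "non-emotional": "The meeting is scheduled for 10 am. The report covers Q3 figures.",
}
print(classify_by_compression("I feel absolutely delighted today", corpora))
```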
Smishing is the delivery of phishing content to mobile users via a short message service (SMS). SMS allows cybercriminals to reach out to mobile end users in a new way, attempting to deliver phishing messages, mobile malware, and online scams that appear to be from a trusted brand. This paper proposes a new method for detecting smishing by combining two detection methods. The first method is uniform resource locator (URL) analysis, which employs a novel combination of the Google engine and VirusTotal. The second method involves examining SMS content to extract efficient features and classify messages as ham or smishing based on keywords contained within them, using four well-known classifiers: support vector machine (SVM), random …
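A hedged sketch of the content-analysis half of such a pipeline is shown below, using TF-IDF keyword features and one of the named classifiers (SVM) from scikit-learn; the toy messages and labels are placeholders, and the URL-analysis half (Google engine and VirusTotal) is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy labelled SMS messages (placeholders, not the paper's dataset)
messages = [
    "Your account is locked, verify now at http://bad.example",   # smishing
    "Congratulations! You won a prize, click the link to claim",  # smishing
    "Are we still meeting for lunch tomorrow?",                   # ham
    "Don't forget to bring the documents to the office",          # ham
]
labels = ["smishing", "smishing", "ham", "ham"]

# Keyword/TF-IDF features fed to an SVM classifier
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(messages, labels)

print(model.predict(["Urgent: confirm your password at http://phish.example"]))
```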
Face detection is one of the important applications of biometric technology and image processing. Convolutional neural networks (CNN) have been used successfully, with great results, in the areas of image processing as well as pattern recognition. In recent years, deep learning techniques, specifically CNN techniques, have achieved remarkable accuracy rates in the face detection field. Therefore, this study provides a comprehensive analysis of face detection research and applications that use various CNN methods and algorithms. This paper presents ten of the most recent studies and illustrates the achieved performance of each method.
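For illustration only, a minimal PyTorch CNN for binary face / non-face classification is sketched below; it is not any of the ten surveyed methods, just the kind of convolutional architecture they build on, and the 64x64 input size and layer widths are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyFaceCNN(nn.Module):
    """Minimal CNN for binary face / non-face classification (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # assumes 64x64 RGB input crops
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Forward pass on a dummy batch of 64x64 RGB crops
model = TinyFaceCNN()
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```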
Correct grading of apple slices can help ensure quality and improve the marketability of the final product, which can impact the overall development of the apple slice industry post-harvest. The study intends to employ the convolutional neural network (CNN) architectures of ResNet-18 and DenseNet-201 and classical machine learning (ML) classifiers such as Wide Neural Networks (WNN), Naïve Bayes (NB), and two kernels of support vector machines (SVM) to classify apple slices into different hardness classes based on their RGB values. Our research data showed that the DenseNet-201 features classified by the SVM-Cubic kernel had the highest accuracy and lowest standard deviation (SD) among all the methods we tested, at 89.51% ± 1.66%. This …
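A hedged sketch of the winning pipeline's shape (pretrained DenseNet-201 features fed to a cubic, i.e. degree-3 polynomial, SVM) is given below using torchvision and scikit-learn; the random stand-in images and "soft"/"hard" labels are placeholders, not the paper's data or class names.

```python
import numpy as np
import torch
from PIL import Image
from sklearn.svm import SVC
from torchvision import models, transforms

# Pretrained DenseNet-201 with its final classifier removed, used as a feature extractor
backbone = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(img: Image.Image) -> np.ndarray:
    """Return the 1920-dimensional DenseNet-201 feature vector for one RGB image."""
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

# Random RGB stand-ins for apple-slice photos, with hypothetical hardness labels
rng = np.random.default_rng(0)
slices = [Image.fromarray(rng.integers(0, 255, (224, 224, 3), dtype=np.uint8)) for _ in range(4)]
labels = ["soft", "soft", "hard", "hard"]

X = np.stack([extract_features(img) for img in slices])
svm = SVC(kernel="poly", degree=3)  # "cubic" SVM = degree-3 polynomial kernel
svm.fit(X, labels)
print(svm.predict(X[:1]))
```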
In this article, a numerical study of compressible and weakly compressible Newtonian flows is carried out with a time-marching Galerkin algorithm. A comparison between two numerical techniques for such flows, namely the artificial compressibility method (AC-method) and the fully artificial compressibility method (FAC-method), is performed. In the first technique, an artificial compressibility parameter is added to the continuity equation, while in the second this parameter is added to both the continuity and momentum equations. This strategy is implemented to treat the governing equations of Newtonian flow in cylindrical coordinates (axisymmetric). In particular, this study is concerned with the effect of the artificial compressibility parameter …
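For orientation, the commonly cited artificial-compressibility form of the continuity equation (after Chorin) is sketched below; the article's exact notation, and the additional term the FAC variant adds to the momentum equation, may differ and are not reproduced.

```latex
% Commonly cited artificial-compressibility (AC) modification of the continuity
% equation, after Chorin; beta is the artificial compressibility parameter.
% The paper's exact formulation, and the extra momentum-equation term used by
% the FAC variant, are not reproduced here.
\begin{equation}
  \frac{1}{\beta}\,\frac{\partial p}{\partial t} \;+\; \nabla\cdot\mathbf{u} \;=\; 0 .
\end{equation}
```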
The main objective of e-learning platforms is to offer high-quality instruction, training, and educational services. This purpose would never be achieved without taking the students' motivation into consideration. By examining the learner's voice, we can determine the emotional states of learners by applying the well-known psychological theory of self-determination (Self-Determination Theory, SDT). This article investigates certain difficulties and challenges that face e-learners: the problem of dropping out of their courses and student isolation.
Utilizing a Gaussian mixture model (GMM) to tackle the classification problem, we can determine an abnormal learning status for the e-learner. Our framework is going to increase the students' motivation …
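A minimal sketch of GMM-based classification with scikit-learn is given below; the synthetic feature vectors and the "engaged"/"abnormal" state names are placeholder assumptions, not the paper's actual acoustic features or labels. One mixture model is fitted per state, and a sample is assigned to the state whose model gives it the higher likelihood.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder acoustic feature vectors (e.g. MFCC summaries) for two learner states
engaged  = rng.normal(loc=0.0, scale=1.0, size=(50, 4))
abnormal = rng.normal(loc=3.0, scale=1.0, size=(50, 4))

# One Gaussian mixture model per learner state
gmm_engaged  = GaussianMixture(n_components=2, random_state=0).fit(engaged)
gmm_abnormal = GaussianMixture(n_components=2, random_state=0).fit(abnormal)

def classify(sample: np.ndarray) -> str:
    """Label a feature vector by the state whose GMM gives the higher log-likelihood."""
    scores = {
        "engaged": gmm_engaged.score(sample.reshape(1, -1)),
        "abnormal": gmm_abnormal.score(sample.reshape(1, -1)),
    }
    return max(scores, key=scores.get)

print(classify(rng.normal(loc=3.0, scale=1.0, size=4)))  # expected: "abnormal"
```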
The growth of machine learning and image-processing methods, along with the availability of medical imaging data, has driven a large increase in the use of machine learning strategies in the medical area. Neural networks, and in recent days mainly convolutional neural networks (CNN), provide powerful descriptors for computer-aided diagnosis systems. Even so, there are several issues when working with medical images: many medical images have a low signal-to-noise ratio compared to scenes obtained with a digital camera, generally exhibit a confusingly low spatial resolution, and show very low contrast between different body tissues, which makes it difficult to co…
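A small illustrative sketch of the kind of preprocessing (denoising plus contrast normalization) often applied to low-SNR, low-contrast scans before they reach a CNN is given below; it is not taken from this paper, and the filter size and synthetic scan are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage

def preprocess_scan(image: np.ndarray) -> np.ndarray:
    """Denoise and contrast-normalize a single-channel scan before CNN input."""
    # Median filter suppresses impulsive noise while keeping tissue edges
    denoised = ndimage.median_filter(image.astype(np.float32), size=3)
    # Min-max normalization stretches the (typically low) tissue contrast to [0, 1]
    lo, hi = denoised.min(), denoised.max()
    return (denoised - lo) / (hi - lo + 1e-8)

# Synthetic low-SNR "scan": weak signal plus strong noise
rng = np.random.default_rng(0)
scan = 0.2 * np.ones((128, 128)) + rng.normal(0.0, 0.5, size=(128, 128))
ready = preprocess_scan(scan)
print(ready.shape, float(ready.min()), float(ready.max()))
```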
Dust is a common cause of health risks and also a contributor to climate change, one of the most threatening problems facing humans. In the recent decade, climate change in Iraq, typified by increased droughts and expanding deserts, has generated numerous environmental issues. This study forecasts dust in five central Iraqi districts using a supervised machine learning framework with five regression algorithms. It was assessed using a dataset from the Iraqi Meteorological Organization and Seismology (IMOS). Simulation results show that the gradient boosting regressor (GBR) has a mean square error of 8.345 and a total accuracy ratio of 91.65%. Moreover, the results show that the decision tree (DT), where the mean square error is 8.965, c…
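A hedged sketch of the gradient boosting regression step with scikit-learn is shown below; the synthetic meteorological features and dust target are placeholders (not the IMOS data), so the printed error will not match the paper's 8.345 figure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic placeholder meteorological features (e.g. wind speed, humidity, temperature, pressure)
X = rng.normal(size=(500, 4))
# Synthetic dust-concentration target with noise (stand-in for the IMOS measurements)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
mse = mean_squared_error(y_test, gbr.predict(X_test))
print(f"GBR mean squared error: {mse:.3f}")
```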
Feature selection, a method of dimensionality reduction, is the selection of appropriate feature subsets from the total set of features. In this paper, a point-by-point review of feature selection, its preferred approaches, and its appraisal techniques is presented. The discussion begins with a straightforward approach to handling features and selection issues based on meta-heuristic strategies; these techniques help in obtaining the best feature subsets. Thereafter, the paper discusses some system models derived naturally from the environment, and calculations are performed so that we can take care of the prefe…
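As one concrete illustration of feature-subset selection (a greedy wrapper method rather than the meta-heuristic strategies the review focuses on), a short scikit-learn sketch is given below; the wine dataset and the choice of five features are arbitrary assumptions.

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Toy tabular dataset standing in for any feature-selection problem
X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Wrapper-style selection: greedily keep the 5 features that most help the classifier
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=5,
    direction="forward",
)
selector.fit(X, y)
print("Selected feature indices:", selector.get_support(indices=True))
```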
The stress-strength model is one of the models used to compute reliability. In this paper, we derive mathematical formulas for the reliability of a stress-strength model that follows the Rayleigh-Pareto (Rayl.-Par) distribution. Here, the model has a single component, where strength Y is subjected to a stress X, characterized by the moments, reliability function, restricted behavior, and order statistics. Several estimation methods were used: maximum likelihood, ordinary least squares, and two shrinkage methods, in addition to a newly suggested shrinkage-weighting method. The performance of these estimators was studied empirically using simulation experiments that could give more varieties for d…
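For context, the generic single-component stress-strength reliability that such derivations start from is sketched below, assuming independent stress X and strength Y; the paper's specific Rayleigh-Pareto density and distribution functions are not reproduced here.

```latex
% General single-component stress-strength reliability: strength Y, stress X,
% assumed independent. The paper's closed form for the Rayleigh-Pareto case
% follows from substituting its specific F_X and f_Y, which are not reproduced here.
\begin{equation}
  R \;=\; P(Y > X)
    \;=\; \int_{0}^{\infty} P(X < y)\, f_Y(y)\, dy
    \;=\; \int_{0}^{\infty} F_X(y)\, f_Y(y)\, dy .
\end{equation}
```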