In this study, we have created a new Arabic dataset annotated according to Ekman's basic emotions (Anger, Disgust, Fear, Happiness, Sadness, and Surprise). The dataset is composed of Facebook posts written in the Iraqi dialect. We evaluated its quality using four external judges, which resulted in an average inter-annotator agreement of 0.751. We then explored six different supervised machine learning methods on the new dataset: the standard Weka classifiers ZeroR, J48, Naïve Bayes, Multinomial Naïve Bayes for Text, and SMO, plus a further compression-based classifier, PPM, which is not included in Weka. Our study reveals that the PPM classifier significantly outperforms the other classifiers, such as SVM and Naïve Bayes, achieving the highest results in terms of accuracy, precision, recall, and F-measure.
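Since compression-based classification may be less familiar than the Weka classifiers, a minimal sketch of the idea follows. It uses zlib (DEFLATE) as a stand-in for PPM, which the Python standard library lacks, and toy emotion corpora as placeholders; this illustrates the principle, not the paper's implementation.

```python
# Minimal sketch of compression-based text classification, in the spirit of
# the PPM classifier described above. zlib (DEFLATE) stands in for PPM, and
# the labels and training texts are illustrative placeholders.
import zlib

def compressed_size(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8"), level=9))

def classify(document: str, training_corpora: dict) -> str:
    # Assign the label whose training corpus "explains" the document best:
    # the extra bytes needed to encode corpus+document beyond the corpus
    # alone approximate the cross-entropy of the document under that class.
    def extra_bytes(corpus: str) -> int:
        return compressed_size(corpus + " " + document) - compressed_size(corpus)
    return min(training_corpora, key=lambda label: extra_bytes(training_corpora[label]))

corpora = {
    "Happiness": "wonderful great joyful celebration smile delighted happy",
    "Anger": "furious outraged hate shouting rage angry annoyed",
}
print(classify("what a joyful and wonderful day", corpora))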
With the rapid growth of multimedia content, face recognition has attracted significant attention, particularly in recent years. The face is an object composed of various characteristics to be detected and recognized, so it remains among the most challenging research domains in image processing and computer vision. This survey article addresses the most demanding aspects of facial data, such as illumination, aging, pose variation, partial occlusion, and facial expression; these are indispensable factors in a facial recognition system operating on face images. The paper also reviews the most advanced face detection techniques and approaches, including Hidden Markov Models and Principal Component Analysis (PCA)…
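As the survey names PCA among the core techniques, a minimal eigenfaces-style sketch may help; the image size and the synthetic data below are assumptions, standing in for real aligned, grayscale face crops.

```python
# Minimal eigenfaces-style PCA sketch for face feature extraction.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))         # 100 flattened 32x32 "face" images

mean_face = faces.mean(axis=0)
centered = faces - mean_face               # PCA requires mean-centred data

# Principal components via SVD; rows of Vt are the "eigenfaces".
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 20
eigenfaces = Vt[:k]                        # top-k components

# Project a new image into the low-dimensional eigenface space; recognition
# can then compare these k-dimensional codes (e.g. nearest neighbour).
new_face = rng.random(32 * 32)
code = eigenfaces @ (new_face - mean_face)
print(code.shape)                          # (20,)
```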
The electrocardiogram (ECG) records the heart's electrical signals. It is a painless diagnostic procedure used to rapidly diagnose and monitor heart problems, and an easy, noninvasive method for diagnosing various common heart conditions. Because an individual's ECG has unique characteristics that other people do not share, and because the heart's electrical activity can easily be detected from the body's surface, the signal is also of interest for security. On this basis, essential pre-processing steps are needed to handle such electrical signal data and prepare it for use in biometric systems. Since the ECG depends on the structure and function of the heart, it can be utilized as a biometric attribute.
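One common pre-processing step of the kind the abstract alludes to is band-pass filtering to remove baseline wander and high-frequency noise before feature extraction. The sketch below is an illustration under assumed values; the sampling rate, band edges, and synthetic signal are not taken from the paper.

```python
# Band-pass filtering of a synthetic ECG-like signal (assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 360.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)            # stand-in "heartbeat" component
ecg += 0.5 * np.sin(2 * np.pi * 0.2 * t)     # baseline wander
ecg += 0.1 * np.random.default_rng(0).standard_normal(t.size)  # noise

# 4th-order Butterworth band-pass, 0.5-40 Hz (a typical ECG band).
b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
clean = filtfilt(b, a, ecg)                  # zero-phase filtering
print(clean.shape)
```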
Corpus linguistics is a methodology for studying language through corpus-based research. It differs from the traditional (prescriptive) approach to studying a language in its insistence on the systematic study of authentic examples of language in use (a descriptive approach). A "corpus" is a large body of machine-readable, structurally collected, naturally occurring linguistic data, either written texts or transcriptions of recorded speech, which can be used as a starting point for linguistic description or as a means of verifying hypotheses about a language. In the past decade, interest has grown tremendously in the use of language corpora for language education. The ways in which corpora have been employed in language pedagogy…
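A keyword-in-context (KWIC) concordance is the basic tool for using a corpus as "a starting point for linguistic description". A minimal sketch follows; the two-line "corpus" is a placeholder, as real corpora run to millions of words.

```python
# Minimal keyword-in-context (KWIC) concordance over a toy corpus.
def concordance(corpus: str, keyword: str, width: int = 3):
    tokens = corpus.lower().split()
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            print(f"{left:>30} [{tok}] {right}")

text = ("the corpus provides authentic examples of language in use "
        "and a corpus can verify hypotheses about language")
concordance(text, "corpus")
```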
Teen-Computer Interaction (TeenCI) is in its infancy and developing along a positive path. Compared to Human-Computer Interaction (HCI, generally dedicated to adults) and Child-Computer Interaction (CCI), TeenCI has received less interest in terms of research effort and publications. This leaves extensive prospects for researchers to explore and contribute to the design and evaluation of computing for teens specifically. As a subclass of HCI and a complement to CCI, TeenCI, which serves the teen group, should receive significant attention with respect to its context, nature, development, characteristics, and architecture. This paper seeks to discover teens' emotional contribution as a first attempt towards building a conceptual model for TeenCI…
... Show MoreAdministrative procedures in various organizations produce numerous crucial records and data. These
records and data are also used in other processes like customer relationship management and accounting
operations.It is incredibly challenging to use and extract valuable and meaningful information from these data
and records because they are frequently enormous and continuously growing in size and complexity.Data
mining is the act of sorting through large data sets to find patterns and relationships that might aid in the data
analysis process of resolving business issues. Using data mining techniques, enterprises can forecast future
trends and make better business decisions.The Apriori algorithm has bee
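Since the abstract breaks off at the Apriori algorithm, a brief pure-Python sketch of the idea it refers to may help: every subset of a frequent itemset is itself frequent, so candidate k-itemsets are built only from frequent (k-1)-itemsets. The toy transactions and support threshold below are placeholders.

```python
# Minimal Apriori-style frequent-itemset mining sketch.
def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = {i for t in transactions for i in t}
    # Level 1: frequent single items.
    frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= min_support}]
    while frequent[-1]:
        # Join step: combine frequent (k-1)-itemsets into candidate k-itemsets;
        # by the Apriori property, only these unions can still be frequent.
        candidates = {a | b for a in frequent[-1] for b in frequent[-1]
                      if len(a | b) == len(a) + 1}
        # Prune step: keep candidates meeting the minimum support.
        frequent.append({c for c in candidates if support(c) >= min_support})
    return [set(s) for level in frequent for s in level]

baskets = [{"milk", "bread"}, {"milk", "diapers", "beer"},
           {"bread", "diapers", "beer"}, {"milk", "bread", "diapers"}]
print(apriori(baskets, min_support=0.5))
```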
In this paper, we used four classification methods to classify objects and compared among them: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MCOCO dataset for classifying and detecting objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, then the gray images were enhanced using the histogram equalization method and resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification methods…
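The described pipeline (gray level, histogram equalization, 20 x 20 resize, PCA, then the four classifiers) can be sketched with scikit-learn as below. The synthetic images, label count, and hyper-parameters are assumptions; real inputs would be MCOCO images, and the images here are generated directly at 20 x 20 so no resizing step is shown.

```python
# Sketch of the described classification pipeline on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(300, 20, 20)).astype(np.uint8)  # gray 20x20
labels = rng.integers(0, 3, size=300)                               # 3 dummy classes

def hist_equalize(img):
    # Classic histogram equalization via the cumulative distribution function.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

X = np.stack([hist_equalize(im) for im in images]).reshape(len(images), -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)            # 7:3 split

pca = PCA(n_components=50).fit(X_tr)                                 # feature extraction
X_tr, X_te = pca.transform(X_tr), pca.transform(X_te)

for clf in (KNeighborsClassifier(), SGDClassifier(),
            LogisticRegression(max_iter=1000), MLPClassifier(max_iter=500)):
    print(type(clf).__name__, clf.fit(X_tr, y_tr).score(X_te, y_te))
```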
Permanent deformation in asphalt concrete pavements is a pervasive distress [1], influenced by various factors such as environmental conditions, traffic loading, and mixture properties. A meticulous investigation into these factors has been conducted, yielding a robust dataset from uniaxial repeated load tests on 108 asphalt concrete samples. Each sample underwent systematic evaluation under varied test temperatures, loading conditions, and mixture properties, ensuring the data's comprehensiveness and reliability. The materials used, sourced locally, were selected to enhance the study's relevance to pavement construction in hot climate areas, considering different asphalt cement grades and contents to understand material variability effects…