In this paper, we investigate the automatic recognition of emotion in text. We perform experiments with a new classification method based on the PPM character-based text compression scheme. These experiments involve both coarse-grained classification (whether a text is emotional or not) and fine-grained classification, such as recognising Ekman’s six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method significantly outperforms traditional word-based text classification methods. The results show that the PPM compression-based classification method is able to distinguish between emotional and non-emotional text with high accuracy, between texts involving the Happiness and Sadness emotions (with 80% accuracy for Aman’s dataset and 76.7% for Alm’s dataset), and between texts involving Ekman’s six basic emotions for the LiveJournal dataset (87.8% accuracy). Results also show that the method outperforms traditional feature-based classifiers such as Naïve Bayes and SMO in most cases in terms of accuracy, precision, recall and F-measure.
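The core idea of compression-based classification can be illustrated without PPM itself: a document is assigned to the class whose training corpus yields the smallest increase in compressed size when the document is appended to it. The sketch below uses zlib's DEFLATE as a stand-in compressor and made-up toy corpora; it is an illustration of the general technique, not the authors' implementation.

```python
import zlib

def compressed_size(text: str) -> int:
    """Length in bytes of the DEFLATE-compressed text."""
    return len(zlib.compress(text.encode("utf-8"), 9))

def classify(text: str, class_corpora: dict) -> str:
    """Assign `text` to the class whose corpus 'explains' it best,
    i.e. whose compressed size grows least when `text` is appended."""
    def cost(corpus: str) -> int:
        return compressed_size(corpus + " " + text) - compressed_size(corpus)
    return min(class_corpora, key=lambda label: cost(class_corpora[label]))
```

Choosing the minimum size increase approximates the minimum cross-entropy decision rule that compression-based classifiers rely on.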
In this study, an efficient compression system is introduced. It is based on a wavelet transform together with two types of 3-dimensional (3D) surface representation: Cubic Bezier Interpolation (CBI) and 1st-order polynomial approximation. Each is applied at a different scale of the image: CBI is applied to wide areas of the image in order to prune the image components that show large-scale variation, while the 1st-order polynomial is applied to small areas of the residue component (i.e., after subtracting the cubic Bezier surface from the image) in order to prune the locally smooth components and obtain better compression gain. The produced cubic Bezier surface is then subtracted from the image signal to get the residue component. Then, t
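A bicubic Bezier surface patch over a 4x4 grid of control values can be evaluated with Bernstein basis weights. The sketch below handles one scalar channel (e.g. intensity) and is a generic illustration of CBI surface evaluation, not the paper's full coding pipeline; the control grid is hypothetical.

```python
import numpy as np

def bezier_patch(ctrl, u, v):
    """Evaluate a bicubic Bezier surface at (u, v) in [0, 1]^2.
    ctrl: 4x4 array of control values for one image channel."""
    def bernstein(t):
        # cubic Bernstein basis B0..B3 at parameter t
        return np.array([(1 - t) ** 3,
                         3 * t * (1 - t) ** 2,
                         3 * t ** 2 * (1 - t),
                         t ** 3])
    return bernstein(u) @ ctrl @ bernstein(v)
```

The residue component in the abstract corresponds to subtracting such patch values from the underlying image block.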
Longitudinal data are becoming increasingly common, especially in the medical and economic fields, and various methods have been developed to analyze this type of data.
In this research, the focus was on grouping and analyzing such data, as cluster analysis plays an important role in identifying and grouping co-expressed profiles over time. The clusters are then fitted with the nonparametric smoothing cubic B-spline model, which provides continuous first and second derivatives, resulting in a smoother curve with fewer abrupt changes in slope. It is also more flexible and can capture more complex patterns and fluctuations in the data.
The longitudinal balanced data profile was compiled into subgroup
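A cubic smoothing B-spline fit of one longitudinal profile can be sketched with SciPy's spline routines. The time points and noisy response below are synthetic stand-ins, not the study's data; the smoothing factor `s` trades fidelity for smoothness.

```python
import numpy as np
from scipy.interpolate import splev, splrep

# Synthetic profile: noisy observations of one subject over time.
t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(1)
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)

# Cubic (k=3) smoothing B-spline; s controls the smoothness/fidelity trade-off.
tck = splrep(t, y, k=3, s=len(t) * 0.01)
y_smooth = splev(t, tck)
```

Because the basis is cubic, the fitted curve has continuous first and second derivatives, which is the property the abstract highlights.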
The current research aims to reveal the extent to which all scoring-rubric data for the electronic portfolio (work file) conform to the partial credit model according to the number of assumed dimensions. The study sample consisted of (356) female students. The study concluded that the rubric under the one-dimensional assumption is more appropriate than under the multi-dimensional assumption. The current research recommends preparing unified scoring rules for the different methods of performance evaluation in the basic courses. It also suggests the importance of conducting studies aimed at examining the appropriateness of different evaluation methods for models of response theory to the
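Under the standard (Masters, 1982) partial credit model, the probability of scoring in category k of a polytomous item depends on the person ability theta and the item's step difficulties. A minimal sketch of the category-probability computation, with illustrative parameter values rather than the study's estimates:

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities (categories 0..m) under the partial
    credit model; deltas are the step difficulties for steps 1..m."""
    # cumulative sums of (theta - delta_j); category 0 has sum 0
    cum = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    e = np.exp(cum - cum.max())   # subtract max for numerical stability
    return e / e.sum()
```

Fit of observed rubric scores to these model probabilities is what the dimensionality comparison in the study assesses.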
Massive multiple-input multiple-output (massive MIMO) is a promising technology for next-generation wireless communication systems due to its capability to increase the data rate and meet the enormous ongoing data traffic explosion. However, in non-reciprocal channels, such as those encountered in frequency division duplex (FDD) systems, channel state information (CSI) estimation using a downlink (DL) training sequence is to date a very challenging issue, especially when the channel exhibits a short coherence time. In particular, the availability of sufficiently accurate CSI at the base transceiver station (BTS) allows an efficient precoding design in the DL transmission to be achieved, and thus reliable communication systems can be obtained
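Training-based CSI estimation in its simplest form is a least-squares fit: the receiver observes the known training sequence through the channel and inverts it. The sketch below is the textbook noiseless baseline with small, arbitrary dimensions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, L = 4, 2, 16                      # tx antennas, rx antennas, training length
S = rng.standard_normal((Nt, L))          # known DL training sequence
H = rng.standard_normal((Nr, Nt))         # true channel (unknown to the estimator)
Y = H @ S                                 # received training block (noise-free)
H_hat = Y @ np.linalg.pinv(S)             # least-squares channel estimate
```

With noise and a short coherence time, L cannot be made large, which is precisely why DL training in FDD massive MIMO (large Nt) is the hard case the abstract describes.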
The alternating direction implicit (ADI) method is a common classical numerical method that was first introduced to solve the heat equation in two or more spatial dimensions and can also be used to solve parabolic and elliptic partial differential equations. In this paper, we introduce an improvement to the ADI method to obtain a scheme equivalent to the Crank-Nicolson difference scheme in two dimensions while retaining the main feature of the ADI method. The new scheme can be solved by a similar ADI algorithm with some modifications. A numerical example is provided to support the theoretical results in the research.
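For context, one step of the classic Peaceman-Rachford ADI scheme for the 2D heat equation (the baseline the paper improves on, not the paper's modified scheme) splits the update into two one-dimensional implicit sweeps, each requiring only tridiagonal solves. A minimal sketch with zero Dirichlet boundaries:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on a square
    grid with zero Dirichlet boundaries; r = dt / (2*h**2)."""
    n = u.shape[0]
    a = np.full(n - 2, -r)
    b = np.full(n - 2, 1 + 2 * r)
    c = np.full(n - 2, -r)
    half = np.zeros_like(u)
    for j in range(1, n - 1):   # sweep 1: implicit in x, explicit in y
        d = u[1:-1, j] + r * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        half[1:-1, j] = thomas(a, b, c, d)
    new = np.zeros_like(u)
    for i in range(1, n - 1):   # sweep 2: implicit in y, explicit in x
        d = half[i, 1:-1] + r * (half[i - 1, 1:-1] - 2 * half[i, 1:-1]
                                 + half[i + 1, 1:-1])
        new[i, 1:-1] = thomas(a, b, c, d)
    return new
```

Each sweep costs only O(n) per grid line, which is the "main feature" of ADI that the improved scheme preserves.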
Identifying the region is not an easy process compared with other operations based on attributes or similarity. Nor is it difficult if the identification is based on standardized indicators in its calculation. The latter requires the availability of numerical and relative data for each case in which any indicator or measure is included in the legal process
Purpose: To validate a UV-visible spectrophotometric technique for evaluating niclosamide (NIC) concentration in different media across various pH values. Methods: NIC was investigated using a UV-visible spectrophotometer in acidic buffer solution (ABS) of pH 1.2, deionized water (DW), and phosphate buffer solution (PBS) of pH 7.4. The characterization of NIC was done with differential scanning calorimetry (DSC), powder X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FTIR). The UV analysis was validated for accuracy, precision, linearity, and robustness. Results: The DSC thermogram showed a single endothermic peak at 228.43 °C (corresponding to the melting point of NIC), while XRD and FTIR analysis confirmed the identity
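Linearity validation of a UV method typically rests on a Beer-Lambert calibration line, absorbance A = m·C + b, fitted by least squares, and unknown concentrations are then back-calculated from the line. The sketch below uses synthetic illustrative values, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration standards (ug/mL) and their absorbance readings,
# generated to follow Beer-Lambert linearity exactly for illustration.
conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
absorbance = 0.052 * conc + 0.003

m, b = np.polyfit(conc, absorbance, 1)    # linear calibration fit: slope, intercept
unknown_abs = 0.263                       # reading for an unknown sample
unknown_conc = (unknown_abs - b) / m      # back-calculated concentration
```

In practice the fit's correlation coefficient and the recovery of spiked standards are the figures reported for linearity and accuracy.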
Empirical and statistical methodologies have been established to acquire accurate permeability identification and reservoir characterization, based on the rock type and reservoir performance. The identification of rock facies is usually done either by using core analysis to visually interpret lithofacies or indirectly from well-log data. The use of well-log data for traditional facies prediction is characterized by uncertainties and can be time-consuming, particularly when working with large datasets. Thus, machine learning can be used to predict patterns more efficiently when applied to large data. Taking into account the electrofacies distribution, this work was conducted to predict permeability for the four wells, FH1, FH2, F
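The abstract does not specify which classifier was used, but facies prediction from log features is commonly illustrated with a nearest-neighbour rule: each sample takes the majority label among its k closest training samples in feature space. The features and labels below are synthetic stand-ins, not well-log data.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-NN classifier: assign each test sample the majority
    label among its k nearest training samples (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)        # distances to all training rows
        nearest = y_train[np.argsort(d)[:k]]           # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])          # majority vote
    return np.array(preds)
```

With real logs, the rows of X would be per-depth feature vectors (e.g. gamma ray, density, neutron porosity) and the labels the interpreted electrofacies.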
The research utilizes data produced by the Local Urban Management Directorate in Najaf and imagery data from the Landsat 9 satellite, processed with a GIS tool. The research follows a descriptive and analytical approach; we integrated Markov chain analysis and the cellular automata approach to predict transformations in city structure as a result of changes in land utilization. The research also aims to identify approaches to detect post-classification transformations in order to determine changes in land utilization. To predict the future land utilization in the city of Kufa, and to evaluate data accuracy, we used the Kappa indicator to determine the potential applicability of the probability matrix that resulted from
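The Markov-chain half of such a model projects future land-use shares by repeatedly applying a transition probability matrix (derived from classified images of two dates) to the current class distribution; the cellular-automata half then allocates those shares spatially. A minimal sketch of the projection step, with hypothetical classes and transition values rather than the study's Kappa-validated matrix:

```python
import numpy as np

# Hypothetical land-use classes and transition matrix; row i gives the
# probabilities of a cell in class i moving to each class in one period.
classes = ["urban", "agricultural", "barren"]
P = np.array([[0.95, 0.03, 0.02],
              [0.10, 0.85, 0.05],
              [0.20, 0.10, 0.70]])

state = np.array([0.30, 0.50, 0.20])           # current area shares of the city
future = state @ np.linalg.matrix_power(P, 2)  # projected shares two periods ahead
```

Because each row of P sums to 1, the projected shares remain a valid distribution at every horizon.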