Estimating the semantic similarity between short texts plays an increasingly prominent role in many text mining and natural language processing applications, especially with the large daily increase in the volume of textual data. Traditional approaches that calculate the degree of similarity between two texts from the words they share do not perform well on short texts, because two similar texts may be written with different terms through the use of synonyms. Short texts should therefore be compared semantically. In this paper, a method for measuring semantic similarity between texts is presented which combines knowledge-based and corpus-based semantic information to build a semantic network that represents the relationship between the compared texts and extracts the degree of similarity between them. Representing a text as a semantic network is a knowledge representation that comes close to the human mind's understanding of text, since the network reflects a sentence's semantic, syntactic, and structural knowledge. The network is a visual representation of knowledge objects, their qualities, and their relationships. The WordNet lexical database is used as the knowledge-based source, while pre-trained GloVe word embedding vectors are used as the corpus-based source. The proposed method was tested on three datasets: DSCS, SICK, and MOHLER. Good results were obtained in terms of RMSE and MAE.
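The abstract does not describe the combination step in detail; as a rough, hypothetical sketch of fusing a knowledge-based score with a corpus-based score, the snippet below uses a tiny hand-made synonym set and word vectors as stand-ins for WordNet and GloVe (all values and the `alpha` weight are illustrative assumptions, not the paper's actual data or method):

```python
import math

# Made-up stand-ins: a real system would query WordNet synsets and
# load real GloVe vectors. These toy values are assumptions.
EMBEDDINGS = {
    "car":    [0.90, 0.10, 0.00],
    "auto":   [0.85, 0.15, 0.05],
    "fast":   [0.10, 0.90, 0.20],
    "quick":  [0.12, 0.88, 0.22],
    "banana": [0.00, 0.10, 0.95],
}
SYNONYMS = {frozenset({"car", "auto"}), frozenset({"fast", "quick"})}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def corpus_score(t1, t2):
    """Corpus-based score: cosine similarity of averaged word vectors."""
    def mean_vec(words):
        vecs = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
        return [sum(c) / len(vecs) for c in zip(*vecs)] if vecs else []
    return cosine(mean_vec(t1), mean_vec(t2))

def knowledge_score(t1, t2):
    """Knowledge-based score: fraction of t1 words matched in t2,
    either exactly or through a synonym pair."""
    hits = sum(1 for w in t1
               if w in t2 or any(frozenset({w, v}) in SYNONYMS for v in t2))
    return hits / len(t1)

def combined_similarity(t1, t2, alpha=0.5):
    """Weighted blend of the two sources (alpha is an assumed weight)."""
    return alpha * knowledge_score(t1, t2) + (1 - alpha) * corpus_score(t1, t2)
```

With these toy values, two sentences phrased entirely in synonyms score far higher than an unrelated pair, which is the behavior the abstract attributes to semantic (rather than lexical-overlap) comparison.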
In this research, a technique is proposed to enhance the performance of the frame difference technique for extracting moving objects from a video file. One of the main causes of performance degradation is the presence of noise, which may lead to incorrect identification of moving objects; it was therefore necessary to find a way to diminish the effect of this noise. Traditional average and median spatial filters can handle such situations, but this work focuses on the spectral domain, using Fourier and wavelet transforms to reduce the noise effect. Experiments and statistical features (entropy, standard deviation) showed that these transforms can overcome such problems in an elegant way.
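For orientation, a minimal sketch of the baseline pipeline the abstract refers to — frame differencing plus the traditional spatial median filter it compares against (the paper's actual contribution, Fourier/wavelet denoising, is not shown here; frames are small nested-list grayscale images and the threshold is an assumed value):

```python
from statistics import median

def frame_difference(prev, curr, threshold=30):
    """Binary motion mask: 1 where the absolute pixel change exceeds threshold."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

def median_filter_3x3(img):
    """Traditional spatial median filter (the baseline the paper improves on):
    each pixel becomes the median of its 3x3 neighborhood, suppressing
    salt-and-pepper noise that would otherwise trigger false motion."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - 1), min(h, y + 2))
                      for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = int(median(window))
    return out
```

Differencing the filtered frames instead of the raw ones removes isolated noise pixels from the motion mask while keeping genuinely moving regions; the spectral-domain transforms in this work play the same denoising role as the median filter here.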
This century is witnessing changes in various fields, which have become a challenge to enterprises in the contemporary business environment. The most important of these is the measurement and disclosure of the role of knowledge capital in the transition to the knowledge economy, in which land, labor, and physical capital are no longer the basic resources. Knowledge-based capital has emerged that provides the enterprise with an area of excellence and enhances its position in achieving competitive advantage. The research tackles the concept, objectives, components, and importance of knowledge capital, which is fundamental to knowledge management, creating added value and enhancing competitiveness, as well as an
The research is concerned with studying the characteristics of Sustainable Architecture and Green Architecture as a general research methodology related to the specific field of architecture, based on the differentiation between two generic concepts, Sustainability and Greening, which form the framework of the research's specific methodology, since the two concepts appear to overlap heavily for research centers, individuals, and relevant organizations. In this regard, the research examines their characteristics and clearly differentiates between the two terms, particularly in architecture, seeking to understand sustainable and green architectures, how close or how far apart they are, and the
Conversation analysis has long been the concern of many linguists who work in the field of discourse analysis. Although much research has been done on short stories, to the researcher's knowledge the selected short stories have not yet been investigated. Hence, this paper aims at answering the following questions: what are the features of the language of children's short stories, and what are the differences between short stories for four-year-olds and those for six-year-olds? The devices used by the storytellers in reciting the short stories should therefore be observed. Thus, the researcher has consulted the models presented by Johnson and Fillmore (2010) to show tenses and sentence structure
Crime is unlawful activity of any kind and is punished by law. Crimes have an impact on a society's quality of life and economic development. With a large rise in crime globally, there is a need to analyze crime data to bring down the crime rate. This helps the police and the public take the required measures and restrict crime more effectively. The purpose of this research is to develop predictive models that can aid in crime pattern analysis and thus support the Boston department's crime prevention efforts. The geographical location factor has been adopted in our model because it is an influential factor in several situations, whether traveling to a specific area or living
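The abstract does not specify which predictive model is used; purely as an illustration of how a geographical location factor can feed a predictive model, the sketch below classifies a point by the crime labels of its nearest recorded incidents (the coordinates and labels are invented for the example, not Boston data):

```python
import math
from collections import Counter

def knn_predict(train, point, k=3):
    """Predict the likely crime type at a (lat, lon) point by majority
    vote of its k nearest training incidents.
    train: list of ((lat, lon), label) pairs with invented coordinates."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], point))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Distance-based voting like this is one simple way location can influence a prediction; real systems would combine location with time, incident type, and other features.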
The continuous advancement in the use of the IoT has greatly transformed industries, but at the same time it has left IoT networks vulnerable to highly advanced cybercrime. Traditional security measures for the IoT have several limitations; protecting distributed and adaptive IoT systems requires new approaches. This research presents novel threat intelligence for IoT networks based on deep learning that maintains compliance with IEEE standards. The goal of the study is to interweave artificial intelligence with standardization frameworks, thereby improving the identification, protection against, and mitigation of cyber threats affecting IoT environments. The study is systematic and begins by examining IoT-specific threats
In this paper, we investigate the automatic recognition of emotion in text. We perform experiments with a new classification method based on the PPM character-based text compression scheme. These experiments involve both coarse-grained classification (whether a text is emotional or not) and fine-grained classification, such as recognising Ekman's six basic emotions (anger, disgust, fear, happiness, sadness, surprise). Experimental results with three datasets show that the new method significantly outperforms traditional word-based text classification methods. The results show that the PPM compression-based classification method is able to distinguish with high accuracy between emotional and non-emotional text, and between texts involving
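The core idea of compression-based classification can be sketched without a PPM implementation: a text is assigned to the class whose training corpus compresses it most cheaply, since the extra bytes needed to compress corpus-plus-text over corpus-alone approximate the cross-entropy of the text under that class's model. The sketch below substitutes `zlib` (DEFLATE) for PPM and uses invented toy corpora; it illustrates the principle, not the paper's actual compressor or data:

```python
import zlib

def compressed_size(text: str) -> int:
    """Size in bytes of the zlib-compressed UTF-8 text."""
    return len(zlib.compress(text.encode("utf-8"), 9))

def classify_by_compression(text: str, class_corpora: dict) -> str:
    """Assign text to the class whose corpus compresses it best.
    class_corpora maps a class label to its training corpus string."""
    def extra_bytes(corpus: str) -> int:
        # Bytes added by appending the text to an already-compressed corpus:
        # small when the compressor can reuse patterns learned from the corpus.
        return compressed_size(corpus + " " + text) - compressed_size(corpus)
    return min(class_corpora, key=lambda label: extra_bytes(class_corpora[label]))
```

With a character-level model such as PPM in place of `zlib`, the same scheme needs no tokenization at all, which is one reason it transfers well across domains and languages.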
Research Summary
The research highlights the importance of assessing the demand-for-money function in Iraq by understanding the relationship between it and the variables affecting it, examining the stability of this function and the extent of its influence on the Iraqi dinar exchange rate, in order to determine its contribution to the monetary policies of the Iraqi economy. It also studies the behavior of the demand-for-money function in Iraq, analyzing the determinants of the demand for money for the period 1991–2013 and the impact of these determinants on the demand for money in Iraq.
The problem we face is how to estimate the total demand for money in
Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison with English, only a few studies have been done on categorizing and classifying the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because Arabic is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of work by researchers over the last five years based on the dataset, year, algorithms, and the accuracy they achieved.
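The three phases named above — preprocessing, feature extraction, and classification — can be sketched end to end with a minimal multinomial Naive Bayes classifier. This is a generic illustration, not any surveyed system: the toy English training documents are invented, and a real Arabic pipeline would add language-specific preprocessing (stripping diacritics, normalizing alef and taa marbuta variants, stemming):

```python
import math
from collections import Counter

def preprocess(text):
    """Phase 1: lowercase and whitespace-tokenize (Arabic pipelines would
    also normalize letter variants and remove diacritics here)."""
    return text.lower().split()

def train_nb(docs):
    """Phase 2: extract bag-of-words counts per class for Naive Bayes.
    docs: list of (text, label) pairs."""
    counts, totals, priors, vocab = {}, {}, Counter(), set()
    for text, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(preprocess(text))
    for label, c in counts.items():
        totals[label] = sum(c.values())
        vocab |= set(c)
    return counts, totals, priors, vocab

def classify_nb(text, model):
    """Phase 3: pick the class maximizing the log-posterior,
    with Laplace smoothing for words unseen in a class."""
    counts, totals, priors, vocab = model
    n_docs = sum(priors.values())
    best, best_score = None, -math.inf
    for label in counts:
        score = math.log(priors[label] / n_docs)
        for w in preprocess(text):
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best
```

The same three-phase skeleton underlies most of the surveyed approaches; they differ mainly in the preprocessing depth, the feature representation, and the classifier plugged into the final phase.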