Estimating the semantic similarity between short texts plays an increasingly prominent role in many fields related to text mining and natural language processing, especially with the large daily increase in the volume of textual data produced. Traditional approaches for calculating the degree of similarity between two texts, based on the words they share, do not perform well with short texts, because two similar texts may be written in different terms through the use of synonyms. As a result, short texts should be compared semantically. In this paper, a method for measuring the semantic similarity between texts is presented which combines knowledge-based and corpus-based semantic information to build a semantic network that represents the relationship between the compared texts and extracts the degree of similarity between them. Representing a text as a semantic network is a knowledge representation that comes close to the human mind's understanding of texts, where the semantic network reflects the sentence's semantic, syntactic, and structural knowledge. The network representation is a visual representation of knowledge objects, their qualities, and their relationships. The WordNet lexical database has been used as the knowledge-based source, while GloVe pre-trained word-embedding vectors have been used as the corpus-based source. The proposed method was tested on three datasets: DSCS, SICK, and MOHLER. Good results were obtained in terms of RMSE and MAE.
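The abstract's core idea of mixing a corpus-based signal with a knowledge-based one can be illustrated with a toy sketch. This is not the paper's method: the 3-d vectors stand in for GloVe embeddings, the lookup table stands in for a WordNet-derived score, and the mixing weight `alpha` is an invented parameter.

```python
# Illustrative toy only (not the paper's method): combining a corpus-based
# cosine similarity (GloVe-style vectors, here invented 3-d stand-ins) with
# a knowledge-based score (a stand-in for a WordNet-derived similarity).
import math

toy_vectors = {                     # invented stand-ins for GloVe vectors
    "car":  [0.8, 0.1, 0.3],
    "auto": [0.75, 0.15, 0.32],
    "bank": [0.1, 0.9, 0.2],
}
toy_knowledge = {                   # invented stand-ins for WordNet scores
    ("car", "auto"): 1.0, ("car", "bank"): 0.1, ("auto", "bank"): 0.1,
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def combined_similarity(w1, w2, alpha=0.5):
    """Weighted mix of corpus-based and knowledge-based similarity."""
    corpus = cosine(toy_vectors[w1], toy_vectors[w2])
    knowledge = toy_knowledge.get((w1, w2)) or toy_knowledge.get((w2, w1), 0.0)
    return alpha * corpus + (1 - alpha) * knowledge
```

On this toy data the synonym pair ("car", "auto") scores higher than the unrelated pair ("car", "bank"), which is the behavior the combined measure is meant to capture.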
The aim of this research is to examine multiple intelligence test item selection based on Howard Gardner's MI model using the Generalized Partial Credit Model. The researcher adopted a multiple intelligences scale based on Gardner's model, consisting of (102) items across eight sub-scales. The sample consisted of (550) students from Baghdad universities (the University of Technology, Al-Mustansiriyah University, and the Iraqi University) for the academic year (2019/2020). The assumptions of item response theory were verified (unidimensionality, local independence, item characteristic curves, speed factor, and application), and the data were analyzed according to the Generalized Partial Credit Model, and limits
When scheduling rules become incapable of tackling the variety of unexpected disruptions that frequently occur in manufacturing systems, it is necessary to develop a reactive schedule that can absorb the effects of such disruptions. Such a response requires efficient strategies, policies, and methods for controlling production and maintaining high shop performance. This can be achieved through rescheduling, defined as an essential operating function for efficiently handling and responding to uncertainties and unexpected events. The framework proposed in this study consists of rescheduling approaches, strategies, policies, and techniques, and represents a guideline for most manufacturing companies operating
This study seeks to highlight the behavioral approach in organization theory as a modern and effective entrance to constructing this theory, and the extent of its reflection on the behavior of both the producer and the user of accounting and financial information.
The study also focuses on the role of the behavioral approach in consolidating accounting concepts by harmonizing them, so that the accountant can influence user behavior through accounting concepts and principles, in an effort to provide qualitative characteristics of the accounting information produced, consistent with both the accountant's behavior and that of the information user, and its impact on the latter's decision-making process.
The main objective of this research is to use calculus methods to solve retarded integral equations in which the retardation is a function of time; the integral equation used in this research is of the Volterra type.
In this article, a new efficient approach is presented to solve a type of partial differential equation, namely nonlinear, nonhomogeneous (2+1)-dimensional differential equations. The procedure of the new approach is suggested for solving important types of differential equations and obtaining accurate analytic solutions, i.e., exact solutions. The effectiveness of the suggested approach, based on its properties, is compared with other approaches that have been used to solve this type of differential equation, such as the Adomian decomposition method, the homotopy perturbation method, the homotopy analysis method, and the variational iteration method. The advantage of the present method is illustrated by some examples.
Neurolinguistics is a new science that studies the close relationship between language and neuroscience. This interdisciplinary field confirms the functional integration between language and the nervous system, that is, the movement of linguistic information in the brain during reception, acquisition, and production to achieve linguistic communication, because language is in fact a mental process that takes place only through the nervous system. This research shows the benefit of each of these two fields to the other. This science includes important topics such as language acquisition, the linguistic abilities of the two hemispheres of the brain, the linguistic responsibility of the brain centers, and the time limit for language
This paper aims to evaluate the reliability analysis of a steel beam, represented by the probability of failure and the reliability index. The Monte Carlo Simulation Method (MCSM) and the First Order Reliability Method (FORM) are used for this purpose. These methods need two samples for each behavior to be studied: the first for resistance (carrying capacity, R) and the second for load effect (Q), which are the parameters of a limit state function. The Monte Carlo method has been adopted to generate these samples based on the randomness and uncertainties in the variables. The variables considered are the beam cross-section dimensions, material properties, beam length, yield stress, and applied loads. Matlab software has been used
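The Monte Carlo step described above can be sketched in a few lines. This is a generic illustration, not the paper's model: the normal distributions and the mean/standard-deviation values for R and Q are assumptions chosen for the example, and the limit state is the simple g = R - Q.

```python
# Hypothetical sketch: Monte Carlo estimate of the probability of failure
# Pf = P(g < 0) for the limit state g = R - Q, with assumed (not the
# paper's) normal distributions for resistance R and load effect Q.
import random
from statistics import NormalDist

random.seed(42)
N = 200_000
mu_R, sigma_R = 500.0, 50.0   # resistance statistics (assumed values)
mu_Q, sigma_Q = 300.0, 60.0   # load-effect statistics (assumed values)

failures = 0
for _ in range(N):
    R = random.gauss(mu_R, sigma_R)   # sampled resistance
    Q = random.gauss(mu_Q, sigma_Q)   # sampled load effect
    if R - Q < 0:                     # limit state violated
        failures += 1

Pf = failures / N                     # estimated probability of failure
beta = -NormalDist().inv_cdf(Pf)      # reliability index from Pf

# Closed-form check valid for independent normal R and Q:
# beta = (mu_R - mu_Q) / sqrt(sigma_R^2 + sigma_Q^2)
beta_exact = (mu_R - mu_Q) / (sigma_R**2 + sigma_Q**2) ** 0.5
```

For independent normal R and Q the simulated reliability index should agree closely with the closed-form value, which is the usual sanity check before moving to non-normal variables.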
Abstract: Word sense disambiguation (WSD) is a significant field in computational linguistics, as it is indispensable for many language-understanding applications. Automatic processing of documents is made difficult by the fact that many of the terms they contain are ambiguous. WSD systems try to resolve these ambiguities and find the correct meaning. Genetic algorithms can be applied to this problem, since they have been effectively applied to many optimization problems. In this paper, a genetic algorithm is proposed to solve the word sense disambiguation problem by automatically selecting the intended meaning of a word in context without any additional resources. The proposed algorithm is evaluated on a collection
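The idea of a genetic algorithm searching over sense assignments can be illustrated with a minimal sketch. This is not the paper's algorithm: the two-word sense inventory, the gloss-overlap (Lesk-style) fitness, and the selection/crossover/mutation choices are all invented for the example.

```python
# Illustrative sketch only (not the paper's algorithm): a minimal genetic
# algorithm that picks one sense per ambiguous word; fitness is the
# gloss-word overlap between the chosen senses (a Lesk-style score).
import random

random.seed(0)

# Toy sense inventory: word -> list of glosses (sets of words). Invented data.
senses = {
    "bank":    [{"money", "deposit", "account"}, {"river", "slope", "water"}],
    "deposit": [{"money", "bank", "payment"}, {"mineral", "layer", "rock"}],
}
words = list(senses)

def fitness(chromosome):
    # Sum pairwise gloss overlaps between the selected senses.
    glosses = [senses[w][s] for w, s in zip(words, chromosome)]
    return sum(len(a & b) for i, a in enumerate(glosses) for b in glosses[i + 1:])

def mutate(c):
    i = random.randrange(len(c))
    c = list(c)
    c[i] = random.randrange(len(senses[words[i]]))   # re-pick one sense
    return tuple(c)

def crossover(a, b):
    cut = random.randrange(1, len(a))                # one-point crossover
    return a[:cut] + b[cut:]

# Random initial population of sense assignments, one index per word.
pop = [tuple(random.randrange(len(senses[w])) for w in words) for _ in range(10)]
for _ in range(20):                                  # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                                # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(pop) - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
```

With this toy data, the money-related senses (index 0 for both words) are the only assignment with a nonzero overlap, so the GA should converge to them.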
OpenStreetMap (OSM), recognised for its current and readily accessible spatial database, frequently serves regions lacking precise data at the necessary granularity. Global collaboration among OSM contributors presents challenges to data quality and uniformity, exacerbated by the sheer volume of input and indistinct data-annotation protocols. This study presents a methodological improvement in the spatial accuracy of OSM datasets centred on Baghdad, Iraq, utilising data derived from OSM services and satellite imagery. An analytical focus was placed on two geometric correction methods: a two-dimensional polynomial affine transformation and a two-dimensional polynomial conformal transformation. The former involves twelve coefficients for ad
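A twelve-coefficient polynomial correction of the kind mentioned above can be sketched as a least-squares fit over control points. This is an assumption-laden illustration, not the study's procedure: it assumes the twelve coefficients are the six second-order terms per axis, and the control-point coordinates are invented so the fit can be checked against a known answer.

```python
# Hypothetical sketch: fitting a 2D second-order polynomial transformation
# (six coefficients per axis, twelve in total) to control points by least
# squares. Control-point coordinates below are invented for illustration.
import numpy as np

# (x, y): source coordinates (e.g. OSM); (X, Y): reference coordinates
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2], [2, 2]], float)
x, y = src[:, 0], src[:, 1]

# Simulate reference coordinates produced by a known polynomial,
# so the fit below can be checked against the true coefficients.
X = 1.0 + 2.0 * x + 0.5 * y + 0.1 * x * y + 0.05 * x**2 + 0.02 * y**2
Y = -0.5 + 0.3 * x + 1.5 * y + 0.2 * x * y + 0.01 * x**2 + 0.03 * y**2

# Design matrix of second-order terms: [1, x, y, xy, x^2, y^2]
A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef_x, *_ = np.linalg.lstsq(A, X, rcond=None)   # six coefficients for X
coef_y, *_ = np.linalg.lstsq(A, Y, rcond=None)   # six coefficients for Y

# Root-mean-square residual over the control points
rmse = np.sqrt(np.mean((A @ coef_x - X) ** 2 + (A @ coef_y - Y) ** 2))
```

With seven well-spread control points and six unknowns per axis, the system is overdetermined and the residual RMSE over the control points is the natural accuracy metric for comparing correction methods.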