Estimating the semantic similarity between short texts plays an increasingly prominent role in many text-mining and natural language processing applications, especially with the large daily increase in the volume of textual data being produced. Traditional approaches that compute the similarity of two texts from the words they share perform poorly on short texts, because two similar texts may be written with different terms through the use of synonyms. Short texts should therefore be compared semantically. This paper presents a method for measuring the semantic similarity between texts that combines knowledge-based and corpus-based semantic information to build a semantic network representing the relationship between the compared texts, from which the degree of similarity is extracted. Representing a text as a semantic network is a knowledge representation close to the way humans understand text, since the network reflects the sentence's semantic, syntactic, and structural knowledge; the network is a visual representation of knowledge objects, their qualities, and their relationships. The WordNet lexical database is used as the knowledge-based source, while pre-trained GloVe word embedding vectors are used as the corpus-based source. The proposed method was tested on three datasets: DSCS, SICK, and MOHLER. Good results were obtained in terms of RMSE and MAE.
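As a rough illustration of the kind of hybrid scoring described above (not the paper's exact algorithm), the following Python sketch combines a WordNet path similarity with a GloVe cosine similarity at the word level and aggregates the best matches between two short texts. The GloVe file path, the weighting `alpha`, and the aggregation scheme are all assumptions made for illustration.

```python
# Minimal sketch, assuming NLTK with the WordNet corpus installed and a local
# GloVe text file; the combination and aggregation choices are illustrative,
# not taken from the paper.
import numpy as np
from nltk.corpus import wordnet as wn

def load_glove(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

def wordnet_sim(w1, w2):
    # Best path similarity over all synset pairs; 0.0 if either word is unknown.
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def glove_sim(w1, w2, vectors):
    if w1 not in vectors or w2 not in vectors:
        return 0.0
    v1, v2 = vectors[w1], vectors[w2]
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def text_similarity(t1, t2, vectors, alpha=0.5):
    # For each word of one text, take its best combined match in the other,
    # average, then symmetrise by repeating in the opposite direction.
    def directed(a, b):
        best = [max(alpha * wordnet_sim(w, v) + (1 - alpha) * glove_sim(w, v, vectors)
                    for v in b) for w in a]
        return sum(best) / len(best)
    w1, w2 = t1.lower().split(), t2.lower().split()
    return 0.5 * (directed(w1, w2) + directed(w2, w1))

# Example (file name is an assumption):
# vectors = load_glove("glove.6B.300d.txt")
# print(text_similarity("a boy is playing football", "a kid plays soccer", vectors))
```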
The research aims to determine the effectiveness of a training program based on multiple intelligence theory in developing literary thinking among students of the Arabic Language Department at the Ibn Rushd School of Humanities. To achieve the aim of the research, an experimental design with experimental and control groups was adopted; the research community consists of third-stage students of the Arabic Language Department in the Faculty of Education. The research sample consists of (71) students, divided into (35) students in the experimental group and (36) students in the control group. The researcher equalized the two groups on the variables (intelligence, the pre-test of literary thinking, and chronological age in months), and after using the t-test for two independent samples, the
The plethora of emerging radio frequency applications makes the frequency spectrum crowded, and hence detecting a specific application's frequency without distortion is a difficult task.
The goal is to develop a method that mitigates the strongest interferer in the frequency spectrum in order to eliminate this distortion.
This paper presents the application of the proposed tunable 6th-order notch filter to an Ultra-Wideband (UWB) Complementary Metal-Oxide-Semiconductor (CMOS) Low Noise Amplifier (LNA).
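The filter described in the paper is an analog CMOS design, so the sketch below is only a digital stand-in for intuition: it cascades three second-order IIR notch sections (6th order overall) at a tunable centre frequency to suppress the strongest interferer. The sampling rate, interferer frequency, and Q value are placeholders, not values from the paper.

```python
# Hedged digital analogue of a tunable 6th-order notch: three cascaded
# 2nd-order IIR notch biquads centred on the strongest interferer.
import numpy as np
from scipy import signal

def tunable_notch_sos(fs, f_notch, q=30.0, sections=3):
    """Return second-order sections for a (2*sections)-order notch at f_notch."""
    b, a = signal.iirnotch(f_notch, q, fs=fs)   # one 2nd-order notch biquad
    sos = signal.tf2sos(b, a)
    return np.vstack([sos] * sections)          # cascade -> 6th order for sections=3

fs = 20e9                      # assumed sampling rate, Hz
f0 = 5.8e9                     # assumed in-band interferer frequency, Hz
sos = tunable_notch_sos(fs, f0)

t = np.arange(0, 1e-6, 1 / fs)
x = np.random.randn(t.size) + 2.0 * np.sin(2 * np.pi * f0 * t)  # noise + strong tone
y = signal.sosfilt(sos, x)     # the interferer tone is strongly attenuated in y
```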
The aim of the current study was to develop a nanostructured double-layer delivery system for hydrophobic molecules. The developed double layer consisted of a polyethylene glycol (PEG)-based polymeric coating followed by a gelatin sub-coating of the core hydrophobic molecules containing sodium citrate. The polymeric composition ratio of PEG and the amount of sub-coating gelatin were optimized using a two-level fractional factorial design. The nanoparticles were characterized using AFM and FT-IR techniques. The size of these nanocapsules was in the range of 39-76 nm, depending on drug loading concentration. The drug was effectively loaded into the PEG-gelatin nanoparticles (≈47%). The hydrophobic molecules' release characteristics in terms of controlled-release
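For readers unfamiliar with the optimization step mentioned above, the following generic sketch builds a half-fraction of a two-level factorial design. The factor names, coded levels, and generator are placeholders for illustration only and are not the study's actual design.

```python
# Generic two-level design matrix and a half-fraction (2^(3-1)) defined by the
# generator C = A*B; the factors below are hypothetical stand-ins.
import itertools

def two_level_design(factors):
    """Full two-level design with coded levels -1/+1 for each factor."""
    levels = [(-1, +1)] * len(factors)
    return [dict(zip(factors, combo)) for combo in itertools.product(*levels)]

runs = two_level_design(["PEG_ratio", "gelatin_amount", "citrate_conc"])

# Keep only runs where the third coded level equals the product of the first
# two (defining relation I = ABC), i.e. a half-fraction of 4 runs instead of 8.
half_fraction = [r for r in runs
                 if r["PEG_ratio"] * r["gelatin_amount"] == r["citrate_conc"]]
```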
DeepFakes are a concern for celebrities and the public alike because they are simple to create. DeepFake images, especially high-quality ones, are difficult to detect by humans, by local descriptors, and by current approaches. Video manipulation detection, on the other hand, is more accessible than image manipulation detection, and many state-of-the-art systems address it; moreover, video manipulation detection ultimately relies on detection in individual images. Many works have addressed DeepFake detection in images, but they involve complex mathematical calculations in their preprocessing steps and many limitations, including that the face must be frontal, the eyes must be open, the mouth should be open with teeth visible, and so on. Also, the accuracy of their counterfeit detection
The research utilizes data produced by the Local Urban Management Directorate in Najaf and imagery from the Landsat 9 satellite, processed with GIS tools. The research follows a descriptive and analytical approach; we integrated Markov chain analysis with a cellular automata approach to predict transformations in the city's structure resulting from changes in land use. The research also aims to identify post-classification change-detection approaches in order to determine changes in land use. To predict future land use in the city of Kufa and to evaluate data accuracy, we used the Kappa indicator to assess the applicability of the probability matrix that resulted from the Markov chain analysis
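As a minimal sketch of the Markov-chain projection and Kappa check described above (all numbers are illustrative placeholders, not the study's data), a land-use transition probability matrix can be applied to current class proportions, and agreement between a classified map and reference samples scored with Cohen's kappa:

```python
# Hypothetical transition matrix and label arrays; only the mechanics are shown.
import numpy as np
from sklearn.metrics import cohen_kappa_score

classes = ["residential", "agricultural", "vacant"]
# Rows: from-class, columns: to-class (each row sums to 1).
P = np.array([[0.90, 0.02, 0.08],
              [0.15, 0.80, 0.05],
              [0.30, 0.05, 0.65]])

current_share = np.array([0.45, 0.35, 0.20])   # current class proportions
projected_share = current_share @ P            # proportions after one period

# Kappa between classified map pixels and reference samples (flattened labels).
classified = np.array([0, 0, 1, 2, 1, 0, 2, 2])
reference  = np.array([0, 0, 1, 2, 2, 0, 2, 1])
kappa = cohen_kappa_score(reference, classified)
```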
It is found in Ibn Aqeel's commentary (Sharh Ibn Aqeel) on the Alfiyya of Ibn Malik that some linguistic features are attributed to native tribal speakers such as Tamim, Tayy, and others. Sometimes the author says "some Arabs said" without naming the tribe.
As well, he does not refer to "accents" but to "languages" (dialects). In the book, he cites the most important and largest Arab tribes, such as the tribes of Hejaz, Tamim, Hudhayl, Bani Al-Anbar, Tayy, Rabia bin Wael, Bani Khath'am, Au there, Bani Al-Harith, Bani Kalb, Bani Hgim, Zabid, Hamdan, Alia Qais, Bani Amir, and many others. However, the most frequently mentioned were the tribes of Hejaz and Tamim.
Hence comes the importance of the book Sharh Ibn Aqeel in mentioning these Arab tribes