Plagiarism is described as using someone else's ideas or work without their permission. Using lexical and semantic text similarity notions, this paper presents a plagiarism detection system for examining suspicious texts against available sources on the Web. The user can upload suspicious files in PDF or DOCX format. The system searches three popular search engines (Google, Bing, and Yahoo) for the source text and attempts to identify the top five results on the first retrieved page of each engine. The corpus is made up of the downloaded files and the scraped web page text of the search engines' results. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system leverages Jaccard similarity and Term Frequency-Inverse Document Frequency (TF-IDF) techniques, while for semantic plagiarism detection, Doc2Vec and Sentence Bidirectional Encoder Representations from Transformers (SBERT) intelligent text representation models are used. The system then compares the suspicious text to the corpus text. Finally, a generated plagiarism report shows the total plagiarism ratio, the plagiarism ratio from each source, and other details.
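The lexical measures in particular are straightforward to reproduce. Below is a minimal sketch, assuming plain Python strings for the suspicious document and the scraped source texts, of Jaccard and TF-IDF cosine similarity using scikit-learn; the commented lines show an analogous SBERT comparison with the sentence-transformers library (the model name is an assumption, not necessarily the one used by this system).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard_similarity(a: str, b: str) -> float:
    """Lexical overlap between the word sets of two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def tfidf_similarity(suspicious: str, sources: list[str]) -> list[float]:
    """Cosine similarity between the suspicious text and each source under TF-IDF vectors."""
    vectors = TfidfVectorizer().fit_transform([suspicious] + sources)
    return cosine_similarity(vectors[0], vectors[1:])[0].tolist()

# Semantic similarity with SBERT (sentence-transformers); illustrative only:
# from sentence_transformers import SentenceTransformer, util
# model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed model choice
# emb = model.encode([suspicious] + sources, convert_to_tensor=True)
# semantic_scores = util.cos_sim(emb[0], emb[1:])
```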
The present paper discusses one of the most important Russian linguistic features of Arabic origin: Russian lexes denoting religious offices or political and social positions, such as Qadi, Wally, Sultan, Alam, Ruler, Caliph, Amir, Fakih, Mufti, Sharif, Ayatollah, Sheikh, etc. A lexical analysis of two of the most productive and most frequently used Russian lexes of Arabic origin, "Caliph" and "Sheikh", is carried out in the present study. The lexicographic analysis of these words makes it possible to identify controversial issues related to their etymology and semantic development.
The study is conducted using modern Russian and Arabic dictionaries, specifically the Intermediate Lexicon Dictionary
Background: The problem of the difficult gallbladder is not clearly defined and is associated with a real lack of therapeutic approaches that decrease morbidity. Moreover, the difficult gallbladder has been reported as a contributing risk factor for biliary injury owing to the increased difficulty of surgical dissection within Calot's triangle. The aim of this study is to determine the surgical outcomes of open fundus-first cholecystectomy in lowering the rate of lethal intraoperative risks.
Subjects and Methods: Our prospective study was conducted from January 2019 to December 2022 at Ibn Sina specialized hospital, Khartoum, Sudan, on two hundred and fifty-three patients who underwent
The problem of text recognition and its applicability to images captured in the wild has gained significant attention from the computer vision community in recent years. In contrast to the recognition of printed documents, scene text recognition is a challenging problem. Much research has focused on recognizing text extracted from natural scene images, and significant attempts have been made to address this problem in the recent past. However, many of these attempts rely on the availability of strong context, which naturally limits the dictionary. This paper presents a review of recent papers related to scene text
Medical image segmentation is a frequent processing step in medical image understanding and computer-aided diagnosis. In this paper, a development of the range operator for image segmentation is proposed and applied to dermatology infection images. Three different block sizes have been utilized with the range operator and its developed variants to enhance the behavior of the segmentation process for medical images. To exploit the concept of range filtering, extraction of the texture content of the medical image is proposed. Experiments were conducted on different medical images and textures to demonstrate the efficacy of the proposed filter, which produced good results.
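For reference, the classic range operator replaces each pixel with the difference between the local maximum and minimum inside a block centered on it; textured regions produce a high range, smooth skin a low one. Below is a minimal sketch with SciPy, assuming a grayscale image and a simple mean-based threshold; the paper's developed variants and thresholding rule may differ.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def range_filter(image, block_size=3):
    """Local range operator: max - min within a block_size x block_size window."""
    img = image.astype(np.float32)
    return maximum_filter(img, size=block_size) - minimum_filter(img, size=block_size)

def segment_by_range(image, block_size=3, threshold=None):
    """Threshold the range map to separate textured (lesion) areas from smooth background."""
    r = range_filter(image, block_size)
    if threshold is None:
        threshold = r.mean()  # simple data-driven cutoff; an assumption, not the paper's rule
    return r > threshold
```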
Breast cancer is the second deadliest disease infecting women worldwide. For this reason, early detection is one of the most essential steps to overcome it, relying on automatic tools such as artificial intelligence. Medical applications of machine learning algorithms are mostly based on their ability to handle classification problems, including the classification of illnesses or the estimation of prognosis. Before machine learning is applied for diagnosis, it must be trained first. The research methodology evaluates different machine learning algorithms, such as Random Tree, ID3, CART, SMO, C4.5 and Naive Bayes, to find the best-performing training algorithm. The contribution of this research is to test the data set with mis
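A comparison of this kind can be sketched with scikit-learn analogues of the algorithms named above (DecisionTreeClassifier stands in for the CART/C4.5-style trees, SVC for SMO, GaussianNB for Naive Bayes), evaluated here on the public breast cancer dataset bundled with scikit-learn; the paper's own dataset and Weka implementations may differ.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)

# scikit-learn stand-ins for the Weka algorithms named in the abstract
candidates = {
    "CART / C4.5-style tree": DecisionTreeClassifier(random_state=0),
    "SMO (SVM)": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
}

for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```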
The gaps and cracks in an image result from different causes and degrade the image. There are various methods for gap replenishment, along with serious efforts and proposed methodologies to eliminate cracks in diverse directions. In the current research work, a white-crack inpainting system for color images has been introduced. The proposed inpainting system is based on two algorithms: Linear Gaps Filling (LGF) and Circular Gaps Filling (CGF). The quality of the output image depends on several factors, such as pixel tone, the number of pixels in the cracked area and its neighborhood, and the resolution of the image. The quality of the output images of the two methods (linear method: average Peak Signal to Noise Ratio (PS
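As a rough analogue of this style of gap filling, the sketch below replaces near-white crack pixels with the mean of their valid neighbors and includes the PSNR measure used for comparison; the white threshold, window size and fill order are assumptions, and the paper's LGF/CGF neighborhood rules may differ.

```python
import numpy as np

def fill_white_cracks(img, white_thresh=250, win=1):
    """Replace near-white crack pixels with the mean color of non-crack
    neighbours inside a (2*win+1)^2 window (illustrative gap-filling pass)."""
    out = img.astype(np.float32).copy()
    crack = np.all(img >= white_thresh, axis=-1)        # white pixels assumed to form the crack
    h, w = crack.shape
    for y, x in zip(*np.nonzero(crack)):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch, valid = out[y0:y1, x0:x1], ~crack[y0:y1, x0:x1]
        if valid.any():
            out[y, x] = patch[valid].mean(axis=0)       # average of surrounding non-crack pixels
    return out.astype(np.uint8)

def psnr(reference, restored):
    """Peak Signal-to-Noise Ratio, the quality measure cited in the abstract."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```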
Active worms have posed a major security threat to the Internet, and many research efforts have focused on them. This paper is interested in internet worms that spread via TCP, which accounts for the majority of internet traffic. It presents an approach that uses a hybrid of two detection algorithms, behavior-based detection and signature-based detection, to combine the strengths of each. The aim of this study is to provide a solution that detects both ordinary and stealthy worms quickly. The proposal is designed as a distributed, collaborative scheme based on the small-world network model to effectively improve system performance.
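A hybrid rule of this kind can be illustrated in a few lines; the sketch below is only an assumption about how the two detectors might be combined, and the byte signatures and connection-rate threshold are illustrative rather than taken from the paper.

```python
# Hybrid worm detector sketch: flag traffic either on a known payload signature
# (signature-based) or on an anomalous outbound TCP connection rate (behavior-based).
KNOWN_SIGNATURES = {b"\x90\x90\x90\x90", b"GET /default.ida?"}  # illustrative patterns
RATE_THRESHOLD = 50  # new outbound connections per second treated as anomalous (assumed)

def signature_match(payload: bytes) -> bool:
    return any(sig in payload for sig in KNOWN_SIGNATURES)

def behavior_anomalous(conn_per_sec: float) -> bool:
    return conn_per_sec > RATE_THRESHOLD

def is_worm(payload: bytes, conn_per_sec: float) -> bool:
    # Either detector alone raises an alert, so known worms and
    # fast-spreading stealthy worms are both covered.
    return signature_match(payload) or behavior_anomalous(conn_per_sec)
```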
Image classification can be defined as one of the most important tasks in the area of machine learning. Recently, deep neural networks, especially deep convolutional networks such as the Convolutional Neural Network (CNN), have contributed greatly to end-to-end learning, which reduces the need for human-designed features in image recognition. They offer computational models made up of several processing layers for learning data representations with several levels of abstraction. In this work, a pre-trained deep CNN is utilized according to parameters such as filter size, the number of convolution, pooling and fully connected layers, and the type of activation function; the experiment includes 300 images for training and predicts the gender of 100 images using probability measures. Re
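A transfer-learning setup of this kind can be sketched in Keras as below; VGG16 is used here only as a stand-in for the unspecified pre-trained network, and the input size, head layers and data pipeline are assumptions rather than the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained convolutional base with frozen filters, plus a small binary head
# that outputs a gender probability per image.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolution/pooling layers fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability measure for the gender class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / test_ds would hold the 300 training and 100 test images (hypothetical names)
# model.fit(train_ds, epochs=10)
# probs = model.predict(test_ds)
```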
In this review paper, several research studies were surveyed to assist future researchers in identifying available techniques in the field of infectious disease modeling across complex networks. Infectious disease modeling is becoming increasingly important because of the microbes and viruses that threaten people's lives and societies in all respects. Properly representing and analyzing spreading processes has long been a focus of research in many domains, including mathematical biology, physics, computer science, engineering, economics, and the social sciences. This survey first presents a brief overview of previous literature, together with some graphs and equations to clarify modeling in complex networks, the detection of societies
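A common baseline in this literature is a discrete-time SIR process simulated on a synthetic contact network; the sketch below uses networkx with a small-world (Watts-Strogatz) graph, and the infection rate, recovery rate and graph parameters are illustrative, not taken from the surveyed papers.

```python
import random
import networkx as nx

def simulate_sir(G, beta=0.1, gamma=0.05, steps=100, seeds=1):
    """Discrete-time SIR spreading on graph G; returns infected count per step."""
    state = {v: "S" for v in G}
    for v in random.sample(list(G), seeds):
        state[v] = "I"                                  # initial infected nodes
    history = []
    for _ in range(steps):
        new_state = dict(state)
        for v in G:
            if state[v] == "I":
                for u in G.neighbors(v):                # try to infect susceptible neighbours
                    if state[u] == "S" and random.random() < beta:
                        new_state[u] = "I"
                if random.random() < gamma:             # recover with probability gamma
                    new_state[v] = "R"
        state = new_state
        history.append(sum(1 for s in state.values() if s == "I"))
    return history

G = nx.watts_strogatz_graph(n=1000, k=6, p=0.1)         # small-world contact network
infected_over_time = simulate_sir(G)
```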