Plagiarism is defined as using someone else's ideas or work without permission. Using lexical and semantic notions of text similarity, this paper presents a plagiarism detection system for checking suspicious texts against sources available on the Web. The user can upload suspicious files in PDF or DOCX format. The system queries three popular search engines (Google, Bing, and Yahoo) for the source text and takes the top five results from the first page returned by each engine. The corpus consists of the downloaded files and the text scraped from the result web pages. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system leverages Jaccard similarity and Term Frequency-Inverse Document Frequency (TF-IDF); for semantic plagiarism detection, it uses the Doc2Vec and Sentence Bidirectional Encoder Representations from Transformers (SBERT) text representation models. The system then compares the suspicious text against the corpus text. Finally, a generated plagiarism report shows the total plagiarism ratio, the plagiarism ratio from each source, and other details.
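As a minimal sketch of the two lexical measures named above, the snippet below computes Jaccard similarity over word sets and TF-IDF cosine similarity with scikit-learn; the sample texts and variable names are illustrative assumptions, not the system's actual corpus or code.

```python
# Hypothetical illustration of the lexical similarity measures; the texts are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard similarity over word sets: |A intersect B| / |A union B|."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b) if set_a | set_b else 0.0

def tfidf_similarities(suspicious: str, sources: list[str]) -> list[float]:
    """Cosine similarity between the TF-IDF vector of the suspicious text and each source."""
    vectors = TfidfVectorizer().fit_transform([suspicious] + sources)
    return cosine_similarity(vectors[0], vectors[1:])[0].tolist()

suspicious_text = "the quick brown fox jumps over the lazy dog"
corpus = ["a quick brown fox leaped over a lazy dog",
          "an entirely unrelated sentence about databases"]
print(jaccard_similarity(suspicious_text, corpus[0]))
print(tfidf_similarities(suspicious_text, corpus))
```

A semantic counterpart would replace the TF-IDF vectors with Doc2Vec or SBERT embeddings before the same cosine comparison.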
Developed countries face many challenges in converting large areas of existing services to electronic form, owing to the current nature of workflows and the equipment used to deliver such services. Electricity bill collection, for instance, still tends to rely on traditional approaches (paper-based and dependent on human interaction), making it comparatively time-consuming and prone to human error.
This research aims to recognize the numbers on mechanical electricity meters and convert them to digital figures using Optical Character Recognition (OCR) in Matlab. The research uses the location of the red region in color images of electricity meters to determine the crop region that contains the meter digits, then …
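A hedged sketch of the red-region cropping step described above, written with OpenCV in Python rather than Matlab; the HSV thresholds and file names are assumptions for illustration, not the paper's values.

```python
# Hypothetical red-region localization; thresholds and file names are placeholders.
import cv2

image = cv2.imread("meter.jpg")                       # a color electricity meter photo
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so two hue ranges are combined.
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

# The bounding box of the largest red blob marks the digit window to crop.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.imwrite("digits.png", image[y:y + h, x:x + w])  # cropped region for OCR
```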
Human action recognition has gained popularity because of its wide applicability, such as in patient monitoring systems, surveillance systems, and a wide variety of systems that involve interactions between people and electrical devices, including human-computer interfaces. The proposed method includes sequential stages of object segmentation, feature extraction, action detection, and then action recognition (a schematic sketch of this sequence is given below). Obtaining effective results for human actions from different features of unconstrained videos is a challenging task due to camera motion, cluttered backgrounds, occlusions, the complexity of human movements, and the variety of ways the same action is performed by distinct subjects. Thus, the proposed method overcomes such problems by using the fusion of features …
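The following schematic sketch illustrates only the first two stages of the pipeline named above (object segmentation, then per-frame feature extraction); every function body, parameter, and file name here is a placeholder assumption, not the paper's implementation.

```python
# Hypothetical pipeline skeleton; the segmenter and descriptor are stand-ins.
import cv2
import numpy as np

segmenter = cv2.createBackgroundSubtractorMOG2()       # one common segmentation choice

def extract_features(mask: np.ndarray) -> np.ndarray:
    """Placeholder descriptor: mean foreground profile along rows and columns."""
    return np.concatenate([mask.mean(axis=0), mask.mean(axis=1)]) / 255.0

def process_video(path: str) -> list[np.ndarray]:
    capture = cv2.VideoCapture(path)
    features = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = segmenter.apply(frame)                  # object segmentation
        features.append(extract_features(mask))        # feature extraction
    capture.release()
    return features                                    # input to detection/recognition
```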
Universal image steganalysis has become an important problem due to the curse of dimensionality of natural-image features. Deep neural networks, especially deep convolutional networks, have been widely used for the design of universal image steganalysis. This paper describes the effect of selecting a suitable value for the number of levels during image pre-processing with the Dual-Tree Complex Wavelet Transform; this value may significantly affect the detection accuracy obtained when evaluating the performance of the proposed system. The proposed system is evaluated using three content-adaptive embedding methods, namely Highly Undetectable steGO (HUGO), Wavelet Obtained Weights (WOW), and UNIversal WAvelet Relative Distortion (UNIWARD). The obtained …
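A minimal sketch of the pre-processing step whose level count is studied above, assuming the Python dtcwt package; the candidate level values and the random stand-in image are illustrative, not the paper's chosen setting.

```python
# Hypothetical DT-CWT decomposition at several candidate depths.
import dtcwt
import numpy as np

image = np.random.rand(256, 256)          # stand-in for a cover/stego image
transform = dtcwt.Transform2d()

for nlevels in (2, 3, 4):                 # candidate numbers of levels
    pyramid = transform.forward(image, nlevels=nlevels)
    # pyramid.lowpass is the coarse residual; pyramid.highpasses holds the six
    # complex oriented subbands per level that would feed the CNN detector.
    print(nlevels, pyramid.lowpass.shape, [h.shape for h in pyramid.highpasses])
```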
Background: Nowadays, environmentally friendly procedures must be developed to avoid using harmful compounds in synthesis methods. Interest in creating and studying silver nanoparticles (AgNPs) is increasing because of their numerous applications in many fields, especially medical fields such as burn and wound healing, dental and bone implants, and antibacterial, antiviral, antifungal, and anti-arthropod activities. Biosynthesis of nanoparticles mediated by pigments has been widely used to produce antimicrobial agents against microorganisms. In this work, silver nanoparticles were synthesized using melanin from a locally isolated Pseudomonas aeruginosa and used as an antimicrobial agent against pathogenic microorganisms. Aim of the study: Isolation of Pseudomonas aeruginosa …
In this research, we present the signature as a key for biometric authentication. We use moment invariants as a tool to decide whether a given signature belongs to a particular person. Eighteen volunteers provided 108 signatures as samples to test the proposed system, with six samples taken from each person. Moment invariants are used to build the feature vectors stored in the system. A Euclidean distance measure computes the distance between a person's signature stored in the system and a newly acquired sample from the same person in order to decide whether the new signature is genuine. Each signature is acquired by scanner in JPG format at 300 DPI. Matlab was used to implement the system.
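A hedged sketch of the moment-invariant feature vector and Euclidean-distance decision described above, using Hu's seven moment invariants as one common realization; the file names and the decision threshold are assumptions for illustration.

```python
# Hypothetical signature comparison; paths and threshold are placeholders.
import cv2
import numpy as np

def moment_feature_vector(path: str) -> np.ndarray:
    """Load a 300-DPI signature scan and return its seven Hu moment invariants."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Log-scale the moments so distances are comparable across orders.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

stored = moment_feature_vector("enrolled_signature.jpg")  # sample saved in the system
query = moment_feature_vector("new_signature.jpg")        # newly acquired sample

distance = np.linalg.norm(stored - query)                 # Euclidean distance
print("accepted" if distance < 0.5 else "rejected")       # threshold is an assumption
```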
The present research deals with sentence fields (Satzfelder), especially the middle field (Mittelfeld) of the sentence, in German and Arabic. The research begins with the definition of the sentence and of sentence constituents, so that these become clear and the fields of the sentence can then be properly identified. The first section also covers the middle field of the sentence and how this field can be recognized and delimited. The research further investigates whether Arabic has the same structure as German, e.g., in sentence formation and in sentence fields with respect to the middle field.
The second section concerns the Arabic part and treats the parts of speech in Arabic, as well as the sentence and its types (nominal sentence, verbal sentence, and semi-sentence).
After that, …
Translation is a dynamic, living process that cannot be considered equal to the original text; it requires the appropriate structure, language, thought, and culture of the target language, and the translator's intellectual, linguistic, and cultural influences inadvertently penetrate the translated text, causing heterogeneity between the target text and the source text.
Ladmiral's theory attempts to help by providing components and suggested approaches for resolving these inconsistencies. Meanwhile, in addition to the task of putting words together, the translator must sometimes sit in the reader's position and judge and evaluate the translated text in order to understand its shortcomings and try to correct it …
In this study, the stress-strength model R = P(Y < X < Z) is discussed as an important part of reliability systems, under the assumption that the random variables follow the Inverse Rayleigh distribution. Several traditional estimation methods are used to estimate the parameters, namely maximum likelihood, the method of moments, the uniformly minimum variance unbiased estimator, and a shrinkage estimator using three types of shrinkage weight factors. Monte Carlo simulation is used to compare the estimation methods based on the mean squared error criterion.
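As a minimal illustration, the sketch below estimates R = P(Y < X < Z) by Monte Carlo simulation, sampling the Inverse Rayleigh distribution with CDF F(x) = exp(-θ/x²) via the inverse-transform method; the scale parameters are illustrative assumptions, not the paper's simulation settings.

```python
# Hypothetical Monte Carlo estimate of the stress-strength reliability R.
import numpy as np

rng = np.random.default_rng(0)

def inverse_rayleigh(theta: float, size: int) -> np.ndarray:
    """Inverse-transform sampling: u = exp(-theta / x^2) gives x = sqrt(-theta / ln u)."""
    u = rng.uniform(size=size)
    return np.sqrt(-theta / np.log(u))

n = 1_000_000
x = inverse_rayleigh(2.0, n)   # strength
y = inverse_rayleigh(1.0, n)   # lower stress
z = inverse_rayleigh(4.0, n)   # upper stress

r_hat = np.mean((y < x) & (x < z))
print(f"Monte Carlo estimate of R: {r_hat:.4f}")
```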
In our research, we dealt with one of the most important issues in linguistic studies of the Holy Qur'an: words that are close in meaning, which some believe to be synonyms but which are not considered synonyms in the Arabic language because of the subtle differences between them. Synonyms in the Arabic language are very few, indeed rare, and in the Holy Qur'an they are entirely absent. We examine how these words that are close in meaning were rendered in Almir Kuliev's Russian translation of the Holy Qur'an.
In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are obtained using feature detection/extraction techniques together with feature description. In computer vision, features denote informative data. The human eye extracts information from a raw image effortlessly, but a computer cannot recognize image information directly; this is why various feature extraction techniques have been introduced and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
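As a brief concrete example from one of the surveyed categories, the sketch below detects and describes ORB features with OpenCV; the image file name is a placeholder.

```python
# Hypothetical ORB feature extraction on a placeholder image.
import cv2

image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)

# Detect keypoints (informative locations) and compute their binary descriptors.
keypoints, descriptors = orb.detectAndCompute(image, None)
print(f"{len(keypoints)} keypoints, descriptor matrix shape: {descriptors.shape}")
```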