Plagiarism is the use of someone else's ideas or work without proper acknowledgment. Using lexical and semantic notions of text similarity, this paper presents a plagiarism detection system that examines suspicious texts against sources available on the Web. The user can upload suspicious files in PDF or DOCX format. The system queries three popular search engines (Google, Bing, and Yahoo) for the source text and collects the top five results from the first page returned by each engine. The corpus is built from the downloaded files and the text scraped from the result web pages. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system uses Jaccard similarity and Term Frequency-Inverse Document Frequency (TF-IDF) techniques, while for semantic plagiarism detection, the Doc2Vec and Sentence Bidirectional Encoder Representations from Transformers (SBERT) text representation models are used. The system then compares the suspicious text with the corpus text. Finally, a generated plagiarism report shows the total plagiarism ratio, the plagiarism ratio from each source, and other details.
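The two lexical measures named above have simple closed forms. The following is a minimal sketch of both, not this system's implementation; real detectors would add tokenization, stop-word removal, and n-gram shingling, all of which are omitted here:

```python
import math
from collections import Counter

def jaccard_similarity(a: str, b: str) -> float:
    """Lexical overlap between the token sets of two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def tfidf_cosine(docs: list[str], query: str) -> list[float]:
    """Cosine similarity between a query text and each corpus document
    under a simple TF-IDF weighting (log-scaled smoothed IDF)."""
    corpus = [d.lower().split() for d in docs]
    q = query.lower().split()
    all_docs = corpus + [q]
    vocab = {t for doc in all_docs for t in doc}
    n = len(all_docs)
    idf = {t: math.log(n / sum(t in doc for doc in all_docs)) + 1.0
           for t in vocab}

    def vec(doc):
        tf = Counter(doc)
        return {t: tf[t] * idf[t] for t in tf}

    qv = vec(q)
    qn = math.sqrt(sum(w * w for w in qv.values()))
    sims = []
    for doc in corpus:
        dv = vec(doc)
        dot = sum(qv.get(t, 0.0) * w for t, w in dv.items())
        dn = math.sqrt(sum(w * w for w in dv.values()))
        sims.append(dot / (qn * dn) if qn and dn else 0.0)
    return sims
```

A suspicious document scoring high on both measures against one source would contribute to that source's plagiarism ratio in the report.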
A deepfake is media synthesized with artificial intelligence to create convincing image, audio, and video hoaxes; deepfakes concern celebrities and ordinary people alike because they are easy to manufacture. Deepfakes are hard to recognize, both for people and for current detection approaches, especially high-quality ones. As a defense against deepfake techniques, various methods for detecting deepfakes in images have been suggested. Most of them have limitations, such as working with only one face per image, or requiring the face to be frontal with both eyes and the mouth open, depending on which part of the face they analyse. Moreover, few of them consider the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect.
Epilepsy is one of the most common diseases of the nervous system worldwide; it affects all age groups and causes seizures that lead to loss of control for a period of time. This study presents a seizure detection algorithm that uses the Discrete Cosine Transform (DCT) type II to transform the signal into the frequency domain and extracts energy features from 16 sub-bands. An automatic channel selection method is also proposed to select the best subset of the 23 channels based on maximum variance. The data are segmented into frames of one second in length, without overlap between successive frames. A K-Nearest Neighbour (KNN) model is used to classify each frame as either ictal (seizure) or interictal (non-seizure).
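The frame-level pipeline described above can be sketched directly: a DCT type-II of each one-second frame, energies over 16 sub-bands, and a plain KNN vote. This is an illustrative reconstruction from the abstract, not the authors' code; the unnormalized DCT, the value of k, and the Euclidean distance are assumptions:

```python
import numpy as np

def dct2_energy_features(frame: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """DCT type-II of a 1-D signal frame, then energy per frequency sub-band."""
    n = len(frame)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    # Unnormalized DCT-II: X_k = sum_m x_m cos(pi/n * (m + 0.5) * k)
    coeffs = np.cos(np.pi / n * (m + 0.5) * k) @ frame
    bands = np.array_split(coeffs, n_bands)
    return np.array([np.sum(b ** 2) for b in bands])

def knn_predict(train_x: np.ndarray, train_y: np.ndarray,
                x: np.ndarray, k: int = 3) -> int:
    """Plain k-nearest-neighbour majority vote with Euclidean distance."""
    d = np.linalg.norm(train_x - x, axis=1)
    nearest = np.argsort(d)[:k]
    return int(np.bincount(train_y[nearest]).argmax())
```

A low-frequency test tone concentrates its energy in the first sub-band, which is the behaviour the energy features rely on to separate ictal from interictal frames.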
Most intrusion detection systems are signature-based and work much like anti-virus software, but they are unable to detect zero-day attacks. The importance of anomaly-based IDS has risen because of its ability to deal with unknown attacks. However, smart attacks have appeared that compromise the detection ability of anomaly-based IDS. Considering these weak points, the proposed system is developed to overcome them. The proposed system is an extension of the well-known payload anomaly detector (PAYL). By combining two stages with the PAYL detector, it achieves good detection ability with an acceptable false-positive ratio. The proposed system improves the recognition ability of the models in the PAYL detector for filtered, unencrypted traffic.
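PAYL-style detectors model the byte-value distribution of normal payloads and flag payloads that deviate from it. The sketch below shows that core idea only; the smoothing constant and the simplified Mahalanobis distance are assumptions, and the paper's two additional stages are not reproduced here:

```python
import numpy as np

def byte_freq(payload: bytes) -> np.ndarray:
    """Relative frequency of each of the 256 possible byte values."""
    counts = np.bincount(np.frombuffer(payload, dtype=np.uint8), minlength=256)
    return counts / max(len(payload), 1)

class PaylModel:
    """Byte-frequency profile in the spirit of PAYL: store the mean and
    standard deviation of each byte's frequency over normal traffic, then
    score new payloads with a simplified Mahalanobis distance (higher
    score = more anomalous)."""

    def __init__(self, smoothing: float = 0.001):
        self.smoothing = smoothing  # avoids division by zero for rare bytes

    def fit(self, payloads):
        freqs = np.stack([byte_freq(p) for p in payloads])
        self.mean = freqs.mean(axis=0)
        self.std = freqs.std(axis=0)
        return self

    def score(self, payload: bytes) -> float:
        f = byte_freq(payload)
        return float(np.sum(np.abs(f - self.mean) / (self.std + self.smoothing)))
```

A payload that looks like the training traffic scores low, while one dominated by bytes never seen in training (e.g. a NOP sled) scores far higher, which is the signal an anomaly threshold would act on.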
Generally, radiologists analyse Magnetic Resonance Imaging (MRI) by visual inspection to detect and identify tumours or abnormal tissue in brain MR images. The huge number of such MR images makes this visual interpretation process not only laborious and expensive but often erroneous. Furthermore, the sensitivity of the human eye and brain in elucidating such images declines as the number of cases grows, especially when only some slices contain information about the affected area. Therefore, an automated system for the analysis and classification of MR images is essential. In this paper, we propose a new method for abnormality detection from T1-weighted MRI of human head scans using three planes, including the axial and coronal planes.
Texture is an important characteristic for the analysis of many types of images because it provides a rich source of information about the image and a key to understanding the basic mechanisms that underlie human visual perception. In this paper, four statistical texture features (contrast, correlation, homogeneity, and energy) were calculated from the gray-level co-occurrence matrix (GLCM) of equal blocks (30×30) taken from both tumor tissue and normal tissue in three CT-scan images of patients with lung cancer. It was found that the contrast feature is the best for differentiating between textures, while correlation is not suitable for comparison; the energy and homogeneity features of tumor tissue are always greater than their values for normal tissue.
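The four GLCM features named above have standard definitions over the normalized co-occurrence matrix p(i, j). The following numpy sketch uses a single pixel offset and leaves out the 30×30 blocking and any CT-specific gray-level quantization, so it illustrates the definitions rather than the paper's full procedure:

```python
import numpy as np

def glcm(img: np.ndarray, levels: int, dx: int = 1, dy: int = 0) -> np.ndarray:
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(p: np.ndarray) -> dict:
    """Contrast, correlation, homogeneity and energy from a normalized GLCM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": (((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j))
                       if sd_i and sd_j else 1.0,
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
        "energy": (p ** 2).sum(),
    }
```

A perfectly uniform block gives maximal energy and zero contrast, while a rapidly varying block gives high contrast, matching the intuition that contrast separates textures well.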
Pavement crack and pothole identification are important tasks in transportation maintenance and road safety. This study offers a novel technique, based on image processing, for automatic detection of asphalt pavement cracks and potholes. Different types of distress (transverse cracks, longitudinal cracks, alligator cracks, and potholes) can be identified with such techniques. The goal of this research is to evaluate road surface damage by extracting cracks and potholes, categorizing them from images and videos, and comparing the manual and automated methods. The proposed method was tested on 50 images. The results obtained from image processing showed that the proposed method can detect cracks and potholes and identify their severity levels.
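A common first pass for crack extraction is intensity thresholding, since cracks and potholes appear darker than the surrounding pavement. The sketch below illustrates that idea only; the threshold rule and the severity cut-offs are illustrative assumptions, not values from the study:

```python
import numpy as np

def crack_mask(gray: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Flag pixels much darker than the pavement background.
    Threshold = mean - k * std, a simple first pass for dark cracks."""
    t = gray.mean() - k * gray.std()
    return gray < t

def severity(mask: np.ndarray) -> str:
    """Map the crack-pixel ratio to a coarse severity level.
    The cut-off values are illustrative, not from the paper."""
    ratio = mask.mean()
    if ratio < 0.01:
        return "low"
    if ratio < 0.05:
        return "medium"
    return "high"
```

In practice this would be followed by morphological cleanup and shape analysis to separate transverse, longitudinal, and alligator patterns, steps the abstract mentions only at the level of categories.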
Magnetic Resonance Imaging (MRI) is one of the most important diagnostic tools. There are many methods to segment tumors in the human brain. Conventional methods that use pure image processing techniques are not preferred because they need human interaction for accurate segmentation, whereas unsupervised methods do not require any human interference and can segment the brain with high precision. In this project, unsupervised classification methods have been used to detect tumor disease from MRI images.
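A typical unsupervised choice for this task is clustering over pixel intensities, e.g. k-means. The sketch below is a generic illustration of that family of methods, not the specific methods used in the project; the quantile initialization is an assumption made for determinism:

```python
import numpy as np

def kmeans_1d(values: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Plain k-means on 1-D intensities: a minimal stand-in for the
    unsupervised clustering used to separate tissue classes in MRI.
    Returns a cluster label per value."""
    # Deterministic quantile initialization (an assumption, not from the paper)
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each intensity to its nearest center
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        # Move each center to the mean of its assigned intensities
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels
```

On a brain MR slice, the same idea with k set to the number of tissue classes (e.g. background, gray matter, white matter, tumor) yields a label map without any human interaction.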
Ground-penetrating radar (GPR) is a remote sensing method that utilizes electromagnetic waves to detect objects below the surface. Once the data are collected, they can be presented as maps and in 2D and 3D. The GPR method was applied to detect graves at the Tel Alags archaeological site, within the administrative borders of the Rumitha district, which can be challenging. Due to the sensitivity of these sites, the challenge is to explore the subsurface without disturbing the soil. Some cemeteries are hundreds of years old; records are often vague or incomplete, and there may be serious doubt about the precise extent of a cemetery. GPR is the most practical way to sort this out, and the approach at the site was to carry out a detailed grid survey. A Noggin 250
The Normalized Difference Vegetation Index (NDVI) has for many years been widely used in remote sensing for the detection of vegetation land cover. This index uses the radiances of the red channel (0.66 μm reflectance) and the near-IR channel (0.86 μm reflectance). The red channel lies in the region of heavy chlorophyll absorption, while the near-IR channel sits on the high-reflectance plateau of vegetation canopies, so the two channels sense different depths within the canopy. In the present study, a further index for vegetation identification is proposed: the Normalized Difference Vegetation Shortwave Index (NDVSI), defined as the difference between the cubic bands of the near-IR and shortwave infrared (SWIR)
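NDVI itself is a one-line computation over the two bands the text describes. A small sketch (band reflectances are assumed to be pre-calibrated; the proposed NDVSI is omitted because its full definition is truncated above):

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red). Dense green vegetation pushes
    the index toward +1; bare soil and water sit near 0 or below."""
    red = red.astype(float)
    nir = nir.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    mask = denom != 0  # guard against division by zero over dark pixels
    out[mask] = (nir[mask] - red[mask]) / denom[mask]
    return out
```

For a healthy-vegetation pixel with low red reflectance (chlorophyll absorption) and high near-IR reflectance (canopy plateau), the index is strongly positive, which is exactly the contrast the two channel placements are chosen to exploit.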
The Internet of Things (IoT) has become a hot area of research in recent years due to significant advancements in the semiconductor industry and wireless communication technologies, and to the realization of its potential in numerous applications such as smart homes, health care, control systems, and the military. Furthermore, the insufficient security of IoT devices has led to increased cybersecurity risks, such as IoT botnets, which have become a serious threat. Countering this threat requires a model for detecting IoT botnets.
This paper's contribution is to formulate the IoT botnet detection problem and to introduce multiple linear regression (MLR) for modelling IoT botnet features with discriminating capability.
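Multiple linear regression over extracted traffic features reduces to an ordinary least-squares fit. The following is a generic sketch of that modelling step; the feature choice and any thresholding of the fitted response are not specified in the abstract and are left out:

```python
import numpy as np

def fit_mlr(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least-squares fit of y = b0 + b1*x1 + ... + bk*xk.
    Returns the coefficient vector [b0, b1, ..., bk]."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict_mlr(beta: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Apply fitted coefficients to a new feature matrix."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    return A @ beta
```

In a detection setting, features with near-zero fitted coefficients contribute little discriminating capability and could be pruned, which is one way an MLR model supports feature selection.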