Plagiarism is becoming an increasing problem in academia. It is made worse by the ease with which a wide range of resources can be found on the internet and then copied and pasted. It amounts to academic theft, since the perpetrator has "taken" the work of others and presented it as his or her own. Manual detection of plagiarism by a human being is difficult, imprecise, and time-consuming, because no individual can realistically compare a document against all existing material. Plagiarism is a major problem in higher education, and it can occur in any discipline. Plagiarism detection has been studied in many scientific articles, and recognition methods have been developed using the Plagiarism analysis, Authorship identification, and Near-duplicate detection (PAN) datasets of 2009-2011. According to the researchers, verbatim plagiarism is simply copying and pasting. They then moved on to intelligent plagiarism, which is more challenging to detect since it may involve text alteration, appropriation of other scholars' ideas, and translation into another language that is harder to trace. Other studies have found that plagiarism can obscure the scientific content of publications by swapping words, removing or adding material, or reordering or restructuring the original articles. This article presents a comparative study of plagiarism detection techniques.
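As an illustrative aside, the verbatim (copy-and-paste) case described above is often approached by comparing word n-gram overlap between two documents. The following sketch is a toy example of that idea, not the method of any of the surveyed papers; the function names and the trigram setting are assumptions for illustration.

```python
def ngrams(text, n=3):
    """Split text into a set of word-level n-grams (trigrams by default)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=3):
    """Jaccard overlap of n-gram sets; values near 1.0 suggest verbatim copying."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

src = "the quick brown fox jumps over the lazy dog"
copy = "the quick brown fox jumps over the lazy dog"
print(jaccard_similarity(src, copy))  # identical texts score 1.0
```

Intelligent plagiarism (paraphrase, idea theft, cross-language copying) defeats this kind of surface overlap, which is exactly why the survey distinguishes it from the verbatim case.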
Cohesion is well known as the study of the relationships, whether grammatical and/or lexical, between the different elements of a particular text through the use of what are commonly called 'cohesive devices'. These devices provide connectivity and bind a text together. Moreover, the nature and number of such cohesive devices usually affect the comprehension of that text, in the sense of making it easier to understand. The present study is intended to examine the use of grammatical cohesive devices in relation to narrative techniques. The story of Joseph from the Holy Quran has been selected for examination using Halliday and Hasan's Model of Cohesion (1976, 1989). The aim of the study is to comparatively examine to what extent the type
The territory of Iraq as a whole, and southern Iraq in particular, has encountered rapid desertification and signs of severe land degradation in recent decades. Both natural and anthropogenic factors are responsible for the extent of desertification. Remote sensing data and image analysis tools were employed to identify, detect, and monitor desertification in Basra governorate. Different remote sensing indicators and image indices were applied in order to better identify the development of desertification in the study area, including the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), Salinity Index (SI), Top Soil Grain Size Index (GSI), Land Surface Temperature (LST), Land Surface Soil Moisture (LSM), and La
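Of the indices listed above, NDVI is the most widely used and has a standard definition: (NIR − Red) / (NIR + Red) per pixel. The sketch below computes it on a toy two-band array; the band values and the small epsilon guard against division by zero are illustrative assumptions, not values from the study.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance bands: healthy vegetation reflects strongly in NIR,
# so the top-left pixel scores high while bare soil scores near zero.
nir_band = np.array([[0.8, 0.6], [0.3, 0.1]])
red_band = np.array([[0.1, 0.2], [0.3, 0.1]])
print(ndvi(nir_band, red_band))
```

NDVI near +1 indicates dense vegetation; values near zero or below suggest bare soil or water, which is why declining NDVI over time is used as a desertification signal.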
This study, entitled "The legal framework for the process of monitoring the electoral register (a comparative study between Egypt and Iraq)", shows the importance of safeguarding the right to participate in political life and public affairs, as all electoral legislation in democratic countries is keen on the fairness, integrity, and legitimacy of elections. One of the most important guarantees of this is the existence of effective oversight at every stage of the electoral process, including the preliminary stage. Oversight is the process of collecting and cataloguing information about the electoral process at all of its stages, by following an organized mechanism for collecting information on each stage, which is then used to issue o
Image quality plays a vital role in improving and assessing image compression performance. Image compression represents large image data as a new image with a smaller size suitable for storage and transmission. This paper aims to evaluate the implementation of hybrid techniques based on the tensor product mixed transform. Compression and quality metrics such as compression ratio (CR), rate-distortion (RD), peak signal-to-noise ratio (PSNR), and Structural Content (SC) are utilized for evaluating the hybrid techniques. A comparison between the techniques is then carried out according to these metrics to determine the best one. The main contribution is to improve the hybrid techniques. The proposed hybrid techniques consist of discrete wavel
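Two of the metrics named above have standard definitions worth spelling out: PSNR = 10·log10(MAX² / MSE) in decibels, and CR = uncompressed size / compressed size. The sketch below computes both on toy data; the array contents and byte counts are illustrative assumptions, not results from the paper.

```python
import numpy as np

def psnr(original, compressed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes

orig = np.full((8, 8), 100, dtype=np.uint8)
recon = orig.copy()
recon[0, 0] = 110                  # introduce a single small reconstruction error
print(round(psnr(orig, recon), 2))
print(compression_ratio(64, 16))   # 64 bytes stored in 16 -> CR of 4.0
```

Higher CR with higher PSNR is the goal; the trade-off between the two is what the rate-distortion (RD) analysis in the paper captures.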
Face identification systems have been an active research area in recent years. However, their accuracy and dependability in real-life systems are still questionable. Earlier research in face identification demonstrated that LBP-based face recognition systems are preferred over others and give adequate accuracy. LBP is robust against illumination changes and is considered a high-speed algorithm. Performance metrics for such systems are calculated from time delay and accuracy. This paper introduces an improved face recognition system built using the C++ programming language with the help of the OpenCV library. Accuracy can be increased if a filter or a combination of filters is applied to the images. The accuracy increases from 95.5% (without ap
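For readers unfamiliar with the LBP operator mentioned above: the basic local binary pattern thresholds the 8 neighbors of a pixel against the center value and packs the results into an 8-bit code; histograms of these codes form the face descriptor. The sketch below (in Python rather than the paper's C++/OpenCV, for brevity) shows the per-pixel code for a single 3×3 patch; the bit ordering and sample values are illustrative assumptions.

```python
import numpy as np

def lbp_code(patch):
    """Basic LBP for one 3x3 patch: threshold the 8 neighbors against
    the center pixel and pack the resulting bits clockwise."""
    center = patch[1, 1]
    # Clockwise neighbor order starting at the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # -> 241
```

Because each code depends only on sign comparisons with the center pixel, a uniform brightness change leaves the codes unchanged, which is the source of LBP's illumination robustness noted above.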
Fusion can be described as the process of integrating information from two or more images collected from different sources to form a single composite image. This image is more productive, informative, descriptive, and qualitative than the original input images taken individually. Fusion technology for medical images is useful to physicians for disease diagnosis and robotic surgery. This paper describes different techniques for the fusion of medical images and studies their quality through quantitative statistical analysis, by studying the statistical characteristics of the image targets in the edge regions, studying the differences between the classes in the image, and the calculation
An oil spill is a leakage from pipelines, vessels, oil rigs, or tankers that releases petroleum products into the marine environment or onto land; it can occur naturally or through human action and results in severe damage and financial loss. Satellite imagery is one of the powerful tools currently used for capturing vital information from the Earth's surface, but the complexity and vast amount of data make it challenging and time-consuming for humans to process. With the advancement of deep learning techniques, however, these processes are now automated to extract vital information from real-time satellite images. This paper applied three deep-learning algorithms for satellite image classification
Image fusion is used to gather important data from an array of input images and place it in a single output picture, making it more meaningful and usable than any of the input images alone. Image fusion boosts the quality and applicability of data. The accuracy of the fused image depends on the application. It is widely used in smart robotics, audio-camera fusion, photonics, system control and output, construction and inspection of electronic circuits, complex computer and software diagnostics, and smart line-assembly robots. This paper provides a literature review of different image fusion techniques in the spatial domain and frequency domain, such as averaging, min-max, block substitution, Intensity-Hue-Saturation (IH
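The two simplest spatial-domain rules named in the review, averaging and min-max, can be stated in a few lines. The sketch below is an illustrative toy, with assumed array values, just to make the rules concrete.

```python
import numpy as np

def average_fusion(img_a, img_b):
    """Pixel-wise averaging: the simplest spatial-domain fusion rule."""
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

def min_max_fusion(img_a, img_b, mode="max"):
    """Min-max fusion: keep the brighter (or darker) pixel of the two inputs."""
    op = np.maximum if mode == "max" else np.minimum
    return op(img_a, img_b)

a = np.array([[10, 200], [50, 0]], dtype=np.uint8)
b = np.array([[30, 100], [50, 255]], dtype=np.uint8)
print(average_fusion(a, b))   # [[ 20.  150.] [ 50.  127.5]]
print(min_max_fusion(a, b))   # [[ 30 200] [ 50 255]]
```

Averaging suppresses noise but blurs detail, while max-selection preserves bright features at the cost of noise; frequency-domain methods such as wavelet fusion were developed precisely to get around this trade-off.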
Background: Multiple sclerosis is a prevalent chronic inflammatory demyelinating condition in young adults. It is characterized by white matter involvement, but many individuals also have significant gray matter involvement. A double inversion recovery (DIR) pulse sequence was recently proposed to improve the visibility of multiple sclerosis lesions. Objective: To determine how well DIR, FLAIR, and T2-weighted pulse sequences detect MS lesions in the supratentorial and infratentorial regions. Methods: A total of 37 patients with an established diagnosis of multiple sclerosis were included in this cross-sectional study. Brain MRI was performed using double inversion recovery, T2, and FLAIR sequences. The number of lesions was count
Image pattern classification is considered a significant step for image and video processing. Although various image pattern algorithms proposed so far achieve adequate classification, attaining higher accuracy while reducing computation time remains challenging to date. A robust image pattern classification method is essential to obtain the desired accuracy. Such a method can accurately classify image blocks into plain, edge, and texture (PET) classes using an efficient feature extraction mechanism. Moreover, to date, most existing studies evaluate their methods against specific orthogonal moments, which limits the understanding of their potential application to various Discrete Orthogonal Moments (DOMs). The
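To make the plain/edge/texture (PET) distinction concrete, a common intuition is: plain blocks have low intensity variance, edge blocks concentrate strong gradients in a narrow region, and texture blocks spread strong gradients across the whole block. The sketch below implements that intuition with illustrative thresholds; it is a toy heuristic, not the moment-based feature extraction the paper evaluates.

```python
import numpy as np

def classify_block(block, plain_thr=25.0, edge_ratio=0.5):
    """Toy plain/edge/texture (PET) classifier for one image block.
    Thresholds are illustrative, not taken from the surveyed method."""
    block = block.astype(np.float64)
    if np.var(block) < plain_thr:
        return "plain"
    gy, gx = np.gradient(block)
    mag = np.hypot(gx, gy)
    strong = mag > mag.mean()
    # Edge blocks: strong gradients confined to a small fraction of pixels;
    # texture blocks: strong activity spread across most of the block.
    return "edge" if strong.mean() < edge_ratio else "texture"

flat = np.full((8, 8), 128)
print(classify_block(flat))  # -> plain
step = np.zeros((8, 8)); step[:, 4:] = 255
print(classify_block(step))  # -> edge
```

Moment-based methods replace these hand-set thresholds with DOM coefficients as features, which is what lets them generalize across block content.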