"Watermarking" is one method in which digital information is buried in a carrier signal;
the hidden information should be related to the carrier signal. There are many different types of
digital watermarking, including traditional watermarking that uses visible media (such as snaps,
images, or video), and a signal may be carrying many watermarks. Any signal that can tolerate
noise, such as audio, video, or picture data, can have a digital watermark implanted in it. A digital
watermark must be able to withstand changes that can be made to the carrier signal in order to
protect copyright information in media files. The goal of digital watermarking is to ensure the
integrity of data, whereas steganography focuses on making information undetectable to humans.
Watermarking doesn't alter the original digital image, unlike public-key encryption, but rather
creates a new one with embedded secured aspects for integrity. There are no residual effects of
encryption on decrypted documents. This work focuses on strong digital image watermarking
algorithms for copyright protection purposes. Watermarks of various sorts and uses were
discussed, as well as a review of current watermarking techniques and assaults. The project shows
how to watermark an image in the frequency domain using DCT and DWT, as well as in the spatial
domain using the LSB approach. When it comes to noise and compression, frequency-domain
approaches are far more resilient than LSB. All of these scenarios necessitate the use of the original
picture to remove the watermark. Out of the three, the DWT approach has provided the best results.
We can improve the resilience of our watermark while having little to no extra influence on image
quality by embedding watermarks in these places.
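As a minimal sketch of the spatial-domain LSB approach described above (our illustration, not the project's exact implementation; it assumes 8-bit grayscale images held as NumPy arrays), a binary watermark can be embedded in the least significant bit of each pixel:

```python
import numpy as np

def embed_lsb(cover: np.ndarray, watermark: np.ndarray) -> np.ndarray:
    """Embed a binary watermark in the least significant bit plane of an
    8-bit grayscale cover image. Both arrays must share the same shape."""
    bits = (watermark > 0).astype(np.uint8)  # binarize the watermark
    return (cover & 0xFE) | bits             # clear each LSB, then set it

def extract_lsb(watermarked: np.ndarray) -> np.ndarray:
    """Read back the embedded bit plane from the LSBs."""
    return watermarked & 0x01

# Tiny round-trip demo with random data
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
assert np.array_equal(extract_lsb(embed_lsb(cover, mark)), mark)
```

Because the watermark lives in the lowest bit plane, even mild noise or compression destroys it, which is exactly why the frequency-domain DCT/DWT schemes fare better.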
Plagiarism is described as using someone else's ideas or work without attribution. Using lexical and semantic notions of text similarity, this paper presents a plagiarism detection system for examining suspicious texts against available sources on the Web. The user can upload suspicious files in PDF or DOCX format. The system queries three popular search engines (Google, Bing, and Yahoo) for the source text and tries to identify the top five results on the first retrieved page of each engine. The corpus is made up of the downloaded files and the scraped web-page text of the search-engine results. The corpus text and the suspicious documents are then encoded as vectors. For lexical plagiarism detection, the system will …
The goal of this research is to develop a numerical model that can be used to simulate the sedimentation process under two scenarios: first, with the flocculation unit in service, and second, with the flocculation unit out of commission. The general equations of flow and sediment transport were solved using the finite difference method and coded in MATLAB. The study found that the removal efficiencies of the coded model and the operational model were very close for each particle-size dataset, with a difference of +3.01%, indicating that the model can be used to predict the removal efficiency of a rectangular sedimentation basin. The study also revealed …
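Purely as an illustrative sketch (the paper's actual governing equations and MATLAB code are not given here, and every parameter below is a hypothetical placeholder), a one-dimensional advection equation with first-order settling removal can be discretized by an explicit upwind finite-difference scheme:

```python
import numpy as np

# Hypothetical basin parameters (not taken from the paper)
L, T = 30.0, 600.0       # basin length [m], simulated time [s]
u, k = 0.05, 2e-3        # flow velocity [m/s], settling removal rate [1/s]
nx = 300
dx = L / nx
dt = 0.5 * dx / u        # satisfies the CFL condition u*dt/dx <= 1

c = np.zeros(nx)         # suspended-sediment concentration along the basin
c_in = 1.0               # normalized inlet concentration

t = 0.0
while t < T:
    # Upwind finite difference for dc/dt + u*dc/dx = -k*c
    c[1:] -= u * dt / dx * (c[1:] - c[:-1]) + k * dt * c[1:]
    c[0] = c_in          # inlet boundary condition
    t += dt

print(f"Predicted removal efficiency: {1.0 - c[-1] / c_in:.1%}")
```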
The aim of this research is to compare traditional and modern methods of obtaining the optimal solution, using dynamic programming and intelligent algorithms to solve project-management problems.
It shows the possible ways in which these problems can be addressed, drawing on a schedule of interrelated, sequential activities. It clarifies the relationships between activities in order to determine the start and end of each activity, the duration and total cost of the project, and the time consumed by each activity, and it identifies the objectives the project pursues through planning, implementation, and monitoring so as to stay within the assessed budget.
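As a hedged illustration of the dynamic-programming side (a generic critical-path forward pass, not the paper's specific model; the activity network below is invented), the earliest start and finish of each activity in a precedence schedule follow a simple recurrence:

```python
from graphlib import TopologicalSorter

# Hypothetical activity network: name -> (duration, predecessor list)
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (2, ["B", "C"]),
}

# Forward pass in topological order: an activity can start only after all
# of its predecessors finish -- the DP recurrence behind the critical path.
earliest_finish = {}
order = TopologicalSorter({a: set(p) for a, (_, p) in activities.items()})
for a in order.static_order():
    duration, preds = activities[a]
    start = max((earliest_finish[p] for p in preds), default=0)
    earliest_finish[a] = start + duration

print("Minimum project duration:", max(earliest_finish.values()))  # 9
```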
Digital audio systems must transmit large amounts of audio information over the most common communication channels, which in turn creates challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It depends on a combined transform-coding pipeline consisting of: i) a bi-orthogonal (9/7-tap) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) a DCT applied to the produced sub-bands to de-correlate the signal; iii) progressive hierarchical quantization of the combined-transform output, followed by traditional run-length encoding (RLE); and iv) LZW coding to generate the final output bitstream.
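A toy sketch of the transform and entropy-coding stages (PyWavelets' "bior4.4", the CDF 9/7 biorthogonal wavelet, and SciPy's DCT are used here as stand-ins; the quantizer below is a plain uniform one, not the paper's progressive hierarchical scheme):

```python
import numpy as np
import pywt
from scipy.fft import dct

rng = np.random.default_rng(1)
signal = rng.standard_normal(1024)   # stand-in for a frame of PCM audio

# i) 9/7 biorthogonal wavelet decomposition into low + high sub-bands
coeffs = pywt.wavedec(signal, "bior4.4", level=3)

# ii) de-correlate each sub-band with a DCT
transformed = [dct(band, norm="ortho") for band in coeffs]

# iii) quantize (uniform step as a placeholder), then run-length encode
quantized = np.concatenate([np.round(b / 0.5).astype(int) for b in transformed])

def run_length_encode(values):
    """Collapse runs of equal values into (value, count) pairs for RLE."""
    out, run = [], 1
    for prev, cur in zip(values, values[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    out.append((values[-1], run))
    return out

pairs = run_length_encode(quantized.tolist())  # iv) would feed these to LZW
print(len(quantized), "coefficients ->", len(pairs), "RLE pairs")
```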
Silver nanoparticles (Ag NPs) were synthesized using an argon-gas plasma jet. The prepared Ag NPs were characterized by Atomic Absorption Spectroscopy (AAS); measurements were performed for exposure times of 15, 30, 45, and 60 s. The results show the lowest nano-silver concentration at the 15 s exposure and the highest concentration at 60 s. UV-VIS spectrometry of the nano-silver at the different plasma exposure times shows the Surface Plasmon Resonance (SPR) peak appearing around 419 nm, with an energy gap of 4.1 eV for the 15 s exposure and 1.6 eV for the 60 s exposure. A Scanning Probe Microscope (SPM) was used to characterize the silver nanoparticles; the average diameter of the nano-silver for the 15 s exposure …
This work deals with the separation of benzene and toluene from a BTX fraction. The separation was carried out by adsorption on molecular sieve zeolite 13X in a fixed bed. The concentrations of benzene and toluene in the influent streams were measured using gas chromatography. The effect of flow rate, in the range 0.77 – 2.0 cm3/min, on benzene and toluene extraction from the BTX fraction was studied: increasing the flow rate decreases the breakthrough and saturation times. The effect of bed height, in the range 31.6 – 63.3 cm, on benzene and toluene adsorption was also studied: increasing the bed height increases the breakpoint values. The effect of benzene concentration in the range 0.0559 – 0.2625 g/…
A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). Compression is based on selecting a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and predictor error. The compressed files contain the LP coefficients and the previous sample, and are very small compared to the original signals. The compression ratio is calculated from the size of th…
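For reference, a compact Levinson-Durbin recursion in Python (a textbook formulation; the variable names and demo frame are ours, not the paper's):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations from the autocorrelation sequence
    r[0..order]; returns LP coefficients a (with a[0] = 1), reflection
    coefficients k, and the final prediction error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / err
        k[i - 1] = ki
        a_prev = a.copy()
        for j in range(1, i):          # symmetric coefficient update
            a[j] = a_prev[j] + ki * a_prev[i - j]
        a[i] = ki
        err *= 1.0 - ki * ki           # prediction error shrinks each order
    return a, k, err

# Demo: 10th-order predictor from a synthetic frame's autocorrelation
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
r = np.array([np.dot(x[: len(x) - lag], x[lag:]) for lag in range(11)])
a, k, err = levinson_durbin(r, 10)
print("reflection coefficients:", np.round(k, 3))
```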
This paper focuses on reducing the time required for text-processing operations by enumerating each string using a multi-hashing methodology. Text analysis is an important subject for any system that deals with strings (sequences of characters from an alphabet) and text processing (e.g., word processors, text editors, and other text-manipulation systems). Many problems arise when performing operations on strings of variable length (e.g., long execution times), owing to the overhead of embedded operations such as symbol matching and conversion. The execution time depends largely on the string's characteristics, especially its length (i.e., the number of characters …
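A simple sketch of the general idea (our illustration, not the paper's algorithm): represent each string as a tuple of integers produced by several independently salted hash functions, so that repeated comparisons become fixed-cost integer comparisons instead of character-by-character matching:

```python
import hashlib

def multi_hash(s: str, salts=(b"h1", b"h2", b"h3")) -> tuple:
    """Enumerate a string as a tuple of 64-bit integers from several
    keyed BLAKE2b hashes; using multiple hashes makes an accidental
    full-tuple collision between distinct strings vanishingly unlikely."""
    data = s.encode("utf-8")
    return tuple(
        int.from_bytes(
            hashlib.blake2b(data, key=salt, digest_size=8).digest(), "big"
        )
        for salt in salts
    )

# Hash once, then compare cheaply many times afterwards
table = {multi_hash(w): w for w in ["alpha", "beta", "gamma"]}
print(multi_hash("beta") in table)   # True
print(multi_hash("delta") in table)  # False
```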