In this paper, an algorithm that can embed more data than conventional spatial-domain methods is introduced. The secret data are first compressed using Huffman coding, and the compressed data are then embedded using a Laplacian sharpening method.
Laplacian filters are used to determine effective hiding places; based on a threshold value, the locations with the highest filter responses are selected for embedding the watermark. The aim of this work is to increase the capacity of the embedded information by using Huffman coding and, at the same time, to increase the security of the algorithm by hiding the data in locations with the strongest edge responses, where changes are less noticeable.
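As a rough illustration of the selection step (not the authors' exact procedure), the Python sketch below assumes the secret data have already been Huffman-compressed into a bit stream and picks embedding positions as the pixels whose Laplacian response exceeds a threshold; the specific kernel, threshold, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Laplacian kernel; the exact mask used by the authors is not specified,
# so this choice (and the fixed threshold below) is only illustrative.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def candidate_positions(cover: np.ndarray, threshold: float) -> np.ndarray:
    """Return (row, col) coordinates whose absolute Laplacian response exceeds
    the threshold, i.e. strong-edge pixels suited for embedding the
    Huffman-compressed bit stream."""
    response = np.abs(convolve(cover.astype(float), LAPLACIAN, mode="nearest"))
    return np.argwhere(response > threshold)
```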
The performance of the proposed algorithm is evaluated using the Peak Signal-to-Noise Ratio (PSNR) to measure distortion, the similarity correlation between the cover image and the watermarked image, and the Bit Error Rate (BER) to measure robustness. The sensitivity of the watermarked image to attacks is also investigated. The attacks applied are Laplacian sharpening, median filtering, salt-and-pepper noise, and rotation. The results show that the proposed algorithm can resist Laplacian sharpening with any sharpening parameter k and, in addition, achieves good results against several of the other attack types.
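For reference, two of the metrics named above can be computed as follows; this is a generic sketch, independent of the paper's specific test images and embedding scheme.

```python
import numpy as np

def psnr(cover: np.ndarray, marked: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio (dB) between the cover and watermarked image."""
    mse = np.mean((cover.astype(float) - marked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def bit_error_rate(sent_bits: np.ndarray, recovered_bits: np.ndarray) -> float:
    """Fraction of watermark bits recovered incorrectly after an attack."""
    return float(np.mean(sent_bits != recovered_bits))
```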
Free-Space Optical (FSO) communication can provide high-speed links when the effect of turbulence is not severe, and Space-Time Block Coding (STBC) is a good candidate for mitigating that severity. This paper proposes a hybrid of Optical Code Division Multiple Access (OCDMA) and STBC in FSO communication as a last-mile solution where access to remote areas is complicated. The main weakness affecting an FSO link is atmospheric turbulence, and STBC is employed in OCDMA to mitigate its effects. The current work evaluates the Bit-Error-Rate (BER) performance of OCDMA operating under the scintillation effect, which is described by the gamma-gamma model. The most obvious finding to emerge from the analysis
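The gamma-gamma turbulence model mentioned above treats the normalized irradiance as the product of two independent gamma-distributed factors (large- and small-scale scintillation). The minimal Monte Carlo sketch below generates such samples and estimates the scintillation index; the parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_gamma_irradiance(alpha: float, beta: float, n: int) -> np.ndarray:
    """Sample normalized irradiance I = X * Y, where X ~ Gamma(alpha, 1/alpha)
    and Y ~ Gamma(beta, 1/beta) are independent unit-mean factors modelling
    large- and small-scale scintillation."""
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return x * y

# Scintillation index estimated from the samples; for the gamma-gamma model
# it should approach 1/alpha + 1/beta + 1/(alpha * beta).
samples = gamma_gamma_irradiance(alpha=4.0, beta=1.9, n=200_000)
scintillation_index = samples.var() / samples.mean() ** 2
```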
Blockchain is an innovative technology that has gained interest across all sectors in the era of digital transformation, as it manages transactions and saves them in a database. With increasing financial transactions and a rapidly developing society of growing businesses, many people pursuing the dream of a better, financially independent life stray from large corporations and organizations to form startups and small businesses. Recently, the increasing demand on employees and institutions to prepare and manage contracts, papers, and the verification process, in addition to human mistakes, has led to the emergence of the smart contract. The smart contract has been developed to save time and provide more confidence while dealing, as well a
Naproxen (NPX)-imprinted polymer liquid electrodes are built using precipitation polymerization. The molecularly imprinted (MIP) and non-imprinted (NIP) polymers were synthesized using NPX as a template. In the precipitation polymerization, styrene (STY) was used as the monomer, N,N-methylenediacrylamide (N,N-MDAM) as the cross-linker, and benzoyl peroxide (BPO) as the initiator. The molecularly imprinted and non-imprinted membranes were prepared using acetophenone (AOPH) and dioctyl phthalate (DOP) as plasticizers in a PVC matrix. The slopes and detection limits of the liquid electrodes ranged from (-18.1, -17.72) mV/decade and (4.0 x 10-
In this research work, a modified DCT descriptor is presented for mosaicking satellite images based on the Abdul Kareem [1] similarity criterion, together with a new method proposed to speed up the mosaicking process. The results of applying the modified DCT descriptor are compared with a mosaicking method that uses the RMSE similarity criterion, and they show that the modified DCT descriptor is a fast and accurate mosaicking method.
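The exact form of the modified descriptor is not reproduced here, but the general idea of a DCT-based block descriptor for matching overlapping regions can be sketched as follows; the block size k and the Euclidean distance are illustrative choices, not the paper's modification.

```python
import numpy as np
from scipy.fft import dctn

def dct_descriptor(block: np.ndarray, k: int = 8) -> np.ndarray:
    """Describe an image block by its k x k lowest-frequency 2-D DCT coefficients."""
    coeffs = dctn(block.astype(float), norm="ortho")
    return coeffs[:k, :k].ravel()

def descriptor_distance(block_a: np.ndarray, block_b: np.ndarray) -> float:
    """Smaller distance -> more similar blocks, i.e. a better mosaic match."""
    return float(np.linalg.norm(dct_descriptor(block_a) - dct_descriptor(block_b)))
```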
In this paper, we present a proposed enhancement of the image compression process using the RLE algorithm. The proposal decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images [1], mostly increases the size of the original image when applied to color images. The enhanced algorithm is tested on a sample of ten 24-bit true-color BMP images; an application built in Visual Basic 6.0 shows the image size before and after compression and computes the compression ratio for both the RLE and the enhanced RLE algorithms.
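The specific enhancement is not detailed in the abstract. For context, baseline RLE, which performs poorly on color images that contain few repeated runs, can be sketched as follows (a minimal Python illustration, not the paper's Visual Basic implementation):

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Baseline run-length encoding: a list of (run length, byte value) pairs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        # Extend the run while the byte repeats (runs capped at 255 for a 1-byte count).
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Invert rle_encode."""
    return b"".join(bytes([value]) * length for length, value in runs)
```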
In recent years, remote sensing applications have attracted great interest because of the many advantages, benefits, and possibilities they offer. Satellites are among the most important remote sensing platforms: they provide multispectral images that allow us to study problems such as changes in the ecological cover or biodiversity of the Earth's surface, and to illustrate the biological diversity of the studied areas by presenting the different regions of the scene according to the characteristic wavelength. Thresholding is a commonly used operation for image segmentation; it seeks to extract a monochrome image from a gray-scale image by segmenting it into two regions (for
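The two-region thresholding described above amounts to the following operation; this is a minimal sketch with a fixed global threshold, since the abstract does not state how the threshold itself is chosen.

```python
import numpy as np

def threshold_segment(gray: np.ndarray, t: int) -> np.ndarray:
    """Split a gray-scale image into two regions: 1 where the pixel value
    exceeds the threshold t, 0 elsewhere (a monochrome / binary image)."""
    return (gray > t).astype(np.uint8)
```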
In this research, an analysis of the behaviour of the standard Hueckel edge detection algorithm is presented using three-dimensional representations of the edge goodness criterion after applying it to a real high-texture satellite image, and the edge goodness criterion is analysed statistically. The Hueckel edge detection algorithm showed a forward exponential relationship between the execution time and the disk radius used. The restrictions Hueckel stated in his papers are adopted in this research. A discussion of the resulting edge shape and malformation is presented, since this is the first practical study applying the Hueckel edge detection algorithm to a real high-texture image containing ramp edges (a satellite image).
A study of the Hueckel edge detector using a binary step-edge image is presented. The standard algorithm that Hueckel presented in his paper is adopted without any alteration. This paper gives a full analysis of the algorithm's efficiency, time consumption, and expected results with respect to the sliding-window size and edge direction, including an analysis of its behaviour as the sliding-window (disk) size changes. The best result is acquired when the window size equals four pixels.