Communication is a vast and rapidly growing field of engineering, in which improving the efficiency of communication by overcoming external electromagnetic sources and noise is considered a challenging task. To achieve confidentiality for color image transmission over noisy communication channels, an algorithm is proposed that encrypts images using the AES algorithm. The algorithm is combined with error detection using a Cyclic Redundancy Check (CRC) to preserve the integrity of the encrypted data. This paper presents an error detection method based on CRC; the CRC value can be generated by two methods: serial and parallel CRC implementation. The proposed algorithm performs encryption and error detection using a parallel CRC64 implementation (the Slicing-by-4 algorithm) with a multiple-lookup-table approach applied to the encrypted image. The goal of the proposed algorithm is to optimize the size of the redundant bits attached to the original data for error detection; this reduction is considered necessary to meet the restrictions of some computer architectures. Furthermore, the method is more suitable for implementation in software than in hardware. The proposed algorithm was evaluated on different test images by adding noise at different ratios (1% and 5% of the total image size) to study the effect of noise on the encrypted images. Noise was added at single- and multi-bit positions, and its effect on the output results was studied. The obtained results show that for small images a large number of CRC64 values is affected by noise, while large images yield a stable (fixed) number of affected CRC64 values.
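As a minimal sketch of the parallel CRC64 computation named above, the following implements the Slicing-by-4 algorithm with multiple lookup tables, checked against a plain byte-at-a-time table lookup. The polynomial (the reflected CRC-64/ISO polynomial, 0xD800000000000000) and the zero initial value are assumptions for illustration; the abstract does not state the paper's exact CRC64 parameters.

```python
POLY = 0xD800000000000000  # reflected CRC-64/ISO polynomial (assumed)

def _make_tables():
    # tables[k][b] = CRC state after processing byte b followed by k zero bytes.
    tables = [[0] * 256 for _ in range(4)]
    for b in range(256):
        crc = b
        for _ in range(8):
            crc = (crc >> 1) ^ POLY if crc & 1 else crc >> 1
        tables[0][b] = crc
    for k in range(1, 4):
        for b in range(256):
            t = tables[k - 1][b]
            tables[k][b] = (t >> 8) ^ tables[0][t & 0xFF]
    return tables

TABLES = _make_tables()

def crc64_bytewise(data: bytes, crc: int = 0) -> int:
    # Reference serial implementation: one table lookup per byte.
    for byte in data:
        crc = TABLES[0][(crc ^ byte) & 0xFF] ^ (crc >> 8)
    return crc

def crc64_slice4(data: bytes, crc: int = 0) -> int:
    # Slicing-by-4: consume four bytes per iteration with four
    # independent table lookups that can proceed in parallel.
    n = len(data) - len(data) % 4
    for i in range(0, n, 4):
        w = crc ^ int.from_bytes(data[i:i + 4], "little")
        crc = (TABLES[3][w & 0xFF] ^ TABLES[2][(w >> 8) & 0xFF] ^
               TABLES[1][(w >> 16) & 0xFF] ^ TABLES[0][(w >> 24) & 0xFF] ^
               (crc >> 32))
    return crc64_bytewise(data[n:], crc)  # finish any tail bytes serially
```

The two functions always agree; Slicing-by-4 simply trades four small tables (8 KiB here) for fewer loop iterations, which is why it favors software over hardware implementation.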
In this paper, we present a multiple-bit error correction coding scheme based on an extended Hamming product code combined with type-II HARQ, using shared resources for on-chip interconnects. The shared resources reduce the hardware complexity of the encoder and decoder compared to the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption, respectively, with only a small increase in decoder delay compared to the existing three-stage iterative decoding scheme for multiple-bit error correction. The proposed code also achieves an excellent improvement in residual flit error rate and up to 58% in total power consumption compared to the other err
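A hedged sketch of the building block behind such a scheme: an extended Hamming(8,4) SECDED codeword, which corrects single-bit errors and detects double-bit errors (the latter triggering retransmission under HARQ). The paper's actual flit width and product-code arrangement are not given in the abstract, so this illustrates only the component code.

```python
def ham84_encode(d):
    # d: four data bits; returns an 8-bit extended Hamming codeword.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    code = [p1, p2, d1, p3, d2, d3, d4]      # Hamming positions 1..7
    p0 = 0
    for bit in code:
        p0 ^= bit                            # overall even-parity bit
    return code + [p0]

def ham84_decode(c):
    # Returns (data, status); status is 'ok', 'corrected', or
    # 'double-error' (the last is only recoverable by retransmission).
    s = 0
    for pos in range(1, 8):                  # syndrome = XOR of set positions
        if c[pos - 1]:
            s ^= pos
    overall = 0
    for bit in c:
        overall ^= bit
    if s == 0 and overall == 0:
        return [c[2], c[4], c[5], c[6]], 'ok'
    if overall == 1:                         # odd parity -> single error
        c = c[:]
        c[s - 1 if s else 7] ^= 1            # s == 0: the parity bit itself
        return [c[2], c[4], c[5], c[6]], 'corrected'
    return None, 'double-error'              # even parity, nonzero syndrome
```

In a product code, data bits are arranged in a matrix and each row and column is protected by such a codeword, which is what allows multiple-bit error correction overall.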
This paper includes a comparison of denoising techniques using a statistical approach, principal component analysis with local pixel grouping (PCA-LPG); this procedure is iterated a second time to further improve the denoising performance, and other enhancement filters are also used. These include an adaptive Wiener low-pass filter applied to a grayscale image degraded by constant-power additive noise, based on statistics estimated from a local neighborhood of each pixel. A median filter is applied to the input noisy image, where each output pixel contains the median value of the M-by-N neighborhood around the corresponding pixel in the input image; a Gaussian low-pass filter and an order-statistic filter are also used. Experimental results show that the LPG-PCA method
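A minimal sketch of two of the comparison tools named above: the M-by-N (here k-by-k) median filter and a PSNR measure for ranking the filters. PCA-LPG itself is not reproduced here; the naive double loop below is for clarity, not speed.

```python
import numpy as np

def median_filter(img, k=3):
    # k-by-k median filter with edge replication: each output pixel is
    # the median of its neighborhood in the input image.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB, used to compare denoising results.
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

On impulse ("salt") noise the median filter raises PSNR sharply, which is why it is a standard baseline in such comparisons.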
Iris research is focused on developing techniques for identifying and locating relevant biometric features with accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics which, in turn, reduces the effectiveness of the system when used as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method is based on a number of steps, starting from converting the grayscale eye image to a bit-plane representation and selecting the most significant bit planes, followed by a parameterization of the iris location, resulting in an accurate segmentation of the iris from the origin
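The first step named above, converting a grayscale image into its bit planes, can be sketched as follows; the selection rule and the iris parameterization are the paper's contribution and are not reproduced here.

```python
import numpy as np

def bit_planes(gray):
    # Split an 8-bit grayscale image into its 8 binary bit planes;
    # plane 7 is the most significant and carries the coarse structure
    # that segmentation methods like this one typically rely on.
    return [((gray >> b) & 1).astype(np.uint8) for b in range(8)]
```

The decomposition is lossless: weighting plane b by 2^b and summing reconstructs the original image exactly, so keeping only the top planes is a controlled approximation.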
The digital world has been witnessing fast progress in technology, which has led to an enormous increase in the use of digital devices such as cell phones, laptops, and digital cameras. Thus, photographs and videos function as primary sources of legal proof in courtrooms concerning any incident or crime, and it has become important to prove the trustworthiness of digital multimedia. Inter-frame video forgery is one of the common types of video manipulation, performed in the temporal domain. This work deals with inter-frame video forgery detection, which involves frame deletion, insertion, duplication, and shuffling. Deep Learning (DL) techniques have been proven effective in the analysis and processing of visual media. Dealing with video data needs to handle th
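The abstract is cut off before its DL method; as a purely illustrative classical baseline (not the paper's approach), a frame-deletion point in the temporal domain can show up as a dip in the correlation between consecutive frames:

```python
import numpy as np

def correlation_trace(frames):
    # Correlation coefficient between each pair of consecutive frames.
    # In an untampered, smoothly varying clip the trace stays high; an
    # abrupt dip can flag where frames were deleted.
    trace = []
    for a, b in zip(frames[:-1], frames[1:]):
        r = np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1]
        trace.append(float(r))
    return trace
```

DL detectors learn far subtler temporal cues than this, but the trace illustrates what "inter-frame" evidence looks like.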
Speech encryption approaches are used to prevent eavesdropping, tracking, and other security concerns in speech communication. In this paper, a new cryptography algorithm is proposed to encrypt digital speech files. Initially, the digital speech files are rearranged as a cubic model with six sides to scatter the speech data. Furthermore, each side is encrypted with random keys created using two chaotic maps (the Hénon and Gingerbread chaotic maps). Encryption for each side of the cube is achieved using a map vector that is generated randomly by a simple random function. The map vector consists of six bits; each bit refers to one of the specific chaotic maps that generate a random key to encrypt each face of the cube. R
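A minimal sketch of one ingredient above: deriving a key stream from the Hénon map and using it to encrypt data. The quantization of the chaotic orbit into key bytes and the XOR cipher are illustrative assumptions; the paper's exact key-derivation and encryption rules are not given in the abstract.

```python
def henon_keystream(n, x=0.1, y=0.3, a=1.4, b=0.3):
    # Hénon map: x' = 1 - a*x^2 + y, y' = b*x (classic parameters).
    # Quantizing |x| into a byte is a hypothetical key-derivation choice.
    stream = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        stream.append(int(abs(x) * 1e6) % 256)
    return stream

def xor_cipher(data: bytes, key) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts, provided
    # both sides regenerate the identical chaotic key stream.
    return bytes(d ^ k for d, k in zip(data, key))
```

The sensitivity of the map to its initial values (x, y) is what lets those values act as the secret key.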
In this paper, a compression system with a highly synthetic architecture is introduced; it is based on the wavelet transform, polynomial representation, and quadtree coding. The bi-orthogonal (tap 9/7) wavelet transform is used to decompose the image signal, and a 2D polynomial representation is utilized to prune the existing high-scale variation of the image signal. Quantization with quadtree coding, followed by shift coding, is applied to compress the detail bands and the residual part of the approximation subband. The test results indicate that the introduced system is simple and fast, and it leads to better compression gain in comparison with the case of using a first-order polynomial approximation.
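To illustrate the subband decomposition step, here is one analysis level of a 2D wavelet transform. The Haar filter is used for brevity in place of the paper's bi-orthogonal tap-9/7 filter bank; both split the image into an approximation (LL) subband and detail (LH, HL, HH) subbands.

```python
import numpy as np

def haar2d_level(img):
    # One Haar analysis level: low/high-pass along columns, then rows.
    a = (img[:, 0::2] + img[:, 1::2]) / 2     # column low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2     # column high-pass
    ll = (a[0::2, :] + a[1::2, :]) / 2        # approximation subband
    lh = (a[0::2, :] - a[1::2, :]) / 2        # horizontal detail
    hl = (d[0::2, :] + d[1::2, :]) / 2        # vertical detail
    hh = (d[0::2, :] - d[1::2, :]) / 2        # diagonal detail
    return ll, lh, hl, hh
```

Smooth regions concentrate their energy in LL while the detail subbands stay near zero, which is what the subsequent quantization, quadtree, and shift coding exploit.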
Restoration is the main process in many applications. Restoring an original image from a damaged one is the foundation of the restoration operation, whether blind or non-blind. One of the main challenges in the restoration process is estimating the degradation parameters. The degradation parameters include the blurring function (Point Spread Function, PSF) and the noise function. The most common causes of image degradation are errors in transmission channels, defects in the optical system, an inhomogeneous medium, relative motion between the object and the camera, etc. In our research, a novel algorithm based on the Circular Hough Transform was adopted to estimate the width (radius, sigma) of the Point Spread Function. This algorithm is based o
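Once the width (sigma) of the PSF has been estimated, the blur kernel used by the restoration step can be built as below. This sketch assumes a Gaussian PSF model; the Circular Hough Transform estimation of sigma is the paper's contribution and is not reproduced here.

```python
import numpy as np

def gaussian_psf(sigma, size=None):
    # Normalized 2D Gaussian blur kernel for an estimated width sigma.
    if size is None:
        size = int(2 * np.ceil(3 * sigma)) + 1   # cover +/- 3 sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()                       # energy-preserving kernel
```

Normalizing the kernel to unit sum keeps the overall image brightness unchanged under the degradation model g = h * f + n.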