Communication is one of the vast and rapidly growing fields of engineering, where increasing communication efficiency by overcoming external electromagnetic sources and noise is considered a challenging task. To achieve confidentiality for color image transmission over noisy communication channels, an algorithm for image encryption using the AES algorithm is proposed. The algorithm is combined with error detection using a Cyclic Redundancy Check (CRC) to preserve the integrity of the encrypted data. The CRC value can be generated in one of two ways: a serial or a parallel CRC implementation. The proposed algorithm performs encryption and error detection using a parallel CRC64 implementation (the Slicing-by-4 algorithm) with a multiple lookup-table approach applied to the encrypted image. The goal of the proposed algorithm is to optimize the size of the redundant bits that must be attached to the original data for error detection; this reduction is considered necessary to meet the restrictions of some computer architectures. Furthermore, the method is better suited to implementation in software than in hardware. The proposed algorithm is evaluated on different test images by adding noise at different ratios (1% and 5%) of the total image size in order to study the effect of noise on the encrypted images. Noise is added at single-bit and multi-bit positions, and its effect on the output results is studied. The obtained results show that for small images a large number of CRC64 values are affected by noise, whereas larger images yield a stable (fixed) number of affected CRC64 values.
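For illustration, the sketch below builds the four lookup tables of the Slicing-by-4 scheme, computes a CRC64 over a byte buffer, and then shows how a single flipped bit changes the checksum. It assumes Python and the reflected ECMA-182 polynomial with an all-ones initial value and final XOR (the CRC-64/XZ convention); the abstract does not state which CRC64 parameters the paper uses, so these constants are placeholders.

POLY = 0xC96C5795D7870F42      # reflected ECMA-182 polynomial (assumed; CRC-64/XZ)
MASK = 0xFFFFFFFFFFFFFFFF

# table[0] is the classic byte-at-a-time table; table[1..3] extend each entry by
# one, two, and three trailing zero bytes so four input bytes are consumed per step.
table = [[0] * 256 for _ in range(4)]
for i in range(256):
    crc = i
    for _ in range(8):
        crc = (crc >> 1) ^ (POLY if crc & 1 else 0)
    table[0][i] = crc
for i in range(256):
    for t in range(1, 4):
        prev = table[t - 1][i]
        table[t][i] = (prev >> 8) ^ table[0][prev & 0xFF]

def crc64(data: bytes, crc: int = 0) -> int:
    crc ^= MASK                                  # initial value (assumed)
    i, n = 0, len(data)
    while n - i >= 4:                            # main loop: 4 bytes per iteration
        crc ^= (data[i] | data[i + 1] << 8 |
                data[i + 2] << 16 | data[i + 3] << 24)
        crc = (table[3][crc & 0xFF] ^
               table[2][(crc >> 8) & 0xFF] ^
               table[1][(crc >> 16) & 0xFF] ^
               table[0][(crc >> 24) & 0xFF] ^
               (crc >> 32))
        i += 4
    while i < n:                                 # tail: one byte at a time
        crc = table[0][(crc ^ data[i]) & 0xFF] ^ (crc >> 8)
        i += 1
    return crc ^ MASK                            # final XOR (assumed)

data = bytes(range(256)) * 16                    # stand-in for encrypted image bytes
clean = crc64(data)
noisy = bytearray(data)
noisy[100] ^= 0x01                               # flip a single bit (1-bit "noise")
assert crc64(bytes(noisy)) != clean              # the mismatch flags the corruption

Because the work targets a software implementation, the multiple-table approach trades a few kilobytes of memory for processing four bytes per loop iteration instead of one.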
Elliptic Curve Cryptography (ECC) is a public key cryptosystem that works on algebraic models in the form of elliptic curves. In ECC, before encryption the data must usually be encoded onto the elliptic curve as a preprocessing step. Similarly, after decryption a post-processing step must map (decode) the corresponding point on the curve back to the data. Memory Mapping (MM) and Koblitz Encoding (KE) are the commonly used encoding models, but both have drawbacks: MM needs more memory for processing, and KE needs more computational resources. To overcome these issues, the proposed enhanced Koblitz encoding…
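For context, the following is a minimal sketch of the classic Koblitz encoding and decoding steps described above, assuming the secp256k1 curve and an auxiliary constant K = 100; both are illustrative choices rather than parameters from the paper, and the paper's enhanced variant is not reproduced here.

# Classic Koblitz encoding: embed integer m as a curve point whose x-coordinate
# is m*K + j for the smallest j that makes the right-hand side a square mod P.
P = 2**256 - 2**32 - 977       # secp256k1 field prime (illustrative; P % 4 == 3)
A, B = 0, 7                    # curve y^2 = x^3 + A*x + B (secp256k1)
K = 100                        # Koblitz auxiliary constant (hypothetical choice)

def koblitz_encode(m: int):
    """Pre-processing step: map message m (with m*K + K < P) to a point (x, y)."""
    for j in range(K):
        x = (m * K + j) % P
        rhs = (x ** 3 + A * x + B) % P
        if pow(rhs, (P - 1) // 2, P) == 1:        # Euler's criterion: rhs is a square
            y = pow(rhs, (P + 1) // 4, P)         # square root, valid since P % 4 == 3
            return x, y
    raise ValueError("no curve point found for this message")

def koblitz_decode(x: int) -> int:
    """Post-processing step: recover the message from the point's x-coordinate."""
    return x // K

msg = 123456789
x, y = koblitz_encode(msg)
assert (y * y - (x ** 3 + A * x + B)) % P == 0    # the point really lies on the curve
assert koblitz_decode(x) == msg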
The paired-sample t-test is a classical test statistic used to test the difference between two means in paired data, but it is not robust against violation of the normality assumption. In this paper, alternative robust tests are suggested by combining Jackknife resampling with the Wilcoxon signed-rank test for small sample sizes and with the Wilcoxon signed-rank test, using the normal approximation, for large sample sizes. Monte Carlo simulation experiments were employed to study the performance of each of these tests in terms of type I error rates and power rates. All these tests were applied on different sample sizes…
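The abstract does not give the exact form of the combination, so the following is only a rough sketch of one plausible reading: a leave-one-out jackknife over the paired differences with SciPy's wilcoxon as the base statistic (SciPy itself switches to a normal approximation for larger samples). The function name and the jackknife standard-error formula are standard choices, not taken from the paper.

import numpy as np
from scipy.stats import wilcoxon

def jackknife_wilcoxon(x, y):
    """Leave-one-out jackknife of the Wilcoxon signed-rank statistic on paired data."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)   # paired differences
    n = len(d)
    stats = np.empty(n)
    for i in range(n):
        d_i = np.delete(d, i)                     # drop the i-th pair
        stats[i], _ = wilcoxon(d_i)               # signed-rank statistic on the rest
    theta_bar = stats.mean()
    se = np.sqrt((n - 1) / n * np.sum((stats - theta_bar) ** 2))  # jackknife std. error
    return theta_bar, se

rng = np.random.default_rng(1)
before = rng.normal(10.0, 2.0, size=30)
after = before + rng.normal(0.5, 1.0, size=30)
print(jackknife_wilcoxon(before, after))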
Before users store data in the cloud, many security issues must be addressed, as they will have no direct control over the data outsourced to the cloud, particularly personal and sensitive data (health, finance, military, etc.). This article proposes a system based on chaotic maps for private key generation; hybrid encryption for fast and secure cryptography; multi-cloud storage with pseudonymized file names to preserve user data privacy in the cloud while minimizing data loss; and a hashing approach to check data integrity. AES in combination with RSA, together with file fragmentation, is used for encryption, and integrity is checked using SHA-3. The experiments demonstrated that the key generation strategy…
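As a hedged sketch of the key generation and integrity pieces only (the hybrid AES/RSA encryption, file fragmentation, and multi-cloud upload are not reproduced), the snippet below derives key bytes from the logistic map, a common chaotic map; the specific map, its parameter r, the transient length, and the seed are assumptions, and Python's hashlib SHA-3 is used for the integrity digest.

import hashlib

def logistic_key(seed: float, r: float = 3.99, nbytes: int = 32) -> bytes:
    """Derive key bytes by iterating the logistic map x -> r*x*(1-x)."""
    x, out = seed, bytearray()
    for _ in range(1000):                  # discard transient iterations
        x = r * x * (1 - x)
    while len(out) < nbytes:
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)     # quantize each state to one key byte
    return bytes(out)

key = logistic_key(0.3141592653)                    # seed from a shared secret (hypothetical)
fragment = b"outsourced file fragment"
digest = hashlib.sha3_256(fragment).hexdigest()     # SHA-3 integrity tag for the fragment
print(key.hex(), digest)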
Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effects of multiples and delineate the correct primary reflections. Using normal moveout (NMO) to flatten the primaries is one way to eliminate multiples after transforming the data to the frequency-wavenumber (f-k) domain. The flattened primaries are aligned with the zero axis of the f-k domain, while other reflection types (multiples and random noise) are distributed elsewhere. A dip filter is then applied to pass the aligned data and reject the rest, separating primaries from multiples once the data are transformed back from the f-k domain to the time-distance domain. For that reason, a suggested name for this technique is the normal moveout-frequency-wavenumber (NMO-FK) domain…
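A minimal sketch of the f-k dip-filtering idea follows, assuming a 2-D gather indexed as (time sample, trace) and NumPy's FFT; the pass band around zero wavenumber is a crude placeholder rather than the paper's filter design.

import numpy as np

def fk_dip_filter(data: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
    """Pass energy near zero wavenumber (NMO-flattened primaries), reject other dips."""
    fk = np.fft.fftshift(np.fft.fft2(data))        # time-distance -> f-k domain
    nt, nx = fk.shape
    k0 = nx // 2                                   # zero-wavenumber column after the shift
    width = max(1, int(keep_fraction * nx))
    mask = np.zeros((nt, nx))
    mask[:, k0 - width:k0 + width + 1] = 1.0       # crude pass band around k = 0
    filtered = fk * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))   # back to time-distance

gather = np.random.default_rng(0).normal(size=(512, 96))        # synthetic (time, trace) gather
primaries_only = fk_dip_filter(gather)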
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method includes five enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization, and Fusion. The Normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. Subsequently, the Histogram Equalization technique increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one…
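A minimal sketch of the Normalization, Histogram Equalization, Binarization, and Skeletonization stages is given below, assuming scikit-image, a 2-D grayscale array, and dark ridges; the Fusion stage and the exact thresholding used in the paper are not reproduced.

import numpy as np
from skimage import exposure, filters, morphology

def enhance_fingerprint(img: np.ndarray) -> np.ndarray:
    """Normalization -> histogram equalization -> binarization -> skeletonization."""
    img = img.astype(float)
    img = (img - img.mean()) / (img.std() + 1e-8)              # normalize pixel intensities
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # rescale to [0, 1]
    img = exposure.equalize_hist(img)                          # stretch the contrast
    binary = img > filters.threshold_otsu(img)                 # ridge/valley separation
    return morphology.skeletonize(~binary)                     # thin the dark ridges to 1 px

# usage: ridges = enhance_fingerprint(gray_image)   # gray_image is a 2-D array, dark ridges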
Concealing the existence of a secret hidden message inside a cover object is known as steganography, a powerful technique that provides secret communication between sender and receiver. In this paper, the main goal is to hide a secret message in the pixels of the cover image using the Least Significant Bits (LSB) of the blue component. The objective is to present a model, based on a mapping technique, for hiding text in an image. In the proposed model, the secret text is converted to binary, the cover image is separated into its three original colors, Red, Green and Blue (RGB), the Blue component is converted to binary, and two bits from the message are hidden in the two least significant bits…
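A minimal sketch of the embedding step follows, assuming Pillow for pixel access; the function name and the byte-to-bit packing are illustrative, not taken from the paper.

from PIL import Image

def hide_text(cover_path: str, message: str, out_path: str) -> None:
    """Embed the message, two bits at a time, in the blue channel's two low bits."""
    img = Image.open(cover_path).convert("RGB")
    pixels = img.load()
    bits = "".join(f"{byte:08b}" for byte in message.encode())   # text -> binary string
    width, height = img.size
    idx = 0
    for y in range(height):
        for x in range(width):
            if idx >= len(bits):                                 # whole message embedded
                img.save(out_path)
                return
            r, g, b = pixels[x, y]
            chunk = bits[idx:idx + 2].ljust(2, "0")              # next two message bits
            pixels[x, y] = (r, g, (b & 0b11111100) | int(chunk, 2))
            idx += 2
    img.save(out_path)            # message longer than capacity: embedded part only

# usage: hide_text("cover.png", "secret message", "stego.png")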
The Internet provides vital communication between millions of individuals. It is also increasingly used as a commerce tool; thus, security is highly important for securing communications and protecting vital information. Cryptographic algorithms are essential in the field of security. Brute force attacks are the major attacks on the Data Encryption Standard, which is the main reason an improved structure of the Data Encryption Standard algorithm is needed. This paper proposes a new, improved structure for the Data Encryption Standard to make it secure and immune to attacks. The improved structure was accomplished using the standard Data Encryption Standard with a new method of two-key generation…