Image Cryptography Based on Chebyshev Polynomials and Transposition-Substitution Transformations

Ensuring the security and confidentiality of multimedia data is a serious challenge given the growing dependence on digital communication. This paper presents a new image cryptography scheme based on the Chebyshev chaotic polynomial map, exploiting the randomness inherent in chaotic systems to improve security. The proposed method combines block shuffling, dynamic-offset chaotic key generation, inter-layer XOR, and 90-degree block rotations to break the correlations intrinsic to image data. The method is designed for efficiency and scalability, achieving an efficient complexity order for n pixels over a fixed number of cipher rounds. Experimental results show strong resistance to cryptanalysis, including statistical, differential, and brute-force attacks, owing to the large key space and sensitivity to initial values. The algorithm provides a robust and flexible solution for securing images, suitable for high-resolution data and real-time applications.
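The chaotic-keystream idea behind the XOR layer can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the function names, the byte quantization, and the choice of seed/degree as the shared key are all assumptions made here for clarity.

```python
import math

def chebyshev_keystream(x0, degree, n):
    # Iterate the Chebyshev map x_{k+1} = cos(degree * arccos(x_k)) on [-1, 1],
    # quantizing each chaotic state into one keystream byte.
    x = x0
    stream = []
    for _ in range(n):
        x = math.cos(degree * math.acos(x))
        stream.append(int((x + 1.0) * 127.5) & 0xFF)
    return stream

def xor_layer(pixels, key):
    # XOR each pixel with its keystream byte; applying the same layer twice decrypts.
    return [p ^ k for p, k in zip(pixels, key)]
```

Because XOR is an involution, the receiver regenerates the identical keystream from the shared (x0, degree) secret and applies `xor_layer` again to recover the pixels; tiny changes in x0 yield a completely different stream, which is the source of the key sensitivity claimed above.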

Publication Date
Sun Jan 01 2017
Journal Name
IEEE Access
Low-Distortion MMSE Speech Enhancement Estimator Based on Laplacian Prior

Publication Date
Wed Jan 01 2020
Journal Name
Journal of Southwest Jiaotong University
The Arithmetic Coding and Hybrid Discrete Wavelet and Cosine Transform Approaches in Image Compression

Image compression is a form of data compression applied to digital images to reduce their high cost of storage and/or transmission. Image compression algorithms can exploit the visual sensitivity and statistical properties of image data to deliver superior results compared with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless entropy-coding algorithm known as arithmetic coding. Pixel values that occur more often are coded in fewer bits than pixel values of less occurrence…
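The arithmetic-coding step can be sketched with a minimal floating-point coder. This is a teaching-scale illustration only: floating-point precision restricts it to short symbol strings (practical coders use integer renormalization), and the function names and model layout are this sketch's own, not the paper's.

```python
from collections import Counter

def build_model(data):
    # Map each symbol to its cumulative probability interval [lo, hi).
    freq = Counter(data)
    total = len(data)
    model, low = {}, 0.0
    for sym in sorted(freq):
        p = freq[sym] / total
        model[sym] = (low, low + p)
        low += p
    return model

def encode(data, model):
    # Narrow [low, high) by each symbol's interval; any number inside codes the string.
    low, high = 0.0, 1.0
    for sym in data:
        span = high - low
        lo, hi = model[sym]
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2

def decode(code, model, length):
    # Invert the narrowing: find which symbol interval contains the scaled code.
    out, low, high = [], 0.0, 1.0
    for _ in range(length):
        span = high - low
        value = (code - low) / span
        for sym, (lo, hi) in model.items():
            if lo <= value < hi:
                out.append(sym)
                low, high = low + span * lo, low + span * hi
                break
    return out
```

Frequent pixel values get wide intervals, so they narrow the range less and cost fewer output bits, which is exactly the "more occurrence, fewer bits" property described above.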

Publication Date
Sat Feb 09 2019
Journal Name
Journal of the College of Education for Women
Comparative Study of Image Denoising Using Wavelet Transforms and Optimal Threshold and Neighbouring Window

NeighShrink is an efficient image denoising algorithm based on the discrete wavelet transform (DWT). Its disadvantage is that it uses a suboptimal universal threshold and an identical neighbouring window size in all wavelet subbands. Dengwen and Wengang proposed an improved method that can determine an optimal threshold and neighbouring window size for every subband using Stein's unbiased risk estimate (SURE). Its denoising performance is considerably superior to NeighShrink and also outperforms SURE-LET, an up-to-date denoising algorithm based on the SURE. In this paper, different wavelet transform families are used with this improved method; the results show that the Haar wavelet has the lowest performance among…
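The core NeighShrink shrinkage rule can be sketched as follows. This is a simplified pure-Python version over one subband held as a 2D list: the threshold is passed in directly, whereas the improved method discussed above would select it (and the window size) per subband via SURE.

```python
def neighshrink(coeffs, threshold, win=3):
    # Shrink each wavelet coefficient by max(0, 1 - T^2 / S^2), where S^2 is the
    # sum of squared coefficients over a win x win neighbourhood around it.
    rows, cols = len(coeffs), len(coeffs[0])
    r = win // 2
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s2 = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        s2 += coeffs[ii][jj] ** 2
            factor = max(0.0, 1.0 - threshold ** 2 / s2) if s2 > 0 else 0.0
            out[i][j] = coeffs[i][j] * factor
    return out
```

Isolated small coefficients (weak neighbourhood energy, likely noise) are driven to zero, while coefficients embedded in high-energy neighbourhoods (likely edges) are kept nearly intact.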

Publication Date
Sun Jun 11 2017
Journal Name
Ibn Al-Haitham Journal for Pure and Applied Sciences
Artificial Neural Network for TIFF Image Compression

The main aim of image compression is to reduce an image's size so it can be transmitted and stored efficiently; many methods have therefore appeared to compress images, one of which is the Multilayer Perceptron (MLP), an artificial neural network trained with the back-propagation algorithm. Relying only on the number of neurons in the hidden layer is not enough to reach the desired results, so the standards on which the compression process depends must also be taken into consideration to get the best results. In this research, we trained a group of TIFF images of size (256*256) and compressed them using an MLP for each…
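The bottleneck idea behind MLP image compression can be sketched with a toy autoencoder. Everything here is a deliberately small illustration, not the paper's actual network: the 4-pixel blocks, 2 hidden units, absence of biases, learning rate, and epoch count are all assumptions chosen to keep the sketch short.

```python
import math, random

random.seed(0)

# Tiny MLP autoencoder: 4 input pixels -> 2 hidden units -> 4 reconstructed pixels.
# The hidden activations are the "compressed" representation (2 values per 4 pixels).
N_IN, N_HID = 4, 2
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_IN)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(N_IN))) for j in range(N_HID)]
    y = [sigmoid(sum(w2[k][j] * h[j] for j in range(N_HID))) for k in range(N_IN)]
    return h, y

def train_step(x, lr=0.5):
    # One back-propagation step minimizing squared reconstruction error.
    h, y = forward(x)
    dy = [(y[k] - x[k]) * y[k] * (1 - y[k]) for k in range(N_IN)]
    dh = [sum(dy[k] * w2[k][j] for k in range(N_IN)) * h[j] * (1 - h[j])
          for j in range(N_HID)]
    for k in range(N_IN):
        for j in range(N_HID):
            w2[k][j] -= lr * dy[k] * h[j]
    for j in range(N_HID):
        for i in range(N_IN):
            w1[j][i] -= lr * dh[j] * x[i]

def total_error(blocks):
    return sum(sum((forward(b)[1][k] - b[k]) ** 2 for k in range(N_IN))
               for b in blocks)

# Train on normalized pixel blocks (intensities scaled to [0, 1]).
blocks = [[0.1, 0.1, 0.9, 0.9], [0.9, 0.9, 0.1, 0.1]]
before = total_error(blocks)
for _ in range(2000):
    for b in blocks:
        train_step(b)
after = total_error(blocks)
```

After training, storing the 2 hidden activations per block instead of the 4 pixels gives a 2:1 compression; the decoder half (w2) reconstructs an approximation of the original block.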

Publication Date
Fri Apr 01 2016
Journal Name
Al-Bahith Al-A'alami
Making the political image in election campaigns

The study discusses the marketing profile of electoral candidates and politicians. In the technological era, the image that takes root in the minds of voters has become more important than ideologies or party affiliations: voters no longer pay attention to concepts such as liberal, conservative, right-wing, or secular, while their interest in the candidates themselves has increased. Consultants and image experts are able to make a dramatic shift in electoral roles; as specialists in the electoral arena, they dominate the roles of political parties.
The importance of the study comes from the fact that the image exceeds its normal framework in our contemporary world to become political and cultural…

Publication Date
Thu Jan 01 2015
Journal Name
International Journal Of Computer Science And Mobile Computing
Image Compression using Hierarchal Linear Polynomial Coding

Publication Date
Fri Jan 01 2016
Journal Name
Modern Applied Science
New Combined Technique for Fingerprint Image Enhancement

This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method comprises five enhancement techniques: normalization, histogram equalization, binarization, skeletonization, and fusion. Normalization standardizes the pixel intensities, which facilitates the subsequent enhancement stages. Histogram equalization then increases the contrast of the images. Furthermore, binarization and skeletonization are implemented to differentiate between the ridge and valley structures and to obtain one…
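The histogram-equalization stage can be sketched as follows. This is a generic textbook version operating on a flat list of 8-bit grayscale pixels, assumed here for illustration; the paper's pipeline may differ in details.

```python
def equalize_histogram(pixels, levels=256):
    # Build the gray-level histogram, then map each level through the scaled
    # cumulative distribution function (CDF) to spread intensities over the range.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-empty bin
    n = len(pixels)
    lut = [max(0, round((c - cdf_min) / (n - cdf_min) * (levels - 1)))
           if n > cdf_min else 0 for c in cdf]
    return [lut[p] for p in pixels]
```

Gray levels that were clustered in a narrow band get pushed apart toward the full 0–255 range, which is what raises the contrast between ridge and valley regions before binarization.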

Publication Date
Wed Jan 01 2020
Journal Name
Solid State Technology
Image Fusion Using A Convolutional Neural Network


Publication Date
Sat Oct 05 2019
Journal Name
Journal Of Engineering And Applied Sciences
Secure Image Steganography using Biorthogonal Wavelet Transform

Publication Date
Wed Jan 01 2020
Journal Name
International Journal of Software & Hardware Research in Engineering
Frontal Facial Image Compression of Hybrid Base