An Improved Image Compression Technique Using EZW and SPIHT Algorithms

Uncompressed digital images require a very large amount of storage capacity and, as a consequence, large communication bandwidth for transmission over a network. Image compression techniques not only minimize the storage space of an image but also preserve its quality. This paper presents an image compression technique that uses a distinct image coding scheme based on the wavelet transform, combining effective compression algorithms to achieve further compression. EZW and SPIHT are two significant techniques available for lossy image compression. EZW coding is a simple, worthwhile, and efficient algorithm, while SPIHT is a powerful image compression technique based on the concept of coding sets of wavelet coefficients as zerotrees. The proposed compression algorithm, a dual image compression technique (DICT) combining the two methods, exploits the best features of each; this produces a promising technique for still image compression that minimizes the number of bits required to represent the input image, to the degree allowed without significant impact on the quality of the reconstructed image.

The experimental results show that DICT improves image compression efficiency by 8 to 24% and yields high values for the performance metrics.
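The zerotree idea behind EZW can be sketched in a few lines: during one dominant pass, each wavelet coefficient is coded against the current threshold as significant positive or negative, a zerotree root, or an isolated zero. The toy coefficient tree and threshold below are illustrative only and do not reproduce the paper's DICT pipeline:

```python
# A minimal sketch of one EZW dominant pass on a toy coefficient tree.
# The tree layout, threshold, and symbol names are illustrative assumptions.

def dominant_pass(coeffs, children, threshold):
    """Emit one EZW symbol per visited coefficient:
    'P' (significant, positive), 'N' (significant, negative),
    'T' (zerotree root: node and all descendants insignificant),
    'Z' (insignificant, but not a zerotree root)."""
    def subtree_insignificant(i):
        if abs(coeffs[i]) >= threshold:
            return False
        return all(subtree_insignificant(c) for c in children.get(i, ()))

    symbols, skipped = [], set()
    for i in range(len(coeffs)):
        if i in skipped:          # inside an already-coded zerotree
            continue
        c = coeffs[i]
        kids = children.get(i, ())
        if abs(c) >= threshold:
            symbols.append('P' if c > 0 else 'N')
        elif kids and subtree_insignificant(i):
            symbols.append('T')
            stack = list(kids)    # mark every descendant as covered
            while stack:
                j = stack.pop()
                skipped.add(j)
                stack.extend(children.get(j, ()))
        else:
            symbols.append('Z')
    return symbols

# toy 2-level tree: coefficient 0 is the root, with children 1 and 2
coeffs = [34, -20, 3, 8, -5, 2, 1]
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(dominant_pass(coeffs, children, 16))  # ['P', 'N', 'T', 'Z', 'Z']
```

Successive passes would halve the threshold and refine significant coefficients, which is where the embedded (progressive) property of EZW and SPIHT comes from.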

Publication Date
Sat May 30 2020
Journal Name
Neuroquantology Journal
The Effect of Re-Use of Lossy JPEG Compression Algorithm on the Quality of Satellite Image

In this study, an analysis of re-using the JPEG lossy algorithm on the quality of satellite imagery is presented. The standard JPEG compression algorithm is adopted and applied using the IrfanView program, with JPEG quality factors ranging from 50 to 100. Based on the measured variation in satellite image quality, the maximum number of re-uses of the JPEG lossy algorithm adopted in this study is 50. The degradation of image quality with respect to the JPEG quality factor and the number of times the JPEG algorithm is re-used to store the satellite image is analyzed.
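The mechanism behind such generation loss can be modeled in a few lines: JPEG's lossy step is quantization of transform coefficients. Re-applying the same quantizer is idempotent, but each real decode/re-encode cycle (rounding to 8-bit pixels, chroma subsampling) perturbs values before the next quantization, so quality can keep dropping. This is a toy illustration, not the paper's IrfanView experiment:

```python
# Toy model of JPEG generation loss: quantization of coefficients.
# Values and step size are illustrative.

def quantize(values, step):
    # round each value to the nearest multiple of the quantization step
    return [round(v / step) * step for v in values]

coeffs = [13.7, -4.2, 8.9, 0.6]
once = quantize(coeffs, 5)
twice = quantize(once, 5)
print(once)           # [15, -5, 10, 0]
print(twice == once)  # True: the same quantizer adds no further loss
```

In a full codec the intermediate pixel-domain rounding between re-encodes is what keeps injecting new error, which is why the measured quality in the study depends on both the quality factor and the number of re-uses.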

Publication Date
Mon Jan 01 2024
Journal Name
Lecture Notes On Data Engineering And Communications Technologies
Utilizing Deep Learning Technique for Arabic Image Captioning

Publication Date
Thu Nov 02 2023
Journal Name
Journal Of Engineering
An Improved Adaptive Spiral Dynamic Algorithm for Global Optimization

This paper proposes a new strategy to enhance the performance and accuracy of the spiral dynamic algorithm (SDA) for solving real-world problems by hybridizing the SDA with the bacterial foraging optimization algorithm (BFA). The dynamic step size of the SDA makes it a useful exploitation approach; however, it has limited exploration during the diversification phase, which can leave it trapped at local optima. The optimal initialization position for the SDA has been determined with the help of the chemotactic strategy of the BFA, which is used to improve the exploration capability of the SDA. The proposed Hybrid Adaptive Spiral Dynamic Bacterial Foraging (HASDBF)
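The exploitation core of SDA is a rotate-and-contract update: each candidate point spins around the current best solution while its distance to it shrinks by a factor r per step. The sketch below shows only that 2-D update with illustrative parameter values; the paper's BFA-based chemotactic initialization is not reproduced:

```python
# One 2-D spiral dynamics step: rotate a point around the current best
# solution by angle theta, then contract toward it by factor r.
# r and theta here are illustrative defaults, not the paper's settings.
import math

def spiral_step(point, best, r=0.95, theta=math.pi / 4):
    dx, dy = point[0] - best[0], point[1] - best[1]
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (best[0] + r * (dx * cos_t - dy * sin_t),
            best[1] + r * (dx * sin_t + dy * cos_t))

p, best = (4.0, 0.0), (0.0, 0.0)
for _ in range(50):
    p = spiral_step(p, best)
print(p)  # spirals in: both coordinates are now close to (0, 0)
```

Because the contraction factor r < 1 drives every point toward the incumbent best, the update exploits well but explores poorly, which is exactly the weakness the BFA hybridization targets.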

Publication Date
Sat Jan 01 2022
Journal Name
Turkish Journal Of Physiotherapy And Rehabilitation
Classification of the COCO Dataset Using Machine Learning Algorithms

In this paper, we used four classification methods to classify objects and compared among these methods: K-Nearest Neighbors (KNN), Stochastic Gradient Descent learning (SGD), Logistic Regression (LR), and Multi-Layer Perceptron (MLP). We used the MCOCO dataset for classification and detection of objects; the dataset images were randomly divided into training and testing sets at a ratio of 7:3, respectively. The randomly selected training and testing images were converted from color to gray level, then enhanced using the histogram equalization method and resized to 20 x 20. Principal component analysis (PCA) was used for feature extraction, and finally the four classification metho…
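The enhancement step named in the abstract, histogram equalization, can be sketched on a flattened 8-bit grayscale image: the cumulative histogram is remapped onto the full gray range so that pixel intensities spread out. The 3x3 "image" below is toy data; the resizing, PCA, and classifier stages are not shown:

```python
# Minimal histogram equalization for a flattened 8-bit grayscale image.
# Assumes at least two distinct gray levels; the input is toy data.

def equalize(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution of gray levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # stretch the CDF onto the full [0, levels-1] range
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

img = [52, 55, 61, 59, 79, 61, 76, 61, 61]   # a flattened 3x3 toy image
print(equalize(img))  # [0, 32, 191, 64, 255, 191, 223, 191, 191]
```

The narrow input range 52-79 is stretched across 0-255, which is why equalization tends to improve contrast before feature extraction.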

Publication Date
Sun Aug 01 2021
Journal Name
International Journal Of Electrical And Computer Engineering (ijece)
Audio compression using transforms and high order entropy encoding

Digital audio requires transmitting large amounts of audio information through the most common communication systems; in turn, this leads to more challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed. It depends on a combined transform coding scheme consisting of: i) a bi-orthogonal (tap 9/7) wavelet transform to decompose the audio signal into low- and multiple high-frequency sub-bands; ii) passing the produced sub-bands through a DCT to de-correlate the signal; iii) passing the product of the combined transform stage through progressive hierarchical quantization and then traditional run-length encoding (RLE); and iv) finally, LZW coding to generate the output bitstream.
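The last two stages of the pipeline, RLE and LZW, can be sketched in a few lines each: RLE collapses runs of repeated quantized values, and LZW replaces recurring patterns with dictionary indices. The wavelet/DCT/quantization stages are omitted and the input symbols are illustrative:

```python
# Minimal RLE and LZW, the final entropy-coding stages of the proposed
# scheme. The quantized input values are illustrative toy data.

def rle(data):
    # collapse each run of equal symbols into a (symbol, count) pair
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out

def lzw(symbols):
    # dictionary seeded with the single symbols, then grown greedily
    table = {(s,): i for i, s in enumerate(sorted(set(symbols)))}
    out, buf = [], ()
    for s in symbols:
        if buf + (s,) in table:
            buf += (s,)              # extend the current match
        else:
            out.append(table[buf])   # emit index, register new pattern
            table[buf + (s,)] = len(table)
            buf = (s,)
    out.append(table[buf])
    return out

quantized = [0, 0, 0, 0, 3, 3, -1, 0, 0, 0]
print(rle(quantized))  # [(0, 4), (3, 2), (-1, 1), (0, 3)]
print(lzw(quantized))  # [1, 3, 1, 2, 2, 0, 4]
```

Quantization concentrates the sub-band coefficients on a few values (many zeros), which is precisely what makes the RLE and LZW stages effective.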

Publication Date
Sat Dec 02 2017
Journal Name
Al-khwarizmi Engineering Journal
Speech Signal Compression Using Wavelet And Linear Predictive Coding

A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding. Signal compression is based on the concept of selecting a small number of approximation coefficients, obtained by wavelet decomposition (Haar and db4) at a suitably chosen level while ignoring the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and predictor error. The compressed files contain the LP coefficients and the previous sample; these files are very small in size compared to the original signals. The compression ratio is calculated from the size of th…
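The Levinson-Durbin recursion named in the abstract can be sketched directly: it turns the autocorrelation values r[0..p] into p linear-prediction coefficients, along with the reflection coefficients and the final prediction error. The test autocorrelation below (an AR(1) process with coefficient 0.9) is illustrative toy data, not the paper's speech material:

```python
# Levinson-Durbin recursion: solves the LP normal equations in O(p^2)
# from the autocorrelation sequence. Input values are toy data.

def levinson_durbin(r, order):
    a = [0.0] * (order + 1)      # a[k]: weight of x[n-k] in the predictor
    err = r[0]                   # prediction error energy
    reflections = []
    for i in range(1, order + 1):
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / err
        reflections.append(k)
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):    # update previous coefficients
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= 1.0 - k * k
    return a[1:], reflections, err

# exact autocorrelation of x[n] = 0.9*x[n-1] + noise, normalized to r[0]=1
lpc, ks, err = levinson_durbin([1.0, 0.9, 0.81], 2)
print(lpc)  # ~[0.9, 0.0]: recovers the AR(1) coefficient
print(err)  # ~0.19
```

The recursion recovers the underlying model coefficient exactly from the autocorrelation, and the byproduct reflection coefficients are the same quantities the abstract lists as part of the compressed representation.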

Publication Date
Fri May 17 2013
Journal Name
International Journal Of Computer Applications
Applied Minimized Matrix Size Algorithm on the Transformed Images by DCT and DWT used for Image Compression

Publication Date
Mon Jun 22 2020
Journal Name
Baghdad Science Journal
Using Evolving Algorithms to Cryptanalysis Nonlinear Cryptosystems

In this paper, a new method is investigated that uses evolving algorithms (EAs) to cryptanalyze one of the nonlinear stream cipher cryptosystems, which depends on the Linear Feedback Shift Register (LFSR) unit, using a ciphertext-only attack. A Genetic Algorithm (GA) and Ant Colony Optimization (ACO) are used for attacking one of the nonlinear cryptosystems, called the "shrinking generator", using different lengths of ciphertext and different lengths of combined LFSRs. The GA and ACO proved their good performance in finding the initial values of the combined LFSRs. This work can be considered a warning for stream cipher designers to avoid the weak points, which may be f…
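The search target of such an attack can be sketched with a toy LFSR and a match-count fitness function: a candidate initial state is scored by how many of its keystream bits agree with an observed segment. Exhaustive search stands in here for the paper's GA/ACO, and the taps and register length are illustrative, far smaller than any real cipher:

```python
# Toy Fibonacci LFSR plus the fitness an evolutionary search could maximize.
# Brute force replaces the GA/ACO; taps and lengths are illustrative.
from itertools import product

def lfsr_stream(state, taps, n):
    state = list(state)
    out = []
    for _ in range(n):
        out.append(state[-1])            # output the last register bit
        fb = 0
        for t in taps:                   # XOR the tapped positions
            fb ^= state[t]
        state = [fb] + state[:-1]        # shift in the feedback bit
    return out

def fitness(candidate, taps, observed):
    # count keystream bits that agree with the observed segment
    stream = lfsr_stream(candidate, taps, len(observed))
    return sum(a == b for a, b in zip(stream, observed))

taps = (0, 3)                            # feedback taps of a 4-bit register
secret = (1, 0, 1, 1)
observed = lfsr_stream(secret, taps, 16)  # stands in for recovered keystream

best = max(product((0, 1), repeat=4), key=lambda s: fitness(s, taps, observed))
print(best == secret)  # True: the initial state is recovered
```

For realistic register lengths the state space is far too large to enumerate, which is where the GA and ACO of the paper come in: they search the same fitness landscape with a population instead of brute force.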

Publication Date
Sat Nov 05 2016
Journal Name
Research Journal Of Applied Sciences, Engineering And Technology
Image Compression Based on Cubic Bezier Interpolation, Wavelet Transform, Polynomial Approximation, Quadtree Coding and High Order Shift Encoding

In this study, an efficient compression system is introduced. It is based on the wavelet transform and two types of 3-dimensional (3D) surface representations: Cubic Bezier Interpolation (CBI) and 1st-order polynomial approximation. Each is applied to a different scale of the image: CBI is applied to wide areas of the image in order to prune the image components that show large-scale variation, while the 1st-order polynomial is applied to the small areas of the residue component (i.e., after subtracting the cubic Bezier surface from the image) in order to prune the locally smooth components and obtain better compression gain. The produced cubic Bezier surface is subtracted from the image signal to get the residue component. Then, t…
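The building block of CBI, evaluating a cubic Bezier from four control values, can be sketched with De Casteljau's algorithm. Fitting control points to image regions and subtracting the resulting surface, as the paper does, is not shown; the control values below are toy data:

```python
# Cubic Bezier evaluation by De Casteljau's algorithm: repeated linear
# interpolation between control values. Control values are toy data.

def bezier3(p0, p1, p2, p3, t):
    lerp = lambda a, b, u: a + (b - a) * u
    # first level: three interpolations between adjacent control values
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    # second level: two interpolations
    d, e = lerp(a, b, t), lerp(b, c, t)
    # final level: the point on the curve
    return lerp(d, e, t)

# the curve interpolates its endpoints and smooths in between
print(bezier3(0.0, 4.0, 4.0, 0.0, 0.0))  # 0.0
print(bezier3(0.0, 4.0, 4.0, 0.0, 1.0))  # 0.0
print(bezier3(0.0, 4.0, 4.0, 0.0, 0.5))  # 3.0
```

Because the curve follows the control values smoothly, a coarse grid of control points captures the large-scale variation of an image region, leaving only a small residue for the 1st-order polynomial stage.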

Publication Date
Thu Jul 25 2019
Journal Name
Advances In Intelligent Systems And Computing
Solving Game Theory Problems Using Linear Programming and Genetic Algorithms
