Lossy Image Compression Using Hybrid Deep Learning Autoencoder Based On K-mean Clustering

Image compression plays an important role in reducing data size and storage requirements while significantly increasing the speed of transmission over the Internet. It has been an active research topic for decades, and with the great successes deep learning has achieved in many areas of image processing, its use in image compression is growing steadily. Deep neural networks have likewise performed well in processing and compressing images of different sizes. In this paper, we present an image compression structure based on a Convolutional AutoEncoder (CAE), inspired by the way human eyes observe the different colors and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-mean color clustering to compress images and determine their size and color intensity. The system is implemented using the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder compression, with better performance in speed and in the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), achieving high compression bit rates with a low Mean Squared Error (MSE). The highest compression ratios ranged from 0.7117 to 0.8707 for the Kodak dataset and from 0.7191 to 0.9930 for the CLIC dataset, while the error coefficient dropped from 0.0126 to 0.0003, making the proposed system more accurate than existing autoencoder-based deep learning methods.
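
A minimal sketch of the kind of pipeline described above: a small convolutional autoencoder whose latent features are quantized with K-means before decoding, with PSNR computed on the reconstruction. The architecture, cluster count, and image size are illustrative assumptions, not the authors' configuration.

```python
# Illustrative CAE + K-means sketch; all hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def quantize_latent(latent, n_clusters=16):
    """Cluster latent values with K-means and replace each value by its centroid."""
    flat = latent.detach().cpu().numpy().reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(flat)
    quantized = km.cluster_centers_[km.labels_].reshape(latent.shape)
    return torch.from_numpy(quantized).float()

def psnr(original, reconstructed):
    mse = np.mean((original - reconstructed) ** 2)
    return 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")

# toy usage on a random "image" batch (values in [0, 1])
model = ConvAutoEncoder()
x = torch.rand(1, 3, 64, 64)
code = quantize_latent(model.encoder(x))
x_hat = model.decoder(code)
print("PSNR:", psnr(x.numpy(), x_hat.detach().numpy()))
```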

Publication Date
Tue Aug 14 2018
Journal Name
International Journal Of Engineering & Technology
Hybrid DWT-DCT compression algorithm & a new flipping block with an adaptive RLE method for high medical image compression ratio

A huge number of medical images are generated, requiring ever more storage capacity and bandwidth for transfer over networks. A hybrid DWT-DCT compression algorithm is applied to compress medical images by exploiting the features of both techniques. Discrete Wavelet Transform (DWT) coding is applied to the image in the YCbCr color model, decomposing each band into four subbands (LL, HL, LH and HH). The LL subband is transformed into low- and high-frequency components using the Discrete Cosine Transform (DCT) and quantized by the scalar quantization applied to all image bands; the quantization parameters were reduced by half for the luminance band and kept the same for the chrominance bands to preserve the image quality. The zig…
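
A hedged sketch of the DWT-then-DCT idea on a single luminance channel: a one-level 2-D DWT, a DCT on the LL subband, and uniform scalar quantization. The wavelet, quantization step, and channel handling are assumptions, and the entropy-coding stages are omitted.

```python
# Hybrid DWT-DCT illustration on one channel; parameters are assumed, not the paper's.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def compress_channel(channel, q_step=8.0):
    # one-level 2-D DWT: approximation (LL) plus three detail subbands
    LL, details = pywt.dwt2(channel.astype(float), "haar")
    # DCT of the low-frequency subband, then scalar quantization of everything
    LL_q = np.round(dctn(LL, norm="ortho") / q_step)
    details_q = tuple(np.round(d / q_step) for d in details)
    return LL_q, details_q

def decompress_channel(LL_q, details_q, q_step=8.0):
    LL = idctn(LL_q * q_step, norm="ortho")
    details = tuple(d * q_step for d in details_q)
    return pywt.idwt2((LL, details), "haar")

# toy usage on a synthetic 64x64 luminance channel
y = np.random.rand(64, 64) * 255
rec = decompress_channel(*compress_channel(y))
print("MSE:", np.mean((y - rec) ** 2))
```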

Publication Date
Fri Aug 12 2022
Journal Name
Future Internet
Improved DDoS Detection Utilizing Deep Neural Networks and Feedforward Neural Networks as Autoencoder

Software-defined networking (SDN) is an innovative network paradigm, offering substantial control of network operation through a network’s architecture. SDN is an ideal platform for implementing projects involving distributed applications, security solutions, and decentralized network administration in a multitenant data center environment due to its programmability. As its usage rapidly expands, network security threats are becoming more frequent, making SDN security a significant concern. Machine-learning (ML) techniques for intrusion detection of DDoS attacks in SDN networks utilize standard datasets and fail to cover all classification aspects, resulting in under-coverage of attack diversity. This paper proposes a hybrid…
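
As one way to picture the autoencoder side of such a detector, the sketch below trains a small feedforward network to reconstruct benign flow features and flags samples with high reconstruction error. The synthetic features, network shape, and threshold are placeholder assumptions, not the paper's model or dataset.

```python
# Feedforward-autoencoder anomaly sketch on synthetic flow features (assumptions throughout).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 10))   # synthetic benign flow features
attack = rng.normal(4.0, 1.0, size=(50, 10))    # synthetic anomalous flows

# feedforward autoencoder: input -> bottleneck -> input
ae = MLPRegressor(hidden_layer_sizes=(8, 3, 8), max_iter=2000, random_state=0)
ae.fit(benign, benign)

def reconstruction_error(model, X):
    return np.mean((model.predict(X) - X) ** 2, axis=1)

# flag anything reconstructed worse than the 95th percentile of benign traffic
threshold = np.percentile(reconstruction_error(ae, benign), 95)
flags = reconstruction_error(ae, attack) > threshold
print(f"flagged {flags.sum()} of {len(attack)} synthetic attack flows")
```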

Publication Date
Wed Jul 29 2020
Journal Name
Iraqi Journal Of Science
Fractal Image Compression Using Block Indexing Technique: A Review

Fractal image compression represents an image using affine transformations. The main concern for researchers in the discipline of fractal image compression (FIC) is to decrease the encoding time needed to compress image data. The basic premise is that each portion of the image is similar to other portions of the same image, and many models have been developed around this idea. Fractals were initially recognized and handled using the Iterated Function System (IFS), which is used for encoding images. This paper reviews fractal image compression and its variants along with other techniques, and summarizes the contributions made to determine the fulfillment of fractal image…
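
For illustration only, the sketch below shows the core range/domain matching step that most FIC schemes (including the block-indexing variants surveyed) build on: each range block is approximated by a contracted domain block under an affine intensity map. The block sizes and the exhaustive search are simplifying assumptions.

```python
# Simplified fractal-coding search; block sizes and search strategy are assumptions.
import numpy as np

def downsample(block):
    # 2x2 averaging to contract a domain block to range-block size
    return block.reshape(block.shape[0] // 2, 2, block.shape[1] // 2, 2).mean(axis=(1, 3))

def best_affine(range_block, domain_block):
    d = downsample(domain_block).ravel()
    r = range_block.ravel()
    s, o = np.polyfit(d, r, 1)            # least-squares fit r ~ s * d + o
    err = np.mean((s * d + o - r) ** 2)
    return s, o, err

def encode(image, rsize=4):
    codes = []
    h, w = image.shape
    for i in range(0, h, rsize):
        for j in range(0, w, rsize):
            r = image[i:i + rsize, j:j + rsize]
            best = min(
                ((di, dj, *best_affine(r, image[di:di + 2 * rsize, dj:dj + 2 * rsize]))
                 for di in range(0, h - 2 * rsize + 1, rsize)
                 for dj in range(0, w - 2 * rsize + 1, rsize)),
                key=lambda c: c[-1])
            codes.append((i, j) + best[:-1])   # (range pos, domain pos, scale, offset)
    return codes

codes = encode(np.random.rand(16, 16))
print(len(codes), "range-block codes")
```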

Publication Date
Wed Mar 23 2011
Journal Name
Ibn Al-Haitham J. For Pure & Appl. Sci.
Image Compression Using Proposed Enhanced Run Length Encoding Algorithm

In this paper, we present a proposed enhancement of image compression using the RLE algorithm. The proposal decreases the size of the compressed image, whereas the original method, used primarily for compressing binary images [1], mostly increases the size of the original image when applied to color images. The enhanced algorithm was tested on a sample of ten BMP 24-bit true-color images; an application was built in Visual Basic 6.0 to show the image size before and after compression and to compute the compression ratio for both the standard RLE and the enhanced RLE algorithm.
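
For context, a minimal baseline run-length encoder applied per color channel is sketched below; on noisy 24-bit color data it tends to expand rather than compress, which is the shortcoming the enhanced algorithm targets. The paper's specific enhancement is not reproduced here, and the two-byte-per-run size estimate is an assumption.

```python
# Baseline per-channel RLE to illustrate the expansion problem on true-color data.
import numpy as np

def rle_encode(values):
    """Encode a 1-D sequence as [value, run_length] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_size_bytes(runs):
    # 1 byte per value + 1 byte per run length (assumed encoding)
    return 2 * len(runs)

# synthetic 24-bit color image: noisy channels produce short runs, so naive RLE expands
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
original = img.size
encoded = sum(rle_size_bytes(rle_encode(img[:, :, c].ravel())) for c in range(3))
print(f"original {original} bytes, naive RLE {encoded} bytes")
```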

Publication Date
Sun Jan 30 2022
Journal Name
Iraqi Journal Of Science
A Survey on Arabic Text Classification Using Deep and Machine Learning Algorithms

Text categorization refers to the process of grouping texts or documents into classes or categories according to their content. The text categorization process consists of three phases: preprocessing, feature extraction, and classification. In comparison to English, only a few studies have been done to categorize and classify the Arabic language. For a variety of applications, such as text classification and clustering, Arabic text representation is a difficult task because Arabic is noted for its richness, diversity, and complicated morphology. This paper presents a comprehensive analysis and comparison of research from the last five years based on the dataset, year, algorithms, and the accuracy…
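
As a generic illustration of the three-phase pipeline the survey describes, the sketch below runs TF-IDF feature extraction and an SVM classifier over a few toy Arabic sentences. The documents, labels, and classifier choice are placeholders and do not correspond to any surveyed work.

```python
# Generic preprocessing -> feature extraction -> classification pipeline (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

docs = ["قرأت كتابا جديدا", "فاز الفريق بالمباراة", "انخفضت أسعار النفط"]
labels = ["culture", "sports", "economy"]

pipeline = Pipeline([
    ("features", TfidfVectorizer()),   # feature extraction after tokenization
    ("classifier", LinearSVC()),       # classification phase
])
pipeline.fit(docs, labels)
print(pipeline.predict(["خسر الفريق المباراة"]))
```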

Publication Date
Fri Sep 01 2023
Journal Name
Journal Of Engineering
Iraqi Sentiment and Emotion Analysis Using Deep Learning

Analyzing sentiment and emotions in Arabic texts on social networking sites has gained wide interest from researchers and has been an active research topic in recent years due to its importance in analyzing reviewers' opinions. The Iraqi dialect is one of the Arabic dialects used on social networking sites; it is characterized by its complexity and, therefore, the difficulty of analyzing its sentiment. This work presents a hybrid deep learning model consisting of a Convolutional Neural Network (CNN) and Gated Recurrent Units (GRU) to analyze sentiment and emotions in Iraqi texts. Three Iraqi datasets (the Iraqi Arab Emotions Data Set (IAEDS), the Annotated Corpus of Mesopotamian-Iraqi Dialect (ACMID), and the Iraqi Arabic Dataset (IAD)) collected…
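
A minimal sketch of a CNN + GRU text classifier in the spirit of the hybrid model described above. The vocabulary size, embedding dimension, convolution width, and number of classes are placeholder assumptions rather than the authors' configuration.

```python
# CNN + GRU hybrid sentiment classifier sketch; all sizes below are assumptions.
import torch
import torch.nn as nn

class CnnGruClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, n_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
        self.gru = nn.GRU(64, 32, batch_first=True)
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)                  # (batch, seq, embed)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, 64, seq)
        _, h = self.gru(x.transpose(1, 2))             # h: (1, batch, 32)
        return self.classifier(h.squeeze(0))           # (batch, n_classes)

# toy forward pass on a batch of two padded token sequences
model = CnnGruClassifier()
logits = model(torch.randint(0, 5000, (2, 20)))
print(logits.shape)  # torch.Size([2, 3])
```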

Publication Date
Wed Jul 25 2018
Journal Name
International Journal Of Engineering Trends And Technology
Polynomial Color Image Compression

Publication Date
Sat Apr 15 2023
Journal Name
Journal Of Robotics
A New Proposed Hybrid Learning Approach with Features for Extraction of Image Classification

Image classification is the process of finding common features in images from various classes and using them to categorize and label the images. The main obstacles in image classification are the abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of image classification is extracting convolutional features from deep learning models and training machine learning classifiers on them. This study proposes a new “hybrid learning” approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class…
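
A hedged sketch of that feature-extraction-plus-classifier pattern: VGG-16 (with its classifier head removed) produces convolutional features that a classical classifier is then trained on. The random images, logistic-regression classifier, and untrained weights below are stand-ins; in practice pretrained VGG-16 weights and the classifiers evaluated in the study would be used.

```python
# VGG-16 feature extraction + classical classifier sketch (placeholder data and classifier).
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

vgg = models.vgg16(weights=None)      # weights=None avoids a download; use pretrained weights in practice
vgg.classifier = torch.nn.Identity()  # drop VGG's own head, keep the 25088-dim flattened features
vgg.eval()

def extract_features(images):
    """images: float tensor of shape (N, 3, 224, 224) -> (N, 25088) feature array."""
    with torch.no_grad():
        return vgg(images).numpy()

# toy usage with random images and binary labels
X = extract_features(torch.rand(8, 3, 224, 224))
y = [0, 1] * 4
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```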

Publication Date
Fri May 17 2013
Journal Name
International Journal Of Computer Applications
Fast Lossless Compression of Medical Images based on Polynomial

In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residual part of the image, which represents the error caused by the polynomial approximation. Huffman coding is applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can lead to promising performance.
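
A simplified sketch of that pipeline on a single block: fit a first-order (planar) polynomial, round its coefficients, compute the integer residual, and run-length code it. The block size and polynomial order are assumptions, and the final Huffman stage is omitted for brevity.

```python
# Per-block polynomial approximation + residual run-length coding (Huffman stage omitted).
import numpy as np

def block_residual(block):
    """Fit z = a*x + b*y + c to the block; return rounded coefficients and integer residual."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs = np.round(np.linalg.lstsq(A, block.ravel(), rcond=None)[0]).astype(int)
    predicted = np.round(A @ coeffs).reshape(h, w).astype(int)
    return coeffs, block.astype(int) - predicted   # residual is what run-length coding sees

def run_length(values):
    runs, prev, count = [], None, 0
    for v in values:
        if v == prev:
            count += 1
        else:
            if prev is not None:
                runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

# toy usage: a smooth 8x8 block reduces to a few coefficients plus one long run
block = np.add.outer(np.arange(8), np.arange(8)) * 2
coeffs, residual = block_residual(block)
print("coefficients:", coeffs, "runs:", run_length(residual.ravel()))
```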

Publication Date
Mon Jan 01 2024
Journal Name
Computers, Materials & Continua
Credit Card Fraud Detection Using Improved Deep Learning Models
