Lossy Image Compression Using Hybrid Deep Learning Autoencoder Based on K-mean Clustering

Image compression plays an important role in reducing data size and storage requirements while significantly increasing transmission speed over the Internet. It has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, deep learning is being used increasingly in image compression. Deep neural networks have also achieved great success in processing and compressing images of various sizes. In this paper, we present a structure for image compression based on a deep-learning Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep-learning system that combines the unsupervised CAE architecture with K-mean color clustering to compress images and determine their size and color intensity. The system is implemented using the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder compression methods, with better performance in terms of speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The method achieved high efficiency with high compression bit rates and a low Mean Squared Error (MSE): the highest compression ratios ranged between 0.7117 and 0.8707 for the Kodak dataset and between 0.7191 and 0.9930 for the CLIC dataset. The system also achieved high accuracy and quality in terms of the error coefficient, which decreased from 0.0126 to 0.0003, making it more accurate than comparable deep-learning autoencoder methods.
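As a hedged illustration of the clustering stage described above, the following minimal pure-Python sketch (not the authors' implementation) quantizes an image's RGB pixels into k representative colors with K-means, the kind of color-clustering step the hybrid system applies alongside the autoencoder. The toy pixel data and k value are assumptions for demonstration.

```python
import random

def kmeans_colors(pixels, k, iters=20, seed=0):
    """Cluster RGB pixel tuples into k representative colors (plain K-means)."""
    rng = random.Random(seed)
    centroids = rng.sample(pixels, k)  # initialize centers from the data itself
    for _ in range(iters):
        # assignment step: each pixel joins its nearest centroid (squared Euclidean)
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k),
                    key=lambda c: sum((p[d] - centroids[c][d]) ** 2 for d in range(3)))
            clusters[i].append(p)
        # update step: move each centroid to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(p[d] for p in cl) / len(cl) for d in range(3))
    return centroids

# toy "image": two obvious color groups (near-red and near-blue)
pixels = [(250, 10, 10), (240, 20, 5), (10, 10, 250), (5, 25, 245)]
palette = kmeans_colors(pixels, k=2)
```

The returned palette plays the role of the quantized color set; a real pipeline would then map each pixel to its nearest palette entry before the CAE stage.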

Publication Date
Fri Apr 15 2016
Journal Name
International Journal Of Computer Applications
Hybrid Techniques based Speech Recognition

Speech recognition is an important application of information processing. In this paper, two hybrid techniques are presented. The first is a 3-level hybrid of the Stationary Wavelet Transform (S) and the Discrete Wavelet Transform (W); the second is a 3-level hybrid of the Discrete Wavelet Transform (W) and Multi-wavelet Transforms (M). To choose the best 3-level hybrid in each technique, a comparison over five factors was carried out, and the best results were WWS, WWW, and MWM. Speech recognition was then performed on WWS, WWW, and MWM using Euclidean distance (Ecl) and Dynamic Time Warping (DTW). The match performance is 98% using DTW on MWM, while on WWS and WWW it is 74% and 78% respectively, but when using (…
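The DTW matching step mentioned above can be sketched with a minimal textbook dynamic-programming implementation (a generic illustration, not this paper's code), which aligns two 1-D feature sequences and returns their warped distance:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# identical shapes shifted in time still match exactly under DTW
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # → 0.0
```

In a recognizer like the one described, `a` and `b` would be wavelet feature sequences rather than raw values.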

Publication Date
Wed Apr 10 2019
Journal Name
Engineering, Technology & Applied Science Research
Content Based Image Clustering Technique Using Statistical Features and Genetic Algorithm

Text-based image clustering (TBIC) is an insufficient approach for clustering related web images, and it is a challenging task to abstract the visual features of images with the support of textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shape. In this paper, an effective CBIC technique is presented, which uses texture and statistical features of the images. The statistical features, or color moments (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected in a one-dimensional array, and a genetic algorithm (GA) is then applied for image clustering.
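The five color moments named above can be computed per channel as in this minimal sketch (a generic feature extractor, not the paper's exact one); the sample channel values are assumptions:

```python
import math

def color_moments(channel):
    """Mean, std, variance, skewness, and kurtosis of one color channel."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((x - mean) ** 2 for x in channel) / n          # population variance
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in channel) / (n * std ** 3) if std else 0.0
    kurt = sum((x - mean) ** 4 for x in channel) / (n * var ** 2) if var else 0.0
    return {"mean": mean, "std": std, "variance": var,
            "skewness": skew, "kurtosis": kurt}

# moments of a toy 4-pixel red channel, flattened into one feature array
feats = color_moments([10, 20, 30, 40])
vector = [feats[k] for k in ("mean", "std", "variance", "skewness", "kurtosis")]
```

Concatenating such per-channel vectors gives the one-dimensional feature array that a GA can then cluster on.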

Publication Date
Tue Apr 30 2024
Journal Name
International Journal On Technical And Physical Problems Of Engineering
Deep Learning Techniques For Skull Stripping of Brain MR Images


Publication Date
Mon Jan 01 2024
Journal Name
AIP Conference Proceedings
Comparative analysis of deep learning techniques for lung cancer identification

Lung cancer is one of the leading causes of death worldwide and is considered among the most lethal diseases. Early detection and diagnosis are essential, as they enable effective therapy and better outcomes for patients. In recent years, deep learning algorithms have shown crucial promise for medical image analysis, especially lung cancer identification. This paper compares several deep learning models using computed tomography (CT) image datasets against traditional Convolutional Neural Networks and SqueezeNet models using X-ray data for the automated diagnosis of lung cancer. Although the simple details p…
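The convolution operation at the core of the CNN models compared above can be illustrated with this minimal pure-Python sketch (a generic valid-mode 2-D convolution layer building block, not any of the paper's models); the input patch and kernel are assumptions:

```python
def conv2d_valid(image, kernel):
    """Valid-mode 2-D cross-correlation, the basic building block of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out

# a horizontal-edge kernel on a toy 4x4 "scan patch"
patch = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [9, 9, 9, 9],
         [9, 9, 9, 9]]
edges = conv2d_valid(patch, [[1, 1], [-1, -1]])  # responds only where rows change
```

Real models stack many such filters with learned weights, nonlinearities, and pooling; this shows only the single-filter arithmetic.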

Publication Date
Thu Jun 01 2023
Journal Name
International Journal Of Electrical And Computer Engineering (IJECE)
An optimized deep learning model for optical character recognition applications

Convolutional neural networks (CNNs) are among the most widely used neural networks in deep learning applications. In recent years, the continuing extension of CNNs into increasingly complicated domains has made their training more difficult, so researchers have adopted optimized hybrid algorithms to address this problem. In this work, a novel chaotic black hole algorithm-based approach was created for training a CNN, optimizing its performance by avoiding entrapment in local minima. The logistic chaotic map was used to initialize the population instead of a uniform distribution. The proposed training algorithm was developed on a benchmark problem for optical character recognition …
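Initializing an optimizer's population with the logistic chaotic map, as described above, can be sketched as follows; this is a generic illustration with assumed bounds, seed value, and population size, not the paper's exact scheme:

```python
def logistic_population(pop_size, dim, lower, upper, x0=0.7, r=4.0):
    """Generate candidate solutions via the logistic map x <- r*x*(1-x)."""
    x = x0
    population = []
    for _ in range(pop_size):
        candidate = []
        for _ in range(dim):
            x = r * x * (1.0 - x)                          # chaotic iteration in (0, 1)
            candidate.append(lower + x * (upper - lower))  # scale to search bounds
        population.append(candidate)
    return population

pop = logistic_population(pop_size=5, dim=3, lower=-1.0, upper=1.0)
```

Compared with uniform sampling, the chaotic sequence is deterministic yet non-repeating, which is the property such initializations exploit for search-space coverage.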

Publication Date
Wed Mar 16 2022
Journal Name
International Journal Of Recent Contributions From Engineering, Science & IT
Smart Learning based on Moodle E-learning Platform and Digital Skills for University Students

Publication Date
Sat Apr 01 2023
Journal Name
International Journal Of Electrical And Computer Engineering (IJECE)
Image encryption algorithm based on the density and 6D logistic map

The transmission of secure images is one of the most difficult issues in the history of communication technology. On the internet, photos are used and shared by millions of individuals for both private and business reasons. One way to achieve safe image transfer over the network is to use encryption methods that change the original image into an unintelligible or scrambled version. Cryptographic approaches based on chaotic logistic theory provide several new and promising options for developing secure image encryption methods. The main aim of this paper is to build a secure system for encrypting gray and color images. The proposed system consists of two stages; the first stage is the encryption process, in which the keys are generated …
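As a hedged sketch of chaotic-logistic encryption (a minimal 1-D illustration, not the paper's 6-D map or density-based system), the following derives a byte keystream from the logistic map and XORs it with pixel bytes; the key parameters and pixel values are assumptions:

```python
def logistic_keystream(length, x0=0.3456, r=3.99):
    """Derive a byte keystream from logistic-map iterations."""
    x, stream = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)                    # chaotic iteration in (0, 1)
        stream.append(int(x * 255.999) & 0xFF)   # map the state to a byte
    return stream

def xor_image(pixels, x0=0.3456, r=3.99):
    """Encrypt (or decrypt: XOR is symmetric) a flat list of pixel bytes."""
    ks = logistic_keystream(len(pixels), x0, r)
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [12, 200, 33, 97, 255, 0]
cipher = xor_image(plain)
assert xor_image(cipher) == plain  # the same key parameters recover the image
```

The key sensitivity comes from the map's chaotic dependence on `x0` and `r`; a slightly different `x0` yields a completely different keystream.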

Publication Date
Sun May 01 2016
Journal Name
Iraqi Journal Of Science
Efficient text in image hiding method based on LSB method principle

Steganography (text-in-image hiding) methods are still considered important issues by researchers at the present time. Steganography methods vary in their hiding styles, from simple techniques to complex techniques that are resistant to potential attacks. The current research does not consider attacks on the host's secret text; instead, an improved, highly confidential method for hiding text within an image is proposed and implemented, combined with a strong password method, so as to ensure that no perceptible change is made to the pixel values of the host image after text hiding. The phrase "highly confidential" denotes the low suspicion the cover image raises that hiding has been performed. The experimental results show that the covered …
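For orientation, here is the textbook LSB baseline that methods in this family build on (a generic sketch, not this paper's improved scheme); it hides text bits in the least significant bit of each grayscale pixel, changing each pixel value by at most 1:

```python
def embed_lsb(pixels, text):
    """Hide text bits in the least significant bits of grayscale pixels."""
    bits = [(b >> i) & 1 for b in text.encode() for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover image too small"
    return [(p & ~1) | bit for p, bit in zip(pixels, bits)] + pixels[len(bits):]

def extract_lsb(pixels, n_chars):
    """Read n_chars bytes back out of the LSBs."""
    bits = [p & 1 for p in pixels[:n_chars * 8]]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode()

cover = list(range(100, 180))      # toy 80-pixel grayscale strip
stego = embed_lsb(cover, "hi")
assert extract_lsb(stego, 2) == "hi"
```

A password-protected variant like the one described would additionally permute or encrypt the bit positions before embedding.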

Publication Date
Tue Dec 05 2023
Journal Name
Baghdad Science Journal
Robust Color Image Encryption Scheme Based on RSA via DCT by Using an Advanced Logic Design Approach

Information security in data storage and transmission is increasingly important. At the same time, images are used in many procedures, so preventing unauthorized access to image data by encrypting images is crucial for protecting sensitive data and privacy. The methods and algorithms for masking or encoding images range from simple spatial-domain methods to frequency-domain methods, which are the most complex and reliable. In this paper, a new cryptographic system is presented based on a random-key-generator hybridization methodology, taking advantage of the properties of the Discrete Cosine Transform (DCT) to generate an indefinite set of random keys, and taking advantage of the low-frequency region coefficients after the DCT stage to pass them to …
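The DCT stage referred to above can be illustrated with a minimal 1-D orthonormal DCT-II (the standard definition, not the paper's key-generation pipeline); the "low-frequency coefficients" are simply the first entries of the output. The sample row is an assumption:

```python
import math

def dct_1d(signal):
    """Orthonormal 1-D DCT-II; low-frequency energy lands in the first coefficients."""
    n = len(signal)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(signal))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

row = [50, 55, 60, 58, 52, 48, 47, 49]  # one assumed image row
coeffs = dct_1d(row)
low_freq = coeffs[:4]                   # the region a scheme like this would draw on
```

Because this DCT is orthonormal, it preserves signal energy, which is why most of a smooth image row's energy concentrates in those first few coefficients.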

Publication Date
Fri Apr 14 2023
Journal Name
Journal Of Big Data
A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance; unfortunately, many applications have small or inadequate data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is fed a significant amount of labeled data in order to learn representations automatically. Ultimately, a larger amount of data generates a better DL model, and its performance is also application-dependent. This issue is the main barrier for …