Image compression plays an important role in reducing data size and storage requirements while significantly increasing transmission speed over the Internet. It has been an important research topic for several decades, and with the recent successes of deep learning in many areas of image processing, deep neural networks are increasingly applied to compressing images of various sizes. In this paper, we present an image compression structure based on a Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is trained and evaluated on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method outperforms traditional autoencoder-based compression in both speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), achieving high compression bit rates with a low Mean Squared Error (MSE). The recorded compression ratios ranged from 0.7117 to 0.8707 for the Kodak dataset and from 0.7191 to 0.9930 for the CLIC dataset.
The system also achieved high accuracy and quality, with the error coefficient falling from 0.0126 to 0.0003, making it more accurate than comparable autoencoder-based deep learning methods.
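The K-means color-clustering stage this abstract pairs with the CAE can be sketched in plain Python. This is a minimal illustration under assumed conventions (an image as a flat list of RGB tuples, deterministic initialization from the first distinct colors); the CAE itself and real image I/O are omitted, and the function names are hypothetical:

```python
import math

def kmeans_palette(pixels, k, iters=10):
    """Cluster RGB pixels into k palette colors (naive K-means).
    Deterministic init: the first k distinct colors in the image."""
    centers = []
    for p in pixels:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in pixels:
            i = min(range(len(centers)),
                    key=lambda c: sum((p[d] - centers[c][d]) ** 2 for d in range(3)))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:  # recenter on the mean of the assigned pixels
                centers[i] = tuple(sum(p[d] for p in g) / len(g) for d in range(3))
    return centers

def quantize(pixels, centers):
    """Replace each pixel with its nearest palette color (the lossy step)."""
    return [min(centers, key=lambda c: sum((p[d] - c[d]) ** 2 for d in range(3)))
            for p in pixels]

def psnr(a, b):
    """Peak Signal-to-Noise Ratio between two RGB pixel lists, in dB."""
    mse = sum(sum((x[d] - y[d]) ** 2 for d in range(3))
              for x, y in zip(a, b)) / (3 * len(a))
    return float('inf') if mse == 0 else 10 * math.log10(255 ** 2 / mse)
```

Shrinking the palette to k colors is what reduces the stored color intensity information; PSNR then measures the reconstruction quality exactly as the abstract's quality metric does.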
Merging biometrics with cryptography has become increasingly familiar, opening a rich scientific field for researchers. Biometrics add a distinctive property to security systems, because biometric features are unique to every individual. In this study, a new method is presented for ciphering data based on fingerprint features. The plaintext message is addressed, according to the positions of the minutiae extracted from a fingerprint, into a generated random text file, regardless of the size of the data. The proposed method can be explained in three scenarios. In the first scenario, the message is placed directly inside the random text at the minutiae positions; in the second scenario, the message is encrypted with a chosen word before ciphering
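The first scenario above can be sketched as follows. This is a minimal illustration with assumed details: the minutiae-derived positions are supplied as a plain index list (actual minutiae extraction from a fingerprint image is not shown), and the function names are hypothetical:

```python
import random
import string

def embed_message(message, positions, length=200, seed=7):
    """Hide message characters inside generated random cover text at the
    positions derived from fingerprint minutiae (here: a given index list)."""
    rng = random.Random(seed)
    cover = [rng.choice(string.ascii_letters) for _ in range(length)]
    for ch, pos in zip(message, positions):
        cover[pos] = ch  # overwrite the cover character at each minutia position
    return ''.join(cover)

def extract_message(cover, positions, message_length):
    """Recover the message: read the cover text back at the same positions."""
    return ''.join(cover[p] for p in positions[:message_length])
```

Because the message characters are indistinguishable from the surrounding random letters, only a party holding the same fingerprint (and hence the same position list) can locate them.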
In this paper, an algorithm for steganography is proposed that uses the DCT for the cover image and the DWT for the hidden image, together with an embedding-order key. For more security and complexity, the cover image is converted from RGB to YIQ; the Y plane is used, divided into four equal parts, and converted to the DCT domain. The four DWT coefficient sub-bands of the hidden image are embedded into the four parts of the cover DCT, with the embedding order determined by the order key, which is stored with the cover in a database table on both the sender and receiver sides. Experimental results show that the proposed algorithm successfully hides information in the cover image. Microsoft Office Access 2003 is used as the DBMS; the hiding and extracting algo
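The key-ordered embedding in the DCT domain can be illustrated with a simplified 1-D sketch. This is an assumption-laden stand-in for the paper's 2-D DCT/DWT pipeline: signals are flat lists, the hidden DWT coefficients are given directly as numbers, and the order key is modeled as a list of coefficient indices; the function names are hypothetical:

```python
import math

def dct(x):
    """Naive unnormalized 1-D DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse of dct above (DCT-III, rescaled by 2/N)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * k * (n + 0.5))
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

def embed(cover, secrets, order_key):
    """Overwrite the cover's DCT coefficients at key-selected indices,
    then return the stego signal in the spatial domain."""
    C = dct(cover)
    for val, idx in zip(secrets, order_key):
        C[idx] = val
    return idct(C)

def extract(stego, order_key, n):
    """Re-transform the stego signal and read coefficients back in key order."""
    C = dct(stego)
    return [round(C[i], 6) for i in order_key[:n]]
```

Without the order key, a third party does not know which coefficients carry the hidden data, which is the security role the abstract assigns to the key stored in the shared database table.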
This study aims to enhance the RC5 algorithm to improve encryption and decryption speeds on devices with limited power and memory resources. These resource-constrained applications, which range from wearables and smart cards to microscopic sensors, frequently operate in settings where traditional cryptographic techniques are impracticable because of their high computational overhead and memory requirements. The Enhanced RC5 (ERC5) algorithm integrates the PKCS#7 padding method to adapt effectively to various data sizes. Empirical investigation reveals significant improvements in encryption speed with ERC5, ranging from 50.90% to 64.18% for audio files and 46.97% to 56.84% for image files, depending on file size. A substanti
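The PKCS#7 padding stage that ERC5 integrates is standard and can be shown directly; RC5 itself is omitted here. A minimal sketch, with hypothetical function names:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """Append n bytes, each of value n, so the length becomes a
    multiple of block_size (n is always 1..block_size)."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    """Strip and verify PKCS#7 padding."""
    n = data[-1]
    if n == 0 or n > len(data) or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid PKCS#7 padding")
    return data[:-n]
```

Note that input whose length is already a multiple of the block size still gains a full padding block; this is what lets the unpadder work unambiguously for any data size, which is the adaptability the abstract claims for ERC5.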
Imitation learning is an effective method for training an autonomous agent to accomplish a task by imitating expert behavior from demonstrations. However, traditional imitation learning methods require a large number of expert demonstrations to learn a complex behavior, which has limited the potential of imitation learning in complex tasks where expert demonstrations are insufficient. To address this problem, we propose a Generative Adversarial Network-based model designed to learn optimal policies using only a single demonstration. The proposed model is evaluated on two simulated tasks in comparison with other methods. The results show that our proposed model is capable of completing co
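The adversarial idea behind such GAN-based imitation can be illustrated on a deliberately tiny toy problem. This is not the paper's model: it is a sketch, under strong simplifying assumptions, of a one-step task with two actions, a per-action logistic discriminator, and a softmax policy updated by REINFORCE with the discriminator's score as reward; all names are hypothetical:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adversarial_imitation(expert_action, actions=2, steps=2000, seed=0):
    """Toy GAN-style imitation: the discriminator learns to score the
    expert's action high and the policy's samples low; the policy is
    rewarded for actions the discriminator mistakes for the expert's."""
    rng = random.Random(seed)
    d = [0.0] * actions  # discriminator logits, one per action
    p = [0.0] * actions  # policy logits, one per action
    for _ in range(steps):
        probs = [math.exp(x) for x in p]
        s = sum(probs)
        probs = [x / s for x in probs]
        a = rng.choices(range(actions), probs)[0]  # sample from the policy
        # Discriminator step: push the expert action up, the policy sample down.
        d[expert_action] += 0.5 * (1.0 - sigmoid(d[expert_action]))
        d[a] -= 0.5 * sigmoid(d[a])
        # Policy step (REINFORCE): the discriminator score acts as reward.
        reward = d[a]
        for b in range(actions):
            grad = (1.0 if b == a else 0.0) - probs[b]
            p[b] += 0.1 * reward * grad
    probs = [math.exp(x) for x in p]
    s = sum(probs)
    return [x / s for x in probs]
```

Even with a single "demonstration" (one expert action), the adversarial signal is enough to pull the policy toward the expert's choice, which is the core intuition the abstract relies on.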
It is well known that petroleum refineries are the largest generators of oily sludge, which can seriously threaten the environment if disposed of without treatment. The present research shows that a hybrid process combining ultrasonic treatment with froth flotation provides a green, efficient treatment of oily sludge waste from the bottom of crude oil tanks in the Al-Daura refinery, recovering a high yield of base oil (65%) at the optimum operating conditions (treatment time = 30 min, ultrasonic wave amplitude = 60 micron, and solvent-to-oily-sludge ratio = 4). Experimental results showed that 83% of the solvent used was recovered, while the main water
This study presents an investigation of the effect of fire flame on the punching shear strength of hybrid fiber-reinforced concrete flat plates. The main parameters considered are the fiber type (steel or glass) and the burning steady-state temperature (500 or 600°C). A total of 9 half-scale flat plate specimens of dimensions 1500 mm × 1500 mm × 100 mm with a 1.5% fiber volume fraction were cast and divided into 3 groups of 3 identical specimens each. The specimens of the second and third groups were subjected to fire flame for 1 hour at steady-state temperatures of 500 and 600°C, respectively. For the cooling process, water sprinkling was applied directly aft
The preparation of epoxy/TiO2 and epoxy/Al2O3 nanocomposites is studied and investigated in this paper. The nanocomposites are processed with different nanofiller concentrations (0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.07, and 0.1 wt%). The particle sizes of TiO2 and Al2O3 are about 20–50 nm. To prepare epoxy/TiO2–Al2O3 hybrid composites, epoxy resin and nanofillers of different shapes (TiO2:Al2O3 composites, mixed at a 1:1 ratio) are shear-mixed at different hybrid filler concentrations (0.025, 0.05, 0.15, 0.2, and 0.25 wt%). Mechanical properties of the nanocomposites, such as bending, wear, and fatigue, are investigated.
Dr. Ali Jihad, Journal of Physical Education, 2021
In recent years there has been a profound evolution in computer science and technology spanning several fields. Within this evolution, Content-Based Image Retrieval (CBIR) belongs to the image processing field. Thanks to progress in image retrieval methods, several techniques can now easily extract features, and finding efficient image retrieval tools has therefore become an extensive area of concern for researchers. An image retrieval technique is a system used to search and retrieve images from a huge database of digital images. In this paper, the author proposes a new method for image retrieval. For multiple representations of an image in a Convolutional Neural Network (CNN),
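The basic CBIR loop of "extract a feature vector, then rank the database by similarity" can be sketched in plain Python. As a deliberate simplification, a global color histogram stands in for the CNN features the paper uses, and cosine similarity provides the ranking; the function names and data layout (images as lists of RGB tuples) are assumptions:

```python
import math

def color_histogram(pixels, bins=4):
    """Crude global descriptor: joint RGB histogram, normalized to sum 1.
    A stand-in for the CNN feature maps described in the paper."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query_pixels, database):
    """Rank database images (name -> pixel list) by similarity to the query."""
    q = color_histogram(query_pixels)
    scored = [(cosine(q, color_histogram(px)), name)
              for name, px in database.items()]
    return [name for score, name in sorted(scored, reverse=True)]
```

Swapping `color_histogram` for a learned CNN embedding changes only the descriptor; the indexing and ranking machinery stays the same, which is why CBIR systems are usually described in terms of this two-stage structure.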
The present work aims to study the effect of using an automatic thresholding technique to convert the feature edges of images into binary images in order to separate an object from its background. The feature edges of the sampled images are obtained with first-order edge detection operators (Roberts, Prewitt, and Sobel) and second-order edge detection operators (Laplacian operators). The optimum automatic threshold is calculated using the fast Otsu method. The study is applied to a personal image (Roben) and a satellite image to examine the compatibility of this procedure with two different kinds of images. The obtained results are discussed.
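The fast Otsu step mentioned above sweeps the 256-bin gray-level histogram once, choosing the threshold that maximizes between-class variance. A minimal sketch, assuming the edge image is already a flat list of 0–255 gray levels (the edge operators themselves are not shown, and the function names are hypothetical):

```python
def otsu_threshold(gray):
    """Otsu's method: pick t maximizing between-class variance
    over the 0..255 histogram, using running sums (single sweep)."""
    hist = [0] * 256
    for g in gray:
        hist[g] += 1
    total = len(gray)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = 0      # pixel count of the background class (levels <= t)
    sum0 = 0    # gray-level sum of the background class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                    # background mean
        m1 = (total_sum - sum0) / w1      # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2    # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray, t):
    """Split object from background: 1 above the threshold, 0 otherwise."""
    return [1 if g > t else 0 for g in gray]
```

Because only the histogram is scanned rather than every candidate split of the raw pixels, the method stays fast regardless of image size, which is the point of the "fast Otsu" variant named in the abstract.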