Image compression plays an important role in reducing data size and storage requirements while significantly increasing transmission speed over the Internet. It has been an important research topic for several decades, and with the great successes recently achieved by deep learning in many areas of image processing, its use in image compression is growing steadily. Deep neural networks have also achieved great success in processing and compressing images of different sizes. In this paper, we present an image compression structure based on a deep Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is implemented using the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that our proposed method is superior to traditional autoencoder-based compression methods, with better speed and better scores on the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) quality measures. With high compression bit rates and a low Mean Squared Error (MSE), the results recorded the highest compression ratios, ranging between 0.7117 and 0.8707 for the Kodak dataset and between 0.7191 and 0.9930 for the CLIC dataset. The system achieved high accuracy and quality, with the error coefficient ranging from 0.0126 down to 0.0003, making it more accurate and of higher quality than autoencoder-based deep learning methods.
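As an illustration of the kind of pipeline described above, here is a minimal sketch combining a convolutional autoencoder with K-means color clustering. The layer sizes, training procedure, and the exact way the two stages are combined are assumptions for illustration; the abstract does not specify them.

```python
# A minimal CAE-plus-K-means sketch (illustrative assumptions, not the
# paper's exact architecture or training setup).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample a 3-channel image to a compact latent map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct the image from the latent map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def kmeans_color_quantize(image, n_colors=16):
    """Cluster pixel colors with K-means and map each pixel to its centroid."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c)
    km = KMeans(n_clusters=n_colors, n_init=4).fit(pixels)
    return km.cluster_centers_[km.labels_].reshape(h, w, c)

# Usage sketch: train the CAE with MSE loss, then color-quantize the output.
model = CAE()
x = torch.rand(1, 3, 64, 64)              # stand-in for a Kodak/CLIC image
recon = model(x)                          # untrained here; training loop omitted
loss = nn.functional.mse_loss(recon, x)
quantized = kmeans_color_quantize(recon.detach().squeeze(0).permute(1, 2, 0).numpy())
```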
Feature selection (FS) is a series of processes used to decide which relevant features/attributes to include and which irrelevant features to exclude for predictive modeling. It is a crucial task that helps machine learning classifiers reduce error rates, computation time, and overfitting, and improve classification accuracy. It has demonstrated its efficacy in a myriad of domains, ranging from text classification (TC) and text mining to image recognition. While there are many traditional FS methods, recent research efforts have been devoted to applying metaheuristic algorithms as FS techniques for the TC task. However, there are few literature reviews concerning TC; therefore, a comprehensive overview was systematically conducted.
The spiral galaxy NGC 6946 was observed with BVRI filters on October 15-18, 2012, at the Newtonian focus of the 1.88 m telescope of the Kottamia observatory of the National Research Institute of Astronomy and Geophysics (NRIAG), Egypt. We then combined the BVRI filters to obtain an astronomical image of NGC 6946, which is regarded as a main source of information for discovering the components of this galaxy; galaxies are considered the essential elements of the universe. To identify the components of NGC 6946, we studied it with the Variable Precision Rough Sets technique, determining the contributions of the bulge, disk, and arms of NGC 6946 according to the different colors in the image. From the image we can determine these components.
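For illustration, below is a minimal sketch of variable precision rough set (β-)approximations on hypothetical toy data; applying it to NGC 6946 would use pixel attributes derived from the combined BVRI image, which the abstract does not detail.

```python
# A minimal VPRS sketch with hypothetical toy data: objects are pixel ids,
# the attribute is a color bin, and the target set is "arm" pixels.
from collections import defaultdict

def vprs_approximations(objects, attribute, target, beta=0.8):
    """beta-lower and beta-upper approximations of `target` under the
    equivalence relation induced by `attribute`."""
    # Partition objects into equivalence classes by attribute value.
    classes = defaultdict(set)
    for obj in objects:
        classes[attribute(obj)].add(obj)
    lower, upper = set(), set()
    for eq in classes.values():
        inclusion = len(eq & target) / len(eq)   # degree of inclusion in target
        if inclusion >= beta:
            lower |= eq                          # confidently in the region
        if inclusion > 1 - beta:
            upper |= eq                          # possibly in the region
    return lower, upper

# Toy usage with a hypothetical color binning of ten pixels.
pixels = set(range(10))
color_bin = lambda p: p // 3
arm_pixels = {0, 1, 2, 3, 7}
lo, up = vprs_approximations(pixels, color_bin, arm_pixels, beta=0.75)
print(lo, up)
```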
In this research, we studied the impact strength and the bending and compression strength of composites consisting of epoxy resin as a matrix, with gawaian red wood flour, Russian white wood flour, glass powder, and rock wool fibers as reinforcement materials, at a volume fraction of 20% for all samples, and compared them under different temperature conditions. The results show that the impact strength increased with reinforcement (by particles and fibers) and at high temperatures for all prepared samples; we also observed an increase in the elasticity coefficient of epoxy composites filled with the different particles and a decrease in the elasticity coefficient of epoxy composites.
The research aimed at designing a teaching program using the jigsaw strategy in learning spiking in volleyball, as well as identifying the effect of these exercises on learning spiking in volleyball. The researchers used the experimental method on 25 students as the experimental group, 27 students as the control group, and 15 students as the pilot study group. The researchers conducted spiking tests, then the data were collected and treated using appropriate statistical operations, leading to the conclusion that the strategy has a positive effect in the experimental group. Finally, the researchers recommended using the strategy in similar studies on other subjects and skills.
A novel median filter based on the crow optimization algorithm (OMF) is suggested to reduce random salt-and-pepper noise and improve the quality of RGB-colored and gray images. The fundamental idea of the approach is that the crow optimization algorithm first detects noise pixels and then replaces them with an optimum median value chosen by maximizing a fitness function. Finally, the standard measures peak signal-to-noise ratio (PSNR), structural similarity, absolute square error, and mean square error were used to test the performance of the suggested filters (original and improved median filter) in removing noise from images. The simulation is based on MATLAB R2019b, and the results are assessed with these measures.
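For context, the following is a simplified detect-then-replace baseline for salt-and-pepper noise. It assumes extreme-valued pixels (0 or 255) are noise and uses a plain window median; the paper's OMF instead chooses the replacement via crow search optimization of a fitness function, which is not reproduced here.

```python
# Simplified salt-and-pepper baseline: detect extreme pixels, replace with
# the window median (not the crow-optimized variant from the paper).
import numpy as np

def median_denoise(img, ksize=3):
    """Replace only suspected noise pixels with the median of their window."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    noisy = (img == 0) | (img == 255)            # crude salt/pepper detector
    for i, j in zip(*np.nonzero(noisy)):
        window = padded[i:i + ksize, j:j + ksize]
        out[i, j] = np.median(window)
    return out

# Usage on a toy grayscale image with injected noise, scored with PSNR.
rng = np.random.default_rng(0)
clean = rng.integers(50, 200, size=(32, 32)).astype(np.uint8)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05
noisy[mask] = rng.choice([0, 255], size=mask.sum())
restored = median_denoise(noisy)
mse = np.mean((clean.astype(float) - restored) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse) if mse else float("inf")
print(f"PSNR: {psnr:.2f} dB")
```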
In order to increase the level of security, this system encrypts the secret image before sending it through the Internet to the recipient, using the Blowfish method. The Blowfish method is known for its efficient security; nevertheless, its encryption time is long. In this research, we apply a smoothing filter to the secret image, which decreases its size and consequently reduces the encryption and decryption time. After encryption, the secret image is hidden inside another image, called the cover image, using one of two methods: "Two-LSB" or "hiding most bits in blue pixels". Finally, we compare the results of the two methods according to the PSNR measure to determine which one is better.
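As an illustration of the first hiding method, here is a minimal two-LSB embedding sketch on a toy grayscale cover. Blowfish encryption, the smoothing filter, and the "blue pixels" variant are omitted, and all names and sizes are hypothetical.

```python
# Minimal two-LSB steganography sketch: each cover byte carries two secret
# bits in its lowest bits (the secret would be the encrypted image's bytes).
import numpy as np

def embed_two_lsb(cover, secret_bits):
    """Hide a bit array (length a multiple of 2) in the two LSBs of `cover`."""
    flat = cover.flatten()
    pairs = secret_bits.reshape(-1, 2)
    assert len(pairs) <= len(flat), "cover too small for the payload"
    for k, (b1, b0) in enumerate(pairs):
        flat[k] = (flat[k] & 0b11111100) | (b1 << 1) | b0
    return flat.reshape(cover.shape)

def extract_two_lsb(stego, n_bits):
    """Recover `n_bits` secret bits from the two LSBs of the stego image."""
    flat = stego.flatten()[: n_bits // 2]
    bits = [(byte >> s) & 1 for byte in flat for s in (1, 0)]
    return np.array(bits[:n_bits], dtype=np.uint8)

# Usage: hide 16 random bits in a toy 8x8 cover image and recover them.
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = rng.integers(0, 2, size=16, dtype=np.uint8)
stego = embed_two_lsb(cover.copy(), payload)
assert np.array_equal(extract_two_lsb(stego, 16), payload)
```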
Most studies on deep beams have addressed reinforced concrete deep beams; only a few studies investigate the response of prestressed deep beams and, to the best of our knowledge, no study investigates the response of full-scale (T-section) prestressed deep beams with large web openings. An experimental and numerical study was conducted to investigate the shear strength of ordinary reinforced and partially prestressed full-scale (T-section) deep beams containing large web openings, in order to examine the effect of the presence of prestressing on deep beam response and to better understand the effects of prestressing locations and of the opening-depth-to-beam-depth ratio on deep beam performance and behavior.
This article considers a shrunken estimator of Al-Hermyari and Al-Gobuii (1) to estimate the mean (θ) of a normal distribution N(θ, σ²) with known variance (σ²), when a guess value (θ₀) is available about the mean (θ) as an initial estimate. This estimator is shown to be more efficient than the classical estimators, especially when θ is close to θ₀. General expressions for the bias and MSE of the considered estimator are given, with some examples. Numerical results, comparisons, and conclusions are reported.
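For reference, the general form of a shrunken (shrinkage) estimator toward a guess value θ₀, together with its bias and MSE under N(θ, σ²) sampling, is sketched below; the specific shrinkage weight used by the cited authors is not given in the abstract.

```latex
% General shrinkage form toward a prior guess \theta_0 (assumed standard
% construction; the cited authors' specific weight k is not reproduced here).
\[
  \tilde{\theta} = k\,\bar{X} + (1 - k)\,\theta_0, \qquad 0 \le k \le 1,
\]
\[
  \operatorname{Bias}(\tilde{\theta}) = (1 - k)(\theta_0 - \theta), \qquad
  \operatorname{MSE}(\tilde{\theta}) = k^{2}\,\frac{\sigma^{2}}{n}
    + (1 - k)^{2}(\theta - \theta_0)^{2},
\]
% so for k < 1 the MSE drops below \sigma^2/n (the MSE of \bar{X})
% whenever \theta is sufficiently close to \theta_0, matching the
% efficiency claim in the abstract.
```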