Image compression plays an important role in reducing data size and storage requirements while significantly increasing transmission speed over the Internet. It has been an active research topic for several decades, and with the recent successes of deep learning across many areas of image processing, its use in image compression has grown steadily. Deep neural networks have likewise achieved strong results in processing and compressing images of various sizes. In this paper, we present an image compression architecture based on a deep Convolutional AutoEncoder (CAE), inspired by the way the human eye perceives the different colours and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means colour clustering to compress images and control their size and colour intensity. The system is trained and evaluated on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method outperforms traditional autoencoder-based compression, achieving better speed and better quality as measured by Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). At high compression bit rates and low Mean Squared Error (MSE), the system recorded compression ratios ranging from 0.7117 to 0.8707 on the Kodak dataset and from 0.7191 to 0.9930 on the CLIC dataset, with MSE values falling from 0.0126 down to 0.0003, making it more accurate than comparable autoencoder-based deep learning methods.
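For orientation, the sketch below shows one minimal way to combine the two ingredients the abstract names: a small convolutional autoencoder trained with an MSE reconstruction loss, plus K-means colour quantisation. The layer sizes, the 128×128×3 input shape, and the 8-colour palette are assumptions made for this illustration, not the authors' exact architecture.

```python
# Minimal illustrative sketch: a small convolutional autoencoder plus
# K-means colour quantisation. Hyperparameters are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.cluster import KMeans

def build_cae(input_shape=(128, 128, 3)):
    inp = keras.Input(shape=input_shape)
    # Encoder: two strided convolutions compress the image 4x per side.
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: transposed convolutions restore the original resolution.
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")  # MSE reconstruction loss
    return model

def kmeans_quantise(image, n_colors=8):
    """Cluster pixel colours with K-means and map each pixel to its centroid."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c)
    km = KMeans(n_clusters=n_colors, n_init=10).fit(pixels)
    return km.cluster_centers_[km.labels_].reshape(h, w, c)
```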
Statistical learning theory serves as the foundational bedrock of machine learning (ML), which in turn represents the backbone of artificial intelligence, ushering in innovative solutions to real-world challenges. Its origins lie at the intersection of statistics and computing, from which it evolved into a distinct scientific discipline. Machine learning can be distinguished by its fundamental branches, encompassing supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Within this tapestry, supervised learning takes centre stage, divided into two fundamental forms: classification and regression. Regression is tailored for continuous outcomes, while classification specializes in categorical ones…
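The regression/classification distinction the abstract draws can be made concrete with a toy scikit-learn example; the data and models below are invented purely for illustration.

```python
# Illustrative only: regression predicts a continuous value,
# classification predicts a discrete label, on made-up toy data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y_continuous = np.array([1.1, 1.9, 3.2, 3.9])   # regression target
y_discrete = np.array([0, 0, 1, 1])             # classification labels

reg = LinearRegression().fit(X, y_continuous)
clf = LogisticRegression().fit(X, y_discrete)

print(reg.predict([[2.5]]))   # a real-valued estimate (roughly 2.5)
print(clf.predict([[2.5]]))   # a class label, 0 or 1
```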
In this work, hybrid composite materials were prepared consisting of a polymer matrix (polyethylene, PE) reinforced with different reinforcing materials (alumina powder Al2O3 + carbon black powder CB + silica powder SiO2). The hybrid composite materials prepared were:
• H1 = PE + Al2O3 + CB
• H2 = PE + CB + SiO2
• H3 = PE + Al2O3 + CB + SiO2
All samples for the electrical tests were prepared by injection moulding. The mechanical tests included compression at different temperatures and in different chemical solutions at different immersion times. The mechanical test results favoured the H3 samples, with an obvious weakness in the H1 samples, and showed a decrease in these properties with rising temperature and increasing…
In recent years, Bitcoin has become the most widely used blockchain platform in business and finance. The goal of this work is to find a viable prediction model that incorporates, and perhaps improves on, a combination of available models. Among the techniques utilized in this paper are exponential smoothing, ARIMA, artificial neural network (ANN) models, and prediction combination models. The study's most notable finding is that artificial intelligence models improve the results of compound prediction models. The second key finding is that a strong combination forecasting model should be used, one that responds to the multiple fluctuations in the Bitcoin time series and improves on the individual models' errors. Based on the results, the prediction accuracy…
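The abstract does not spell out the combination scheme, but a common one is to weight individual forecasts by the inverse of their validation error; the sketch below applies that idea to an ARIMA model and a small autoregressive neural network. The ARIMA order, the window length, and the network size are placeholders chosen for the example.

```python
# Hedged sketch of inverse-MSE forecast combination (ARIMA + small ANN).
# All model settings here are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def combined_forecast(series, lags=5):
    series = np.asarray(series, dtype=float)
    train, valid = series[:-lags], series[-lags:]

    # Model 1: ARIMA(1,1,1) -- order chosen only for illustration.
    arima = ARIMA(train, order=(1, 1, 1)).fit()
    f_arima = arima.forecast(steps=lags)

    # Model 2: an autoregressive neural network on lagged windows.
    X = np.array([train[i:i + lags] for i in range(len(train) - lags)])
    y = train[lags:]
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(X, y)
    f_ann, window = [], list(train[-lags:])
    for _ in range(lags):
        nxt = ann.predict([window])[0]   # roll the window forward one step
        f_ann.append(nxt)
        window = window[1:] + [nxt]
    f_ann = np.array(f_ann)

    # Inverse-MSE weights: the more accurate model gets the larger weight.
    w1 = 1.0 / np.mean((valid - f_arima) ** 2)
    w2 = 1.0 / np.mean((valid - f_ann) ** 2)
    return (w1 * f_arima + w2 * f_ann) / (w1 + w2)
```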
The concealment of data has emerged as an area of deep and wide research interest, endeavouring to hide secret data in a covert and stealthy manner so as to avoid detection, by embedding it into cover images that appear inconspicuous. These covers may be images or videos used to conceal the messages while still retaining visual quality. Over the past ten years there have been numerous studies of varying image steganography methods, emphasising payload and image quality. Nevertheless, a trade-off exists between these two indicators, and mediating a more favourable reconciliation between them is a daunting and problematic task. Additionally, the current…
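The abstract does not name a specific embedding scheme; purely for illustration, the sketch below shows classical least-significant-bit (LSB) substitution, often used as a baseline in image steganography, which makes the payload-versus-quality trade-off tangible (each hidden bit perturbs one pixel value by at most 1).

```python
# Classical LSB substitution, shown only as a common baseline technique.
import numpy as np

def embed_lsb(cover, bits):
    """Hide a sequence of 0/1 bits in the LSBs of a flattened image array."""
    flat = cover.astype(np.uint8).flatten()   # flatten() copies, cover is untouched
    if len(bits) > len(flat):
        raise ValueError("payload exceeds cover capacity")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b        # clear the LSB, then set it to b
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the first n_bits LSBs back out of the stego image."""
    return [int(p) & 1 for p in stego.flatten()[:n_bits]]
```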
This paper is concerned with pre-test single and double stage shrunken estimators for the mean (μ) of a normal distribution when a prior estimate (μ0) of the actual value of μ is available, using specified shrinkage weight factors ψ(·) as well as a pre-test region (R). Expressions for the Bias [B(·)], Mean Squared Error [MSE(·)], Efficiency [EFF(·)] and Expected sample size [E(n/·)] of the proposed estimators are derived. Numerical results and conclusions are drawn about the selection of the different constants included in these expressions. Comparisons between the suggested estimators and the classical estimators, in the sense of Bias and Relative Efficiency, are given. Furthermore, comparisons with earlier existing works are drawn.
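For readers unfamiliar with the construction, a standard single-stage pre-test shrunken estimator of the mean has the generic textbook form below; the paper's exact variants and weight functions may differ.

```latex
\tilde{\mu} =
\begin{cases}
\psi(\hat{\mu})\,\hat{\mu} + \bigl(1 - \psi(\hat{\mu})\bigr)\,\mu_0, & \hat{\mu} \in R \quad (\text{shrink toward the prior estimate})\\[4pt]
\hat{\mu}, & \hat{\mu} \notin R \quad (\text{fall back to the classical estimator})
\end{cases}
```

Here μ̂ is the classical estimator (the sample mean) and 0 ≤ ψ(μ̂) ≤ 1 controls how strongly the estimate is shrunk toward μ0.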
This paper presents an experimental and numerical study carried out to examine the influence of the size and layout of web openings on the load-carrying capacity and serviceability of reinforced concrete deep beams. Five full-scale simply supported reinforced concrete deep beams with two large web openings created in the shear regions were tested up to failure. The shear span to overall depth ratio was 1.1. Square openings were located symmetrically relative to the midspan section, either at the midpoint or at the interior boundaries of the shear span. Two different side dimensions were considered for the square openings, namely 200 mm and 230 mm. The strength results proved that the shear capacity of the deep…
The nuclear charge density distributions, form factors and corresponding proton, charge, neutron, and matter root mean square radii for the stable ⁴He, ¹²C, and ¹⁶O nuclei have been calculated using single-particle radial wave functions of a Woods-Saxon potential and, for comparison, a harmonic-oscillator potential. The calculations of the ground-state charge density distributions using the Woods-Saxon potential show good agreement with experimental data for the ⁴He nucleus, while the results for the ¹²C and ¹⁶O nuclei are better with the harmonic-oscillator potential. The calculated elastic charge form factors in the Woods-Saxon potential are better than the results of the harmonic-oscillator potential. Finally, the calculated root mean square radii using Woods-Saxon potentials…
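For reference, the Woods-Saxon central potential referred to here has the standard form below; the depth V₀, radius parameter r₀, and diffuseness a are fitted parameters, and the specific values used in the paper are not reproduced here.

```latex
V(r) = \frac{-V_0}{1 + \exp\!\left(\dfrac{r - R}{a}\right)}, \qquad R = r_0 A^{1/3}
```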