Image compression plays an important role in reducing the storage size of data and significantly increasing the speed of its transmission over the Internet. It has been an important research topic for several decades, and with the great successes recently achieved by deep learning in many areas of image processing, its use in image compression is gradually increasing. Deep neural networks have also achieved considerable success in processing and compressing images of different sizes. In this paper, we present an image compression structure based on a deep Convolutional AutoEncoder (CAE), inspired by the way human eyes observe the different colors and features of images. We propose a multi-layer hybrid deep learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is trained and evaluated on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method outperforms traditional autoencoder-based compression, with better running speed and better scores on the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), achieving high compression bit rates with a low Mean Squared Error (MSE). The highest compression ratios ranged between 0.7117 and 0.8707 for the Kodak dataset and between 0.7191 and 0.9930 for the CLIC dataset, while the error coefficient dropped from 0.0126 to 0.0003, making the proposed system more accurate than the compared autoencoder-based deep learning methods.
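A minimal sketch of the kind of pipeline described above, assuming a Keras/TensorFlow convolutional autoencoder and scikit-learn K-means; the layer sizes, latent depth and number of colour clusters (k = 16) are illustrative assumptions, not the configuration reported in the paper.

```python
# Sketch: convolutional autoencoder (CAE) for learned compression, followed by
# K-means colour clustering of the reconstruction. Trained with an MSE loss,
# matching the distortion measure discussed above.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.cluster import KMeans

def build_cae(h=128, w=128):
    inp = layers.Input(shape=(h, w, 3))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    latent = layers.Conv2D(8, 3, padding="same", activation="relu")(x)  # compressed code
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(latent)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    cae = Model(inp, out)
    cae.compile(optimizer="adam", loss="mse")
    return cae

def kmeans_colour_quantise(img, k=16):
    """Cluster the pixel colours of one reconstructed image into k representative colours."""
    pixels = img.reshape(-1, 3)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    return km.cluster_centers_[km.labels_].reshape(img.shape)

if __name__ == "__main__":
    images = np.random.rand(8, 128, 128, 3).astype("float32")  # stand-in for Kodak/CLIC crops
    cae = build_cae()
    cae.fit(images, images, epochs=1, batch_size=4, verbose=0)
    recon = cae.predict(images, verbose=0)
    quantised = kmeans_colour_quantise(recon[0])
    print("MSE:", float(np.mean((images[0] - quantised) ** 2)))
```

In this sketch the CAE learns the compressed representation, while K-means reduces the reconstruction to a small set of representative colours, mirroring the colour-clustering step described above.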
A comparative study was carried out on the adsorption of methyl orange (MO) dye using non-activated corn leaves and corn leaves activated with hydrochloric acid as the adsorbent material. Scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR) were used to characterize the adsorbent. The effect of several variables (pH, initial dye concentration, temperature, adsorbent dose and contact time) on the removal efficiency was studied; the results indicated that the adsorption efficiency increases with increasing dye concentration, adsorbent dose and contact time, while it decreases with increasing pH and temperature for both the treated and untreated corn leaves.
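The abstract reports removal efficiencies without stating how they are computed; the conventional relations assumed here for such adsorption studies (not quoted from the abstract) are

```latex
% C_0 and C_e: initial and equilibrium MO concentrations (mg/L);
% q_e: adsorption capacity for adsorbent mass m (g) in solution volume V (L).
\[
  \text{Removal}\,(\%) = \frac{C_0 - C_e}{C_0}\times 100,
  \qquad
  q_e = \frac{(C_0 - C_e)\,V}{m}.
\]
```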
The radial wave functions of the cosh potential within the three-body model (Core + 2n) have been employed to investigate ground-state properties such as the proton, neutron and matter densities and the associated rms radii of the neutron-rich exotic nuclei 6He, 11Li, 14Be and 17B. The density distributions of the core and the two valence (halo) neutrons are described by the radial wave functions of the cosh potential. The obtained results reproduce the halo structure of these exotic nuclei. Elastic electron scattering form factors of these halo nuclei are studied within the plane-wave Born approximation.
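The abstract refers to rms radii extracted from the density distributions and to form factors computed in the plane-wave Born approximation without reproducing the underlying relations; the standard expressions assumed here are (with ρ a radial density distribution, j0 the zeroth-order spherical Bessel function, and Z the proton number)

```latex
\[
  \langle r^2 \rangle^{1/2}
    = \left[\frac{\int_0^\infty \rho(r)\, r^4\, dr}{\int_0^\infty \rho(r)\, r^2\, dr}\right]^{1/2},
  \qquad
  F(q) = \frac{4\pi}{Z}\int_0^\infty \rho(r)\, j_0(qr)\, r^2\, dr .
\]
```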
Nanomaterials play a very important role in heat transfer enhancement. An experimental investigation was carried out to understand the effect of nano- and micro-materials on the critical heat flux (CHF). Pool boiling experiments were performed for several concentrations of nano- and micro-particles on a 0.4 mm diameter nickel-chrome (Ni-Cr) wire heater heated electrically at atmospheric pressure. Zinc oxide (ZnO) and silica (SiO2) were used as the nano- and micro-fluids at concentrations of 0.01, 0.05, 0.1, 0.3, 0.5 and 1 g/L. The results show a marked enhancement in CHF for the nano- and micro-fluids at the different concentrations compared to distilled water. The deposition of the nanoparticles on the heater surface was the reason for this enhancement.
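As a clarifying note on how the critical heat flux is typically obtained in such a wire-heater experiment (the abstract does not state the relation), the applied heat flux follows from the measured electrical power divided by the wetted wire area; V and I are the measured voltage and current, d = 0.4 mm is the wire diameter, and L is the heated wire length, which is not given in the abstract:

```latex
\[
  q'' = \frac{V\,I}{\pi d L},
\]
```

with the CHF taken as the heat flux at the last stable power step before wire burnout.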
A space X is called πp-normal if for each closed set F and each π-closed set F′ in X with F ∩ F′ = ∅, there are p-open sets U and V of X with U ∩ V = ∅ such that F ⊆ U and F′ ⊆ V. Our work studies and discusses a new kind of normality in generalized topological spaces. We define ϑπp-normal, ϑ-mildly normal, ϑ-almost normal, ϑp-normal, ϑ-mildly p-normal, ϑ-almost p-normal and ϑπ-normal spaces, and we discuss some of their properties.
Long before the pandemic, the labour force all over the world was facing the quest of uncertainty, which is normal and inherent to the market, but the extent of this quest was shaped by the pace of technological progress, which became exponential in the last ten years, from 2010 to 2020. Robotic process automation, remote work, computer science, electronics and communications, mechanical engineering, information technology, digitalisation of public administration and so on are among the pillars of the future of work. Some authors have even stated that without robotic process automation (RPA) included in their technological processes, companies will not be able to sustain a competitive level on the market (Madakan et al., 2018).
Throughout this paper, R represents a commutative ring with identity and M is a unitary left R-module. The purpose of this paper is to investigate some new results (to the best of our knowledge) on the concept of weak essential submodules, which was introduced by Muna A. Ahmed, where a submodule N of an R-module M is called weak essential if N ∩ P ≠ (0) for each nonzero semiprime submodule P of M. In this paper we rewrite this definition in another form. Some new definitions are introduced and various properties of weak essential submodules are considered.
Interval methods for verified integration of initial value problems (IVPs) for ODEs have been used for more than 40 years. For many classes of IVPs, these methods have the ability to compute guaranteed error bounds for the flow of an ODE, where traditional methods provide only approximations to a solution. Overestimation, however, is a potential drawback of verified methods. For some problems, the computed error bounds become overly pessimistic, or integration even breaks down. The dependency problem and the wrapping effect are particular sources of overestimation in interval computations. Berz (see [1]) and his co-workers have developed Taylor model methods, which extend interval arithmetic with symbolic computations. The latter is an effective tool for reducing both the dependency problem and the wrapping effect.
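A minimal Python sketch of the dependency problem mentioned above; it does not implement Taylor models, only naive interval arithmetic, which treats each occurrence of a variable as independent and therefore overestimates the range of expressions such as x - x.

```python
# Toy illustration of the dependency problem in naive interval arithmetic.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __sub__(self, other):
        # Each occurrence of a variable is widened independently --
        # exactly where the overestimation comes from.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

x = Interval(1.0, 2.0)
print(x - x)          # Interval(lo=-1.0, hi=1.0), although the true range is {0}
print(x * x - x * x)  # even wider: Interval(lo=-3.0, hi=3.0)
```

Taylor model methods reduce this overestimation by carrying a symbolic polynomial part alongside a small interval remainder, so that identical subexpressions cancel symbolically instead of being widened twice.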
The aim of this paper is to introduce the concept of hyper AT-algebras as a generalization of AT-algebras, to study the hyper structure of AT-algebras, and to investigate some of its properties. Hyper AT-subalgebras and hyper AT-ideals of hyper AT-algebras are also studied, together with the fuzzy theory of hyper AT-ideals of hyper AT-algebras. We study homomorphisms of hyper AT-algebras, which are a common generalization of AT-algebras.
We dealt with the nature of points under the influence of periodic functions and chaotic functions, the functions associated with chaotic functions, and sufficient conditions for a function to be strongly chaotic.