Image compression plays an important role in reducing the size and storage requirements of data while significantly increasing the speed of its transmission over the Internet. Image compression has been an important research topic for several decades, and with the great successes recently achieved by deep learning in many areas of image processing, its use in image compression is gradually increasing. Deep neural networks have also achieved considerable success in processing and compressing images of different sizes. In this paper, we present a structure for image compression based on a deep-learning Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep-learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is implemented using the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder-based compression, achieving better speed and higher quality as measured by the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), together with high compression bit rates and a low Mean Squared Error (MSE). The recorded compression ratios ranged from 0.7117 to 0.8707 for the Kodak dataset and from 0.7191 to 0.9930 for the CLIC dataset, while the error coefficient dropped from 0.0126 to 0.0003, making the proposed system more accurate and of higher quality than existing autoencoder-based deep-learning methods.
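A minimal sketch of the CAE-plus-K-means idea described above is given below, using PyTorch and scikit-learn. The layer sizes, number of color clusters, and training loop are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: a small convolutional autoencoder plus K-means color clustering.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 3-channel image -> compact latent feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: latent feature map -> reconstructed image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def kmeans_color_quantize(img, n_colors=16):
    """Cluster the pixel colors of an HxWx3 array in [0,1] into n_colors centroids."""
    h, w, c = img.shape
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(img.reshape(-1, c))
    return km.cluster_centers_[km.labels_].reshape(h, w, c)

# Usage sketch: train with an MSE reconstruction loss, then quantize the colors.
model = CAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 3, 128, 128)          # stand-in for a Kodak / CLIC batch
for _ in range(10):                      # illustrative number of steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
recon = model(x)[0].permute(1, 2, 0).detach().numpy()
quantized = kmeans_color_quantize(recon, n_colors=16)
```

PSNR, SSIM, and the compression ratio would then be computed between the original image and the quantized reconstruction.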
The ground-state densities of the unstable neutron-rich exotic nuclei 11Li and 12Be are studied in the framework of the binary cluster model (BCM). The internal densities of the clusters are described by single-particle harmonic oscillator wave functions. The long tail is clearly noticeable in the calculated neutron and matter density distributions of these nuclei. The structures of the two valence neutrons in 11Li and 12Be are found to be mixed configurations with a dominant (1p1/2)2 component. Elastic electron scattering proton form factors for 11Li and 12Be are studied using the plane wave Born approximation (PWBA). The major differences between the calculated form factors of the unstable nuclei [11Li, 12Be] and those of stable nuclei are discussed.
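For reference, the proton (charge) form factor in the plane wave Born approximation is commonly written as the Fourier-Bessel transform of the proton density; this is the standard PWBA expression, and the normalization convention may differ from the one adopted in the paper:

\[
F(q) \;=\; \frac{4\pi}{Z}\int_{0}^{\infty}\rho_{p}(r)\, j_{0}(qr)\, r^{2}\, dr,
\qquad j_{0}(qr)=\frac{\sin(qr)}{qr},
\]

where \(\rho_{p}(r)\) is normalized to the proton number \(Z\), so that \(F(0)=1\).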
In this work, the matter density distributions, elastic charge form factors, and size radii of the halo nuclei 11Be, 19C, and 11Li are calculated. Each nuclide under study is divided into two parts: a core part and a halo part. The core part is described using harmonic-oscillator radial wave functions, while the halo part is described using the radial wave functions of a Woods-Saxon potential. Very good agreement with experimental data is obtained for the matter density distributions and the available size radii. In addition, the quadrupole moment of 11Li is also obtained.
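In this two-body picture the total matter density is the sum of the core and halo contributions, and the root-mean-square matter radius follows from it. The expressions below are the standard definitions implied by the description above, with \(\rho_{m}\) normalized so that \(4\pi\int\rho_{m}(r)\,r^{2}\,dr = A\):

\[
\rho_{m}(r) \;=\; \rho_{\mathrm{core}}(r) + \rho_{\mathrm{halo}}(r),
\qquad
\langle r^{2}\rangle_{m}^{1/2} \;=\; \left[\frac{4\pi}{A}\int_{0}^{\infty}\rho_{m}(r)\, r^{4}\, dr\right]^{1/2}.
\]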
The reliability of the multi-component stress-strength model R(s,k) is considered in the present paper, where the stress and strength are independent and non-identically distributed, following the Exponentiated Family of Distributions (FED) with unknown shape parameter α, known scale parameter λ equal to two, and parameter θ equal to three. Different estimation methods for R(s,k) are introduced, namely the maximum likelihood and shrinkage estimators. Comparisons among the suggested estimators are made by simulation, based on the mean squared error (MSE) criterion.
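For context, the multi-component stress-strength reliability named above is commonly defined as the probability that at least s of the k strength components exceed the stress. With strength CDF \(F\) and stress CDF \(G\), the standard expression (which the paper specializes to the exponentiated family) is:

\[
R_{s,k} \;=\; P\big(\text{at least } s \text{ of } X_{1},\dots,X_{k} \text{ exceed } Y\big)
\;=\; \sum_{i=s}^{k}\binom{k}{i}\int_{-\infty}^{\infty}\big[1-F(y)\big]^{i}\big[F(y)\big]^{\,k-i}\, dG(y).
\]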
Multivariate non-parametric control charts were used to monitor simulated data and determine whether they fall within the control limits, since non-parametric methods do not require any assumptions about the distribution of the data. This research aims to apply multivariate non-parametric quality control methods, namely the multivariate Wilcoxon signed-rank chart, kernel principal component analysis (KPCA), and the k-nearest neighbor (k-NN) method.
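As an illustration of one of these schemes, the hedged sketch below fits kernel PCA on simulated in-control data and flags new observations whose squared score distance exceeds an empirical control limit. The RBF kernel, number of components, and 99% quantile limit are assumptions for illustration, not the study's settings.

```python
# Hedged sketch: a KPCA-based multivariate monitoring statistic with an empirical limit.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
in_control = rng.normal(size=(200, 5))        # simulated in-control (phase I) data
new_obs = rng.normal(loc=0.8, size=(50, 5))   # simulated data to monitor (phase II)

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.2).fit(in_control)

def monitoring_statistic(X):
    scores = kpca.transform(X)
    return np.sum(scores ** 2, axis=1)        # squared score distance in feature space

limit = np.quantile(monitoring_statistic(in_control), 0.99)   # empirical control limit
out_of_control = monitoring_statistic(new_obs) > limit
print(f"{out_of_control.sum()} of {len(new_obs)} new points signal out-of-control")
```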
The consensus algorithm is the core mechanism of a blockchain and is used to ensure data consistency among blockchain nodes. The PBFT (Practical Byzantine Fault Tolerance) consensus algorithm is widely used in alliance (consortium) chains because it tolerates Byzantine faults. However, PBFT still suffers from random master-node selection and complicated communication. This study proposes an improved consensus algorithm, IBFT, based on node trust values and BLS (Boneh-Lynn-Shacham) aggregate signatures. In IBFT, multi-level indicators are used to calculate the trust value of each node, and on the basis of this calculation some nodes are selected to take part in the network consensus; the master node is then chosen from these trusted nodes.
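A hedged sketch of the node-selection idea is shown below: each node's trust value is a weighted combination of multi-level indicators, and the highest-trust nodes are chosen to participate in consensus. The indicator names and weights are assumptions; the paper's exact scoring and the BLS aggregation step are not reproduced here.

```python
# Hedged sketch: trust-value scoring and consensus-committee selection.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    uptime: float          # fraction of time online, 0..1 (assumed indicator)
    valid_votes: float     # fraction of historically correct votes, 0..1 (assumed indicator)
    latency_score: float   # normalized responsiveness, 0..1 (assumed indicator)

WEIGHTS = {"uptime": 0.3, "valid_votes": 0.5, "latency_score": 0.2}  # assumed weights

def trust_value(n: Node) -> float:
    return (WEIGHTS["uptime"] * n.uptime
            + WEIGHTS["valid_votes"] * n.valid_votes
            + WEIGHTS["latency_score"] * n.latency_score)

def select_consensus_nodes(nodes, m):
    """Pick the m highest-trust nodes; the top-ranked node acts as the master (primary)."""
    ranked = sorted(nodes, key=trust_value, reverse=True)
    return ranked[0], ranked[:m]

nodes = [Node("n1", 0.99, 0.97, 0.8), Node("n2", 0.90, 0.99, 0.9),
         Node("n3", 0.70, 0.80, 0.6), Node("n4", 0.95, 0.60, 0.7)]
master, committee = select_consensus_nodes(nodes, m=3)
print("master:", master.node_id, "committee:", [n.node_id for n in committee])
```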
A genetic algorithm optimization model is used in this study to find the optimum flow values of the Tigris River branches near Ammara city, whose water is to be used for restoration of the Central Marshes after mixing in the Maissan River. These tributaries are the Al-Areed, Al-Bittera, and Al-Majar Al-Kabeer rivers. The aim of the model is to enhance the water quality in the Maissan River and hence provide acceptable water quality for marsh restoration. The model is applied for different water-quality change scenarios, i.e., 10% and 20% increases in EC, TDS, and BOD. The model outputs are the optimum flow values for the three rivers, while the input data are the monthly flows (1994-2011), monthly water requirements, and water quality parameters (EC, TDS, BOD, DO, ...).
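A minimal genetic-algorithm sketch of this flow-allocation idea follows: choose flows for the three tributaries so that the flow-weighted (mass-balance) mixture stays within a water-quality limit. The concentrations, flow bounds, and the single-parameter (TDS) objective are illustrative assumptions, not the study's data or full multi-parameter formulation.

```python
# Hedged sketch: GA searching for tributary flows that keep the mixed TDS below a limit.
import numpy as np

rng = np.random.default_rng(1)
TDS = np.array([950.0, 1200.0, 800.0])   # assumed TDS (mg/L) of the three tributaries
Q_MIN, Q_MAX = 5.0, 60.0                 # assumed feasible flow range (m3/s) per river
TARGET = 1000.0                          # assumed allowable mixed TDS (mg/L)

def fitness(q):
    mixed = np.dot(q, TDS) / q.sum()     # mass-balance mixing concentration
    penalty = max(0.0, mixed - TARGET)   # penalize violating the quality limit
    return q.sum() - 50.0 * penalty      # maximize delivered flow subject to quality

pop = rng.uniform(Q_MIN, Q_MAX, size=(40, 3))
for _ in range(200):                      # generations
    scores = np.array([fitness(q) for q in pop])
    parents = pop[np.argsort(scores)[-20:]]                 # truncation selection
    a = parents[rng.integers(0, 20, 40)]
    b = parents[rng.integers(0, 20, 40)]
    alpha = rng.random((40, 1))
    children = alpha * a + (1 - alpha) * b                  # blend crossover
    children += rng.normal(0, 1.0, children.shape)          # Gaussian mutation
    pop = np.clip(children, Q_MIN, Q_MAX)
best = pop[np.argmax([fitness(q) for q in pop])]
print("optimum flows (m3/s):", best.round(2))
```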
Identification of complex communities in biological networks is a critical and ongoing challenge, since many network-related problems correspond to the subgraph isomorphism problem, which is known in the literature to be NP-hard. Several optimization algorithms have been dedicated and applied to solve this problem. The main challenge in applying optimization algorithms, specifically to large-scale complex networks, is their relatively long execution time. Thus, this paper proposes a parallel extension of the PSO algorithm to detect communities in complex biological networks. The main contribution of this study is threefold: firstly, a modified PSO algorithm with a local-search operator is proposed to detect communities.
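A sketch of the parallel fitness-evaluation idea is given below: each particle encodes a candidate community assignment (one label per node), and its modularity is computed in a separate worker process. The velocity update and the local-search operator of the proposed algorithm are simplified to a random perturbation here, and the graph, swarm size, and number of communities are illustrative assumptions.

```python
# Hedged sketch: parallel evaluation of a swarm of candidate community partitions.
import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity
from multiprocessing import Pool

G = nx.karate_club_graph()               # stand-in for a biological network
NODES = list(G.nodes())
K = 4                                    # assumed maximum number of communities

def fitness(labels):
    """Modularity of the partition encoded by a particle (one label per node)."""
    groups = {}
    for node, lab in zip(NODES, labels):
        groups.setdefault(int(lab), set()).add(node)
    return modularity(G, groups.values())

def perturb(labels, rng, rate=0.1):
    """Simplified stand-in for the PSO position update: relabel a fraction of nodes."""
    labels = labels.copy()
    flip = rng.random(len(labels)) < rate
    labels[flip] = rng.integers(0, K, int(flip.sum()))
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    swarm = [rng.integers(0, K, len(NODES)) for _ in range(16)]
    with Pool() as pool:
        for _ in range(30):
            scores = pool.map(fitness, swarm)        # parallel fitness evaluation
            best = swarm[int(np.argmax(scores))]
            swarm = [best] + [perturb(best, rng) for _ in range(15)]
    print("best modularity found:", max(scores))
```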
In this paper, we focus on designing a feed-forward neural network (FFNN) for solving mixed Volterra-Fredholm integral equations (MVFIEs) of the second kind in two dimensions. In our method, we present a multi-layer model consisting of a hidden layer with five hidden units (neurons) and one linear output unit. The log-sigmoid transfer function is used as the activation of each hidden unit, and the Levenberg-Marquardt algorithm is used for training. A comparison between the numerical results and the analytic solutions of some examples is carried out in order to demonstrate the efficiency and accuracy of the method.
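An architecture sketch matching this description is shown below: one hidden layer of five log-sigmoid units and a linear output, trained with a Levenberg-Marquardt least-squares solver. For brevity it is fitted to samples of an assumed 2-D test function at collocation points; in the paper the same network is instead trained on the residual of the integral equation.

```python
# Hedged sketch: 5-hidden-unit log-sigmoid network fitted with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

H = 5                                          # hidden units, as in the paper

def unpack(p):
    W1 = p[:2 * H].reshape(H, 2)               # input -> hidden weights
    b1 = p[2 * H:3 * H]                        # hidden biases
    W2 = p[3 * H:4 * H]                        # hidden -> output weights
    b2 = p[4 * H]                              # output bias
    return W1, b1, W2, b2

def net(p, X):
    W1, b1, W2, b2 = unpack(p)
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))   # log-sigmoid activation
    return hidden @ W2 + b2                            # linear output unit

# Collocation points on [0,1]^2 and an assumed target u(x,y) = x*y + sin(pi*x).
rng = np.random.default_rng(0)
X = rng.random((100, 2))
u_target = X[:, 0] * X[:, 1] + np.sin(np.pi * X[:, 0])

residuals = lambda p: net(p, X) - u_target
p0 = rng.normal(scale=0.5, size=4 * H + 1)
fit = least_squares(residuals, p0, method="lm")        # Levenberg-Marquardt training
print("max abs error at collocation points:", np.max(np.abs(residuals(fit.x))))
```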
Today's medical imaging research faces the challenge of detecting brain tumors from Magnetic Resonance Images (MRI). MRI is normally used by experts to produce images of the soft tissue of the human body and to analyze human organs without resorting to surgery. Brain tumor detection requires image segmentation, in which the brain is partitioned into two distinct regions. This is considered one of the most important but difficult parts of the tumor-detection process. Hence, segmentation of the MRI images must be done accurately before the computer is asked to make an exact diagnosis. Earlier, a variety of algorithms were developed for the segmentation of MRI images.
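As a generic baseline for the binary partitioning step mentioned above, the sketch below applies Otsu thresholding to split a slice into two intensity regions. This is only a common starting point, not the specific segmentation algorithm evaluated in the paper, and the synthetic slice is a stand-in for real MRI data.

```python
# Hedged sketch: Otsu thresholding as a simple two-region partition of an image slice.
import numpy as np
from skimage.filters import threshold_otsu

def binary_partition(mri_slice: np.ndarray) -> np.ndarray:
    """Return a boolean mask separating the slice into two intensity regions."""
    t = threshold_otsu(mri_slice)
    return mri_slice > t

# Usage with a synthetic stand-in (bright blob on a noisy dark background).
yy, xx = np.mgrid[0:128, 0:128]
synthetic = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 500.0) + 0.05 * np.random.rand(128, 128)
mask = binary_partition(synthetic)
print("foreground fraction:", mask.mean())
```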
New results for the fusion reactivity and slowing-down energy distribution functions of controlled thermonuclear fusion reactions of the hydrogen isotopes are obtained, providing promising input for calculating the factors that govern the design and construction of a given fusion system or reactor. These factors depend strongly on the operating fuel and on the reaction rate, which in turn reflects the physical behavior of all other parameters characterizing the system design.
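For reference, the fusion reactivity referred to above is usually taken as the Maxwellian-averaged product of the cross-section and the relative velocity. The standard expression, with \(\mu\) the reduced mass of the reacting pair, \(T\) the ion temperature, and \(\sigma(E)\) the cross-section at center-of-mass energy \(E\), is:

\[
\langle \sigma v \rangle \;=\; \sqrt{\frac{8}{\pi \mu}}\;\frac{1}{(k_{B}T)^{3/2}}
\int_{0}^{\infty} \sigma(E)\, E\, \exp\!\left(-\frac{E}{k_{B}T}\right) dE .
\]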