Image compression plays an important role in reducing the storage requirements of data while significantly increasing the speed of its transmission over the Internet. Image compression has been an important research topic for several decades, and with the great successes recently achieved by deep learning in many areas of image processing, its use in image compression is growing steadily. Deep neural networks have also achieved great success in processing and compressing images of different sizes. In this paper, we present a structure for image compression based on a deep-learning Convolutional AutoEncoder (CAE), inspired by the way human eyes perceive the different colors and features of images. We propose a multi-layer hybrid deep-learning system that uses the unsupervised CAE architecture together with K-means color clustering to compress images and determine their size and color intensity. The system is trained and evaluated on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder-based compression and performs better in terms of speed and the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), achieving high compression bit rates with a low Mean Squared Error (MSE). The highest compression ratios recorded ranged between 0.7117 and 0.8707 for the Kodak dataset and between 0.7191 and 0.9930 for the CLIC dataset.
The system also achieved high accuracy and quality in terms of the error coefficient, which decreased from 0.0126 down to 0.0003, making it more accurate and of higher quality than existing autoencoder-based deep-learning methods.
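The quality measures cited above can be computed directly from an original image and its reconstruction. The sketch below is a minimal NumPy illustration of MSE and PSNR (it is not the authors' implementation, and the toy images are assumptions for demonstration); SSIM involves local windowed statistics and is usually taken from a library such as scikit-image.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean Squared Error between two images (float arrays in [0, 1])."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=1.0):
    """Peak Signal-to-Noise Ratio in decibels (higher is better)."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10 * np.log10(peak ** 2 / err)

# toy example: a random "image" and a slightly perturbed "reconstruction"
rng = np.random.default_rng(0)
img = rng.random((64, 64))
rec = np.clip(img + rng.normal(0.0, 0.01, img.shape), 0.0, 1.0)
print(mse(img, rec), psnr(img, rec))
```

A lower MSE maps directly to a higher PSNR, which is why the reported drop in error coefficient corresponds to better reconstruction quality.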
Abstract: Stars whose initial masses lie between 0.89 and 8.0 M☉ go through an Asymptotic Giant Branch (AGB) phase at the end of their lives, having evolved there from the main-sequence phase. Calculations performed with the adopted synthetic model showed the following results: 1- mass loss on the AGB consists of two phases, one for pulsation periods P < 500 days and one for P > 500 days; 2- the mass-loss rate increases exponentially with the pulsation period; 3- the expansion velocity VAGB of our stars is calculated under three assumptions; 4- the terminal velocity depends on several factors such as metallicity and luminosity. The calculations indicated that a superwind phase (S.W) developed on the A
This paper describes a new finishing process using magnetic abrasives, newly developed to effectively finish brass plate, which is very difficult to polish by conventional machining processes. The Taguchi experimental design method was adopted to evaluate the effect of the process parameters on the improvement of surface roughness and hardness by magnetic abrasive polishing. The process parameters are: the applied current to the inductor, the working gap between the workpiece and the inductor, the rotational speed, and the volume of powder. Analysis of variance (ANOVA) was carried out using statistical software to identify the optimal conditions for better surface roughness and hardness. Regression models based on statistical m
Objective: The aim of this study is to investigate and determine the effect of using two packing techniques (the conventional and a new tension technique) on the hardness of two types of heat-cured acrylic resin (Ivoclar and Qual Dental).
Methodology: This study used two types of heat-cured acrylic (Ivoclar and Qual Dental), which are used in the construction of complete dentures, packed with two different packing techniques (the conventional and the new tension technique). A total of 40 specimens were prepared with dimensions of 2 mm thickness, 2 cm length, and 1 cm width. These specimens were sectioned and subdivided into 4 groups of 10 specimens each, designated as (A, Al B
Variable selection is an essential and necessary task in statistical modeling. Several studies have tried to develop and standardize the variable selection process, but it is difficult to do so. The first question researchers need to ask themselves is: what are the most significant variables that should be used to describe a given dataset's response? In this paper, a new method for variable selection using Gibbs sampler techniques has been developed. First, the model is defined and the posterior distributions for all the parameters are derived. The new variable selection method is tested using four simulated datasets. The new approach is compared with some existing techniques: Ordinary Least Squares (OLS), Least Absolute Shrinkage
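The Gibbs sampler underlying such a method draws each parameter in turn from its full conditional distribution given the current values of the others. The sketch below illustrates only this core mechanism on a toy bivariate normal target (whose conditionals are known in closed form); it is not the paper's posterior derivation or its variable-selection model.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, seed=0):
    """Gibbs sampling for a standard bivariate normal with correlation rho.

    Full conditionals: x | y ~ N(rho*y, 1-rho^2), y | x ~ N(rho*x, 1-rho^2).
    Each iteration updates one coordinate at a time from its conditional.
    """
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    s = np.sqrt(1.0 - rho ** 2)
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, s)  # draw x from p(x | y)
        y = rng.normal(rho * x, s)  # draw y from p(y | x)
        samples[i] = (x, y)
    return samples

draws = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
print(np.corrcoef(draws[2000:].T)[0, 1])  # close to 0.8 after burn-in
```

In a variable-selection setting the same loop would cycle over the regression coefficients and binary inclusion indicators, drawing each from its derived full conditional.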
In this work, an analytical study is presented for simulating a Fabry-Perot bistable etalon (F-P cavity) filled with a dispersive, optimized nonlinear optical material of the Kerr type, such as the semiconductor Indium Antimonide (InSb). Because of the trade-off between the etalon finesse values and the driving terms, optimization procedures were carried out on the InSb etalon/CO laser parameters using the critical switching irradiance (Ic) via simulation of the optical cavity. In order to achieve minimum switching power and faster switching time, the effect of the finesse values and driving terms on optical bistability and switching dynamics must be studied.
A database is an arrangement of data organized and distributed in a way that allows the client to access the stored data easily and conveniently. However, in the era of big data, traditional data-analytics methods may not be able to manage and process such large amounts of data. In order to develop an efficient way of handling big data, this work studies the use of the Map-Reduce technique to handle big data distributed on the cloud. The approach was evaluated using a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear enhancement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r
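The Map-Reduce flow described above splits work into a map phase that emits (key, value) pairs, a shuffle that groups values by key, and a reduce phase that aggregates each group. The sketch below is a minimal in-process imitation of that flow, assuming hypothetical EEG-like (channel, reading) records; it is not the authors' Hadoop pipeline.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Toy Map-Reduce: map each record to (key, value) pairs,
    group values by key (shuffle), then reduce each group."""
    grouped = defaultdict(list)
    for record in records:
        for key, value in mapper(record):   # map phase
            grouped[key].append(value)      # shuffle / group by key
    return {key: reducer(key, values) for key, values in grouped.items()}

# hypothetical EEG-like records: (channel, reading)
readings = [("C3", 1.2), ("C4", 0.8), ("C3", 1.0), ("C4", 1.0)]
mean_per_channel = map_reduce(
    readings,
    mapper=lambda rec: [(rec[0], rec[1])],
    reducer=lambda key, vals: sum(vals) / len(vals),
)
print(mean_per_channel)
```

On a real Hadoop cluster the mapper and reducer run as distributed tasks over partitioned input, which is where the reported response-time reduction comes from.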
The catalytic activity of faujasite-type NaY catalysts prepared from local clay (kaolin) with different Si/Al ratios was studied using cumene cracking as a model for the catalytic cracking process, in the temperature range 450-525 °C, at a weight hourly space velocity (WHSV) of 5-20 h-1, a particle size ≤ 75 μm, and atmospheric pressure. The catalytic activity was investigated using a laboratory-scale fluidized-bed reactor.
It was found that cumene conversion increases with increasing temperature and decreasing WHSV. At 525 °C and a WHSV of 5 h-1, the conversion was 42.36 and 35.43 mol% for the catalyst with a 3.54 Si/Al ratio and the catalyst with a 5.75 Si/Al ratio, respectively, while at 450 °C and the same WHSV, the conversion w
An adaptive nonlinear neural controller to reduce nonlinear flutter in a 2-D wing is proposed in this paper. The nonlinearities in the system come from the quasi-steady aerodynamic model and the torsional spring in the pitch direction. Time-domain simulations are used to examine the dynamic aeroelastic instabilities of the system (e.g., the onset of flutter and limit-cycle oscillation, LCO). The controller consists of two models: a modified Elman neural network (MENN) and a feed-forward multi-layer perceptron (MLP). The MENN model is trained in off-line and on-line stages to guarantee that its outputs accurately represent the plunge and pitch motion of the wing, and this neural model acts as the identifier. Th
The current study examined the impact of using PowerPoint presentations on EFL students' attendance, achievement, and engagement. To achieve the aim of this study, three null hypotheses were posed: there is no statistically significant difference between the mean attendance score of the experimental group and that of the control one; there is no statistically significant difference between the mean achievement score of the experimental group and that of the control one; and there is no statistically significant difference between the mean engagement score of the experimental group and that of the control one. To verify these hypotheses, a sample of sixty students was chosen randomly from the third year, department of English,
Roller-Compacted Concrete (RCC) is a zero-slump concrete, placed with no forms, no reinforcing steel, and no finishing, that is wet enough to support compaction by vibratory rollers. Because of the effect of curing on properties and durability, the primary scope of this research is to study the effect of various curing methods (air curing, emulsified asphalt (flan coat) curing, 7-day water curing, and permanent water curing) and different porcelanite (a local material used as an internal curing agent) replacement percentages (volumetric replacement of fine aggregate) on some properties of RCC, and to explore the possibility of producing a more practical RCC for road pavement with a minimum requirement of curing. Cube specimens were sawed from the slab