Image compression plays an important role in reducing data size and storage requirements while significantly increasing transmission speed over the Internet. It has been an important research topic for several decades, and recently, with the great successes achieved by deep learning in many areas of image processing, deep learning is being used increasingly for image compression; deep neural networks have achieved strong results in processing and compressing images of various sizes. In this paper, we present a structure for image compression based on a deep Convolutional AutoEncoder (CAE), inspired by the way human eyes observe the different colors and features of images. We propose a multi-layer hybrid deep-learning system that combines the unsupervised CAE architecture with K-means color clustering to compress images and determine their size and color intensity. The system is trained and evaluated on the Kodak and Challenge on Learned Image Compression (CLIC) datasets. Experimental results show that the proposed method is superior to traditional autoencoder-based compression methods, with better speed and better scores on the quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The method achieved high compression ratios with a low Mean Squared Error (MSE): compression ratios ranged from 0.7117 to 0.8707 for the Kodak dataset and from 0.7191 to 0.9930 for the CLIC dataset, while the error coefficient dropped from 0.0126 to 0.0003. Compared with other autoencoder-based deep-learning methods, the proposed system is the most accurate and produces the highest-quality results.
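As a rough illustration of the color-clustering stage, the sketch below quantizes an image's palette with K-means before it is fed to the autoencoder; it assumes scikit-learn's KMeans, and the cluster count k = 16 is an illustrative choice, not the paper's configuration.

    # Minimal sketch of the K-means color-clustering stage (illustrative
    # only; k=16 and the use of scikit-learn are assumptions).
    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_palette(image: np.ndarray, k: int = 16):
        """Quantize an H x W x 3 image to k representative colors."""
        h, w, c = image.shape
        pixels = image.reshape(-1, c).astype(np.float64)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
        # Each pixel is replaced by its cluster centroid; storing the
        # k centroids plus the per-pixel labels is what saves space.
        quantized = km.cluster_centers_[km.labels_].reshape(h, w, c)
        return quantized.astype(image.dtype), km.labels_.reshape(h, w)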
We propose a new method for detecting abnormality in cerebral tissues present within Magnetic Resonance Images (MRI). The proposed classifier comprises cerebral tissue extraction, image division into angular and distance span vectors, acquisition of four features for each portion, and classification to ascertain the abnormality location. The threshold value and region of interest are determined using operator input and the Otsu algorithm. A novel division of brain slice images is introduced via angular and distance span vectors of 24° and 15 pixels, respectively. Rotation invariance of the angular span vector is determined. Automatic categorization of images into normal and abnormal brain tissue is performed using a Support Vector Machine (SVM). …
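A minimal sketch of the angular/distance-span division, assuming the region of interest comes from an Otsu threshold and using mean intensity as a stand-in for the paper's four features per portion:

    # Sketch of dividing a brain slice into 24-degree angular sectors and
    # 15-pixel distance rings around the ROI centroid. The single feature
    # computed per portion (mean intensity) is a placeholder.
    import numpy as np
    from skimage.filters import threshold_otsu

    def span_features(img, ang_deg=24, dist_px=15):
        mask = img > threshold_otsu(img)            # region of interest
        cy, cx = np.argwhere(mask).mean(axis=0)     # ROI centroid
        ys, xs = np.nonzero(mask)
        ang = (np.degrees(np.arctan2(ys - cy, xs - cx)) + 360) % 360
        rad = np.hypot(ys - cy, xs - cx)
        portions = {}
        for a, r, v in zip((ang // ang_deg).astype(int),
                           (rad // dist_px).astype(int), img[ys, xs]):
            portions.setdefault((a, r), []).append(v)
        return {k: float(np.mean(v)) for k, v in portions.items()}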
Estimating the unknown parameters of a two-dimensional sinusoidal signal model is an important and difficult problem; the model is important for describing symmetric gray-scale texture images. In this paper, we propose employing a Differential Evolution algorithm together with a sequential approach to estimate the unknown frequencies and amplitudes of the 2-D sinusoidal components when the signal is affected by noise. Numerical simulations are performed for different sample sizes and various levels of standard deviation to observe the performance of this method in estimating the parameters of the 2-D sinusoidal signal model. This model was used for modeling the symmetric gray-scale texture image and estimating by using …
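For reference, the standard two-dimensional sinusoidal model with p components (a common formulation; the paper's exact parameterization may differ) is

\[
y(m,n) = \sum_{k=1}^{p}\left[A_k \cos(\lambda_k m + \mu_k n) + B_k \sin(\lambda_k m + \mu_k n)\right] + e(m,n),
\]

where \(A_k\) and \(B_k\) are the unknown amplitudes, \(\lambda_k\) and \(\mu_k\) the unknown frequencies, and \(e(m,n)\) the additive noise; Differential Evolution searches this parameter space by minimizing the residual sum of squares, one component at a time under the sequential approach.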
Survival analysis is widely applied to data described by the length of time until the occurrence of an event of interest, such as death or other important events. The purpose of this paper is to use a dynamic methodology, which provides a flexible method especially for the analysis of discrete survival time, to estimate the effect of covariates over time in the survival analysis of dialysis patients with kidney failure until death occurs. The estimation process is based entirely on the Bayesian approach, using two estimation methods: maximum a posteriori (MAP) estimation implemented with Iteratively Weighted Kalman Filter Smoothing (IWKFS) and in combination with the Expectation Maximization (EM) algorithm. While the other …
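For context, a standard formulation of the dynamic discrete-time survival model that this kind of methodology targets (a sketch of the usual setup, not necessarily the paper's exact specification) is

\[
\lambda(t \mid x_t) = P(T = t \mid T \ge t, x_t) = h\!\left(x_t^{\top}\beta_t\right), \qquad \beta_t = \beta_{t-1} + \omega_t, \quad \omega_t \sim N(0, Q),
\]

where \(h\) is a response function such as the logistic, and the random-walk transition allows the covariate effects \(\beta_t\) to vary over time; the MAP estimate of the \(\beta_t\) sequence is then obtained by iteratively weighted Kalman filtering and smoothing.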
The widespread use of the Internet all over the world, together with the huge number of users exchanging important information over it, highlights the need for new methods to protect this information from corruption or modification by intruders. This paper suggests a new method that ensures that the text of a given document cannot be modified by intruders without detection. The method consists of a mixture of three steps. The first step borrows some concepts of the "Quran" security system to detect certain types of change(s) occurring in a given text: a key for each paragraph of the text is extracted from the group of letters in that paragraph whose positions are multiples of a given prime number. This step cannot detect the change…
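A minimal sketch of the key-extraction step, assuming the key is simply the sequence of letters whose 1-based positions in the paragraph are multiples of the chosen prime (the paper may derive the key from these letters differently):

    # Sketch: build a paragraph key from the letters at positions that are
    # multiples of a given prime (1-based positions; prime=7 is illustrative).
    def paragraph_key(paragraph: str, prime: int = 7) -> str:
        letters = [c for c in paragraph if c.isalpha()]
        return "".join(letters[i] for i in range(prime - 1, len(letters), prime))

Any later edit that shifts or alters one of the selected letters changes the key, which is how a modification is detected.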
Haplotype association analysis has been proposed to capture the collective behavior of sets of variants by testing the association of each set, rather than individual variants, with the disease. Such an analysis typically involves a list of unphased multiple-locus genotypes with potentially sparse frequencies in cases and controls. It starts with inferring haplotypes from genotypes, followed by haplotype co-classification and marginal screening for disease-associated haplotypes. Unfortunately, phasing uncertainty may have strong effects on the haplotype co-classification and therefore on the accuracy of predicting risk haplotypes. Here, to address this issue, we propose an alternative approach: in Stage 1, we select potential risk genotypes instead …
In this paper, point estimation of the parameter θ of the Maxwell-Boltzmann distribution has been investigated using simulation. The estimation methods fall into two sections: the first includes non-Bayesian methods, namely the maximum likelihood estimator and the moment estimator, while the second includes Bayesian methods using two different priors (inverse chi-square and Jeffreys), namely the standard Bayes estimator and the Bayes estimator based on Jeffreys' prior. Comparisons among these methods were made using the mean squared error measure, with simulations run for different sample sizes.
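A simulation sketch in the spirit of this comparison, assuming scipy's maxwell distribution with scale parameter θ; the true θ, sample sizes, and replication count are illustrative:

    # Compare the ML and moment estimators of the Maxwell-Boltzmann scale
    # parameter theta by simulated mean squared error.
    import numpy as np
    from scipy.stats import maxwell

    theta, reps = 2.0, 5000
    rng = np.random.default_rng(0)
    for n in (10, 50, 100):
        mle, mom = [], []
        for _ in range(reps):
            x = maxwell.rvs(scale=theta, size=n, random_state=rng)
            mle.append(np.sqrt(np.sum(x ** 2) / (3 * n)))  # ML estimator
            mom.append(x.mean() * np.sqrt(np.pi / 8))      # moment estimator
        print(n, np.mean((np.array(mle) - theta) ** 2),
                 np.mean((np.array(mom) - theta) ** 2))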
The results show the inability to apply the Taylor rule with the inflation and GDP gaps, because monetary behavior is detached from the real Iraqi economy.
When applying the Taylor rule to the exchange rate with inflation and the output gap, the results do not match the nominal rate announced by the Central Bank, which proves the Central Bank's lack of commitment to the Taylor rule, whether through the short-run interest rate or the exchange rate (nominal anchor). What remains for the Central Bank of Iraq is only to use the Taylor principle with the expected inflation rate below the level of output (macro activity), given the separation of monetary behavior from the real one…
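For reference, the Taylor rule against which this commitment is tested is usually written as

\[
i_t = r^{*} + \pi_t + \alpha\,(\pi_t - \pi^{*}) + \beta\,(y_t - \bar{y}_t),
\]

where \(i_t\) is the short-run nominal interest rate, \(r^{*}\) the equilibrium real interest rate, \(\pi_t - \pi^{*}\) the inflation gap, and \(y_t - \bar{y}_t\) the output gap; Taylor's original weights are \(\alpha = \beta = 0.5\).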
Sustainable development (SD) is improvement that meets present needs without jeopardizing the ability of future generations to do the same. It is vital to acquaint EFL students with the terminology and idiomatic expressions of this discipline. Nowadays, sustainable development and the environment are prioritized in every aspect of life. Since culture and the teaching of English as a foreign language cannot be separated, English has become the means of communication in health, economics, education, and politics. Thus, integrating sustainable development goals into language learning and teaching is very important. This descriptive quantitative study aims to investigate the perception of EFL pre-service teachers of sustainable development…
This paper considers and proposes new estimators that depend on the sample and on prior information, covering both the case in which the two are equally important in the model and the case in which they are not. The prior information is described as linear stochastic restrictions. We study the properties and performance of these estimators relative to other common estimators, using the mean squared error as the criterion for goodness of fit. A numerical example and a simulation study are presented to illustrate the performance of the estimators.
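To fix notation, a classical baseline for estimation under linear stochastic restrictions is the mixed (Theil-Goldberger) estimator for the model \(y = X\beta + \varepsilon\) combined with restrictions \(r = R\beta + v\), \(v \sim (0, \sigma^{2}\Omega)\):

\[
\hat{\beta}_{\text{mixed}} = \left(X^{\top}X + R^{\top}\Omega^{-1}R\right)^{-1}\left(X^{\top}y + R^{\top}\Omega^{-1}r\right),
\]

which reduces to ordinary least squares when the restrictions are ignored and treats sample and prior information as equally important; how to weight the two sources when they are not equally important is the question the proposed estimators address.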
Several chaotic maps were selected for the chaotic firefly algorithm to perform variable selection on blood-disease and blood-vessel data obtained from Nasiriyah General Hospital. The data were tested and found to follow the gamma distribution, and it was concluded that the Chebyshev map method is more efficient than the Sinusoidal map method according to the mean squared error criterion.
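For reference, the two maps are commonly defined as follows; the Chebyshev order a = 4, the sinusoidal control value a = 2.3, and the seed x0 are conventional illustrative choices, not values from the study:

    # Standard Chebyshev and sinusoidal chaotic maps, as typically used to
    # drive the random components of a chaotic firefly algorithm.
    import numpy as np

    def chebyshev_map(x0=0.7, a=4, steps=100):
        x = np.empty(steps); x[0] = x0
        for k in range(steps - 1):
            x[k + 1] = np.cos(a * np.arccos(x[k]))           # stays in [-1, 1]
        return x

    def sinusoidal_map(x0=0.7, a=2.3, steps=100):
        x = np.empty(steps); x[0] = x0
        for k in range(steps - 1):
            x[k + 1] = a * x[k] ** 2 * np.sin(np.pi * x[k])  # stays in (0, 1)
        return x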