Image compression is an important problem in computer storage and transmission. It makes efficient use of the redundancy embedded within an image itself, and it may additionally exploit the limitations of human vision to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept: it removes the spatial redundancy embedded within the image by decomposing it into two parts, a mathematical model and a residual. In this paper a two-stage technique is proposed. The first stage applies a lossy predictor model together with a multiresolution base and thresholding techniques; the second stage incorporates the output of the first stage into a near-lossless compression scheme. The test results of both stages are promising, improving the performance of the traditional polynomial model in terms of compression ratio while preserving image quality.
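The model-plus-residual decomposition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a first-order polynomial (plane) fitted per block by least squares, with the residual carrying whatever the model misses, so that model + residual reproduces the block exactly.

```python
def fit_plane(block):
    """Least-squares fit of z = a0 + a1*(x-cx) + a2*(y-cy) over an n x n block,
    using centred coordinates so the normal equations decouple."""
    n = len(block)
    cx = cy = (n - 1) / 2.0
    sxx = n * sum((x - cx) ** 2 for x in range(n))
    a0 = sum(block[y][x] for y in range(n) for x in range(n)) / (n * n)
    a1 = sum((x - cx) * block[y][x] for y in range(n) for x in range(n)) / sxx
    a2 = sum((y - cy) * block[y][x] for y in range(n) for x in range(n)) / sxx
    return a0, a1, a2

def model_and_residual(block):
    """Split a block into its polynomial model and the residual; the block is
    always exactly model + residual, and most of the energy moves into the model."""
    a0, a1, a2 = fit_plane(block)
    n = len(block)
    cx = cy = (n - 1) / 2.0
    model = [[a0 + a1 * (x - cx) + a2 * (y - cy) for x in range(n)] for y in range(n)]
    residual = [[block[y][x] - model[y][x] for x in range(n)] for y in range(n)]
    return model, residual
```

A lossy or near-lossless coder would then quantise or threshold the residual; the plane coefficients alone summarise a smooth block in three numbers.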
This study investigates the properties of the act V over the monoid S of sinshT. It examines the relationships among faithful, finitely generated, and separated acts, as well as their connections to one-to-one and onto operators. The correspondence between acts over a monoid and modules over a ring is also explored: it is established that V is an act over S if and only if V is a module, where T is a nilpotent operator. Furthermore, it is proved that when T is an onto operator and V is finitely generated, the underlying space is guaranteed to be finite-dimensional, and that V is a faithful act for every bounded operator T.
Linear programming currently occupies a prominent position in many fields and has wide applications; its importance lies in being a means of studying the behaviour of a large number of systems. It is also the simplest and easiest type of model that can be built to address industrial, commercial, military, and other problems, and through it an optimal quantitative value can be obtained. In this research we deal with the post-optimality solution, also known as sensitivity analysis, using the principle of shadow prices. The scientific solution of a problem is not complete once the optimal solution is reached: any change in the values of the model constants, known as the inputs of the model, will change …
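The shadow-price idea mentioned above can be illustrated with a tiny two-variable example. This is a generic sketch, not the research's own model: the optimum is found by enumerating vertices of the feasible region (adequate in two dimensions), and the shadow price of a constraint is read off as the gain in the optimum when its right-hand side is relaxed by one unit.

```python
from itertools import combinations

def solve_lp(c, cons):
    """Maximise c[0]*x + c[1]*y subject to a*x + b*y <= r for (a, b, r) in cons,
    by enumerating vertices (adequate for two variables, as in this sketch)."""
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                       # parallel constraints: no vertex
        x = (r1 * b2 - r2 * b1) / det      # Cramer's rule for the 2x2 system
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r + 1e-9 for a, b, r in cons):
            z = c[0] * x + c[1] * y
            if best is None or z > best[0]:
                best = (z, x, y)
    return best

def shadow_price(c, cons, k):
    """Shadow price of constraint k: change in the optimum per unit increase of
    its right-hand side (valid while the optimal basis stays unchanged)."""
    z0 = solve_lp(c, cons)[0]
    bumped = [(a, b, r + 1.0) if i == k else (a, b, r)
              for i, (a, b, r) in enumerate(cons)]
    return solve_lp(c, bumped)[0] - z0
```

For max 3x + 2y with x + y <= 4 and x + 3y <= 6, the optimum is 12 at (4, 0); the binding first constraint has shadow price 3, while the slack second constraint has shadow price 0.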
Signal denoising is directly related to estimating the samples of a received signal, either by estimating the equation parameters of the target reflections or those of the surrounding noise and clutter accompanying the data of interest. Radar signals recorded using analogue or digital devices are not immune to noise. Random (white) noise with no coherency is mainly produced in the form of random electrons, caused by heat, the environment, and stray circuitry losses. These factors influence the output signal voltage, thus creating detectable noise. Differential Evolution (DE) is an effective, efficient, and robust optimisation method used to solve different problems in the engineering and scientific domains, such as signal processing. This paper looks …
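The Differential Evolution method named above can be sketched in its classic DE/rand/1/bin form. This is a generic illustration, not the paper's radar objective: the sphere function stands in for whatever parameter-estimation cost the application defines.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=300, seed=1):
    """Classic DE/rand/1/bin minimiser. bounds is a list of (lo, hi) pairs,
    one per dimension; f maps a candidate vector to its cost."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)           # force at least one mutated gene
            trial = list(pop[i])
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial[j] = min(max(v, lo), hi)   # clamp to the search box
            ft = f(trial)
            if ft <= fit[i]:                     # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

For a denoising task, f would measure the misfit between the parametrised signal model and the noisy samples; here it is simply sum(x**2), whose minimum DE locates at the origin.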
Rutting is a very common type of distress in asphalt mixtures. It occurs under heavy loads and slow-moving traffic, and it needs to be predicted to avoid major deformation of the pavement. In this paper a simple linear viscous method is used to predict rutting in asphalt mixtures, using a multi-layer linear computer programme (BISAR). The material properties were derived from the Repeated Load Axial Test (RLAT) and represented by a strain-dependent axial viscosity. The axial viscosity was used in an incremental multi-layer linear viscous analysis to calculate the deformation rate during each increment, and therefore the overall development of rutting. The method has been applied to six mixtures and at different temperatures …
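The incremental viscous accumulation described above can be sketched in a few lines. All names and numbers here are illustrative assumptions, not BISAR/RLAT values: each layer's permanent strain rate is taken as stress divided by a (possibly strain-dependent) axial viscosity, integrated step by step, and the rut depth is the thickness-weighted sum of layer strains.

```python
def rut_depth(layers, time_steps):
    """Incremental linear-viscous rutting sketch. Each layer is a tuple
    (thickness_mm, deviatoric_stress_kPa, viscosity_fn), where viscosity_fn maps
    accumulated permanent strain to an axial viscosity in kPa*s (hypothetical units)."""
    strains = [0.0] * len(layers)
    for dt in time_steps:
        for i, (thickness, stress, viscosity_fn) in enumerate(layers):
            rate = stress / viscosity_fn(strains[i])   # strain rate = stress / viscosity
            strains[i] += rate * dt                    # accumulate permanent strain
    # Rut depth: sum of each layer's permanent strain times its thickness.
    return sum(t * s for (t, _, _), s in zip(layers, strains))
```

With a constant viscosity this reduces to the closed form strain = stress * time / viscosity, which is a convenient sanity check on the incremental loop.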
In this paper we use frequentist and Bayesian approaches to the linear regression model to predict future observations of unemployment rates in Iraq. Parameters are estimated using the ordinary least squares method for the frequentist approach and the Markov Chain Monte Carlo (MCMC) method for the Bayesian approach. Calculations are carried out in R. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), are used to compare the performance of the estimates. The analysis shows that the Bayesian linear regression model is better and can be used as an alternative to the frequentist approach. The results also show that unemployment rates will continue to increase over the next two decades …
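The Bayesian estimation step can be sketched with a minimal random-walk Metropolis sampler. This is an illustrative assumption-laden sketch, not the paper's R model: it assumes a simple line y = a + b*x with known unit noise variance and flat priors, and returns posterior means of the two coefficients.

```python
import math, random

def metropolis_linreg(x, y, iters=6000, burn=1000, prop_sd=0.05, seed=7):
    """Random-walk Metropolis for y = a + b*x + N(0, 1) noise with flat priors;
    returns posterior means of (a, b) after discarding a burn-in period."""
    rng = random.Random(seed)

    def log_lik(a, b):
        return -0.5 * sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

    a = b = 0.0
    ll = log_lik(a, b)
    sa = sb = 0.0
    kept = 0
    for it in range(iters):
        na = a + rng.gauss(0.0, prop_sd)          # symmetric Gaussian proposal
        nb = b + rng.gauss(0.0, prop_sd)
        nll = log_lik(na, nb)
        if math.log(rng.random()) < nll - ll:     # Metropolis acceptance rule
            a, b, ll = na, nb, nll
        if it >= burn:
            sa += a
            sb += b
            kept += 1
    return sa / kept, sb / kept
```

With flat priors and known noise variance, the posterior means should land close to the ordinary least squares estimates, which is the comparison the paper draws between the two approaches.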
The act V with respect to sinshT and its properties are studied in this research, where the relationships among the faithful act, the finitely generated act, and the separated act are studied and linked to one-to-one operators. The following relations are proved: the act is an act if and only if it is a module when the operator is nilpotent; likewise, when the operator is onto, the act is finitely generated, that is, the space is finitely generated. It is also proved that the act is faithful for every bounded operator, and it is likewise verified that for any bounded oper…
In this paper a modified trapezoidal rule is presented for solving linear Volterra integral equations (V.I.E.) of the second kind, and we observe that this procedure is effective in solving such equations. Two examples are given, with comparison tables, to verify the validity of the procedure.
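The classical trapezoidal scheme that such methods build on can be sketched as follows. This is an illustrative sketch of the standard rule, not the paper's exact modification: for u(x) = f(x) + ∫₀ˣ K(x, t) u(t) dt, each new grid value u_i appears inside its own quadrature, so a small linear equation is solved at every step.

```python
def volterra_trapezoid(f, K, h, n):
    """Trapezoidal-rule solution of the second-kind Volterra equation
    u(x) = f(x) + integral_0^x K(x, t) u(t) dt on the grid x_i = i*h, i = 0..n."""
    u = [f(0.0)]                                   # u_0 = f(0): the integral is empty
    for i in range(1, n + 1):
        xi = i * h
        s = 0.5 * K(xi, 0.0) * u[0]                # half-weight endpoint t = 0
        s += sum(K(xi, j * h) * u[j] for j in range(1, i))
        # u_i also appears inside the quadrature with weight h/2; solve for it.
        u.append((f(xi) + h * s) / (1.0 - 0.5 * h * K(xi, xi)))
    return u
```

A standard check: with f(x) = 1 and K = 1 the exact solution is u(x) = e^x, and the scheme converges at the trapezoidal rate O(h²).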
This article estimates the partially linear model using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study the small-sample behaviour for different functions, sample sizes, and variances. The results show that the wavelet smoother is best, according to the mean average squared error criterion, in all cases considered.
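The kernel smoother used as one of the two competitors can be sketched in its common Nadaraya-Watson form. This is an illustrative sketch; the article's exact smoothers and bandwidth tuning are not reproduced here.

```python
import math

def kernel_smooth(x, y, grid, bandwidth):
    """Nadaraya-Watson kernel regression estimate with a Gaussian kernel:
    at each grid point, a locally weighted average of the observed y values."""
    out = []
    for g in grid:
        w = [math.exp(-0.5 * ((g - xi) / bandwidth) ** 2) for xi in x]
        sw = sum(w)
        out.append(sum(wi * yi for wi, yi in zip(w, y)) / sw)
    return out
```

The bandwidth controls the bias-variance trade-off that the simulation experiments vary: small bandwidths track the data closely, large ones oversmooth.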
In this paper, a self-tuning adaptive neural controller strategy for an unknown nonlinear system is presented. The system considered is described by an unknown NARMA-L2 model, and a feedforward neural network is used to learn the model in two stages. The first stage is learned off-line with two configurations, a series-parallel model and a parallel model, to ensure that the model output equals the actual output of the system and to find the Jacobian of the system, which is a critically important parameter because it is used by the feedback controller. The second stage is learned on-line to modify the weights of the model in order to track the parameter variations that occur in the system. A back-propagation neural network is applied …
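The off-line identification stage described above can be sketched with a minimal back-propagation loop. Network size, learning rate, and target function here are illustrative assumptions, not the paper's NARMA-L2 setup: a one-hidden-layer tanh network is trained, series-parallel style, to mimic a plant's input-output map.

```python
import math, random

def train_identifier(us, ys, hidden=8, lr=0.05, epochs=3000, seed=3):
    """Train a one-hidden-layer tanh network on (input, output) pairs using
    per-sample back-propagation; returns (predict_fn, first_mse, last_mse)."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]   # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]   # hidden -> output
    b2 = 0.0

    def forward(u):
        h = [math.tanh(w1[j] * u + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    mse_first = mse_last = None
    for ep in range(epochs):
        se = 0.0
        for u, y in zip(us, ys):
            h, yhat = forward(u)
            e = yhat - y
            se += e * e
            for j in range(hidden):                 # back-propagate the error
                gh = e * w2[j] * (1.0 - h[j] ** 2)  # gradient through tanh
                w2[j] -= lr * e * h[j]
                w1[j] -= lr * gh * u
                b1[j] -= lr * gh
            b2 -= lr * e
        mse_last = se / len(us)
        if ep == 0:
            mse_first = mse_last
    return (lambda u: forward(u)[1]), mse_first, mse_last
```

In the paper's scheme this identified model additionally supplies the system Jacobian to the feedback controller; the sketch stops at the identification step itself.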