Image compression is an important issue in computer storage and transmission. It makes efficient use of the redundancy embedded within an image itself and may also exploit the limitations of human vision to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes the lossy predictor model along with a multiresolution base and thresholding techniques, and the second stage incorporates a near-lossless compression scheme on top of the first stage. The test results of both stages are promising, implicitly enhancing the performance of the traditional polynomial model in terms of compression ratio while preserving image quality.
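As a rough illustration of the modelling part of polynomial coding, the sketch below fits a first-order (planar) polynomial to each image block by least squares and quantizes the residual; the block size, model order, and quantization step are illustrative assumptions rather than the paper's actual settings.

```python
import numpy as np

def polynomial_code_block(block, step=4):
    """Fit a planar model a0 + a1*x + a2*y to one block (least squares),
    then return the model coefficients and the quantized residual."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix of the first-order polynomial model
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    predicted = (A @ coeffs).reshape(h, w)
    residual = block - predicted
    # Lossy / near-lossless part: quantize the residual with a uniform step
    q_residual = np.round(residual / step).astype(np.int16)
    return coeffs, q_residual

def polynomial_decode_block(coeffs, q_residual, step=4):
    """Reconstruct the block from the model plus the dequantized residual."""
    h, w = q_residual.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    return (A @ coeffs).reshape(h, w) + q_residual * step
```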
This article aims to estimate the partially linear model using two methods, the wavelet and kernel smoothers. Simulation experiments are used to study the small-sample behavior for different functions, sample sizes, and variances. The results showed that the wavelet smoother is the best according to the mean average squared error criterion in all of the cases considered.
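A minimal sketch of one of the two compared estimators, a Gaussian-kernel (Nadaraya-Watson) smoother for the nonparametric component; the kernel, bandwidth, and test function here are illustrative choices, not those of the simulation study.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth=0.1):
    """Gaussian-kernel smoother: each fitted value is a weighted average of
    y_train, with weights decaying with the distance from x_eval to x_train."""
    diffs = (x_eval[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs ** 2)
    return (weights @ y_train) / weights.sum(axis=1)

# Toy use: recover a smooth function from noisy samples
x = np.sort(np.random.rand(200))
y = np.sin(2 * np.pi * x) + 0.2 * np.random.randn(200)
x_grid = np.linspace(0, 1, 100)
y_hat = nadaraya_watson(x, y, x_grid, bandwidth=0.05)
```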
Algorithms using the second-order B-splines [B2(x)] and the third-order B-splines [B3(x)] are derived to solve 1st-, 2nd- and 3rd-order linear Fredholm integro-differential equations (FIDEs). These new procedures have all the useful properties of the B-spline function and can be used with comparatively greater computational ease and efficiency. The results of these algorithms are compared with the cubic spline function. Two numerical examples are given to validate the results of this method.
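For reference, a general form of the linear Fredholm integro-differential equations targeted here can be written as below; the specific kernels, coefficients, and orders used in the paper's examples may of course differ.

```latex
% n-th order linear Fredholm integro-differential equation:
% the unknown u(x) appears through its derivatives and under a fixed-limit integral.
\sum_{k=0}^{n} p_k(x)\, u^{(k)}(x) \;=\; f(x) \;+\; \lambda \int_{a}^{b} K(x,t)\, u(t)\, dt,
\qquad a \le x \le b,
```

subject to the appropriate initial or boundary conditions, with n = 1, 2, 3 for the three cases treated.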
In this paper, frequentist and Bayesian approaches to the linear regression model are used to predict future unemployment rates in Iraq. Parameters are estimated with the ordinary least squares method for the frequentist approach and with the Markov Chain Monte Carlo (MCMC) method for the Bayesian approach; all calculations are done in the R program. Two criteria, the root mean square error (RMSE) and the median absolute deviation (MAD), were used to compare the performance of the estimates. The analysis showed that the linear regression model using the Bayesian approach is better and can be used as an alternative to the frequentist approach. The results obtained also indicated that unemployment rates will continue to increase over the next two decades.
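A minimal sketch of the comparison on toy data (the paper's calculations are in R; this Python version uses a hand-rolled random-walk Metropolis sampler with flat priors purely for illustration, and the series, noise level, and tuning constants are assumptions, not the study's data or model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trend series standing in for the unemployment-rate data (illustrative only)
n = 60
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
y = X @ np.array([5.0, 0.08]) + rng.normal(0, 0.5, n)

# Frequentist: ordinary least squares
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Bayesian: random-walk Metropolis sampling of the regression coefficients
def log_likelihood(beta, sigma=0.5):
    resid = y - X @ beta
    return -0.5 * np.sum((resid / sigma) ** 2)

samples, beta = [], beta_ols.copy()
for _ in range(20_000):
    proposal = beta + rng.normal(0, 0.01, size=2)
    if np.log(rng.random()) < log_likelihood(proposal) - log_likelihood(beta):
        beta = proposal
    samples.append(beta)
beta_bayes = np.mean(samples[5_000:], axis=0)  # posterior mean after burn-in

# Compare the two fits with RMSE and MAD
for name, b in [("OLS", beta_ols), ("Bayes", beta_bayes)]:
    resid = y - X @ b
    rmse = np.sqrt(np.mean(resid ** 2))
    mad = np.median(np.abs(resid - np.median(resid)))
    print(f"{name}: RMSE={rmse:.3f}, MAD={mad:.3f}")
```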
Linear programming currently occupies a prominent position in various fields and has wide applications; its importance lies in being a means of studying the behavior of a large number of systems. It is also the simplest and easiest type of model that can be created to address industrial, commercial, military and other problems, and through which to obtain the optimal quantitative value. In this research, we deal with the post-optimality solution, also known as sensitivity analysis, using the principle of shadow prices. The scientific solution to a problem is not complete once the optimal solution is reached; any change in the values of the model constants, or what is known as the model inputs, will change …
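A minimal sketch of how shadow prices can be read off after solving a linear programme, assuming SciPy's HiGHS-based linprog (SciPy 1.7+); the small product-mix LP below is an illustrative example, not taken from the research.

```python
from scipy.optimize import linprog

# Maximize 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print("optimal value:", -res.fun)                # 36.0
print("optimal point:", res.x)                   # [2. 6.]
# Shadow prices: change in the maximized objective per unit increase in each
# right-hand side; negate the minimization marginals returned by the solver.
print("shadow prices:", -res.ineqlin.marginals)  # [0.  1.5 1. ]
```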
In this paper, a self-tuning adaptive neural controller strategy for an unknown nonlinear system is presented. The system considered is described by an unknown NARMA-L2 model, and a feedforward neural network is used to learn the model in two stages. The first stage is learned off-line with two configurations, a series-parallel model and a parallel model, to ensure that the model output equals the actual output of the system and to find the Jacobian of the system, which is a parameter of critical importance since it is used by the feedback controller. The second stage is learned on-line to modify the weights of the model in order to cope with the parameter variations that occur in the system. A back-propagation neural network is applied …
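A toy sketch of the off-line (series-parallel) identification stage and the model Jacobian it provides to the controller, using a single-hidden-layer network trained by back-propagation on an assumed plant; the plant equation, network size, and learning rate are illustrative assumptions, not the paper's NARMA-L2 setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed plant standing in for the unknown nonlinear system (illustrative only)
def plant(y, u):
    return 0.6 * y + 0.1 * np.sin(y) + 0.4 * u

# One-hidden-layer feedforward model of y(k+1) = f(y(k), u(k))
W1 = rng.normal(0, 0.5, (8, 2)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (1, 8)); b2 = np.zeros(1)

# Off-line stage (series-parallel): train on measured plant data by back-propagation
lr = 0.05
for _ in range(5000):
    y, u = rng.uniform(-1, 1, 2)
    x = np.array([y, u])
    h = np.tanh(W1 @ x + b1)
    y_hat = (W2 @ h + b2)[0]
    err = y_hat - plant(y, u)
    gW2 = err * h[None, :]; gb2 = np.array([err])
    gh = err * W2[0] * (1 - h ** 2)
    gW1 = np.outer(gh, x); gb1 = gh
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Jacobian dy_hat/du of the learned model, the quantity used by the feedback controller
def jacobian_u(y, u):
    h = np.tanh(W1 @ np.array([y, u]) + b1)
    return float(W2[0] @ ((1 - h ** 2) * W1[:, 1]))
```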
Rutting in asphalt mixtures is a very common type of distress. It occurs due to heavy applied loads and slow-moving traffic, and it needs to be predicted to avoid major deformation of the pavement. In this paper, a simple linear viscous method is used to predict rutting in asphalt mixtures with a multi-layer linear computer programme (BISAR). The material properties were derived from the Repeated Load Axial Test (RLAT) and represented by a strain-dependent axial viscosity. The axial viscosity was used in an incremental multi-layer linear viscous analysis to calculate the deformation rate during each increment, and therefore the overall development of rutting. The method has been applied to six mixtures and at different temperatures …
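A minimal sketch of the incremental viscous idea for a single layer, assuming a strain-hardening axial viscosity law; the constitutive form and the numerical values are illustrative assumptions, not RLAT-derived properties or the BISAR multi-layer analysis itself.

```python
import numpy as np

def incremental_rut_depth(stress, layer_thickness, n_increments, dt,
                          eta0=1e9, k=50.0):
    """Incremental linear viscous accumulation of permanent deformation.

    An assumed strain-dependent axial viscosity eta(strain) = eta0 * exp(k * strain)
    gives the strain rate in each increment as stress / eta; the rut depth is the
    accumulated permanent strain times the layer thickness.
    """
    strain = 0.0
    for _ in range(n_increments):
        eta = eta0 * np.exp(k * strain)   # viscosity stiffens as strain accumulates
        strain += (stress / eta) * dt     # linear viscous law for this increment
    return strain * layer_thickness

# Example: 100 kPa wheel stress on a 50 mm layer, 10,000 load increments of 0.1 s
rut = incremental_rut_depth(stress=100e3, layer_thickness=0.05,
                            n_increments=10_000, dt=0.1)
print(f"predicted rut depth: {rut * 1000:.2f} mm")
```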
In this article, a Convolutional Neural Network (CNN) is used to detect damage and no-damage images from satellite imagery using different classifiers. These classifiers are well-known models that are used with the CNN to detect and classify images on a specific dataset. The dataset used belongs to the Houston hurricane that caused severe damage in the nearby areas. In addition, transfer learning is used to store the knowledge (weights) and reuse it in the next task. Moreover, each applied classifier is used to detect the images from the dataset after it is split into training, testing and validation sets. The Keras library is used to apply the CNN algorithm with each selected classifier to detect the images. Furthermore, the performa…
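A minimal Keras sketch of the classifier-plus-transfer-learning idea, assuming VGG16 ImageNet weights as the reused knowledge and a hypothetical data/train and data/validation directory layout; the classifiers, image size, and dataset organisation in the article may differ.

```python
import tensorflow as tf

# Hypothetical layout: damage/ and no_damage/ subfolders inside each split directory
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/validation", image_size=(128, 128), batch_size=32)

# Transfer learning: reuse ImageNet weights as a frozen feature extractor
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(128, 128, 3))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # damage vs. no damage
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```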
As an important resource, entangled light sources have been used in developing quantum information technologies such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entanglement light sources in real-life fiber-based …
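For context, the channel transmittance T is related to the loss in decibels by the standard conversion below, so the quoted 9 dB optical loss corresponds to roughly 13% of the photons surviving the channel.

```latex
T = 10^{-L_{\mathrm{dB}}/10} = 10^{-9/10} \approx 0.126
```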