Image compression is an important problem in computer storage and transmission; it makes efficient use of the redundancy embedded within an image itself and may also exploit the limitations of human vision to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed. The first stage utilizes the lossy predictor model along with a multiresolution base and thresholding techniques; the second stage incorporates a near-lossless compression scheme on top of the first stage. The test results of both stages are promising, improving the performance of the traditional polynomial model in terms of compression ratio while preserving image quality.
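The modelling-plus-residual idea can be illustrated with a minimal sketch (illustrative only; the paper's actual polynomial model is not reproduced here): predict each pixel from its left neighbour and keep the residuals, which are small for smooth data and therefore compress better than raw values.

```python
import numpy as np

# Illustrative sketch of model + residual coding (not the paper's exact
# polynomial model): predict each pixel from its left neighbour and keep
# the residual; the decoder reverses the predictor exactly.
def predict_residual(row):
    pred = np.empty_like(row)
    pred[0] = 0                 # no left neighbour for the first pixel
    pred[1:] = row[:-1]         # left-neighbour predictor
    return row - pred           # residual = actual - predicted

def reconstruct(residual):
    return np.cumsum(residual)  # inverse of the left-neighbour predictor

row = np.array([100, 102, 101, 105, 110], dtype=np.int32)
res = predict_residual(row)
print(res)                      # small residuals for smooth data
print(reconstruct(res))         # exact (lossless) reconstruction
```

Quantizing the residuals before reconstruction would turn this lossless sketch into a near-lossless scheme of the kind the second stage describes.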
The objective of this research is to clarify the relationship between strategic management accounting techniques and the reliability of financial statements, and to measure the impact of these techniques as an independent variable with three dimensions (activity-based costing, target costing, and benchmarking) on the reliability of financial statements as a dependent variable. To achieve this objective, the researcher did the following. First: the research problem was defined through the question: do strategic management accounting techniques affect the reliability of financial statements in industrial companies listed on the Palestine Exchange? Second: Making the analytical des
Image compression is very important in reducing the costs of data storage and transmission over relatively slow channels. The wavelet transform has received significant attention because its multiresolution decomposition allows efficient image analysis. This paper attempts to give an understanding of the wavelet transform using two of the more popular wavelet techniques, Haar and Daubechies, and to compare their effects on image compression.
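The Haar case can be illustrated with a short sketch: a one-level 2-D Haar transform followed by hard thresholding of small coefficients (block size and threshold here are illustrative, not taken from the paper).

```python
import numpy as np

# One-level 2-D Haar transform on an 8x8 block, then hard thresholding
# of small coefficients -- a minimal sketch of wavelet-based compression.
def haar_1d(x):
    avg = (x[0::2] + x[1::2]) / 2.0   # approximation (low-pass) half
    diff = (x[0::2] - x[1::2]) / 2.0  # detail (high-pass) half
    return np.concatenate([avg, diff])

def haar_2d(img):
    rows = np.apply_along_axis(haar_1d, 1, img)   # transform rows
    return np.apply_along_axis(haar_1d, 0, rows)  # then columns

img = np.arange(64, dtype=float).reshape(8, 8)    # smooth test block
coeffs = haar_2d(img)
kept = np.where(np.abs(coeffs) > 1.0, coeffs, 0.0)  # zero small details
sparsity = np.mean(kept == 0.0)
print(f"fraction of coefficients zeroed: {sparsity:.2f}")  # → 0.50
```

On this smooth test block half the coefficients fall below the threshold, which is exactly the redundancy a wavelet coder exploits; a Daubechies filter would replace `haar_1d` with longer filter taps.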
This paper presents the application of a framework for fast and efficient compressive sampling based on the concept of random sampling of a sparse audio signal. It provides four important features. (i) It is universal across a variety of sparse signals. (ii) The number of measurements required for exact reconstruction is nearly optimal and much less than the sampling frequency, below the Nyquist rate. (iii) It has very low complexity and fast computation. (iv) It is developed on a provable mathematical model from which we are able to quantify trade-offs among streaming capability, computation/memory requirements, and reconstruction quality of the audio signal. Compressed sensing (CS) is an attractive compression scheme due to its uni
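The random-sampling-and-reconstruction idea can be shown with a toy sketch: Gaussian random measurements of a hand-made 2-sparse signal, recovered by orthogonal matching pursuit (a generic CS solver; none of the sizes or names come from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 40                   # signal length, number of measurements (m << n)

# Hand-made 2-sparse test signal (support and amplitudes are illustrative).
x = np.zeros(n)
x[10], x[70] = 3.0, -2.0

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x                                 # sub-Nyquist linear measurements

# Orthogonal matching pursuit: greedily add the column most correlated
# with the current residual, then re-fit the coefficients by least squares.
support, resid = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(sorted(support))           # recovered support of the sparse signal
```

With only 40 measurements of a length-128 signal the two active positions and their amplitudes are recovered exactly, which is the trade-off the abstract's feature (ii) refers to.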
Thin films of CuPc of various thicknesses (150, 300 and 450) nm have been deposited using the pulsed laser deposition technique at room temperature. The study showed that the optical absorption spectra of the CuPc thin films contain two absorption bands: one in the visible region at about 635 nm, referred to as the Q-band, and the second in the ultraviolet region, where the B-band is located at 330 nm. The CuPc thin films were found to have direct band gaps with values around (1.81 and 3.14) eV, respectively. The vibrational studies were carried out using Fourier transform infrared spectroscopy (FT-IR). Finally, from open- and closed-aperture Z-scan data, the non-linear absorption coefficient and non-linear refractive index have been calculated, respectively.
In this paper, the concept of a normalized duality mapping is introduced in real convex modular spaces. Some of its properties are then shown, which allow dealing with results related to uniformly smooth convex real modular spaces. For multivalued mappings defined on these spaces, the convergence of a two-step iterative sequence to a fixed point is proved.
In this paper, we use non-polynomial spline functions to develop numerical methods for approximating the solution of Volterra integral equations of the second kind. Numerical examples are presented to illustrate the applications of these methods and to compare the computed results with other known methods.
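As a point of reference for such comparisons, a standard trapezoidal-rule scheme for a second-kind Volterra equation u(t) = f(t) + ∫₀ᵗ K(t, s) u(s) ds can be sketched as follows (a textbook method, not the paper's non-polynomial spline scheme):

```python
import numpy as np

# Trapezoidal-rule solver for a 2nd-kind Volterra integral equation
#     u(t) = f(t) + integral_0^t K(t, s) u(s) ds
# on a uniform grid; at each step the implicit diagonal term is solved for.
def volterra2(f, K, t):
    h = t[1] - t[0]
    u = np.empty_like(t)
    u[0] = f(t[0])                          # integral vanishes at t = 0
    for i in range(1, len(t)):
        w = h * np.ones(i + 1)
        w[0] = w[i] = h / 2                 # trapezoid end-point weights
        s = f(t[i]) + sum(w[j] * K(t[i], t[j]) * u[j] for j in range(i))
        u[i] = s / (1.0 - w[i] * K(t[i], t[i]))  # solve for the new value
    return u

# Test problem: u(t) = 1 + integral_0^t u(s) ds has exact solution e^t.
t = np.linspace(0.0, 1.0, 101)
u = volterra2(lambda t: 1.0, lambda t, s: 1.0, t)
print(abs(u[-1] - np.e))                    # O(h^2) discretization error
```

A spline-based method of the kind the paper develops would replace the piecewise-linear trapezoid weights with weights derived from the chosen spline basis, typically raising the order of accuracy.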
This research sought to present the concept of cross-sectional (panel) data models: dual-dimensional data that capture the effect of change over time, obtained by repeatedly observing the measured phenomenon in different time periods. The panel data models were defined by type (fixed, random, and mixed effects) and compared by studying and analyzing the mathematical relationship between the effect of time and a set of basic variables, which are the main axes on which the research is based. These are represented by the monthly revenue of the working individual and the profits it generates, which represents the response variable, and its relationship to a set of explanatory variables represented by the
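The fixed-effects case can be illustrated with a minimal within-estimator sketch (simulated data; the sizes, seed, and true slope are illustrative and not from the research):

```python
import numpy as np

# Within (fixed-effects) panel estimator sketch: demean y and x within
# each individual to remove the individual effect, then run OLS on the
# demeaned data. All numbers below are simulated for illustration.
rng = np.random.default_rng(1)
n_id, n_t = 50, 12                        # individuals, time periods
ids = np.repeat(np.arange(n_id), n_t)

alpha = rng.normal(size=n_id)[ids]        # individual fixed effects
x = rng.normal(size=n_id * n_t) + alpha   # regressor correlated with effects
y = 2.0 * x + alpha + rng.normal(scale=0.1, size=n_id * n_t)

def demean_within(v, ids):
    means = np.bincount(ids, weights=v) / np.bincount(ids)
    return v - means[ids]                 # subtract each individual's mean

xd, yd = demean_within(x, ids), demean_within(y, ids)
beta = (xd @ yd) / (xd @ xd)              # OLS slope on demeaned data
print(round(float(beta), 2))              # close to the true slope 2.0
```

Pooled OLS on the raw `x` and `y` here would be biased because the regressor is correlated with the individual effects; the within transformation removes that bias, which is the usual motivation for the fixed-effects specification.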
The purpose of this paper is to study different types of three-step iteration algorithms, called new iteration,