In this study, we present a new steganography method based on quantizing the bands of perceptual color spaces. Four perceptual color spaces are used to test the new method, namely HSL, HSV, Lab, and Luv, where different algorithms are used to compute the last two color spaces. The results confirm the validity of this method as a steganographic method, and an analysis of the effects of the quantization and embedding process on the quality of the cover image and on the quality of the perceptual color space bands is presented.
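The abstract does not specify the embedding rule, but a minimal sketch of quantization-based embedding in one perceptual band might look as follows, assuming an HSV cover image, embedding in the V band, a quantization step q, and an odd/even quantization-index rule (all illustrative assumptions, not the paper's stated design):

```python
# Minimal sketch: hide one bit per pixel in the V (value) band of an HSV image
# by forcing each pixel's quantization index to be even for bit 0, odd for bit 1.
# Band choice, step size and the parity rule are assumptions, not the paper's method.
import numpy as np

def embed_bits(v_band, bits, q=8):
    """Embed bits into the V band via odd/even quantization indices."""
    v = v_band.astype(np.float64).ravel().copy()
    for i, b in enumerate(bits):
        idx = int(round(v[i] / q))
        if idx % 2 != b:                       # adjust index parity to carry the bit
            idx += 1 if idx * q <= v[i] else -1
        v[i] = np.clip(idx * q, 0, 255)
    return v.reshape(v_band.shape).astype(np.uint8)

def extract_bits(v_band, n_bits, q=8):
    """Recover bits from the parity of each pixel's quantization index."""
    v = v_band.astype(np.float64).ravel()
    return [int(round(v[i] / q)) % 2 for i in range(n_bits)]
```

Under these assumptions extraction needs only the same step q, so the scheme stays blind; the quantization step also controls the distortion measured on the cover image.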
In this paper, we study the concept of sectional intuitionistic fuzzy continuity and prove the Schauder fixed point theorem in intuitionistic fuzzy metric spaces as a generalization of fuzzy metric spaces, and we prove another version of the Schauder fixed point theorem in intuitionistic fuzzy metric spaces as a generalization of other types of fixed point theorems in intuitionistic fuzzy metric spaces considered by other researchers, as well as of the usual intuitionistic fuzzy metric space.
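For context, the structure being generalized is commonly defined following Park; a sketch of that definition is given below, though the paper may use a variant of the axioms:

```latex
% Sketch of Park's definition of an intuitionistic fuzzy metric space;
% the paper's axioms may differ in detail.
A 5-tuple $(X, M, N, *, \diamond)$ is an intuitionistic fuzzy metric space if
$*$ is a continuous t-norm, $\diamond$ is a continuous t-conorm, and
$M, N : X \times X \times (0,\infty) \to [0,1]$ satisfy, for all $x,y,z \in X$
and $s,t > 0$:
\begin{itemize}
  \item $M(x,y,t) + N(x,y,t) \le 1$;
  \item $M(x,y,t) = 1$ iff $x = y$, and $M(x,y,t) = M(y,x,t)$;
  \item $M(x,y,t) * M(y,z,s) \le M(x,z,t+s)$;
  \item $N(x,y,t) = 0$ iff $x = y$, and $N(x,y,t) = N(y,x,t)$;
  \item $N(x,y,t) \diamond N(y,z,s) \ge N(x,z,t+s)$.
\end{itemize}
```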
It is shown that if a subset of a topological space (χ, τ) is δ-semi-closed, then it is semi-closed. Using this fact, we introduce the concept of regularity of a topological space (χ, τ) via δ-semi-open sets. Several properties and results are investigated and studied. In addition, we study some maps that preserve the δ-semi-regularity of spaces.
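The stated implication rests on the fact that the δ-interior of a set is contained in its interior, so δ-semi-openness implies semi-openness and, dually, δ-semi-closedness implies semi-closedness. The standard definitions assumed here (the paper's notation may differ) can be sketched as:

```latex
% Definitions assumed for the remark above; the paper may use equivalent forms.
A \text{ is } \delta\text{-semi-open} \iff A \subseteq \operatorname{cl}\bigl(\operatorname{int}_{\delta}(A)\bigr),
\qquad
A \text{ is } \delta\text{-semi-closed} \iff \operatorname{int}\bigl(\operatorname{cl}_{\delta}(A)\bigr) \subseteq A .
```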
In this paper, we introduce new classes of sets called g*sD-sets, g*sD-α-sets, g*sD-pre-sets, g*sD-b-sets, and g*sD-β-sets. We also study some of their properties and the relations among them. Moreover, we use these sets to define and study some associated separation axioms.
In this paper, an algorithm for binary codebook design is used in the vector quantization technique, which in turn is used to improve the acceptability of the absolute moment block truncation coding (AMBTC) method. The vector quantization (VQ) method is used to compress the bitmap (the output of the first method, AMBTC). In this paper, the binary codebook can be generated for many images by randomly choosing the code vectors from a set of binary image vectors, and this codebook is then used to compress all the bitmaps of these images. The bitmap of an image is chosen for compression with this codebook based on the criterion of the average bitmap replacement error (ABPRE). The proposed method is suitable for reducing bit rates.
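A minimal sketch of the pipeline described above, assuming 4×4 AMBTC blocks, a randomly drawn binary codebook, and Hamming distance as the bit-replacement criterion (block size, codebook size, and the distance measure are assumptions, not taken from the abstract):

```python
# Sketch: AMBTC on 4x4 blocks, then each binary bitmap is replaced by the index
# of the closest codeword in a randomly drawn binary codebook (VQ stage).
import numpy as np

def ambtc_block(block):
    """Return (low mean, high mean, bitmap) for one grayscale block."""
    m = block.mean()
    bitmap = (block >= m).astype(np.uint8)
    hi = block[bitmap == 1].mean() if bitmap.any() else m
    lo = block[bitmap == 0].mean() if not bitmap.all() else m
    return lo, hi, bitmap

def build_codebook(training_bitmaps, size=64, seed=0):
    """Pick 'size' codewords at random from a pool of training bitmaps."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(training_bitmaps), size=size, replace=False)
    return np.array([training_bitmaps[i].ravel() for i in idx])

def encode_bitmap(bitmap, codebook):
    """Index of the codeword with the fewest replaced bits (Hamming distance)."""
    distances = np.count_nonzero(codebook != bitmap.ravel(), axis=1)
    return int(np.argmin(distances))
```

With a 64-word codebook, each 16-bit bitmap is replaced by a 6-bit index plus the two block means, which is where the bit-rate reduction over plain AMBTC comes from.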
In this paper we present the theoretical foundation of forward error analysis of numerical algorithms under:
• approximations in "built-in" functions;
• rounding errors in arithmetic floating-point operations;
• perturbations of data.
The error analysis is based on a linearization method. The fundamental tools of the forward error analysis are systems of linear absolute and relative a priori and a posteriori error equations and the associated condition numbers, which constitute optimal bounds on the possible cumulative round-off errors. The condition numbers enable simple, general, and quantitative definitions of numerical stability. The theoretical results have been applied to Gaussian elimination and have proved to be a very effective means of both a priori and a posteriori error analysis.
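The linearization underlying such an analysis can be sketched as follows; the notation is illustrative rather than the paper's own:

```latex
% First-order (linearized) forward error propagation; illustrative notation.
For a computed quantity $y = \varphi(x_1,\dots,x_n)$, perturbations
$\Delta x_i$ propagate to first order as
\[
  \Delta y \;\approx\; \sum_{i=1}^{n} \frac{\partial \varphi}{\partial x_i}\,\Delta x_i,
  \qquad
  \frac{\Delta y}{y} \;\approx\; \sum_{i=1}^{n} \kappa_i \,\frac{\Delta x_i}{x_i},
  \quad
  \kappa_i \;=\; \frac{x_i}{\varphi(x)}\,\frac{\partial \varphi}{\partial x_i},
\]
where the $\kappa_i$ are the relative condition numbers that bound the
cumulative effect of data perturbations and rounding errors.
```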
The aim of this paper is to look at fibrewise slightly versions of the more important separation axioms of ordinary topology, namely fibrewise slightly T0 spaces, fibrewise slightly T1 spaces, fibrewise slightly R0 spaces, fibrewise slightly T2 spaces, fibrewise slightly functionally T2 spaces, fibrewise slightly regular spaces, fibrewise slightly completely regular spaces, and fibrewise slightly normal spaces. In addition, we state and prove several propositions related to these concepts.
The purpose of this paper is to consider fibrewise near versions of the more important separation axioms of ordinary topology, namely fibrewise near T0 spaces, fibrewise near T1 spaces, fibrewise near R0 spaces, fibrewise near Hausdorff spaces, fibrewise near functionally Hausdorff spaces, fibrewise near regular spaces, fibrewise near completely regular spaces, fibrewise near normal spaces, and fibrewise near functionally normal spaces. We also give several results concerning them.
Data compression is a very important process for reducing the size of large data to be stored or transmitted, and parametric curves such as the Bezier curve are a suitable way to represent the gradual change and variability of such data. The Ridgelet transform solves problems of the wavelet transform and can compress an image well, and when it is combined with the Bezier curve, the quality of the compressed image becomes very good. In this paper, a new compression method is proposed using the Bezier curve with the Ridgelet transform on RGB images. The results show that the proposed method gives good performance in both subjective and objective experiments; the PSNR values (34.2365, 33.4323, and 33.0987) were increased in the proposed method.
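The objective measure reported above is PSNR; a minimal helper for 8-bit images (an illustrative sketch, not the paper's code) is:

```python
# PSNR between an original and a reconstructed 8-bit image of identical shape.
import numpy as np

def psnr(original, compressed, peak=255.0):
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```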
The main idea of this research is to study fibrewise pairwise soft forms of the more important separation axioms of ordinary bitopology, namely fibrewise pairwise soft
Most embedding techniques for digital watermarks still work through direct inclusion in the pixels, without taking into account the level of compression (attack) that may occur, which makes the digital watermark easy to discard. In this research, a method is proposed to overcome this problem, based on the DCT (after the image is partitioned into non-overlapping blocks of size 8×8 pixels) accompanied by a quantization method. The watermark (a digital image) is embedded in the DCT frequency domain by seeking the blocks with the highest standard deviation (checking only the AC coefficients) within a predetermined threshold value; the covered image is then compressed (attacked) with varying degrees of compression. The suggested method ...
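A minimal sketch of the block-selection step, assuming a grayscale cover, SciPy's DCT-II, and ranking of 8×8 blocks by the standard deviation of their AC coefficients (the threshold test and the actual embedding rule are omitted, as the abstract does not give them):

```python
# Sketch: rank 8x8 DCT blocks by the std-dev of their AC coefficients; the
# watermark bits would go into the highest-ranked blocks. Illustrative only.
import numpy as np
from scipy.fft import dctn

def rank_blocks(gray, block=8):
    """Return block coordinates sorted by AC-coefficient std-dev (descending)."""
    h, w = gray.shape
    scores = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            coeffs = dctn(gray[r:r + block, c:c + block].astype(np.float64), norm="ortho")
            ac = coeffs.ravel()[1:]          # drop the DC coefficient
            scores.append((ac.std(), (r, c)))
    return [pos for _, pos in sorted(scores, reverse=True)]
```

Ranking by AC-coefficient spread favors textured blocks, where the embedding distortion is least visible and most likely to survive the subsequent compression attacks.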