This article presents a polynomial-based image compression scheme that uses the YUV color model to represent color content and first-order two-dimensional polynomial coding with a block size that varies according to the correlation between neighboring pixels. The polynomial residual of each band is analyzed into two parts: a most-important (big) part and a least-important (small) part. Because of the significant subjective importance of the big group, lossless compression (based on run-length spatial coding) is used to represent it. The small group is represented approximately by a lossy scheme based on an error-limited adaptive coding system and a transform coding stage (discrete cosine transform or bi-orthogonal transform). Experimentally, the developed system achieves high compression ratios with acceptable quality for color images. Its performance is comparable to that reported in recent studies; the system was also analyzed against the JPEG standard and shows better performance.
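The run-length spatial coding used for the big residual group can be illustrated with a minimal sketch. The function names and the flat-list input format are assumptions for illustration; the abstract does not specify the exact run representation.

```python
def rle_encode(values):
    """Run-length encode a flat sequence of residual values as (value, count) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Invert rle_encode, expanding each (value, count) pair."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

Because the encoding is exact, the big group is reconstructed without loss, which matches the subjective-importance argument in the abstract.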
The main objective of this paper is to design and implement the algorithms used in constructing the main program designated for determining the tensor product of representations of the special linear group.
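For matrix representations, the tensor product of two representation matrices is their Kronecker product; the following sketch computes it for matrices given as lists of lists. This is a generic illustration of the operation, not the paper's actual program.

```python
def kronecker(a, b):
    """Kronecker (tensor) product of two matrices given as lists of lists."""
    ra, ca = len(a), len(a[0])
    rb, cb = len(b), len(b[0])
    # Entry (i, j) of the product pairs a block index into `a`
    # with a within-block index into `b`.
    return [
        [a[i // rb][j // cb] * b[i % rb][j % cb] for j in range(ca * cb)]
        for i in range(ra * rb)
    ]
```

For representations R and S, the tensor-product representation acts by the Kronecker product R(g) ⊗ S(g) for each group element g.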
Steganography (text-in-image hiding) methods are still considered an important issue by researchers at the present time. Steganography methods vary in their hiding styles, from simple techniques to complex ones that resist potential attacks. The present research does not consider attacks on the host's secret text; instead, it proposes and implements an improved, highly confidential method of hiding text within an image, combined with a strong password method, ensuring that no change is made to the pixel values of the host image after the text is hidden. The phrase "highly confidential" denotes the low suspicion raised by the cover image. The experimental results show that the cov
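One way to hide a message without altering any pixel values, consistent with the abstract's claim, is to record (as the secret key) the positions of pixels whose least significant bits already equal the message bits. This sketch is an assumption about the general idea, not necessarily the authors' method; the function names and flat pixel list are illustrative.

```python
def find_carrier_positions(pixels, message_bits):
    """For each message bit, record the index of an unused pixel whose LSB
    already equals that bit; the cover image itself is never modified."""
    positions, used = [], set()
    for bit in message_bits:
        for i, p in enumerate(pixels):
            if i not in used and (p & 1) == bit:
                positions.append(i)
                used.add(i)
                break
        else:
            raise ValueError("no unused pixel matches bit %d" % bit)
    return positions

def recover_bits(pixels, positions):
    """Read the message back from the unmodified cover image."""
    return [pixels[i] & 1 for i in positions]
```

Since the cover image is byte-identical to the original, pixel-value analysis of the stego image reveals nothing; secrecy rests entirely on the position list, which would be protected by the password mechanism.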
This paper introduces an algorithm for lossless compression of natural and medical images. It utilizes various causal fixed predictors of one or two dimensions to remove the correlation, or spatial redundancy, embedded between image pixel values; a recursive polynomial model of a linear base is then applied.
The experimental results of the proposed compression method are promising in terms of preserving the details and quality of the reconstructed images as well as improving the compression ratio compared with the results of a traditional linear predictive coding system.
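The simplest one-dimensional causal fixed predictor predicts each pixel from its left neighbor and encodes only the residual; since prediction is exactly invertible, the scheme is lossless. The sketch below illustrates that idea on a single row; it is a generic example, not the paper's specific predictor set.

```python
def predict_residuals(row):
    """First-order causal prediction along a row: each pixel is predicted
    by its left neighbour; the first pixel is kept verbatim."""
    res = [row[0]]
    for i in range(1, len(row)):
        res.append(row[i] - row[i - 1])
    return res

def reconstruct(res):
    """Invert predict_residuals by accumulating the residuals."""
    row = [res[0]]
    for r in res[1:]:
        row.append(row[-1] + r)
    return row
```

On smooth image regions the residuals cluster near zero, which is what makes the subsequent entropy or polynomial coding effective.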
Identity is an influential and flexible concept in the social sciences and political studies. The basic sense of identity is the search for uniqueness. In one sense, it is a sign of identification with those we assume are similar to us, or at least are so in some significant ways. Globalization, migration, modern technologies, media, and political conflicts are argued to have a crucial effect on identity representation from a political perspective, specifically in the United States of America. This paper endeavors to investigate how American politicians represent their identities in speeches delivered in different periods of time, namely from 2015 to 2018, in terms of the pragmatic paradigm. Three randomly selected speeches by fa
Demands for storing and transferring image data have risen in recent years, given the transmission bandwidth required for considerable storage capacity. A data compression method is proposed and applied in an attempt to convert data files into smaller ones. The method is based on Wavelet Difference Reduction (WDR), considered among the most efficient image coding methods of recent years. Compression is performed for three different wavelet-based image techniques using the WDR process, implemented with different wavelet codecs: the Daub2+2,2 integer wavelet transform, the Daub5/3 integer-to-integer wavelet transform, and the Daub9/7 wavelet transform, each at level four. The used mu
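The Daub5/3 integer-to-integer transform mentioned above is conventionally realized with the reversible LeGall 5/3 lifting scheme; a one-level, one-dimensional sketch is shown below. This illustrates the integer lifting step only, not the WDR coding stage itself, and the boundary handling (symmetric extension) is a common choice assumed here.

```python
def lift_53_forward(x):
    """One level of the reversible LeGall 5/3 lifting scheme (1-D, even length).
    Returns low-pass (s) and high-pass (d) integer subbands."""
    n = len(x)
    assert n % 2 == 0
    even, odd = x[0::2], x[1::2]
    h = len(odd)
    # Predict step: detail = odd sample minus average of neighbouring evens.
    d = [odd[i] - (even[i] + even[min(i + 1, h - 1)]) // 2 for i in range(h)]
    # Update step: smooth the even samples with the details.
    s = [even[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(h)]
    return s, d

def lift_53_inverse(s, d):
    """Exactly invert lift_53_forward by undoing update then predict."""
    h = len(s)
    even = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(h)]
    odd = [d[i] + (even[i] + even[min(i + 1, h - 1)]) // 2 for i in range(h)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x
```

Because both steps use identical integer arithmetic, reconstruction is exact, which is what makes this filter usable for lossless coding pipelines.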
This paper presents a proposed method for content-based image retrieval (CBIR) that uses the discrete cosine transform with the Kekre wavelet transform (DCT/KWT) and the Daubechies wavelet transform with the Kekre wavelet transform (D4/KWT) to extract features in a distributed database system, with clients and a server in a star topology: a client sends the query image, and the server (which holds the database) performs all the work and returns the retrieved images to the client. Two comparisons are made between these approaches: first, DCT against DCT/KWT; second, D4 against D4/KWT. The work is experimented over an image database of 200 images in 4 categories, and retrieval performance is evaluated with respect to two similarity measures, namely Euclidean distance (ED) and the sum of absolute diff
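The two similarity measures named in the abstract can be sketched directly; the feature vectors here are assumed to be flat lists of equal length, as the abstract does not give their exact layout.

```python
import math

def euclidean_distance(f1, f2):
    """Euclidean distance (ED) between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def sum_abs_diff(f1, f2):
    """Sum of absolute differences (SAD) between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(f1, f2))
```

In a retrieval loop, the server would rank database images by one of these distances to the query's feature vector and return the closest matches.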
Although Wiener filtering is the optimal tradeoff between inverse filtering and noise smoothing, when the blurring filter is singular the Wiener filter actually amplifies the noise. This suggests that a denoising step is needed to remove the amplified noise, and wavelet-based denoising provides a natural technique for this purpose.
In this paper a new image restoration scheme is proposed. The scheme contains two separate steps: Fourier-domain inverse filtering and wavelet-domain image denoising. The first stage is Wiener filtering of the input image; the filtered image is then fed to an adaptive threshold wavelet
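The wavelet-domain denoising stage typically shrinks detail coefficients toward zero; a common choice is soft thresholding, sketched below. The abstract specifies an adaptive threshold but not the shrinkage rule, so soft thresholding is an assumption for illustration.

```python
def soft_threshold(coeffs, t):
    """Soft-threshold wavelet detail coefficients: values with magnitude
    below t are zeroed, larger ones are shrunk toward zero by t."""
    out = []
    for c in coeffs:
        if c > t:
            out.append(c - t)
        elif c < -t:
            out.append(c + t)
        else:
            out.append(0.0)
    return out
```

Small coefficients, which mostly carry the amplified noise, are removed, while large coefficients, which carry image structure, survive with a slight bias.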
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images; it is a challenging task to abstract the visual features of images with the support of textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shapes. In this paper, an effective CBIC technique is presented that uses texture and statistical features of the images. The statistical features, or moments of colors (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected in a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering.
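The five color moments listed above can be computed per channel as follows; population (biased) definitions of the moments are assumed here, since the abstract does not specify which estimator is used.

```python
import math

def color_moments(channel):
    """Mean, variance, standard deviation, skewness, and kurtosis of one
    color channel, given as a flat list of pixel intensities."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((p - mean) ** 2 for p in channel) / n
    std = math.sqrt(var)
    if std == 0:
        skew = kurt = 0.0  # flat channel: higher moments undefined, use 0
    else:
        skew = sum((p - mean) ** 3 for p in channel) / (n * std ** 3)
        kurt = sum((p - mean) ** 4 for p in channel) / (n * std ** 4)
    return {"mean": mean, "variance": var, "std": std,
            "skewness": skew, "kurtosis": kurt}
```

Concatenating these five values over the color channels yields the one-dimensional feature array that the GA then clusters.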