Text-based image clustering (TBIC) is an insufficient approach for clustering related web images, since abstracting the visual features of images from the textual information in a database is a challenging task. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shapes. In this paper, an effective CBIC technique is presented that uses texture and statistical features of the images. The statistical features, or color moments (mean, standard deviation, variance, skewness, and kurtosis), are extracted from the images. These features are collected in a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering. The extracted features give high distinguishability and help the GA reach the solution faster and more accurately.
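The color-moment features named above can be sketched as follows. This is a minimal illustration of extracting the five statistical moments per RGB channel into a flat feature vector; the paper's full pipeline (texture features and the GA clustering step) is not reproduced here.

```python
import numpy as np

def color_moments(image):
    """Compute five statistical moments per RGB channel.

    `image` is an (H, W, 3) uint8 array; returns a flat 15-element
    feature vector: (mean, std, variance, skewness, kurtosis) x 3 channels.
    """
    feats = []
    for ch in range(3):
        x = image[:, :, ch].astype(np.float64).ravel()
        mean = x.mean()
        var = x.var()
        std = np.sqrt(var)
        # Guard against flat channels where std == 0.
        if std > 0:
            skew = np.mean(((x - mean) / std) ** 3)
            kurt = np.mean(((x - mean) / std) ** 4)
        else:
            skew = kurt = 0.0
        feats.extend([mean, std, var, skew, kurt])
    return np.array(feats)

# Example: feature vector of a random 8x8 RGB block.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
features = color_moments(block)
```

Each image is thus reduced to a short numeric vector, which is what makes the subsequent GA-based clustering tractable.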
The wavelet transform has become a useful computational tool for a variety of signal and image processing applications.
The aim of this paper is to present a comparative study of various wavelet filters. Eleven different wavelet filters (Haar, Mallat, Symlets, Integer, Coiflet, Daubechies 1, Daubechies 2, Daubechies 4, Daubechies 7, Daubechies 12, and Daubechies 20) are used to compress seven 256×256 true-color sample images. Image quality parameters such as peak signal-to-noise ratio (PSNR) and normalized mean square error (NMSE) have been used to evaluate the performance of the wavelet filters.
In our work, PSNR is used as the measure of accuracy performance.
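The two quality metrics are standard and can be computed directly. The sketch below assumes 8-bit images (peak value 255); it shows the conventional definitions of PSNR and NMSE, not any implementation specific to the paper.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    err = np.mean((original.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

def nmse(original, reconstructed):
    """Normalized mean square error relative to the signal energy."""
    o = original.astype(np.float64)
    r = reconstructed.astype(np.float64)
    return np.sum((o - r) ** 2) / np.sum(o ** 2)
```

Higher PSNR (lower NMSE) indicates a reconstruction closer to the original, which is how the eleven filters are ranked.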
As usage of the internet grows in different applications around the world, many techniques have been developed to guard important data against illegal access and modification by unauthorized users, by embedding the data into visual media called host media. Hiding audio in an image is a challenge because of the large size of the audio signal. Some previous methods reduce the audio data before embedding it in the cover image; however, this comes at the cost of reduced audio quality. In this paper, a Slantlet transform (SLT) based method is applied to obtain better performance in terms of audio quality. In addition, the data hiding scheme in the cover color image has been implemented.
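To make the hiding step concrete, here is a generic least-significant-bit (LSB) embedding sketch. This is only an illustration of data hiding in a cover image, not the paper's Slantlet-based scheme; the function names and the flat-array interface are assumptions for the example.

```python
import numpy as np

def embed_bits(cover, bits):
    """Hide a bit sequence in the least-significant bits of a cover image.

    `cover` is a flat uint8 array; each payload bit replaces one pixel's LSB,
    changing that pixel by at most 1 gray level.
    """
    stego = cover.copy()
    if len(bits) > stego.size:
        raise ValueError("payload too large for cover")
    stego[: len(bits)] = (stego[: len(bits)] & 0xFE) | bits
    return stego

def extract_bits(stego, n):
    """Recover the first n hidden bits from the stego image."""
    return stego[:n] & 1
```

Transform-domain schemes such as the SLT method embed in coefficient space instead, which is what buys the improved audio quality reported in the paper.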
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing image content. Data may be compressed by reducing the redundancy in the original data, but this makes the data more vulnerable to errors. In this paper, image compression is based on a new method called the Five Modulus Method (FMM). The new method converts each pixel value in a (4×4, 8×8, or 16×16) block into a multiple of 5 for each of the R, G, and B arrays. The new values can then be divided by 5, giving a 6-bit value for each pixel, which requires less storage space than the original 8-bit value.
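The FMM transform described above can be sketched directly: round each pixel to the nearest multiple of 5, then divide by 5 so every code fits in 6 bits (0..51). The function names are illustrative; the per-block, per-channel bookkeeping of the full method is omitted.

```python
import numpy as np

def fmm_encode(block):
    """Five Modulus Method: round each pixel to the nearest multiple of 5,
    then divide by 5 so every value fits in 6 bits (range 0..51)."""
    rounded = (np.rint(block.astype(np.float64) / 5.0) * 5).astype(np.int32)
    rounded = np.clip(rounded, 0, 255)
    return (rounded // 5).astype(np.uint8)

def fmm_decode(codes):
    """Reconstruct the (lossy) pixel values by multiplying back by 5."""
    return (codes.astype(np.int32) * 5).astype(np.uint8)
```

Rounding to the nearest multiple of 5 changes an integer pixel by at most 2 gray levels, which is why the loss is visually small while each value drops from 8 bits to 6.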
It is not always easy to identify a certain group of words as a lexical bundle, since the same set of words can, in different situations, be recognized as an idiom, a collocation, a lexical phrase, or a lexical bundle. That is, there are many cases where overlap among the four types is plausible. Thus, it is important to extract the most identifiable and distinguishing characteristics by which a certain group of words, under certain conditions, can be recognized as a lexical bundle, and this is the task of this paper.
Image compression is a serious issue in computer storage and transmission; it makes efficient use of the redundancy embedded within an image itself and may also exploit the limitations of human visual perception to discard imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques, and the second stage incorporates near-lossless compression.
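The model-plus-residual idea can be illustrated with the simplest possible predictor. This sketch uses a left-neighbor predictor rather than the paper's polynomial model, and omits the multiresolution and thresholding stages; it only shows how a model turns an image into a residual and back.

```python
import numpy as np

def predictive_residual(image):
    """Left-neighbor predictor: residual = pixel minus the pixel to its left.

    A minimal sketch of the modelling concept (predictor + residual);
    residuals of smooth images cluster near zero and code cheaply.
    """
    img = image.astype(np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]   # predict each pixel from its left neighbor
    return img - pred           # residual to be coded

def reconstruct(residual):
    """Invert the predictor by cumulative summation along each row."""
    return np.cumsum(residual, axis=1).astype(np.uint8)
```

Because the predictor is exactly invertible, all the loss in the paper's scheme comes from quantizing or thresholding the residual, not from the model itself.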
The most popular medium used by people on the internet nowadays is video streaming. Nevertheless, streaming video consumes much of the internet's capacity: nearly 70% of internet traffic goes to video streaming. Constraints of interactive media, such as increased bandwidth usage and latency, might be mitigated. The need for real-time transmission of live video streaming leads to employing fog computing, an intermediary layer between the cloud and the end user. The latter technology has been introduced to alleviate these problems by providing high real-time responsiveness and computational resources near to the end user.
Digital images are now used in various fields, including physics, computer science, engineering, chemistry, biology, and medicine, to extract important information. However, any image acquired by optical or electronic means is likely to be degraded by the sensing environment. In this paper, we study and derive the iterative Tikhonov-Miller filter and the Wiener filter using a criterion function. We then use these filters to restore the degraded image and show that the iterative Tikhonov-Miller filter performs better as the number of iterations increases, up to a certain limit, after which performance decreases. The iterative Tikhonov-Miller filter also performs better for less degraded images.