In today's digital realm, images form a massive share of social-media content yet raise two persistent issues, storage size and transmission cost, for which compression is the natural solution. Pixel-based techniques are modern, spatially optimized modeling methods with deterministic and probabilistic components expressed as a mean, an index, and a residual. This paper introduces an adaptive pixel-based coding technique that compresses the probabilistic part lossily by incorporating the MMSA of the C321 base, while the deterministic part is coded losslessly. The tested results achieved higher size reduction than traditional pixel-based techniques and standard JPEG, by about 40% and 50% respectively, with pleasing quality exceeding 45 dB.
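A minimal sketch of the mean/residual decomposition that pixel-based spatial coding builds on, assuming 4x4 blocks and a grayscale image; it illustrates the general idea only and is not the paper's MMSA/C321 scheme.

```python
import numpy as np

def block_mean_residual(image, block=4):
    """Split a grayscale image into block means and per-pixel residuals.

    Illustrative only: pixel-based schemes model each block by its mean
    (kept losslessly here) plus a residual part that a lossy stage can compress.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block                  # crop to a whole number of blocks
    img = image[:h, :w].astype(np.float32)
    means = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    residual = img - np.kron(means, np.ones((block, block), np.float32))
    return means, residual

# Reconstruction is exact when the residual is kept losslessly.
img = np.random.randint(0, 256, (16, 16)).astype(np.float32)
m, r = block_mean_residual(img)
restored = np.kron(m, np.ones((4, 4), np.float32)) + r
assert np.allclose(restored, img)
```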
This research presents the handwritten signature as a key for biometric authentication. Moment invariants are used to decide whether a given signature belongs to a particular person. Eighteen volunteers provided 108 signatures to test the proposed system, six samples per person. Moment invariants are used to build the feature vector stored for each person, and the Euclidean distance between a stored feature vector and that of a newly acquired sample determines whether the new signature is accepted. Each signature is acquired with a scanner in JPG format at 300 DPI, and the system is implemented in MATLAB.
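A hedged sketch of this kind of verification pipeline in Python (the abstract's own implementation is in MATLAB). It assumes Hu's seven moment invariants as the "moment invariants", OpenCV for image handling, and an illustrative acceptance threshold; none of these specifics are stated in the source.

```python
import cv2
import numpy as np

def signature_features(path):
    """Seven Hu moment invariants of a binarized signature image (log-scaled)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress the dynamic range

def enroll(sample_paths):
    """Template = mean feature vector over a person's enrolled samples (e.g. six)."""
    return np.mean([signature_features(p) for p in sample_paths], axis=0)

def verify(template, query_path, threshold=0.5):
    """Accept the query signature if its Euclidean distance to the template is small.

    The threshold value is hypothetical and would need tuning on real data.
    """
    d = np.linalg.norm(signature_features(query_path) - template)
    return d <= threshold, d
```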
In this paper, a new high-performance lossy compression technique based on the DCT is proposed. The image is partitioned into NxN blocks (where N is a multiple of 2), and each block is categorized as high frequency (uncorrelated) or low frequency (correlated) according to its spatial detail. This is done by computing the block energy as the absolute sum of differential pulse code modulation (DPCM) differences between pixels and comparing it with a specified threshold value. Each block is then scanned into a 1D vector in horizontal scan order, a 1D-DCT is applied to each vector to produce transform coefficients, and the transformed coefficients are then quantized ...
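A short sketch of the block classification and transform step described above, assuming SciPy for the DCT; the threshold value and the 4x4 test block are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.fft import dct

def classify_and_transform(block, threshold=200.0):
    """Label an NxN block as high/low frequency from its DPCM energy, then 1D-DCT it.

    Energy is the absolute sum of first-order (DPCM-style) pixel differences along
    the horizontal scan; the threshold separating correlated from uncorrelated
    blocks is illustrative only.
    """
    vec = block.astype(np.float32).reshape(-1)       # horizontal (row-major) scan to 1D
    energy = np.abs(np.diff(vec)).sum()              # absolute sum of DPCM differences
    label = "high-frequency" if energy > threshold else "low-frequency"
    coeffs = dct(vec, norm='ortho')                  # 1D-DCT of the scanned vector
    return label, coeffs

block = np.arange(16, dtype=np.float32).reshape(4, 4)    # smooth 4x4 test block
print(classify_and_transform(block)[0])                   # -> "low-frequency"
```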
A number of compression schemes have been put forward to achieve high compression factors with high image quality at low computational cost. In this paper, a combined transform coding scheme is proposed, based on the discrete wavelet transform (DWT) and the discrete cosine transform (DCT) together with a new enhancement method, the sliding run-length encoding (SRLE) technique, to further improve compression. The advantages of both the wavelet and the discrete cosine transforms are exploited to encode the image. The first step transforms the color components of the image from RGB to YUV planes to take advantage of the existing spectral correlation and consequently gain more compression. The DWT is then applied to the Y, U and V color planes ...
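A hedged sketch of the front end of such a combined scheme (RGB to YUV, one DWT level per plane, then a 2-D DCT on each approximation band), assuming PyWavelets and SciPy. The BT.601-style YUV conversion, the Haar wavelet, and the decision to apply the DCT only to the approximation band are assumptions; quantization and the SRLE stage are not reproduced.

```python
import numpy as np
import pywt
from scipy.fft import dctn

def hybrid_front_end(rgb):
    """RGB -> YUV, one DWT level per plane, then a 2-D DCT on each approximation band.

    A sketch of a combined-transform front end only; the entropy-coding stage
    (e.g. SRLE) described in the paper is not shown.
    """
    rgb = rgb.astype(np.float32)
    # BT.601-style RGB -> YUV to decorrelate the color planes
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    u = 0.492 * (rgb[..., 2] - y)
    v = 0.877 * (rgb[..., 0] - y)
    out = {}
    for name, plane in zip("YUV", (y, u, v)):
        cA, (cH, cV, cD) = pywt.dwt2(plane, 'haar')    # one wavelet decomposition level
        out[name] = (dctn(cA, norm='ortho'), cH, cV, cD)
    return out
```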
Digital images in uncompressed form require a very large amount of storage capacity and, as a consequence, large communication bandwidth for transmission over a network. Image compression techniques not only minimize storage space but also preserve image quality. This paper presents an image compression technique that uses a distinct coding scheme based on the wavelet transform and combines effective compression algorithms for further compression. EZW and SPIHT are significant algorithms available for lossy image compression: EZW coding is a worthwhile, simple, and efficient algorithm, while SPIHT is a very powerful technique used for image ...
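A toy illustration of the embedded, threshold-halving idea that EZW and SPIHT share, assuming PyWavelets; it omits the zerotree/set-partitioning and refinement passes that define the real algorithms, so it is only a conceptual sketch.

```python
import numpy as np
import pywt

def embedded_passes(image, levels=3, passes=4):
    """Toy illustration of the embedded coding idea shared by EZW and SPIHT.

    Wavelet coefficients are scanned against a threshold that halves each pass;
    coefficients that become significant are emitted first, so the stream can be
    cut at any point. Real EZW/SPIHT add zerotree / set-partitioning coding and
    refinement passes, which are skipped here.
    """
    coeffs = pywt.wavedec2(image.astype(np.float32), 'haar', level=levels)
    flat, _ = pywt.coeffs_to_array(coeffs)
    T = 2.0 ** np.floor(np.log2(np.abs(flat).max()))
    stream = []
    for _ in range(passes):
        significant = np.argwhere(np.abs(flat) >= T)
        stream.append((T, [(tuple(ix), np.sign(flat[tuple(ix)])) for ix in significant]))
        T /= 2.0
    return stream
```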
A robust client-side video-bitrate adaptation scheme plays a significant role in maintaining a good video streaming experience. The chosen video bitrate affects how long playback stalls because of an unfilled buffer. Therefore, to keep video streaming continuous under smooth bandwidth fluctuation, this work considers a video buffer structure based on adapting the video bitrate. The video buffer structure is first formulated as an optimal control-theoretic problem that combines both the video bitrate and the video buffer feedback signals, while keeping the buffer occupancy within its limited operating level provides continuous video streaming ...
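A hedged sketch of a simple buffer-feedback bitrate rule of the kind such schemes build on; it is not the paper's optimal controller, and the reservoir/cushion values and bitrate ladder are hypothetical.

```python
def select_bitrate(buffer_s, ladder_kbps, reservoir_s=5.0, cushion_s=20.0):
    """Pick a video bitrate from the buffer level alone (buffer-feedback control).

    Below the reservoir the lowest rung is used to avoid stalling; above the
    cushion the highest rung is used; in between the target rate rises linearly
    with buffer occupancy. The reservoir/cushion values here are illustrative.
    """
    ladder = sorted(ladder_kbps)
    if buffer_s <= reservoir_s:
        return ladder[0]
    if buffer_s >= cushion_s:
        return ladder[-1]
    frac = (buffer_s - reservoir_s) / (cushion_s - reservoir_s)
    target = ladder[0] + frac * (ladder[-1] - ladder[0])
    # highest rung not exceeding the target keeps the buffer from draining
    return max(r for r in ladder if r <= target)

print(select_bitrate(12.0, [300, 750, 1500, 3000]))   # mid-buffer -> a middle rung (1500)
```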
Image retrieval is an active research area in image processing, pattern recognition, and computer vision. In the proposed method, there are two techniques for extracting the feature vector: the first applies a transform algorithm to the whole image, and the second divides the image into four blocks and applies the transform algorithm to each part. In each technique, three transform algorithms are applied (DCT, Walsh Transform, and Kekre's Wavelet Transform); similarity is then computed and the images are indexed using the correlation between the feature vector of the query image and those of the images in the database. The retrieval result is taken from the highest-ranked index.
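A hedged sketch of the two feature-extraction variants and the correlation-based ranking, using only the DCT (the Walsh and Kekre's Wavelet Transform variants are not shown); the number of retained low-frequency coefficients is an assumption.

```python
import numpy as np
from scipy.fft import dctn

def dct_feature(image, keep=8):
    """Feature vector = the top-left keep x keep 2-D DCT coefficients of the image."""
    c = dctn(image.astype(np.float32), norm='ortho')
    return c[:keep, :keep].reshape(-1)

def block_dct_feature(image, keep=8):
    """Second technique: split the image into four blocks and concatenate their features."""
    h, w = image.shape[0] // 2, image.shape[1] // 2
    blocks = (image[:h, :w], image[:h, w:], image[h:, :w], image[h:, w:])
    return np.concatenate([dct_feature(b, keep) for b in blocks])

def rank_database(query_feat, db_feats):
    """Rank database images by correlation with the query feature vector, highest first."""
    scores = [np.corrcoef(query_feat, f)[0, 1] for f in db_feats]
    return np.argsort(scores)[::-1], scores
```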
Self-repairing technology based on microcapsules is an efficient solution for repairing cracked cementitious composites. Microcapsule-based self-repair begins when cracks occur and proceeds by releasing healing agents into the cracks in the concrete. Drawing on previous comprehensive studies, this paper provides an overview of the various repairing agents and investigative methodologies. There is still no consensus on the most effective criteria for assessing microcapsule-based self-repair, nor on smart solutions for improving capsule survival ratios during mixing. The most commonly used indicators of self-repair efficiency are mechanical resistance and durability ...