Iris research focuses on developing techniques for identifying and locating relevant biometric features, with accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics which, in turn, reduces the effectiveness of the system for real-time use. This paper introduces a novel parameterized technique for iris segmentation. The method comprises several steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and parameterizing the iris location, resulting in an accurate segmentation of the iris from the original image. A lossless Hexadata encoding method is then applied to the data, based on reducing each set of six data items to a single encoded value. In tests, the method achieved acceptable byte-saving performance on the 21 square iris images of 256x256 pixels, saving about 22.4 KB on average with an average decompression time of 0.79 sec, and high byte-saving performance on the 2 non-square iris images of 640x480 and 2048x1536 pixels, reaching 76 KB / 2.2 sec and 1630 KB / 4.71 sec respectively. Finally, the proposed technique outperforms standard lossless JPEG2000 compression by a factor of about 1.2 or more in bytes saved, implicitly demonstrating the power and efficiency of the suggested lossless biometric technique.
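To make the bit-plane step concrete, the following is a minimal sketch of bit-plane decomposition for an 8-bit grayscale eye image; the function names and the choice of two retained planes are illustrative assumptions, not the paper's exact parameters.

import numpy as np

def bit_planes(gray: np.ndarray) -> list:
    """Split an 8-bit grayscale image into 8 binary bit planes,
    ordered from least to most significant."""
    return [(gray >> k) & 1 for k in range(8)]

def most_significant_planes(gray: np.ndarray, n: int = 2) -> np.ndarray:
    """Keep only the n most significant bit planes, zeroing the rest;
    coarse structure such as the dark pupil/iris region survives."""
    keep = 0xFF & (0xFF << (8 - n))      # e.g. n=2 -> 0b11000000
    return gray & np.uint8(keep)

# Example on a synthetic 256x256 image
eye = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
planes = bit_planes(eye)
coarse = most_significant_planes(eye, n=2)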
The undetected error probability is an important measure of the communication reliability provided by any error coding scheme. Two error coding schemes, namely Joint crosstalk avoidance and Triple Error Correction (JTEC) and JTEC with Simultaneous Quadruple Error Detection (JTEC-SQED), provide both crosstalk reduction and multi-bit error correction/detection. The available undetected error probability model yields an upper-bound value which does not give an accurate estimate of the reliability provided. This paper presents an improved mathematical model to estimate the undetected error probability of these two joint coding schemes. According to the decoding algorithm, the errors are classified into patterns and their decoding …
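As a rough illustration of how such an estimate is formed, the sketch below computes a generic upper bound on the undetected error probability over a binary symmetric channel; the codeword length n, handled error weight t, and bit error rate p are placeholder assumptions, not JTEC/JTEC-SQED parameters. The paper's improved model refines exactly this kind of bound by classifying error patterns by how the decoder actually treats them.

from math import comb

def undetected_upper_bound(n: int, t: int, p: float) -> float:
    """Sum the probabilities of all error patterns of weight > t,
    pessimistically treating every such pattern as undetected."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t + 1, n + 1))

# e.g. a 32-bit codeword, triple correction + quadruple detection
print(undetected_upper_bound(n=32, t=4, p=1e-3))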
The main aim of this paper is to introduce the relationship between coding theory and the projective plane of order three. The maximum size of a code over the finite field of order three and an incidence matrix with the parameters of code length, minimum distance, and error-correcting capability have been constructed. Some examples and theorems have been given.
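For intuition, the sketch below builds the 13x13 incidence matrix of the projective plane of order three from normalized vectors over GF(3); this is the standard construction (a point lies on a line when their dot product vanishes mod 3) and only a plausible illustration of the matrix the paper uses.

from itertools import product

def normalized_points(q: int = 3):
    """One representative per 1-dimensional subspace of GF(q)^3:
    scale so the first nonzero coordinate equals 1."""
    pts = []
    for v in product(range(q), repeat=3):
        if any(v):
            first = next(x for x in v if x)   # first nonzero entry
            inv = pow(first, q - 2, q)        # its inverse mod q (q prime)
            pts.append(tuple((inv * x) % q for x in v))
    return sorted(set(pts))

points = normalized_points()
lines = points                                # by duality, same representatives
incidence = [[int(sum(a * x for a, x in zip(l, p)) % 3 == 0)
              for p in points] for l in lines]

# PG(2,3): 13 points, 13 lines, 4 points per line
assert len(points) == 13 and all(sum(row) == 4 for row in incidence)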
In this research, optical communication coding systems are designed and constructed using the Frequency Shift Code (FSC) technique. The system quality, represented by the signal-to-noise ratio (S/N), bit error rate (BER), and power budget, is calculated. In the FSC system, non-return-to-zero (NRZ) data at a bit rate of 190 kb/s is fed into the FSC encoder circuit in the transmitter unit. This data modulates the HFCT-5205 laser source at a wavelength of 1310 nm by intensity modulation (IM), and the data is then transferred over single-mode (SM) optical fiber. The NRZ data is recovered using a decoder circuit in the receiver unit. The calculations of BER and S/N for the FSC system …
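As a hedged illustration of the S/N, BER, and power-budget calculations mentioned above, the sketch below uses a simple loss ledger and the common Gaussian Q-factor approximation; all numeric values are assumptions, not the paper's measured HFCT-5205 figures.

from math import erfc, sqrt

def power_budget_dbm(tx_dbm, fiber_km, loss_db_per_km, connector_db):
    """Received power = transmitted power minus fiber and connector losses."""
    return tx_dbm - fiber_km * loss_db_per_km - connector_db

def ber_from_q(q_factor: float) -> float:
    """Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q_factor / sqrt(2))

rx = power_budget_dbm(tx_dbm=-3.0, fiber_km=10, loss_db_per_km=0.35,
                      connector_db=1.0)
print(f"received power: {rx:.2f} dBm, BER at Q=6: {ber_from_q(6):.2e}")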
Action films employ many artistic and literary elements that contribute greatly to building the overall meaning of the film and pushing it forward. Mystery and suspense serve as two basic elements in action films. The cinematic language of action films depends on a global coding that is not a set of models, as it might appear; rather, it is based on logic, on units that aspire to morphology, with a harmony that is not physical but the logical harmony of interpretive and enlightening authority. Action films act as a field of communication, a field in which, at its origin, the signifier contrasts with perceptions of the signified, and in which a certain number of mutually exclusive units …
In this paper, three techniques for image compression are implemented: a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform. Daubechies and Haar wavelets are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured at each level. To obtain good compression, image data properties were measured, such as image entropy (He) and percent root- …
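A minimal sketch of the 3-D two-level transform step, assuming the PyWavelets package; the random test volume and the thresholding rule used to estimate a compression ratio are illustrative, not the paper's method.

import numpy as np
import pywt

volume = np.random.rand(8, 64, 64)                # stack of 8 frames
coeffs = pywt.wavedecn(volume, wavelet='haar', level=2)

arr, slices = pywt.coeffs_to_array(coeffs)
kept = np.abs(arr) > 0.1 * np.abs(arr).max()      # crude threshold
cr = arr.size / max(kept.sum(), 1)                # nonzero-based CR estimate
print(f"retained {kept.sum()} of {arr.size} coefficients, CR ~ {cr:.1f}")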
We explore the transform coefficients of fractal coding and propose a new method to improve the compression capabilities of these schemes. In most standard encoder/decoder systems, quantization/de-quantization is managed as a separate step; here we introduce a new method in which they are managed simultaneously with the transform. Additional compression is achieved by this method with high image quality, as shown later.
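For context, the sketch below shows a conventional standalone uniform quantizer/de-quantizer for fractal contrast and brightness coefficients, i.e. the separate step that the proposed scheme folds into the encoder; the coefficient ranges and bit widths are assumptions, not the paper's values.

def quantize(value: float, lo: float, hi: float, bits: int) -> int:
    """Map value in [lo, hi] to an integer code of the given bit width."""
    levels = (1 << bits) - 1
    clamped = min(max(value, lo), hi)
    return round((clamped - lo) / (hi - lo) * levels)

def dequantize(code: int, lo: float, hi: float, bits: int) -> float:
    """Invert quantize() up to the quantization step."""
    levels = (1 << bits) - 1
    return lo + code / levels * (hi - lo)

s_code = quantize(0.62, lo=-1.0, hi=1.0, bits=5)    # contrast coefficient
o_code = quantize(37.4, lo=0.0, hi=255.0, bits=7)   # brightness coefficient
print(dequantize(s_code, -1.0, 1.0, 5), dequantize(o_code, 0.0, 255.0, 7))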
The wavelet transform has become a useful computational tool for a variety of signal and image processing applications.
The aim of this paper is to present a comparative study of various wavelet filters. Eleven different wavelet filters (Haar, Mallat, Symlet, Integer, Coiflet, Daubechies 1, Daubechies 2, Daubechies 4, Daubechies 7, Daubechies 12 and Daubechies 20) are used to compress seven 256x256 true-color sample images. Image quality parameters such as peak signal-to-noise ratio (PSNR) and normalized mean square error have been used to evaluate the performance of the wavelet filters.
In our work, PSNR is used as a measure of accuracy performance …
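The two quality metrics named above can be computed as follows; a minimal sketch assuming 8-bit images held in NumPy arrays.

import numpy as np

def psnr(original: np.ndarray, restored: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def nmse(original: np.ndarray, restored: np.ndarray) -> float:
    """Normalized mean square error: error energy over signal energy."""
    diff = original.astype(float) - restored.astype(float)
    return float(np.sum(diff ** 2) / np.sum(original.astype(float) ** 2))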
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content. Extracting edge information is an essential process in most pictorial pattern recognition problems. A new edge detection technique for detecting boundaries is introduced in this research.
The selection of typical lossy techniques for encoding edge video images is also discussed in this research. The focus is on the block truncation coding (BTC) technique and the discrete cosine transform (DCT) coding technique, in order to reduce the volume of pictorial data that one may need to store or transmit …
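A minimal sketch of classic (moment-preserving) block truncation coding on a single 4x4 block, as a point of reference for the technique discussed; the block size and test data are illustrative.

import numpy as np

def btc_block(block: np.ndarray):
    """Encode a block as (low, high, bitmap), preserving mean and variance."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q = int(bitmap.sum())                 # pixels at or above the mean
    m = block.size
    if q in (0, m):                       # flat block: a single level suffices
        return mean, mean, bitmap
    low = mean - std * np.sqrt(q / (m - q))
    high = mean + std * np.sqrt((m - q) / q)
    return low, high, bitmap

def btc_decode(low, high, bitmap):
    """Reconstruct the block from its two levels and 1-bit map."""
    return np.where(bitmap, high, low)

block = np.random.randint(0, 256, (4, 4)).astype(float)
print(btc_decode(*btc_block(block)))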