Iris research focuses on developing techniques for identifying and locating relevant biometric features, achieving accurate segmentation, and computing efficiently, while lending itself to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces their suitability for real-time systems. This paper introduces a novel parameterized technique for iris segmentation. The method proceeds in several steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the original image. A lossless Hexadata encoding method is then applied to the data, which is based on reducing each set of six data items to a single encoded value. In testing, the method achieved acceptable byte savings for the 21 square iris images of 256x256 pixels (about 22.4 KB saved on average, with an average decompression time of 0.79 sec) and high byte savings for the 2 non-square iris images of 640x480 and 2048x1536 pixels (76 KB in 2.2 sec and 1630 KB in 4.71 sec, respectively). Finally, the proposed technique outperformed the standard lossless JPEG2000 compression technique, saving about 1.2 times or more in KB, implicitly demonstrating the power and efficiency of the suggested lossless biometric technique.
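To make the first two steps concrete, the following is a minimal sketch (not the paper's implementation) of bit-plane decomposition and a pack-six-items step in the spirit of Hexadata; the plane-selection rule and the 6-to-1 mapping here are illustrative assumptions.

```python
import numpy as np

def bit_planes(gray):
    """Decompose an 8-bit grayscale image into its 8 binary bit planes
    (plane 7 = most significant bit, plane 0 = least significant)."""
    return [(gray >> b) & 1 for b in range(8)]

def keep_top_planes(gray, top=2):
    """Recombine only the `top` most significant planes -- a crude
    stand-in for the paper's bit-plane selection step (assumption)."""
    out = np.zeros_like(gray)
    for b in range(8 - top, 8):
        out |= ((gray >> b) & 1) << b
    return out

def pack_six(bits):
    """Illustrative 6-to-1 packing: each group of six binary values
    becomes one 0-63 symbol, assuming len(bits) is a multiple of 6."""
    groups = np.asarray(bits).reshape(-1, 6)
    weights = 2 ** np.arange(5, -1, -1)
    return groups @ weights
```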
Multilocus haplotype analysis of candidate variants with genome-wide association study (GWAS) data may provide evidence of association with disease even when the individual loci themselves do not. Unfortunately, when a large number of candidate variants are investigated, identifying risk haplotypes can be very difficult. To meet this challenge, a number of approaches have been put forward in recent years. However, most of them are not directly linked to the disease penetrances of haplotypes and thus may not be efficient. To fill this gap, we propose a mixture model-based approach for detecting risk haplotypes. Under the mixture model, haplotypes are clustered directly according to their estimated disease penetrances.
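As a rough illustration of the clustering idea (not the paper's actual model), the sketch below fits a two-component Gaussian mixture to hypothetical per-haplotype penetrance estimates with scikit-learn and calls the higher-mean component the risk cluster; the input values are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder per-haplotype penetrance estimates; in the paper these
# would be estimated from GWAS genotype/phenotype data.
penetrance = np.array([0.010, 0.012, 0.011, 0.090, 0.100, 0.095]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(penetrance)
labels = gm.predict(penetrance)

# Treat the component with the larger mean penetrance as the risk cluster.
risk_component = int(np.argmax(gm.means_.ravel()))
risk_haplotypes = np.flatnonzero(labels == risk_component)
print(risk_haplotypes)  # indices of haplotypes assigned to the risk cluster
```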
Facial identification is one of the biometric approaches used to identify a facial image from the basic properties of that face. In this paper we propose a new, improved approach for face detection based on eye coding, using OpenCV's Viola-Jones algorithm to remove falsely detected faces according to the coded eyes. The Haar training module in OpenCV is an implementation of the Viola-Jones framework: the training algorithm takes as input a training set of positive and negative images and generates strong features in the form of an XML file, which can subsequently be used to detect the desired face and eyes in images; the integral image is used to speed up the calculation of Haar-like features.
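A minimal OpenCV sketch of this eye-based rejection step, assuming the stock Haar cascades that ship with opencv-python; the "at least two eyes" rule is one plausible reading of the paper's criterion, not its confirmed logic.

```python
import cv2

# Pretrained Viola-Jones cascades bundled with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def faces_confirmed_by_eyes(image_path):
    """Detect faces, then keep only candidates whose region also
    contains at least two detected eyes (assumed rejection rule)."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    kept = []
    for (x, y, w, h) in faces:
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) >= 2:
            kept.append((x, y, w, h))
    return kept
```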
Within the framework of big data, energy issues are highly significant. Despite the significance of energy, theoretical studies focusing primarily on the issue of energy within big data analytics in relation to computational intelligence algorithms are scarce. The purpose of this study is to explore the theoretical aspects of energy issues in big data analytics in relation to computational intelligence algorithms, since this is critical to exploring the empirical aspects of big data. In this chapter, we present a theoretical study of energy issues related to applications of computational intelligence algorithms in big data analytics. This work highlights that big data analytics using computational intelligence algorithms consumes a very high amount of energy.
Carbonate reservoirs are an essential source of hydrocarbons worldwide, and their petrophysical properties play a crucial role in hydrocarbon production. The most critical petrophysical properties of carbonate reservoirs are porosity, permeability, and water saturation. A tight reservoir is a reservoir with low porosity and permeability, meaning it is difficult for fluids to move through it. This study's primary goal is to evaluate the reservoir properties and lithology of the Sadi Formation in the Halfaya oil field, considered one of Iraq's most significant oilfields, located 35 km south of Amarah. The Sadi Formation consists of four units: A, B1, B2, and B3. Sadi A was excluded, as it was not filled with hydrocarbons.
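The excerpt does not show the evaluation workflow itself; as one standard example of how water saturation is obtained from porosity and resistivity logs, here is Archie's equation in a short sketch, with textbook constants a = 1, m = 2, n = 2 as assumptions rather than values calibrated to this study.

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Archie's water saturation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).
    a, m, n are rock-dependent constants; the defaults are common
    textbook values, not values from the Sadi Formation study."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Example: formation-water resistivity 0.03 ohm-m, true resistivity
# 20 ohm-m, porosity 12% -> Sw is roughly 0.32 (32%).
print(round(archie_sw(rw=0.03, rt=20.0, phi=0.12), 2))
```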
Compression is the reduction in size of data in order to save space or transmission time. For data transmission, compression can be performed on just the data content or on the entire transmission unit (including header data), depending on a number of factors. In this study, we consider an audio compression method that uses text coding: the audio file is converted to a text file in order to reduce the time needed to transfer the data over a communication channel. Approach: we propose two coding methods, applied to optimize the solution by using a CFG. Results: we tested our application using a 4-bit coding algorithm; the results of this method were not satisfactory, so we proposed a new approach to compress the audio file.
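The excerpt does not give the 4-bit coding or the CFG construction in detail; the following is a minimal sketch of the audio-to-text idea, mapping each 8-bit sample to two 4-bit hexadecimal characters and back.

```python
def audio_to_text(samples: bytes) -> str:
    """Represent each 8-bit audio sample as two hex characters,
    i.e. two 4-bit symbols per sample (illustrative assumption)."""
    return samples.hex()

def text_to_audio(text: str) -> bytes:
    """Invert the mapping, recovering the original samples losslessly."""
    return bytes.fromhex(text)

raw = bytes([0, 127, 255, 64])
txt = audio_to_text(raw)          # "007fff40"
assert text_to_audio(txt) == raw  # round trip is exact
```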
A common approach to color image compression starts by transforming the red, green, and blue (RGB) color model into a desired color model, then applying compression techniques, and finally transforming the results back into the RGB model. In this paper, a new color image compression method based on multilevel block truncation coding (MBTC) and vector quantization is presented. By exploiting the human visual system's response to color, a bit allocation process is implemented to distribute the encoding bits more effectively. To improve the performance efficiency of vector quantization (VQ), modifications have been implemented that combine the simple computation and edge-preservation properties of MBTC with the high compression of VQ.
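MBTC generalizes classic block truncation coding to multiple quantization levels; as a baseline, here is a sketch of the classic single-level BTC step on one block (illustrative code, not the paper's MBTC).

```python
import numpy as np

def btc_block(block):
    """Classic BTC: threshold a block at its mean and reconstruct with
    two output levels chosen to preserve the block mean and variance."""
    x = block.astype(float)
    n = x.size
    m, s = x.mean(), x.std()
    bitmap = x >= m
    q = int(bitmap.sum())
    if q == 0 or q == n:               # flat block: one level suffices
        rec = np.full_like(x, m)
    else:
        low = m - s * np.sqrt(q / (n - q))    # level for pixels below the mean
        high = m + s * np.sqrt((n - q) / q)   # level for the rest
        rec = np.where(bitmap, high, low)
    return np.clip(rec.round(), 0, 255).astype(np.uint8)

block = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
print(btc_block(block))
```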
Image compression is one of the data compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms may take advantage of the visual sensitivity and statistical properties of image data to deliver superior results in comparison with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless entropy-coding algorithm known as arithmetic coding. Pixel values that occur more frequently are coded in fewer bits than pixel values that occur less frequently.
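The arithmetic coder itself is beyond the excerpt; the sketch below shows the block-partitioning and block-to-string step, together with the per-string frequency counts a statistical coder would use, assuming a 16 x 16 block size (one of the sizes the abstract lists).

```python
import numpy as np
from collections import Counter

def image_to_block_strings(img, bs=16):
    """Split a grayscale image into bs x bs blocks and flatten each
    block into a 1-D symbol string for the entropy coder."""
    h, w = img.shape
    return [img[y:y + bs, x:x + bs].ravel()
            for y in range(0, h - h % bs, bs)
            for x in range(0, w - w % bs, bs)]

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
strings = image_to_block_strings(img)

# Frequency model for the first block: arithmetic coding assigns
# shorter effective code lengths to the more frequent pixel values.
freq = Counter(strings[0].tolist())
print(freq.most_common(3))
```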
Color image compression is a good way to encode digital images by decreasing the number of bits needed to represent the image. The main objectives are to reduce storage space, reduce transmission costs, and maintain good quality. In the current research work, a simple and effective methodology is proposed for compressing color art digital images and obtaining a low bit rate: the matrix resulting from the scalar quantization process (reducing the number of bits from 24 to 8) is compressed using displacement coding, and the remainder is then compressed using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and ...
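The final stage is LZW; a minimal encoder sketch (illustrative, not the paper's implementation) shows the dictionary-growing idea behind it.

```python
def lzw_encode(data: bytes) -> list:
    """Minimal LZW encoder: the dictionary starts with all 256 single
    bytes; each emitted code stands for the longest matching prefix,
    and every miss adds a new, longer entry to the dictionary."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = next_code
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

print(lzw_encode(b"ABABABA"))  # repeated pairs collapse to single codes
```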