Iris research focuses on developing techniques for identifying and locating relevant biometric features, accurate segmentation, and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system in real-time use. This paper introduces a novel parameterized technique for iris segmentation. The method proceeds in several steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the original image. A lossless Hexadata encoding method is then applied to the data, based on reducing each set of six data items to a single encoded value. In tests, the method achieved acceptable byte-saving performance on the 21 square iris images of 256x256 pixels, about 22.4 KB on average with an average decompression time of 0.79 s, and high byte-saving performance on the 2 non-square iris images of 640x480 and 2048x1536 pixels, reaching 76 KB in 2.2 s and 1630 KB in 4.71 s respectively. Finally, the proposed technique compares favourably with the standard lossless JPEG2000 compression technique, with a reduction of about 1.2 or more in KB saving, implicitly demonstrating the power and efficiency of the suggested lossless biometric technique.
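As a rough illustration of the bit-plane and six-item steps, here is a minimal Python sketch; the `hexadata_pack` helper and its mapping of six bits to one 6-bit symbol are illustrative assumptions, not the authors' exact codec.

```python
# A minimal sketch of bit-plane decomposition and a Hexadata-style packing,
# assuming the encoder maps each group of six binary values to one 6-bit
# symbol (0-63); the authors' full segmentation pipeline is not shown here.
import numpy as np

def bit_planes(gray):
    """Split an 8-bit grayscale image into 8 binary bit planes (MSB first)."""
    return [(gray >> b) & 1 for b in range(7, -1, -1)]

def hexadata_pack(bits):
    """Pack a flat 0/1 array into one value per six bits (hypothetical packing)."""
    bits = np.asarray(bits, dtype=np.uint8).ravel()
    pad = (-len(bits)) % 6                      # pad to a multiple of six
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    groups = bits.reshape(-1, 6)
    weights = 1 << np.arange(5, -1, -1)         # 32, 16, 8, 4, 2, 1
    return groups @ weights                     # one symbol in 0..63 per group

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in eye image
planes = bit_planes(img)
msb = planes[0]                                  # most significant bit plane
encoded = hexadata_pack(msb)
print(len(encoded), "six-bit symbols from one 256x256 bit plane")
```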
The demand for electronic passport photo (frontal facial) images has grown rapidly. It now extends to Electronic Government (E-Gov) applications such as social benefits, driver's licenses, e-passports, and e-visas. With COVID-19 (coronavirus disease), facial (formal) images have become more widely used and are spreading quickly, serving to verify an individual's identity. Unfortunately, they carry insignificant detail against a constant background, which leads to huge byte consumption affecting storage space and transmission; the optimal solution is to curtail data size using compression techniques that exploit image redundancies efficiently.
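One common way to exploit the redundancy of a near-constant background is run-length encoding; the following is a minimal sketch of that general idea, not the specific codec discussed in the paper.

```python
# A minimal run-length encoding sketch, assuming the constant background
# produces long runs of identical pixel values along each scanline.
import numpy as np

def rle_encode(row):
    """Return (value, run_length) pairs for a 1-D array of pixels."""
    pairs = []
    i = 0
    while i < len(row):
        j = i
        while j < len(row) and row[j] == row[i]:
            j += 1
        pairs.append((int(row[i]), j - i))
        i = j
    return pairs

row = np.array([255] * 50 + [17, 18, 19] + [255] * 47)  # mostly-constant scanline
print(rle_encode(row))   # [(255, 50), (17, 1), (18, 1), (19, 1), (255, 47)]
```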
Compressing speech reduces data storage requirements, which in turn reduces the time needed to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithms introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression. DWT and MCT performances in terms of comp…
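To make the DWT side of the comparison concrete, here is a minimal sketch using the PyWavelets library (an assumption; the paper's MCT transform is not reproduced here). The compression gain comes from zeroing small wavelet coefficients.

```python
# A minimal DWT speech-compression sketch: transform, hard-threshold the
# smallest coefficients, and reconstruct. The signal below is a synthetic
# stand-in frame, not real speech data.
import numpy as np
import pywt

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t) * np.exp(-2 * t)    # stand-in speech frame

coeffs = pywt.wavedec(speech, 'db4', level=4)            # forward DWT
flat = np.concatenate([c.ravel() for c in coeffs])
thresh = np.quantile(np.abs(flat), 0.90)                 # keep largest 10%
kept = [pywt.threshold(c, thresh, mode='hard') for c in coeffs]

recon = pywt.waverec(kept, 'db4')[:len(speech)]          # inverse DWT
nonzero = sum(int(np.count_nonzero(c)) for c in kept)
snr = 10 * np.log10(np.sum(speech**2) / np.sum((speech - recon)**2))
print(f"kept {nonzero}/{len(flat)} coefficients, SNR = {snr:.1f} dB")
```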
In this paper, we estimate the parameters of the two-parameter Birnbaum-Saunders distribution and its related probability functions: the survival function, cumulative distribution function, hazard function (failure rate), and failure (death) probability density function (pdf). The distribution is fitted to the complete data for patients with lymph gland cancer. We estimate the parameters (shape and scale) using maximum likelihood, regression quantile, and shrinkage methods, and then compute the values of the aforementioned probability functions from a sample of real data describing the survival duration of patients who suffer from lymph gland cancer, based on the diagnosis of the disease or the entry of patients into a hospital for a perio…
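For the maximum-likelihood part, SciPy's `fatiguelife` distribution (SciPy's name for Birnbaum-Saunders) can be fitted directly; the sketch below uses synthetic data, not the lymph-cancer dataset, and covers only the MLE method.

```python
# A minimal sketch of fitting a two-parameter Birnbaum-Saunders model by
# maximum likelihood and evaluating the related probability functions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.fatiguelife.rvs(c=0.8, scale=20.0, size=200, random_state=rng)

# MLE of shape (c) and scale, with location fixed at zero (two-parameter form)
c_hat, loc, scale_hat = stats.fatiguelife.fit(data, floc=0)

t = 25.0                                                  # survival time of interest
pdf = stats.fatiguelife.pdf(t, c_hat, loc, scale_hat)     # failure density f(t)
sf = stats.fatiguelife.sf(t, c_hat, loc, scale_hat)       # survival S(t)
print(f"shape={c_hat:.3f} scale={scale_hat:.2f} S({t})={sf:.3f} hazard={pdf/sf:.4f}")
```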
The development that solar energy will undergo in the next years needs a reliable estimation of the available solar energy resources. Several empirical models have been developed to calculate global solar radiation using various parameters such as extraterrestrial radiation, sunshine hours, albedo, maximum temperature, mean temperature, soil temperature, relative humidity, cloudiness, evaporation, total precipitable water, number of rainy days, altitude, and latitude. In the present work, the first part calculates solar radiation from the daily values of sunshine duration hours using the Angstrom model over Iraq for July 2017. The second part maps the distribution of so…
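The Angstrom (Angstrom-Prescott) model estimates daily global radiation as H = H0 (a + b S/S0), where S is the measured sunshine duration and S0 the day length. A minimal sketch follows; the coefficients a = 0.25 and b = 0.50 are the commonly quoted defaults, not the values calibrated in this work, and H0 is assumed known for the site.

```python
# A minimal Angstrom-Prescott sketch: H = H0 * (a + b * S/S0).
def angstrom_global_radiation(h0, sunshine_hours, day_length, a=0.25, b=0.50):
    """Estimate daily global solar radiation (same units as h0, e.g. MJ/m^2)."""
    return h0 * (a + b * sunshine_hours / day_length)

# Example: July day with H0 = 40 MJ/m^2, 11 h sunshine, 14 h day length
print(angstrom_global_radiation(40.0, 11.0, 14.0))   # ~25.7 MJ/m^2/day
```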
Unconfined compressive strength (UCS) of rock is the most critical geomechanical property, widely used as an input parameter for designing fractures, analyzing wellbore stability, planning drilling programs, and carrying out various petroleum engineering projects. The UCS characterizes rock deformation by measuring the rock's strength and load-bearing capacity. Determining UCS in the laboratory is a time-consuming and costly process. The current study aims to develop empirical equations to predict UCS from well-log data using regression analysis in JMP software for the Khasib Formation in the Buzurgan oil field in southeastern Iraq. The proposed equations' accuracy was tested using the coefficient of determination (R²) and the average absolute…
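The general workflow (multiple regression of lab UCS on log responses, scored by R²) can be sketched with NumPy least squares; the predictor names (sonic transit time DT, bulk density RHOB) and the numbers below are illustrative assumptions, not the Khasib Formation data or the paper's fitted equation.

```python
# A minimal multiple-regression sketch of the UCS-from-logs idea.
import numpy as np

dt = np.array([75.0, 80.0, 68.0, 90.0, 72.0, 85.0])    # sonic, us/ft (synthetic)
rhob = np.array([2.45, 2.40, 2.55, 2.30, 2.50, 2.35])  # density, g/cc (synthetic)
ucs = np.array([48.0, 42.0, 58.0, 30.0, 52.0, 36.0])   # lab UCS, MPa (synthetic)

X = np.column_stack([np.ones_like(dt), dt, rhob])       # intercept + predictors
beta, *_ = np.linalg.lstsq(X, ucs, rcond=None)          # least-squares fit

pred = X @ beta
r2 = 1 - np.sum((ucs - pred)**2) / np.sum((ucs - ucs.mean())**2)
print(f"UCS = {beta[0]:.1f} + {beta[1]:.2f}*DT + {beta[2]:.1f}*RHOB,  R^2 = {r2:.3f}")
```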
In this paper, a procedure is proposed to establish the different performance measures, in terms of crisp values, for queueing models with two classes of arrivals and multiple channels, where both the arrival and service rates are fuzzy numbers. The main idea is to convert the fuzzy arrival and service rates into crisp values using the graded mean integration approach, which can be represented as a median rule number, thereby turning fuzzy queues into crisp queues. The crisp values obtained are then applied to establish the performance measures of conventional multiple-channel queueing models. This procedure has shown its effectiveness when incorporated with many types of membership functions in solving queueing problems. Two numerical illustrations are presented to determine the validity of the…
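For triangular fuzzy numbers, the graded mean integration value is (a + 4b + c)/6; once the rates are crisp, standard multi-channel (M/M/c) formulas apply. The sketch below assumes triangular membership functions and illustrative numbers.

```python
# A minimal sketch: defuzzify triangular fuzzy rates with graded mean
# integration, then evaluate a standard M/M/c queue.
from math import factorial

def graded_mean(a, b, c):
    """Graded mean integration of a triangular fuzzy number (a, b, c)."""
    return (a + 4 * b + c) / 6

def mmc_lq(lam, mu, c):
    """Expected queue length Lq of an M/M/c queue (requires lam/(c*mu) < 1)."""
    rho = lam / (c * mu)
    a = lam / mu
    p0 = 1 / (sum(a**n / factorial(n) for n in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    return p0 * a**c * rho / (factorial(c) * (1 - rho)**2)

lam = graded_mean(3.0, 4.0, 5.0)   # fuzzy arrival rate -> crisp 4.0
mu = graded_mean(1.5, 2.0, 2.5)    # fuzzy service rate -> crisp 2.0
lq = mmc_lq(lam, mu, c=3)
print(f"crisp lambda={lam}, mu={mu}, Lq={lq:.3f}, Wq={lq/lam:.3f}")
```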
In this research, we derive the Bayesian formulas and the expectation-Bayesian estimation for the product system of the Atlas Company. The units of the system were examined with the help of the technical staff at the company and with real data provided by the company that manufactures the system. These real data include the failed units in each drawn sample, out of the total number of units manufactured by the company's system. We calculate the range for each estimator using the maximum likelihood estimator. We find that the expectation-Bayesian estimation is better than the Bayesian estimator for the different partial samples drawn from the product system after it was checked by the…
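As a minimal illustration of how a Bayes estimate of a failure proportion differs from the MLE, consider a Beta prior on a binomial failure count; the prior choice and the sample counts here are illustrative assumptions, not the paper's model or the Atlas Company data.

```python
# A minimal sketch contrasting the MLE with a Bayes (posterior mean)
# estimate of a unit failure probability under a Beta(alpha, beta) prior.
failed, n = 4, 60                   # failed units observed in a drawn sample
alpha, beta = 1.0, 1.0              # uniform Beta prior (an assumption)

p_mle = failed / n                                  # maximum likelihood
p_bayes = (failed + alpha) / (n + alpha + beta)     # posterior mean
print(f"MLE = {p_mle:.4f}  Bayes = {p_bayes:.4f}")
```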
In the presence of deep submicron noise, providing reliable and energy-efficient network-on-chip operation is becoming a challenging objective. In this study, the authors propose a hybrid automatic repeat request (HARQ)-based coding scheme that simultaneously reduces the crosstalk-induced bus delay and provides multi-bit error protection while achieving high energy savings. This is achieved by calculating two-dimensional parities and duplicating all the bits, which provides single-error correction and six-error detection. The error correction reduces the performance degradation caused by retransmissions, and when this is combined with voltage-swing reduction, enabled by the scheme's high error detection, high energy savings are achieved. The res…
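The two-dimensional parity idea locates a single flipped bit at the intersection of the failing row and column parities; the sketch below shows only that correction step (the bit duplication used for six-error detection is omitted), and the 4x4 layout is an assumption.

```python
# A minimal two-dimensional parity sketch: row/column parities locate and
# correct a single-bit error in a grid of flit bits.
import numpy as np

def encode_2d_parity(block):
    row_par = block.sum(axis=1) % 2            # one parity bit per row
    col_par = block.sum(axis=0) % 2            # one parity bit per column
    return row_par, col_par

def correct_single_error(recv, row_par, col_par):
    bad_rows = np.flatnonzero(recv.sum(axis=1) % 2 != row_par)
    bad_cols = np.flatnonzero(recv.sum(axis=0) % 2 != col_par)
    if len(bad_rows) == 1 and len(bad_cols) == 1:       # single-bit error
        recv[bad_rows[0], bad_cols[0]] ^= 1             # flip it back
    return recv

data = np.random.randint(0, 2, (4, 4))
rp, cp = encode_2d_parity(data)
recv = data.copy()
recv[2, 1] ^= 1                                 # inject one error on the link
fixed = correct_single_error(recv, rp, cp)
print("corrected:", np.array_equal(fixed, data))
```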
Error control schemes have become a necessity in network-on-chip (NoC) designs to improve reliability, as on-chip interconnect errors increase with the continuous shrinking of geometry. Accordingly, many researchers are trying to present multi-bit error correction coding schemes that deliver high error correction capability with the simplest possible design, to minimize area and power consumption. A recent work, Multi-bit Error Correcting Coding with Reduced Link Bandwidth (MECCRLB), showed a huge reduction in area and power consumption compared to a well-known scheme, namely Hamming product code (HPC) with Type-II HARQ. Moreover, the authors showed that the proposed scheme can correct 11 random errors, which is considered a high…
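For readers unfamiliar with the baseline, Hamming product codes are built from simple Hamming component codes; a minimal Hamming(7,4) sketch follows. This is the standard textbook code, shown for context only; MECCRLB's own construction is not reproduced.

```python
# A minimal Hamming(7,4) sketch: encode 4 data bits, inject one error,
# then locate it with the syndrome (which matches a column of H).
import numpy as np

G = np.array([[1,0,0,0,1,1,0],      # generator: 4 data bits -> 7-bit codeword
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],      # parity-check matrix
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
code = data @ G % 2
recv = code.copy()
recv[5] ^= 1                                    # single bit flip on the link
syndrome = H @ recv % 2
if syndrome.any():                              # locate and fix the error
    col = np.flatnonzero((H.T == syndrome).all(axis=1))[0]
    recv[col] ^= 1
print("decoded ok:", np.array_equal(recv, code))
```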