Iris research focuses on developing techniques for identifying and locating relevant biometric features, accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics which, in turn, reduces the effectiveness of the system when used in real time. This paper introduces a novel parameterized technique for iris segmentation. The method consists of a number of steps, starting from converting the grayscale eye image to a bit-plane representation and selecting the most significant bit planes, followed by a parameterization of the iris location, resulting in an accurate segmentation of the iris from the original image. A lossless Hexadata encoding method is then applied to the data, which is based on reducing each set of six data items to a single encoded value. The test results achieved acceptable byte-saving performance for the 21 square iris images of size 256x256 pixels, about 22.4 KB on average with an average decompression time of 0.79 sec, and high byte-saving performance for 2 non-square iris images of sizes 640x480 and 2048x1536, which reached 76 KB/2.2 sec and 1630 KB/4.71 sec respectively. Finally, the proposed technique outperformed standard lossless JPEG2000 compression with a reduction of about 1.2 KB or more in saved bytes, implicitly demonstrating the power and efficiency of the suggested lossless biometric techniques.
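The bit-plane step described above can be sketched briefly. This is an illustrative, minimal sketch only: the tiny 4x4 "image" and the choice of plane are made-up examples, not the paper's data or parameters.

```python
# Bit-plane decomposition of an 8-bit grayscale image: plane k holds
# bit k of every pixel (0 = least significant, 7 = most significant).

def bit_plane(image, k):
    """Return bit plane k of an 8-bit grayscale image (list of rows)."""
    return [[(pixel >> k) & 1 for pixel in row] for row in image]

image = [[173,  92,  40, 255],
         [ 17, 200, 128,  64],
         [  5,  33, 211,  99],
         [250, 180,  77,  12]]

# The most significant plane is 1 wherever the pixel value is >= 128.
msb = bit_plane(image, 7)
print(msb[0])  # [1, 0, 0, 1]
```

Selecting the most significant planes keeps the coarse structure of the iris region while discarding fine-grained intensity noise, which is what makes the subsequent parameterization cheap.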
With the COVID-19 epidemic, elevated infection rates, disinfectant overuse and antibiotic misuse have led to immune suppression in much of the population, in addition to genotypic and phenotypic alterations in microorganisms, so a great need has arisen to re-evaluate the genetic determinants responsible for the bacterial community (biofilm). A total of 250 clinical specimens were obtained from patients in Baghdad hospitals and streaked on Mannitol salt agar medium. The results revealed that 156 of the 250 isolates appeared as round yellow colonies, indicating that they were mostly identified as Staphylococcus aureus. The antibiotic resistance pattern of the isolates was, for methicillin, 37.17% (n=58), Amoxic
The problem of the high peak-to-average power ratio (PAPR) in OFDM signals is investigated, with a brief presentation of the various methods used to reduce the PAPR and special attention to the clipping method. An alternative approach to clipping is presented, where the clipping is performed right after the IFFT stage, unlike conventional clipping that is performed at the power amplifier stage, which causes undesirable out-of-band spectral growth. In the proposed method, samples are clipped rather than the waveform, so the spectral distortion is avoided. Coding is required to correct the errors introduced by the clipping, and the overall system is tested for two types of modulation: QPSK as a constant amplitude modul
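Sample-level clipping after the IFFT stage can be sketched as below. The subcarrier count, the clipping ratio of 1.5x RMS, and the QPSK mapping are illustrative assumptions for demonstration, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # number of OFDM subcarriers (assumed)
bits = rng.integers(0, 2, size=(N, 2))
qpsk = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

x = np.fft.ifft(qpsk)                     # time-domain OFDM samples

cr = 1.5                                  # clipping ratio relative to RMS (assumed)
threshold = cr * np.sqrt(np.mean(np.abs(x) ** 2))

# Clip the amplitude of each sample while preserving its phase.
mag = np.abs(x)
x_clipped = np.where(mag > threshold, threshold * x / mag, x)

papr = lambda s: np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2)
print(f"PAPR before: {10*np.log10(papr(x)):.2f} dB, "
      f"after: {10*np.log10(papr(x_clipped)):.2f} dB")
```

Because the clipping acts on discrete samples before any pulse shaping or amplification, the distortion stays in-band and is then handled by the error-correcting code.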
In this paper, different channel coding and interleaving schemes in a DS/CDMA system over a multipath fading channel were used. Two types of serially concatenated coding were presented. The first is composed of a Reed-Solomon outer code, a convolutional inner code and an interleaver between the outer and inner codes; the second consists of a convolutional outer code, an interleaver in the middle and a differential inner code. The bit error rate performance of the different schemes in the multipath fading channel was analyzed and compared. A Rake receiver was used in the DS/CDMA receiver to combine multipath components in order to enhance the signal-to-noise ratio at the receiver.
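The interleaver that sits between the outer and inner codes in both concatenated schemes can be sketched as a classic block interleaver. The 3x4 block size here is an arbitrary illustration; real systems match the dimensions to the code and channel parameters.

```python
# Block interleaver: write symbols row-by-row into a rows x cols block,
# read them out column-by-column. A burst of channel errors on adjacent
# output symbols is spread across distant input positions, which the
# outer code can then correct as isolated errors.

def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse mapping: write column-by-column, read row-by-row."""
    return interleave(symbols, cols, rows)

data = list(range(12))
scrambled = interleave(data, 3, 4)
print(scrambled)  # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(deinterleave(scrambled, 3, 4) == data)  # True
```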
In the presence of deep submicron noise, providing reliable and energy-efficient network-on-chip operation is becoming a challenging objective. In this study, the authors propose a hybrid automatic repeat request (HARQ)-based coding scheme that simultaneously reduces the crosstalk-induced bus delay and provides multi-bit error protection while achieving high energy savings. This is achieved by calculating two-dimensional parities and duplicating all the bits, which provides single-error correction and six-error detection. The error correction reduces the performance degradation caused by retransmissions; combined with the voltage-swing reduction made possible by the scheme's high error detection, this yields high energy savings. The res
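The two-dimensional parity idea above can be sketched in a few lines. The 3x4 grid layout is an assumption for illustration; the actual scheme additionally duplicates all bits for extra detection strength, which is not shown here.

```python
# Two-dimensional parity: arrange the data bits in a grid and compute one
# parity bit per row and per column. A single flipped bit is then located
# (and thus corrected) by the unique (row, column) pair whose parities
# fail to check.

def two_d_parities(bits, rows, cols):
    grid = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    row_par = [sum(row) % 2 for row in grid]
    col_par = [sum(grid[r][c] for r in range(rows)) % 2 for c in range(cols)]
    return row_par, col_par

data = [1, 0, 1, 1,
        0, 1, 1, 0,
        1, 1, 0, 0]
rp, cp = two_d_parities(data, 3, 4)
print(rp, cp)  # [1, 0, 0] [0, 0, 0, 1]

# Flip one bit: exactly one row parity and one column parity change,
# pinpointing the error position.
data[5] ^= 1                       # flip bit at row 1, column 1
rp2, cp2 = two_d_parities(data, 3, 4)
print(rp != rp2, cp != cp2)        # True True
```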
Error control schemes have become a necessity in network-on-chip (NoC) design to improve reliability, as on-chip interconnect errors increase with the continuous shrinking of geometry. Accordingly, many researchers are trying to present multi-bit error correction coding schemes that achieve high error-correction capability with the simplest possible design, to minimize area and power consumption. A recent work, Multi-bit Error Correcting Coding with Reduced Link Bandwidth (MECCRLB), showed a huge reduction in area and power consumption compared to a well-known scheme, namely Hamming product code (HPC) with Type-II HARQ. Moreover, the authors showed that the proposed scheme can correct 11 random errors, which is considered a high
This work studies rock facies and flow unit classification for the Mishrif carbonate reservoir in the Buzurgan oil field, located in south-eastern Iraq, using wireline logs, core samples and petrophysical data (log porosity and core permeability). Hydraulic flow units were identified using the flow zone indicator approach and assessed within each rock type to reach a better understanding of the controlling role of pore types and geometry in reservoir quality variations. Additionally, the distribution of sedimentary facies and rock fabric number, along with porosity and permeability, was analyzed in three wells (BU-1, BU-2, and BU-3). The Interactive Petrophysics (IP) software was used to assess the rock fabric number, flow zon
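The flow zone indicator computation behind this approach can be sketched as follows, using the standard FZI definition; the porosity and permeability values are made-up examples, not data from the Buzurgan wells.

```python
import math

def fzi(porosity, permeability_md):
    """Flow zone indicator: FZI = RQI / phi_z, where
    RQI = 0.0314 * sqrt(k / phi) is the reservoir quality index (microns)
    and phi_z = phi / (1 - phi) is the normalized porosity."""
    rqi = 0.0314 * math.sqrt(permeability_md / porosity)
    phi_z = porosity / (1.0 - porosity)
    return rqi / phi_z

# Example: 20% porosity, 50 mD permeability (illustrative values only).
print(round(fzi(0.20, 50.0), 3))  # 1.986
```

Samples with similar FZI values are grouped into the same hydraulic flow unit, since they share comparable pore-throat geometry regardless of absolute porosity.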
Multiple elimination (de-multiple) is one of the seismic processing steps used to remove the effects of multiples and delineate the correct primary reflections. Applying normal moveout to flatten the primaries is the way to eliminate multiples after transforming the data to the frequency-wavenumber domain. The flattened primaries are aligned with the zero axis of the frequency-wavenumber domain, while all other reflection types (multiples and random noise) are distributed elsewhere. A dip filter is applied to pass the aligned data and reject the others, separating primaries from multiples after transforming the data back from the frequency-wavenumber domain to the time-distance domain. For that, a suggested name for this technique is normal moveout-frequency-wavenumber domain
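The f-k dip-filter step can be sketched on a toy gather. After NMO, flattened primaries map onto the zero-wavenumber axis of the 2-D Fourier transform, so a filter that keeps only small wavenumbers passes primaries and rejects dipping events. The gather size and the pass band below are illustrative assumptions.

```python
import numpy as np

nt, nx = 128, 32                             # time samples, traces (assumed)
gather = np.random.default_rng(1).normal(size=(nt, nx))  # stand-in NMO gather

fk = np.fft.fft2(gather)                     # to frequency-wavenumber domain
k = np.fft.fftfreq(nx)                       # normalized wavenumber per trace

mask = np.abs(k) <= 0.05                     # pass band around k = 0 (assumed)
filtered_fk = fk * mask[np.newaxis, :]       # dip filter: zero the steep dips

# Back to the time-distance domain; the result is real up to rounding
# because the symmetric mask preserves the transform's Hermitian symmetry.
filtered = np.real(np.fft.ifft2(filtered_fk))
print(gather.shape, filtered.shape)
```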
Agriculture is one of the major sources of livelihood for the Iraqi people, as one-third of Iraq's population resides in rural areas and depends upon agriculture for their livelihoods. This study aims to estimate the impact of temperature variability on crop productivity across the agro-climatic zones in Salah Al-Din governorate using satellite-based climate data for the period 2000 to 2018. The average annual air temperature based on satellite data was downloaded from the GLDAS model NOAH025_M v2.1 and interpolated using Kriging interpolation with a spherical model. Thirteen strategic crops were selected: courgette, garlic, onion, sweet pepper, watermelon, melon, cucumber, tomato, potato, eggplant, wheat, barley
... Show MoreThe objective of the study is to demonstrate the predictive ability is better between the logistic regression model and Linear Discriminant function using the original data first and then the Home vehicles to reduce the dimensions of the variables for data and socio-economic survey of the family to the province of Baghdad in 2012 and included a sample of 615 observation with 13 variable, 12 of them is an explanatory variable and the depended variable is number of workers and the unemployed.
Was conducted to compare the two methods above and it became clear by comparing the logistic regression model best of a Linear Discriminant function written
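The comparison described above can be sketched generically: fit a logistic regression (here by plain gradient descent) and Fisher's linear discriminant on the same data and compare classification accuracy. The two-class Gaussian toy data below stands in for the 2012 Baghdad household survey, which is not available here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
X0 = rng.normal([0.0, 0.0], 1.0, size=(n, 2))    # class 0 (toy data)
X1 = rng.normal([2.0, 1.5], 1.0, size=(n, 2))    # class 1 (toy data)
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# --- logistic regression via gradient descent ---
Xb = np.c_[np.ones(2 * n), X]                    # add intercept column
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))            # predicted probabilities
    w -= 0.1 * Xb.T @ (p - y) / len(y)           # gradient of log-loss
acc_logit = np.mean(((Xb @ w) > 0) == y)

# --- linear discriminant function (pooled covariance) ---
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S = (np.cov(X0.T) + np.cov(X1.T)) / 2            # pooled covariance estimate
wd = np.linalg.solve(S, m1 - m0)                 # discriminant direction
c = wd @ (m0 + m1) / 2                           # midpoint decision threshold
acc_lda = np.mean(((X @ wd) > c) == y)

print(f"logistic accuracy: {acc_logit:.3f}, LDA accuracy: {acc_lda:.3f}")
```

On Gaussian classes with equal covariance the two methods perform similarly; logistic regression tends to pull ahead when those distributional assumptions are violated, which is consistent with the study's finding.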