Iris research focuses on developing techniques for identifying and locating relevant biometric features, on accurate segmentation, and on efficient computation, while lending itself to compression methods. Most iris segmentation methods rely on complex modelling of traits and characteristics, which in turn reduces their effectiveness in real-time systems. This paper introduces a novel parameterized technique for iris segmentation. The method proceeds in several steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the original image. A lossless Hexadata encoding method is then applied to the data, reducing each set of six data items to a single encoded value. The tested results achieved acceptable byte-saving performance for the 21 square iris images of size 256x256 pixels, about 22.4 KB on average with a 0.79 s average decompression time, and high byte-saving performance for the 2 non-square iris images of sizes 640x480 and 2048x1536, which reached 76 KB/2.2 s and 1630 KB/4.71 s respectively. Finally, the proposed technique outperforms the standard lossless JPEG2000 compression technique with a reduction of about 1.2 times or more in KB saved, implicitly demonstrating the power and efficiency of the suggested lossless biometric technique.
FG Mohammed, HM Al-Dabbas, Science International, 2018
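The bit-plane step described in the abstract above can be sketched as follows. This is an illustrative reconstruction only: the `bit_planes` helper and the choice to keep the top planes are assumptions, not the authors' exact plane-selection procedure.

```python
import numpy as np

def bit_planes(gray):
    """Decompose an 8-bit grayscale image into its 8 bit planes.

    Hypothetical sketch of the bit-plane representation step; the
    paper's exact selection criterion is not reproduced here.
    """
    gray = np.asarray(gray, dtype=np.uint8)
    # planes[k] holds bit k of every pixel (k = 7 is most significant)
    return [(gray >> k) & 1 for k in range(8)]

# Toy 2x2 "eye image"; in practice this would be a 256x256 iris image.
img = np.array([[200, 50], [130, 7]], dtype=np.uint8)
planes = bit_planes(img)
msb = planes[7]  # the most significant plane carries the coarse structure
```

The most significant planes concentrate the large-scale intensity structure, which is why selecting them before parameterizing the iris location is plausible as a noise-reduction step.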
The current work investigated the combustion efficiency of biodiesel engines under diverse compression ratios (15.5, 16.5, 17.5, and 18.5) and different biodiesel fuels produced from apricot oil, papaya oil, sunflower oil, and tomato seed oil. The combustion process of the biodiesel fuel inside the engine was simulated using ANSYS Fluent v16 (CFD). On AV1 diesel engines (Kirloskar), numerical simulations were conducted at 1500 rpm. The simulation outcomes demonstrated that increasing the compression ratio (CR) led to higher peak temperatures and pressures in the combustion chamber, as well as elevated CO₂ and NO mass fractions and decreased CO emission values …
Strengthening of existing structures is an important task that civil engineers continuously face. Compression members, especially columns, being the most important members of any structure, are the first candidates for strengthening if the need ever arises. The method of strengthening compression members by direct wrapping with Carbon Fiber Reinforced Polymer (CFRP) was adopted in this research. Since concrete is a heterogeneous material with complex behavior, the behavior of confined compression members subjected to uniaxial stress is investigated with finite element (FE) models created using Abaqus CAE 2017 software. The aim of this research is to study, experimentally and numerically, the beha…
The study was carried out by reinforcing the resin matrix material (Epoxy Ep-828) with Kevlar fibers and glass fibers of type E-glass, both in the form of woven roving, and polypropylene fibers in the form of chopped strand mats, with a 30% volume fraction. Some mechanical properties of the prepared composite specimens were studied after being subjected to different weathering conditions, including ultraviolet radiation. Compression and hardness testing were carried out using the Brinell method so as to compare the composite behavior in the environments previously mentioned.
The main aim of image compression is to reduce image size for transmission and storage; therefore many methods have appeared to compress images, one of which is the Multilayer Perceptron (MLP), an artificial neural network based on the back-propagation algorithm. If the algorithm depends only upon the number of neurons in the hidden layer, that alone will not be enough to reach the desired results, so we have to take into consideration the standards on which the compression process depends to get the best results. In our research we trained a group of TIFF images of size 256x256 and compressed them using MLP for each…
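The MLP compression idea sketched above can be illustrated with a small autoencoder: a hidden layer narrower than the input acts as the compressed representation. This is a minimal sketch, assuming a 64-16-64 network trained by plain back-propagation on 8x8 blocks; the layer sizes, learning rate, and random data are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed architecture: 8x8 block (64 inputs) -> 16 hidden units -> 64
# outputs, so the 16 hidden activations give a 4:1 "compression".
n_in, n_hidden = 64, 16
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_in))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A batch of random "image blocks" normalised to [0, 1]
blocks = rng.random((32, n_in))

lr = 0.5
for _ in range(200):                       # plain back-propagation, MSE loss
    h = sigmoid(blocks @ W1)               # encode: 64 -> 16 values
    out = sigmoid(h @ W2)                  # decode: 16 -> 64 values
    err = out - blocks
    d_out = err * out * (1 - out)          # sigmoid derivative at output
    d_h = (d_out @ W2.T) * h * (1 - h)     # error propagated to hidden layer
    W2 -= lr * h.T @ d_out / len(blocks)
    W1 -= lr * blocks.T @ d_h / len(blocks)

compressed = sigmoid(blocks @ W1)          # stored/transmitted representation
```

Only `compressed` (plus the decoder weights) needs to be stored; reconstruction quality then depends on the hidden-layer width, which is the trade-off the abstract alludes to.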
JPEG is the most popular image compression and encoding technique, widely used in many applications (images, videos, and 3D animations). Meanwhile, researchers are very interested in developing this widespread technique to compress images at higher compression ratios while keeping image quality as high as possible. For this reason, this paper introduces a developed JPEG based on a fast DCT that removes most of the zeros while keeping their positions in a transformed block. Additionally, arithmetic coding is applied rather than Huffman coding. The results showed that the proposed developed JPEG algorithm has better image quality than the traditional JPEG technique.
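The zero-removal idea above can be sketched as: transform a block with the DCT, quantise, then store only the non-zero coefficients together with their positions, leaving the rest for the entropy coder. The naive `dct2`, the quantisation step of 16, and the flat test block below are assumptions for illustration, not the paper's fast DCT or its coding tables.

```python
import numpy as np

def dct2(block):
    """2-D orthonormal DCT-II of a square block (naive, for illustration)."""
    N = block.shape[0]
    n = np.arange(N)
    # Rows of C are the DCT basis vectors; C is orthonormal.
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C *= np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

# A flat 8x8 block: after the DCT all energy sits in the DC coefficient.
block = np.full((8, 8), 128.0)
coeffs = np.round(dct2(block) / 16)   # crude uniform quantisation, step 16
pos = np.argwhere(coeffs != 0)        # positions of surviving coefficients
vals = coeffs[coeffs != 0]            # values handed to the entropy coder
```

Storing `(pos, vals)` instead of the full 64-entry block is what makes the subsequent arithmetic coding effective on sparse quantised blocks.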
In this paper, we focus on one of the recent applications of PU-algebras in coding theory, namely the construction of codes by soft-set PU-valued functions. First, we introduce the notion of soft-set PU-valued functions on a PU-algebra and investigate some of their related properties. Moreover, the codes generated by a soft-set PU-valued function are constructed, and several examples are given. Furthermore, an example, with graphs, of a binary block code constructed from a soft-set PU-valued function is presented.