Biometric-based key generation uses features extracted from human anatomical (physiological) traits, such as the fingerprint or retina, or from behavioral traits, such as the signature. The retina is inherently robust, so it can generate random keys with a higher security level than other biometric traits. In this paper, an effective system for generating secure, robust, and unique random keys from retina features is proposed for cryptographic applications. The retina features are extracted with the glowworm swarm optimization (GSO) algorithm, which gives promising results in experiments on standard retina databases. In addition, a chaotic map is used in the proposed system to produce high-quality random, unpredictable, and non-regenerated keys. In the experiments, the NIST statistical analysis, comprising ten statistical tests, is employed to check the randomness of the generated binary key bits. The obtained cryptographic keys pass the NIST tests and show a considerable degree of aperiodicity.
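A minimal sketch of the chaotic-expansion idea, assuming a logistic map and a SHA-256-based seeding step; the abstract does not specify which chaotic map is used or how the GSO features are quantized, so the function `chaotic_key_bits`, its parameters, and the toy feature vector below are illustrative assumptions only.

```python
# Hedged sketch: expanding a retina-derived seed into a binary key with a
# logistic chaotic map. The seeding scheme and map parameters are assumptions,
# not the paper's exact construction.
import hashlib

def chaotic_key_bits(feature_vector, n_bits=256, r=3.99):
    """Derive an initial condition from a (hypothetical) GSO feature vector,
    then iterate the logistic map x -> r*x*(1-x) and threshold to get bits."""
    digest = hashlib.sha256(repr(feature_vector).encode()).digest()
    # Map the digest to an initial condition in (0, 1); fall back to 0.5 if zero.
    x = (int.from_bytes(digest, "big") % (10**8)) / 10**8 or 0.5
    for _ in range(1000):              # discard transient iterations
        x = r * x * (1.0 - x)
    bits = []
    while len(bits) < n_bits:
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

key = chaotic_key_bits([0.12, 0.87, 0.45])   # toy feature vector
print("".join(map(str, key[:32])), "...")
```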
Computer systems and networks are used in almost every aspect of daily life, and as a result the security threats against them have increased significantly. Password-based authentication is still the most widely used way of verifying a legitimate user, but it has many loopholes, such as password sharing, shoulder surfing, brute-force attacks, dictionary attacks, guessing, and phishing. The aim of this paper is to enhance password authentication by adding keystroke dynamics with a back-propagation neural network as a transparent layer of user authentication. Keystroke dynamics is one of the well-known and inexpensive behavioral biometric technologies, which identi…
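A minimal sketch of the general approach, assuming keystroke-timing features (dwell and flight times) and a back-propagation MLP; the feature layout, network size, and synthetic data below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: classify users from keystroke-timing features with an MLP
# trained by back-propagation (scikit-learn's MLPClassifier).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Toy data: 6 timing features per typed password, two enrolled users.
X_user_a = rng.normal(loc=0.12, scale=0.02, size=(50, 6))
X_user_b = rng.normal(loc=0.20, scale=0.03, size=(50, 6))
X = np.vstack([X_user_a, X_user_b])
y = np.array([0] * 50 + [1] * 50)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)                          # trained with back-propagation internally

sample = rng.normal(loc=0.12, scale=0.02, size=(1, 6))
print("predicted user:", clf.predict(sample)[0])
```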
In this work we present a technique for extracting heart contours from noisy echocardiograph images. The technique improves the image before applying contour detection, reducing heavy noise and giving better image quality. To do so, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to sharpen unclear edges and enhance the low contrast of echocardiograph images; after these steps, heart boundaries and valve movement can be detected legibly with traditional edge detection methods.
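A minimal sketch of this kind of pipeline, assuming OpenCV and generic choices (median filter, morphological opening, CLAHE, Canny); the specific filters, kernel sizes, and the file path are assumptions, not the authors' exact settings.

```python
# Hedged sketch: pre-process (filter, morphology, contrast) then apply a
# traditional edge detector to an echocardiograph frame.
import cv2

img = cv2.imread("echo_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

denoised = cv2.medianBlur(img, 5)                           # suppress heavy noise
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
opened = cv2.morphologyEx(denoised, cv2.MORPH_OPEN, kernel) # remove small artifacts
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(opened)                              # raise low contrast

edges = cv2.Canny(enhanced, 50, 150)                        # traditional edge detection
cv2.imwrite("echo_edges.png", edges)
```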
Image fusion is one of the most important techniques in digital image processing; it involves developing software that integrates multiple data sets of the same location. It is a relatively new field used to solve problems of digital images and to produce high-quality images containing more information for interpretation, classification, segmentation, compression, and other purposes. In this research, problems faced by digital images such as multi-focus images are addressed through a simulation process, using a camera to fuse various digital images with previously adopted fusion techniques such as arithmetic techniques (BT, CNT and MLT) and statistical techniques (LMM,…
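A minimal sketch of pixel-level multi-focus fusion using a generic focus measure (local Laplacian energy); this is not the BT/CNT/MLT or statistical techniques named above, and the file names and window size are placeholders.

```python
# Hedged sketch: for each pixel, keep the value from whichever source image is
# locally sharper, a simple rule for fusing multi-focus images.
import cv2
import numpy as np

a = cv2.imread("focus_near.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
b = cv2.imread("focus_far.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

def focus_measure(img, ksize=9):
    lap = cv2.Laplacian(img, cv2.CV_32F)
    return cv2.blur(np.abs(lap), (ksize, ksize))   # local sharpness estimate

mask = focus_measure(a) >= focus_measure(b)
fused = np.where(mask, a, b).astype(np.uint8)
cv2.imwrite("fused.png", fused)
```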
A series of liquid crystals containing heterocyclic dihydropyrrole and 1,2,3-triazole rings [VII]-[X] was synthesized in several steps. The synthesis started with the reaction of 3,3'-dimethyl-[1,1'-biphenyl]-4,4'-diamine with chloroacetyl chloride in a mixture of DMF and TEA to give compound [I]. Compound [I] then reacted with malononitrile in 1,4-dioxane and TEA to produce compound [II]. The first step was repeated with compound [II], which reacted with chloroacetyl chloride in a mixture of DMF and TEA to give compound [III]. Compound [III] reacted with sodium azide in the presence of sodium chloride, with DMF as solvent, to produce compound [IV], which reacted with acrylic acid via a 1,3-dipolar reaction in sol…
As a result of recent developments in highway research and the increased use of vehicles, significant interest has been paid to current, effective, and precise Intelligent Transportation Systems (ITS). In computer vision and digital image processing, identifying specific objects in an image plays a crucial role in forming a comprehensive picture of the scene. Vehicle License Plate Recognition (VLPR) is challenging because of variations in viewpoint, multiple plate formats, and non-uniform lighting at the time of image acquisition, as well as differences in shape and color; further difficulties include poor image resolution, blurred images, poor lighting, and low contrast, and these…
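A minimal sketch of one common VLPR stage, classical plate localization by edge detection and contour filtering; the thresholds, aspect-ratio limits, and input path are illustrative assumptions rather than the system described above.

```python
# Hedged sketch: find a plate-like rectangular region by Canny edges plus
# contour geometry filtering.
import cv2

img = cv2.imread("car.jpg")                        # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.bilateralFilter(gray, 11, 17, 17)       # smooth while keeping edges
edges = cv2.Canny(gray, 30, 200)

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for c in sorted(contours, key=cv2.contourArea, reverse=True)[:20]:
    x, y, w, h = cv2.boundingRect(c)
    aspect = w / float(h)
    if 2.0 < aspect < 6.0 and w > 60:              # rough license-plate geometry
        cv2.imwrite("plate_candidate.png", img[y:y + h, x:x + w])
        break
```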
Image enhancement has recently become one of the most significant topics in digital image processing. The basic problem in enhancement is how to remove noise and improve image detail. In the current research, a method for digital image de-noising and detail sharpening/highlighting is proposed. The approach uses a fuzzy logic technique to process each pixel of the image and decide whether it is noisy or needs further processing for highlighting. This is done by examining the degree of association with the neighboring pixels using a fuzzy algorithm. The proposed de-noising approach was evaluated on standard images after corrupting them with impulse…
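A minimal sketch of a fuzzy-style de-noising rule, assuming a 3x3 neighbourhood and a linear membership function over the deviation from the local median; the thresholds `low` and `high` and the blending rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: per-pixel fuzzy "noisiness" degree drives a blend between the
# original pixel and the local median.
import numpy as np
from scipy.ndimage import median_filter

def fuzzy_denoise(img, low=10, high=60):
    img = img.astype(np.float32)
    med = median_filter(img, size=3)
    dev = np.abs(img - med)
    # Linear fuzzy membership: 0 (clean) below `low`, 1 (noisy) above `high`.
    degree = np.clip((dev - low) / (high - low), 0.0, 1.0)
    return (degree * med + (1.0 - degree) * img).astype(np.uint8)

noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # toy input
clean = fuzzy_denoise(noisy)
```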
Realizing and understanding semantic segmentation is a demanding task, not only in computer vision but also in the earth sciences. Semantic segmentation decomposes compound structures into individual elements: the most common objects in civil outdoor or indoor scenes must be classified and then enriched with semantic information about each object. It is a method for automatically labeling and clustering point clouds. Classifying three-dimensional natural scenes requires a point cloud dataset as the input data format, and working with 3D data raises many challenges, such as the small number, resolution, and accuracy of three-dimensional datasets. Deep learning is now the po…
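A minimal sketch of per-point semantic labeling with a small shared MLP (a PointNet-style simplification); the layer sizes, class count, and random point cloud below are illustrative assumptions only, not the model used in this work.

```python
# Hedged sketch: assign a semantic class to every point of a cloud with a
# shared per-point MLP.
import torch
import torch.nn as nn

class PerPointClassifier(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, points):          # points: (N, 3) xyz coordinates
        return self.mlp(points)         # (N, num_classes) per-point logits

cloud = torch.rand(1024, 3)             # toy point cloud
logits = PerPointClassifier()(cloud)
labels = logits.argmax(dim=1)           # predicted semantic class per point
```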
In this paper, a fast lossless compression method for medical images is introduced. It splits the image into blocks according to their nature, uses polynomial approximation to decompose the image signal, and then applies run-length coding to the residual part of the image, which represents the error introduced by the polynomial approximation. Huffman coding is applied as a final stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method achieves promising performance.
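A minimal sketch of the block stage described above, assuming a first-order polynomial fit per block and a simple run-length coder over the integer residual; the block size, polynomial order, and the final Huffman stage are omitted or assumed, so this is not the paper's exact coder.

```python
# Hedged sketch: polynomial approximation of a block plus run-length coding of
# the exact integer residual (so reconstruction is lossless).
import numpy as np

def run_length_encode(values):
    runs, count = [], 1
    for prev, cur in zip(values, values[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            count = 1
    runs.append((int(values[-1]), count))
    return runs

def encode_block(block, order=1):
    x = np.arange(block.size)
    coeffs = np.polyfit(x, block.ravel().astype(np.float64), order)
    pred = np.rint(np.polyval(coeffs, x)).astype(np.int32)
    residual = block.ravel().astype(np.int32) - pred    # exact error, kept losslessly
    return coeffs, run_length_encode(residual)

block = np.array([[10, 11, 12, 13]] * 4, dtype=np.uint8)   # toy smooth block
coeffs, rle = encode_block(block)
print(rle)   # residual is small and repetitive, so the run-length code is short
```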