Merging biometrics with cryptography has become increasingly common and has given rise to a rich field for researchers. Biometrics adds a distinctive property to security systems, because biometric features are unique to each individual. In this study, a new method is presented for ciphering data based on fingerprint features. The approach places the plaintext message at the positions of minutiae extracted from a fingerprint inside a generated random text file, regardless of the size of the data. The proposed method can be explained in three scenarios. In the first scenario, the message is embedded directly in the random text at the minutiae positions. In the second scenario, the message is encrypted with a chosen word before being ciphered inside the random text. In the third scenario, the encryption process ensures a correct restoration of the original message. Experimental results show that the proposed cryptosystem works well and is secure, owing to the huge number of fingerprints an attacker would have to try in order to extract the message: every fingerprint but the correct one gives incorrect results, and the extracted text does not match the original plaintext. The method also ensures that any intentional tampering or simple damage is discovered, because message extraction fails even when the correct fingerprint is used.
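A minimal sketch of the first scenario under stated assumptions: minutiae coordinates are flattened into distinct offsets within the random cover text, and message characters are written at those offsets. The offset-mixing rule, the cover alphabet, and the helper names are illustrative choices, not the paper's exact procedure.

```python
import secrets
import string

def minutiae_to_offsets(minutiae, cover_len):
    """Map (x, y) minutiae coordinates to distinct offsets in the cover text.
    Illustrative assumption: coordinates are mixed and reduced modulo the
    cover length, with collisions resolved linearly."""
    offsets, seen = [], set()
    for x, y in minutiae:
        pos = (x * 1009 + y) % cover_len   # arbitrary mixing constant
        while pos in seen:
            pos = (pos + 1) % cover_len
        seen.add(pos)
        offsets.append(pos)
    return offsets

def embed(message, minutiae, cover_len=4096):
    """Scenario 1: place message characters at minutiae-derived positions
    inside a freshly generated random cover text."""
    cover = [secrets.choice(string.ascii_letters + string.digits)
             for _ in range(cover_len)]
    offsets = minutiae_to_offsets(minutiae, cover_len)
    if len(message) > len(offsets):
        raise ValueError("not enough minutiae to carry the message")
    for ch, pos in zip(message, offsets):
        cover[pos] = ch
    return "".join(cover)

def extract(cover, minutiae, msg_len):
    """Recover the message: only the correct fingerprint (same minutiae)
    reproduces the same offsets and hence the original characters."""
    offsets = minutiae_to_offsets(minutiae, len(cover))
    return "".join(cover[pos] for pos in offsets[:msg_len])

# Usage: a toy minutiae set standing in for real extracted features.
minutiae = [(12, 87), (45, 33), (78, 120), (90, 14), (101, 66)]
cipher = embed("HELLO", minutiae)
print(extract(cipher, minutiae, 5))   # -> HELLO
```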
In recent years, the iris biometric has attracted wide interest in biometric-based systems, because it is one of the most accurate biometrics for proving a user's identity and therefore provides high security for the systems concerned. This research article presents an efficient method for detecting the outer boundary of the iris using a new form of leading-edge detection technique. The technique is very useful for isolating two regions with convergent intensity levels in grayscale images, which is the main issue in iris isolation, because it is difficult to find the border that separates the lighter gray background (sclera) from the light gray foreground (iris texture). The proposed met
Various document types play an influential role in many of our daily activities, so preserving their integrity is an important matter. Such documents take various forms, including text, video, audio, and images; the authentication of the latter type is our concern in this paper. Images can be handled spatially, by modifying their pixel values directly, or spectrally, by adjusting some of the addressed transform coefficients. Owing to the flexibility of the spectral (frequency) domain in handling data, its coefficients are utilized here for watermark embedding. The integer wavelet transform (IWT), which is a wavelet transform based on the lifting scheme,
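Since the abstract breaks off at the lifting-scheme IWT, here is a minimal sketch of a one-level integer Haar transform (the S-transform) built from lifting steps; this is a standard construction, not necessarily the exact filter used in the paper. The detail-coefficient LSB embedding at the end is purely illustrative.

```python
import numpy as np

def ihaar_forward(x):
    """One level of the integer Haar wavelet via lifting:
    detail d = odd - even, approximation s = even + floor(d / 2).
    All steps stay in the integers, so the transform is exactly invertible."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even                 # predict step
    s = even + (d >> 1)            # update step (floor division by 2)
    return s, d

def ihaar_inverse(s, d):
    """Undo the lifting steps in reverse order."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Usage: embed one watermark bit in the LSB of a detail coefficient (illustrative).
row = np.array([52, 55, 61, 66, 70, 61, 64, 73])
s, d = ihaar_forward(row)
d[0] = (d[0] & ~1) | 1             # hypothetical embedding of bit '1'
print(ihaar_inverse(s, d))         # integer-exact reconstruction of the marked row
```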
NS-2 is a tool for simulating networks in which events occur per packet, sequentially in time, and it is widely used in research. NS-2 comes with NAM (Network Animator), which produces a visual representation of the simulation, and it supports several simulation protocols. The network can be tested end to end; this test includes data transmission, delay, jitter, packet-loss ratio, and throughput. The performance analysis simulates a virtual network, tests transport-layer protocols at the same time with variable data, and analyzes the simulation results produced by the network simulator NS-2.
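A minimal sketch of the kind of trace post-processing such an analysis relies on, assuming the classic space-separated NS-2 wired trace format (event, time, from-node, to-node, packet type, size, flags, flow id, source address, destination address, sequence number, packet id). The trace file name, node ids, and flow id below are hypothetical.

```python
def analyse_trace(path, flow_id="1", src_node="0", dst_node="3"):
    """Compute throughput and packet-loss ratio for one flow from an NS-2 trace.
    Assumes the classic wired trace format:
    event time from to type size flags fid srcaddr dstaddr seq pktid"""
    sent = recv = lost = bytes_recv = 0
    t_first = t_last = None
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) < 12 or cols[7] != flow_id:
                continue
            event, t, size = cols[0], float(cols[1]), int(cols[5])
            if event == "+" and cols[2] == src_node:     # enqueued at the source
                sent += 1
            elif event == "d":                           # dropped anywhere
                lost += 1
            elif event == "r" and cols[3] == dst_node:   # received at the sink
                recv += 1
                bytes_recv += size
                t_first = t if t_first is None else t_first
                t_last = t
    duration = (t_last - t_first) if recv > 1 else 0.0
    throughput_kbps = bytes_recv * 8 / duration / 1e3 if duration else 0.0
    loss_ratio = lost / sent if sent else 0.0
    return throughput_kbps, loss_ratio

# Usage (hypothetical trace produced by an NS-2/OTcl script):
# tput, loss = analyse_trace("out.tr")
# print(f"throughput = {tput:.1f} kbit/s, loss ratio = {loss:.3f}")
```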
Graphene (Gr) decorated with silver nanoparticles (Ag NPs) was used to fabricate a wideband photodetector. Silicon (Si) and porous silicon (PS) were used as substrates on which Gr/Ag NPs were deposited by the drop-casting technique. The silver nanoparticles were prepared by a chemical method, and their dispersion on the graphene surface was achieved through a simple chemical process. The optical, structural, and electrical characteristics of the Ag NPs and of Gr decorated with Ag NPs were examined by ultraviolet-visible spectroscopy (UV-Vis) and X-ray diffraction (XRD). The XRD spectrum of the Ag NPs exhibited 2θ values of 38.1°, 44.3°, 64.5°, and 77.7°
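As a quick cross-check of such peak positions, the interplanar spacings follow from Bragg's law, d = λ / (2 sin θ). The sketch below assumes Cu Kα radiation (λ ≈ 1.5406 Å), which is the usual laboratory source but is not stated in the truncated abstract, and the fcc-Ag plane labels in the loop are the conventional assignments rather than the paper's.

```python
import math

CU_K_ALPHA = 1.5406  # X-ray wavelength in angstroms (assumed Cu K-alpha source)

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# 2-theta peaks reported for the Ag NPs; conventional fcc-Ag indexing in comments.
for two_theta, plane in [(38.1, "(111)"), (44.3, "(200)"), (64.5, "(220)")]:
    print(f"2θ = {two_theta:5.1f}°  {plane}:  d ≈ {d_spacing(two_theta):.3f} Å")
```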
Recently, digital communication has become a critical necessity, and the Internet has become the most used and most efficient medium for it. At the same time, data transmitted through the Internet are becoming more vulnerable, so maintaining the secrecy of data is very important, especially if the data are personal or confidential. Steganography provides a reliable method for solving such problems: it is an effective technique for secret communication in a digital world where data sharing and transfer keep increasing through the Internet, email, and other channels. The main challenges of steganography methods are the undetectability and the imperceptibility of con
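As a generic illustration of the imperceptibility requirement (not the method proposed in this paper), the sketch below hides a byte string in the least significant bits of a grayscale image array; each pixel changes by at most one gray level. The flat row-major embedding order is an illustrative choice.

```python
import numpy as np

def lsb_embed(cover, payload):
    """Hide payload bytes in the least significant bit of each pixel,
    scanning the image in flat row-major order (illustrative choice)."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("cover image too small for this payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bytes):
    """Read back n_bytes from the least significant bits."""
    bits = stego.reshape(-1)[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Usage: pixel values change by at most 1, so the stego image is visually
# indistinguishable from the cover image.
cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = lsb_embed(cover, b"secret")
print(lsb_extract(stego, 6))   # b'secret'
```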
An optical fiber biomedical sensor based on surface plasmon resonance (SPR) for measuring and sensing the concentration and the refractive index of sugar in blood serum was designed and implemented in this work. Performance properties such as signal-to-noise ratio (SNR), sensitivity, resolution, and figure of merit were evaluated for the fabricated sensor. It was found that the sensitivity of the optical fiber-based SPR sensor, with a 40 nm thick and 10 mm long Au metal film on the exposed sensing region, is 7.5 µm/RIU, the SNR is 0.697, the figure of merit is 87.2, and the resolution is 0.00026. The optical fiber utilized in this work is a plastic optical fiber with a core diameter of 980 µm, a cladding of 20 µm, and a numerical aperture of 0.
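For context, one commonly used set of definitions in the fiber-SPR literature relates these figures as follows: sensitivity S = Δλres/Δn, SNR = Δλres/FWHM, figure of merit FOM = S/FWHM, and resolution = (instrument spectral resolution)/S. The sketch below uses hypothetical input values chosen only to show the units involved; neither the definitions nor the numbers are necessarily those used in this work.

```python
def spr_metrics(d_lambda_res_nm, d_n, fwhm_nm, instrument_res_nm):
    """One commonly used set of SPR performance definitions (assumed, not
    necessarily this paper's): S = dLambda/dn, SNR = dLambda/FWHM,
    FOM = S/FWHM, resolution = instrument resolution / S."""
    s = d_lambda_res_nm / d_n                 # nm per RIU
    snr = d_lambda_res_nm / fwhm_nm
    fom = s / fwhm_nm                         # 1/RIU
    resolution = instrument_res_nm / s        # RIU
    return s, snr, fom, resolution

# Hypothetical inputs: resonance shift, index change, dip width, spectrometer resolution.
s, snr, fom, res = spr_metrics(d_lambda_res_nm=60.0, d_n=0.008,
                               fwhm_nm=86.0, instrument_res_nm=2.0)
print(f"S = {s / 1000:.2f} um/RIU, SNR = {snr:.3f}, "
      f"FOM = {fom:.1f}, resolution = {res:.5f} RIU")
```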
It is known that images differ from texts in many aspects, such as high redundancy and correlation, local structure, and capacity and frequency characteristics. As a result, traditional encryption methods cannot be applied to images directly. In this paper we present a method for designing a simple and efficient chaotic system using a difference in the output sequence. To meet the requirements of image encryption, we create a new coding system for linear and nonlinear structures based on generating a new key from chaotic maps. The design uses a family of chaotic maps, including the 1D Chebyshev map, whose behaviour depends on its parameters, to obtain a good random appearance. The output is tested with several measures, including the complexity of th
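A minimal sketch of key generation from the 1D Chebyshev map, x_{n+1} = cos(k · arccos(x_n)), followed by a plain XOR of the keystream with the image bytes. The quantisation of map states into key bytes and the XOR step are illustrative assumptions, not the paper's exact coding system.

```python
import numpy as np

def chebyshev_keystream(x0, k, n):
    """Iterate the 1D Chebyshev map x_{n+1} = cos(k * arccos(x_n)) and
    quantise each state in [-1, 1] to one key byte (illustrative quantisation)."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = np.cos(k * np.arccos(x))
        out[i] = int(((x + 1.0) / 2.0) * 255.0) & 0xFF
    return out

def encrypt(image, x0=0.3571, k=6):
    """XOR every pixel with the chaotic keystream; decryption is the same call."""
    flat = image.reshape(-1)
    ks = chebyshev_keystream(x0, k, flat.size)
    return (flat ^ ks).reshape(image.shape)

# Usage: the same (x0, k) key recovers the image; any other key does not.
img = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
enc = encrypt(img)
dec = encrypt(enc)               # XOR is its own inverse
print(np.array_equal(dec, img))  # True
```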
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error caused by the polynomial approximation. Huffman coding is then applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can lead to promising performance.
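A minimal sketch of the block-wise idea under stated assumptions: each block is approximated by a low-order polynomial along a row-major scan, the rounded prediction error forms the residue, and the residue is run-length encoded (the final Huffman stage is omitted). The block size, polynomial degree, and scan order are illustrative choices.

```python
import numpy as np

def block_residue(block, degree=1):
    """Fit a low-order polynomial to the block's pixels (row-major scan)
    and return the coefficients plus the integer residue."""
    y = block.reshape(-1).astype(float)
    x = np.arange(y.size)
    coeffs = np.polyfit(x, y, degree)
    pred = np.rint(np.polyval(coeffs, x))
    residue = (y - pred).astype(np.int16)       # small values, often runs
    return coeffs, residue

def run_length_encode(values):
    """Encode the residue as (value, run length) pairs."""
    runs, i = [], 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        runs.append((int(values[i]), j - i))
        i = j
    return runs

# Usage on a perfectly smooth 4x4 block: the residue collapses to a single
# run of zeros, which RLE (and a final Huffman stage, omitted here) compresses well.
block = (np.arange(16).reshape(4, 4) + 50).astype(np.uint8)
coeffs, residue = block_residue(block)
print(run_length_encode(residue))   # [(0, 16)]
```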
Image fusion is one of the most important techniques in digital image processing. It involves developing software that integrates multiple data sets of the same location, and it is one of the newer approaches adopted to solve digital-image problems and to produce high-quality images containing more information for purposes such as interpretation, classification, segmentation, and compression. In this research, problems faced by different digital images, such as multi-focus images, are addressed through a simulation process in which the camera is used to acquire and fuse various digital images based on previously adopted fusion techniques, such as arithmetic techniques (BT, CNT, and MLT) and statistical techniques (LMM,
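As a generic illustration of pixel-level fusion (not necessarily any of the listed techniques), the sketch below fuses two multi-focus images by taking each pixel from the locally sharper source, falling back to the arithmetic mean where the two are equally sharp; the local-variance sharpness measure, window size, and usage names are illustrative choices.

```python
import numpy as np
from scipy.ndimage import generic_filter   # local variance as a sharpness proxy

def local_variance(img, size=5):
    """Per-pixel variance over a size x size window (sharpness proxy)."""
    return generic_filter(img.astype(float), np.var, size=size)

def fuse_multifocus(img_a, img_b, size=5):
    """Pick each pixel from the image that is locally sharper; fall back to
    the arithmetic mean where the two sharpness scores are (nearly) equal."""
    va, vb = local_variance(img_a, size), local_variance(img_b, size)
    mean = (img_a.astype(float) + img_b.astype(float)) / 2.0
    fused = np.where(va > vb, img_a, img_b).astype(float)
    fused = np.where(np.isclose(va, vb), mean, fused)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage (hypothetical arrays standing in for two differently focused shots):
# fused = fuse_multifocus(left_focus, right_focus)
```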
Recognizing facial expressions and emotions is a basic skill that is learned at an early age, and it is important for human social interaction. Facial expressions are among the most powerful, natural, and immediate means that humans use to express their feelings and intentions. Therefore, automatic emotion recognition based on facial expressions has become an interesting research area, and it has been introduced and applied in many fields such as security, safety, health, and human-machine interfaces (HMI). The transition of facial expression recognition away from controlled environmental conditions, together with the improvements brought by the succession of recent deep learning approaches from different areas, has made facial expression representation mostly based on u