Image retrieval is an active research area in image processing, pattern recognition, and computer vision. In the proposed method, two techniques are used to extract the feature vector: the first applies the transform algorithm to the whole image, and the second divides the image into four blocks and then applies the transform algorithm to each part. In each technique, three transform algorithms are applied (DCT, Walsh Transform, and Kekre's Wavelet Transform); the similarity is then found and the images are indexed using the correlation between the feature vector of the query image and those of the images in the database. The retrieval method returns the images with the highest correlation values. Experimental results show that applying the DCT yields higher precision and recall than the other transform algorithms, and that performance improves when the image is divided into four equal blocks and the transform algorithm is applied to each part.
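As a sketch of the whole-image technique, the fragment below computes a low-frequency 2-D DCT feature vector and ranks database images by the correlation coefficient. It is illustrative only: the image size, the number of retained coefficients, and the toy database are assumptions, not the paper's configuration.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square block via the orthonormal DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    # C[k, x] = s(k) * cos(pi * (2x + 1) * k / (2n)), orthonormal scaling
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    C *= np.sqrt(2 / n)
    return C @ block @ C.T

def feature_vector(img, keep=8):
    """Low-frequency DCT coefficients as the feature vector."""
    return dct2(img.astype(float))[:keep, :keep].ravel()

def similarity(fv_query, fv_db):
    """Correlation coefficient between two feature vectors."""
    return np.corrcoef(fv_query, fv_db)[0, 1]

# Query a tiny synthetic database: the highest correlation wins.
rng = np.random.default_rng(0)
db = [rng.random((32, 32)) for _ in range(3)]
query = db[1] + 0.01 * rng.random((32, 32))   # near-duplicate of image 1
scores = [similarity(feature_vector(query), feature_vector(im)) for im in db]
best = int(np.argmax(scores))
```

The same pipeline applies to the block-based technique by calling `feature_vector` on each quarter of the image and concatenating the results.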
This work aims to develop a secure lightweight cipher algorithm for constrained devices. Secure communication among constrained devices is a critical issue during data transmission from client to server devices. Lightweight cipher algorithms are a secure solution for constrained devices, requiring only low-cost computational functions and small memory. However, most lightweight algorithms suffer from a trade-off between complexity and speed when producing a robust cipher. The PRESENT cipher has been successfully tested as a lightweight cryptography algorithm; it surpasses other ciphers in computational cost, requiring only low-complexity operations. The mathematical model of
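For illustration, the two low-complexity operations that make up a PRESENT round (the 4-bit S-box layer and the 64-bit permutation layer, as defined in the PRESENT specification) can be sketched as follows. This is only a fragment: the key schedule, round-key XOR, and the 31-round loop are omitted.

```python
# PRESENT's 4-bit S-box (from the specification), applied to every
# nibble of the 64-bit state.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    out = 0
    for i in range(16):                       # 16 nibbles in a 64-bit state
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    """PRESENT's bit permutation: bit i moves to 16*i mod 63 (bit 63 fixed)."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out
```

Both layers are cheap in hardware (a 4-bit lookup and pure wiring), which is the source of PRESENT's low complexity.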
In recent years, the iris biometric has attracted wide interest among biometric-based systems, because it is one of the most accurate biometrics for proving users' identities and thus provides high security for the systems concerned. This research article presents an efficient method to detect the outer boundary of the iris using a new form of leading-edge detection technique. This technique is very useful for isolating two regions with convergent intensity levels in grayscale images, which is the main difficulty in iris isolation, because it is hard to find the border separating the lighter gray background (the sclera) from the light gray foreground (the iris texture). The proposed met
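The paper's exact leading-edge detector is not reproduced here, but the underlying idea of locating the first significant intensity rise along a scan line from the iris out toward the sclera can be sketched as below. The window size, jump threshold, and synthetic profile are illustrative assumptions only.

```python
import numpy as np

def leading_edge(profile, jump=20, win=3):
    """Return the first index along a 1-D intensity profile where the
    moving average rises by more than `jump` gray levels -- a simple
    stand-in for a leading-edge boundary detector (illustrative only)."""
    smooth = np.convolve(profile.astype(float), np.ones(win) / win, mode="valid")
    steps = np.diff(smooth)
    idx = int(np.argmax(steps > jump))
    return idx if steps[idx] > jump else None

# Synthetic horizontal scan line starting inside the iris:
# light gray iris texture (~110), then the lighter sclera (~200).
line = np.array([110] * 40 + [200] * 40)
edge = leading_edge(line)   # approximate column of the iris/sclera border
```

Because the iris and sclera intensities converge in real images, the practical challenge is tuning `jump` low enough to catch the border without firing on texture noise.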
NS-2 is a tool for simulating networks; it processes per-packet events sequentially in time and is widely used in research. NS-2 comes with NAM (Network Animator), which produces a visual representation of the simulation, and it supports several simulation protocols. The network can be tested end to end; this test covers data transmission, delay, jitter, packet-loss ratio, and throughput. The performance analysis simulates a virtual network, tests transport-layer protocols simultaneously with variable data, and analyzes the simulation results produced by NS-2.
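NS-2 writes its results to a text trace file, from which metrics such as packet-loss ratio and throughput are computed in post-processing. The sketch below assumes a simplified classic column layout ("event time from to type size ..."); real NS-2 traces vary by scenario, so the column indices and the sink-node id are assumptions to adjust.

```python
def summarize(trace_lines, sink_node="3", duration=None):
    """Packet-loss ratio and received throughput from a simplified
    NS-2-style trace (assumed columns: event time from to type size)."""
    sent = dropped = received = rx_bytes = 0
    t_first = t_last = None
    for line in trace_lines:
        f = line.split()
        if len(f) < 6:
            continue
        event, t, dst, size = f[0], float(f[1]), f[3], int(f[5])
        if event == "+":                          # packet enqueued
            sent += 1
        elif event == "d":                        # packet dropped
            dropped += 1
        elif event == "r" and dst == sink_node:   # received at the sink
            received += 1
            rx_bytes += size
            t_first = t if t_first is None else t_first
            t_last = t
    loss_ratio = dropped / sent if sent else 0.0
    span = duration or ((t_last - t_first) if received > 1 else 0)
    throughput_bps = 8 * rx_bytes / span if span else 0.0
    return loss_ratio, throughput_bps

trace = [
    "+ 0.0 0 3 tcp 1000",
    "r 1.0 0 3 tcp 1000",
    "+ 1.0 0 3 tcp 1000",
    "d 1.5 1 2 tcp 1000",
    "+ 2.0 0 3 tcp 1000",
    "r 3.0 0 3 tcp 1000",
]
loss, tput = summarize(trace)
```

Delay and jitter are computed similarly by matching send and receive timestamps per packet id.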
The implementation of the TSFS (Transposition, Substitution, Folding, and Shifting) algorithm as an encryption algorithm for database security had limitations in its character set and in the number of keys used. The proposed cryptosystem is based on enhancements to the phases of the TSFS encryption algorithm, computing the determinant of the key matrices, which affects the implementation of the algorithm's phases. These changes showed high security for the database against different types of attacks by achieving both confusion and diffusion.
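The role of the key-matrix determinant can be illustrated with a hedged sketch: in any Hill-style modular cipher, a key matrix is decryptable only if its determinant is coprime to the alphabet size. The 3x3 shape and the modulus 95 (printable ASCII) are assumptions for illustration; TSFS's exact matrices and character set differ.

```python
import math

def det3_mod(m, mod):
    """Determinant of a 3x3 integer matrix, reduced modulo `mod`."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return det % mod

def key_is_valid(m, mod=95):
    """The matrix is invertible mod `mod` (so decryption exists)
    iff gcd(det, mod) == 1."""
    return math.gcd(det3_mod(m, mod), mod) == 1

k_good = [[2, 4, 5], [9, 2, 1], [3, 17, 7]]
k_bad = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # det = 0, never invertible
```

Rejecting non-invertible key matrices at key-setup time is what makes the determinant computation affect every later phase of the algorithm.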
Graphene (Gr) decorated with silver nanoparticles (Ag NPs) was used to fabricate a wideband photodetector. Silicon (Si) and porous silicon (PS) were used as substrates onto which Gr/Ag NPs were deposited by the drop-casting technique. The Ag NPs were prepared by a chemical method, and their dispersion on the Gr surface was achieved by a simple chemical process.
The optical, structural, and electrical characteristics of the Ag NPs and of Gr decorated with Ag NPs were examined by ultraviolet-visible spectroscopy (UV-Vis) and X-ray diffraction (XRD). The XRD spectrum of the Ag NPs exhibited 2θ values of 38.1°, 44.3°, 64.5°, and 77.7
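As a cross-check of the reported peaks, Bragg's law converts each 2θ value to a lattice spacing. Assuming Cu Kα radiation (λ = 1.5406 Å), which the abstract does not state, the four peaks match the (111), (200), (220), and (311) planes of fcc silver (a ≈ 4.086 Å).

```python
import math

LAMBDA = 1.5406   # assumed Cu K-alpha wavelength, angstroms
A_AG = 4.086      # fcc Ag lattice constant, angstroms

def d_spacing(two_theta_deg):
    """Bragg's law: d = lambda / (2 sin(theta))."""
    return LAMBDA / (2 * math.sin(math.radians(two_theta_deg / 2)))

def d_fcc(h, k, l, a=A_AG):
    """Cubic-lattice plane spacing: d = a / sqrt(h^2 + k^2 + l^2)."""
    return a / math.sqrt(h * h + k * k + l * l)

peaks = [38.1, 44.3, 64.5, 77.7]
planes = [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]
matches = [abs(d_spacing(t) - d_fcc(*p)) < 0.01 for t, p in zip(peaks, planes)]
```

The close agreement (within about 0.01 Å per peak) is consistent with metallic fcc silver rather than a silver oxide phase.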
An optical fiber biomedical sensor based on surface plasmon resonance (SPR) for measuring the concentration and refractive index of sugar in blood serum was designed and implemented in this work. Performance properties such as the signal-to-noise ratio (SNR), sensitivity, resolution, and figure of merit were evaluated for the fabricated sensor. It was found that the sensitivity of the optical fiber-based SPR sensor, with a 40 nm thick and 10 mm long Au metal film in the exposed sensing region, is 7.5 µm/RIU; the SNR is 0.697, the figure of merit is 87.2, and the resolution is 0.00026. The optical fiber used in this work is a plastic optical fiber with a core diameter of 980 µm, a cladding of 20 µm, and a numerical aperture of 0.
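Using the commonly adopted SPR definitions FOM = S / FWHM and resolution = (smallest detectable wavelength shift) / S — the paper's exact definitions may differ, so this is only a hedged consistency check — the reported numbers imply the width of the resonance dip and the smallest detectable shift:

```python
# Reported values from the abstract.
S_nm_per_riu = 7.5e3     # sensitivity: 7.5 um/RIU = 7500 nm/RIU
fom = 87.2               # figure of merit (1/RIU)
resolution = 0.00026     # resolution (RIU)

# Implied quantities under the assumed (common) definitions.
fwhm_nm = S_nm_per_riu / fom              # resonance-dip width, ~86 nm
min_shift_nm = S_nm_per_riu * resolution  # detectable shift, ~1.95 nm
```

A dip width of roughly 86 nm and a detectable shift of roughly 2 nm are plausible magnitudes for a plastic-fiber SPR sensor, which supports the internal consistency of the reported metrics.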
It is known that images differ from texts in many aspects, such as high redundancy and correlation, local structure, and capacity and frequency characteristics. As a result, traditional encryption methods cannot be applied to images. In this paper we present a method for designing a simple and efficient chaotic system using differences in the output sequence. To meet the requirements of image encryption, we create a new coding system for linear and nonlinear structures based on the generation of a new key derived from chaotic maps.
The design uses chaotic maps, including the 1-D Chebyshev map, whose behaviour depends on its parameters, to obtain a good random appearance. The output is tested with several measures, including the complexity of th
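A minimal sketch of key generation from the 1-D Chebyshev map, x ← cos(k·arccos(x)), is shown below; the seed, the parameter k, the byte mapping, and the XOR combining step are illustrative assumptions, not the paper's coding system.

```python
import math

def chebyshev_keystream(x0, k, n):
    """Byte keystream from the 1-D Chebyshev map x <- cos(k*arccos(x)).
    (x0, k) act as the secret key; k >= 2 gives chaotic behaviour."""
    x, out = x0, []
    for _ in range(n):
        x = math.cos(k * math.acos(x))
        out.append(int((x + 1) / 2 * 255) & 0xFF)   # map [-1, 1] to a byte
    return out

def xor_cipher(data, x0=0.31, k=4):
    """Encrypt/decrypt pixel bytes by XOR with the chaotic keystream;
    applying it twice with the same key restores the original."""
    ks = chebyshev_keystream(x0, k, len(data))
    return bytes(b ^ s for b, s in zip(data, ks))

pixels = bytes(range(0, 250, 10))   # toy "image" row
cipher = xor_cipher(pixels)
plain = xor_cipher(cipher)          # same key, so the round trip is exact
```

The map's sensitivity to (x0, k) is what lets small key differences produce completely different keystreams, addressing the high correlation of image data.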
In this paper, a fast lossless compression method for medical images is introduced. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error caused by the polynomial approximation. Finally, Huffman coding is applied to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method can achieve promising performance.
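The polynomial-plus-residue idea can be sketched on a single 1-D block as below; the block length, polynomial degree, and encoding details are illustrative assumptions, and the final Huffman stage is omitted for brevity.

```python
import numpy as np

def compress_block(block, deg=1):
    """Fit a low-degree polynomial to a 1-D block of pixel values and
    run-length encode the integer residuals. Lossless: the coefficients
    plus the RLE reconstruct the block exactly."""
    x = np.arange(len(block))
    coeffs = np.polyfit(x, block, deg)
    pred = np.rint(np.polyval(coeffs, x)).astype(int)
    residual = np.asarray(block) - pred
    rle = []
    for r in residual:                      # classic run-length encoding
        if rle and rle[-1][0] == int(r):
            rle[-1][1] += 1
        else:
            rle.append([int(r), 1])
    return coeffs, rle, len(block)

def decompress_block(coeffs, rle, n):
    x = np.arange(n)
    pred = np.rint(np.polyval(coeffs, x)).astype(int)
    residual = np.concatenate([[v] * c for v, c in rle])
    return pred + residual

block = [10, 12, 14, 16, 18, 21, 22, 24]   # nearly linear pixel run
c, rle, n = compress_block(block)
restored = decompress_block(c, rle, n)
```

Because smooth medical-image blocks are well predicted by a low-degree polynomial, the residuals cluster near zero and form long runs, which is what makes the subsequent run-length and Huffman stages effective.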