This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method comprises five enhancement techniques: normalization, histogram equalization, binarization, skeletonization, and fusion. The normalization process standardizes pixel intensities, which facilitates the subsequent enhancement stages. The histogram equalization technique then increases the contrast of the images. Next, the binarization and skeletonization techniques are applied to differentiate between ridge and valley structures and to reduce the ridges to one-pixel-wide lines. Finally, the fusion technique merges the results of the histogram equalization and skeletonization processes to obtain new high-contrast images. The proposed method was tested on images of varying quality from the National Institute of Standards and Technology (NIST) Special Database 14. The experimental results are encouraging, and the enhancement method proved effective in improving images of different quality.
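The first three stages described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the normalization follows the common mean/variance standardization scheme, the equalization is the standard CDF-remapping form, and the binarization uses a simple global threshold (the paper does not specify its thresholding rule); skeletonization and fusion are omitted here.

```python
import numpy as np

def normalize(img, target_mean=100.0, target_var=100.0):
    """Standardize pixel intensities to a desired mean and variance."""
    img = img.astype(np.float64)
    mean, var = img.mean(), img.var()
    deviation = np.sqrt(target_var * (img - mean) ** 2 / var)
    return np.where(img > mean, target_mean + deviation, target_mean - deviation)

def histogram_equalize(img):
    """Spread the intensity histogram over the full 0-255 range."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

def binarize(img, threshold=128):
    """Separate ridges (1) from valleys (0) with a global threshold."""
    return (img < threshold).astype(np.uint8)
```

Chaining `normalize`, `histogram_equalize`, and `binarize` over a grayscale fingerprint array reproduces the shape of the pipeline's front end; a morphological thinning routine would then produce the one-pixel-wide skeleton.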
A new method is presented in this work to detect the existence of hidden data embedded as a secret message in images. The method applies only to images that share the same visible properties (similar in appearance), where the human eye cannot detect the difference between them.
The method is based on an Image Quality Metric (the Structural Content metric): the original images are compared with the stego images, and the size of the hidden data is determined. We applied the method to four different images; in each case it detected the hidden data and found exactly the size of the embedded data.
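The Structural Content metric mentioned above is conventionally defined as the ratio of total signal energy between the two images, SC = Σx² / Σy². A minimal sketch of the comparison step (the size-estimation step from the paper is not reproduced here):

```python
import numpy as np

def structural_content(original, stego):
    """Structural Content metric: ratio of total pixel energy.

    For identical images the value is exactly 1.0; embedding hidden
    data perturbs pixel values and pushes the ratio away from 1.0.
    """
    original = original.astype(np.float64)
    stego = stego.astype(np.float64)
    return (original ** 2).sum() / (stego ** 2).sum()
```

Comparing a cover image with its stego counterpart thus gives a scalar whose deviation from 1.0 signals the presence of embedded data.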
Marking content with descriptive terms that depict the image content is called “tagging,” a well-known method of organizing content for future navigation, filtering, or searching. Manually tagging video or image content is a time-consuming and expensive process; moreover, the tags supplied by humans are often noisy, incomplete, subjective, and inadequate. Automatic image tagging can assign semantic keywords based on the visual information of images, thereby allowing images to be retrieved, organized, and managed by tag. This paper presents a survey and analysis of state-of-the-art approaches to the automatic tagging of video and image data. The analysis in this paper covers the publications
The optimal conjugate coefficient is the foundation of a variety of conjugate gradient methods. This paper proposes a new class of conjugate gradient (CG) coefficients for impulse noise removal, based on a quadratic model. The proposed method guarantees descent independently of the accuracy of the line search and is globally convergent under certain conditions. Numerical experiments are also presented for impulse noise removal in images.
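To illustrate the CG framework the abstract builds on, here is a sketch that minimizes a quadratic model with the classical Fletcher–Reeves coefficient. The paper's own quadratic-model coefficient is not given in the abstract, so Fletcher–Reeves stands in purely as an example of the β update slot the new class would occupy.

```python
import numpy as np

def cg_minimize(A, b, x0, tol=1e-10, max_iter=100):
    """Minimize f(x) = 0.5 x^T A x - b^T x by conjugate gradients.

    beta uses the Fletcher-Reeves formula as a stand-in for the
    paper's proposed coefficient.
    """
    x = x0.astype(np.float64)
    g = A @ x - b                          # gradient of the quadratic model
    d = -g                                 # initial descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (d @ (A @ d))    # exact line search on a quadratic
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x
```

In the denoising setting, f would instead be a smoothed data-fidelity-plus-regularization functional over the noise-candidate pixels, with the same direction-update structure.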
Since the Internet has become more widely used and more people have access to multimedia content, copyright hacking and piracy have risen. Through watermarking techniques, security, asset protection, and authentication have all been made possible. In this paper, a comparison between fragile and robust watermarking techniques is presented so that recent studies can benefit from them in raising the level of security of critical media. A new technique is suggested in which an embedded value (129) is added to each pixel of the cover image and used as a key to thwart attackers, increase security, improve imperceptibility, and make the system faster at detecting tampering by unauthorized users. Using the two watermarking ty
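The key step described above, adding the value 129 to each pixel of the cover image, can be sketched as follows. The abstract does not specify how pixel overflow is handled, so modulo-256 wraparound is an assumption here, chosen so that the operation is exactly invertible.

```python
import numpy as np

def embed_key(cover, key=129):
    """Add the key value to every pixel (mod 256 is assumed for overflow)."""
    return ((cover.astype(np.int32) + key) % 256).astype(np.uint8)

def remove_key(img, key=129):
    """Invert embed_key, recovering the original cover image."""
    return ((img.astype(np.int32) - key) % 256).astype(np.uint8)
```

Because the transform is reversible only with knowledge of the key value, a tampered region that was modified without the key will not survive the round trip intact, which supports the fast tamper detection the paper aims for.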
We propose a new method for detecting abnormality in cerebral tissues present within Magnetic Resonance Images (MRI). The presented classifier comprises cerebral tissue extraction, image division into angular and distance span vectors, extraction of four features for each portion, and classification to ascertain the abnormality location. The threshold value and region of interest are determined using operator input and the Otsu algorithm. A novel brain-slice image division is introduced via angular and distance span vectors of 24° and 15 pixels, respectively. Rotation invariance of the angular span vector is demonstrated. Automatic image categorization into normal and abnormal brain tissues is performed using a Support Vector Machine (SVM). St
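The angular/distance span division above can be sketched by labeling each pixel with its sector and ring relative to the image centre. This is an illustrative reading of the abstract (24° sectors, 15-pixel rings), not the authors' code; the centre-of-image origin is an assumption.

```python
import numpy as np

def span_labels(shape, angular_deg=24, radial_px=15):
    """Assign each pixel an (angular sector, distance ring) label pair.

    Angles are measured from the image centre; 360/24 = 15 sectors,
    and rings are 15 pixels wide.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - h / 2.0, xx - w / 2.0
    angle = (np.degrees(np.arctan2(dy, dx)) + 360) % 360
    radius = np.hypot(dy, dx)
    return (angle // angular_deg).astype(int), (radius // radial_px).astype(int)
```

Features are then computed per (sector, ring) cell; because a rotation of the slice only shifts the sector index cyclically, statistics pooled over sectors are rotation invariant, as the abstract notes.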
Stemming algorithms are no less important for Arabic than for other languages in the Information Retrieval (IR) field. Many algorithms for finding the Arabic root are available, and they fall mainly into two approaches: the light (stem)-based approach and the root-based approach. The latter is somewhat better than the former. A new root-based stemmer is proposed, and its performance is compared with the Khoja stemmer, one of the most efficient root-based stemmers. The accuracy of the proposed stemmer is 99.7%, a difference of 1.9% from the Khoja stemmer.
Many studies deal with constructing efficient solutions for real problems that have multiple conflicting objectives. In this paper we construct a decision for multi-objective problems by building a mathematical model that formulates a single objective function from the conflicting objective functions. We also present some theorems concerning this problem. A real application problem is presented to show the efficiency of our model and method. Finally, we obtain results on randomly generated problems.
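One common way to combine conflicting objectives into a single objective function, as the abstract describes, is weighted-sum scalarization. The paper's exact combination rule is not given in the abstract, so the weighted sum below is offered only as the standard textbook instance of this idea.

```python
def weighted_sum(objectives, weights):
    """Combine conflicting objective functions into one scalar objective
    via a weighted sum (a standard scalarization scheme)."""
    assert len(objectives) == len(weights)
    def combined(x):
        return sum(w * f(x) for f, w in zip(objectives, weights))
    return combined
```

For example, with two conflicting objectives f1(x) = (x - 1)² and f2(x) = (x + 1)², equal weights yield a single function whose minimizer (x = 0) trades off both goals; varying the weights traces out different compromise solutions.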