This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method comprises five enhancement techniques: normalization, histogram equalization, binarization, skeletonization, and fusion. The normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. The histogram equalization technique then increased the contrast of the images. Next, the binarization and skeletonization techniques were applied to differentiate between the ridge and valley structures and to obtain one-pixel-wide lines. Finally, the fusion technique merged the results of the histogram equalization stage with those of the skeletonization stage to produce the new high-contrast images. The proposed method was tested on images of varying quality from the National Institute of Standards and Technology (NIST) Special Database 14. The experimental results are very encouraging, and the enhancement method proved effective in improving images of varying quality.
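The five stages compose naturally into a single pipeline. Below is a minimal sketch in Python with scikit-image, assuming the standard target-mean/target-variance normalization and Otsu binarization; the simple averaging fusion and all parameter values are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the five-stage fingerprint enhancement pipeline described above.
# Function choices and parameters are assumptions for illustration only.
import numpy as np
from skimage import io, exposure, filters
from skimage.morphology import skeletonize

def enhance_fingerprint(path, target_mean=0.5, target_var=0.05):
    img = io.imread(path, as_gray=True).astype(np.float64)

    # 1. Normalization: map intensities to a target mean and variance.
    mean, var = img.mean(), img.var()
    norm = target_mean + np.sign(img - mean) * np.sqrt(
        target_var * (img - mean) ** 2 / var
    )

    # 2. Histogram equalization: raise the global contrast.
    equalized = exposure.equalize_hist(norm)

    # 3. Binarization: separate ridges (dark) from valleys (light).
    binary = equalized < filters.threshold_otsu(equalized)

    # 4. Skeletonization: thin the ridges to one-pixel-wide lines.
    skeleton = skeletonize(binary)

    # 5. Fusion: merge the equalized image with the skeleton
    #    (a simple average is used here as a stand-in).
    return 0.5 * equalized + 0.5 * skeleton.astype(np.float64)
```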
The TV medium derives its formal shape from the technological developments taking place across all scientific fields, which are creatively fused in the television image, composed mainly of various visual levels and formations. By the new decade of the second millennium, however, the television medium, and drama in particular, began searching for a paradigm shift in innovative aesthetic forms and advanced expressive performance that would enable it to treat what was previously impossible to visualize, while presenting what is new and innovative, both in unprecedented subjects and in familiar objective and intellectual treatments. Thus the TV medium has sought for work…
Image compression is a serious issue in computer storage and transmission. It makes efficient use of the redundancy embedded within an image itself and may also exploit the limitations of human vision or perception to discard imperceivable information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed: the first stage utilizes a lossy predictor model along with multiresolution bases and thresholding techniques; the second incorporates near-lossless compression…
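To make the modelling-plus-residual decomposition concrete, here is a minimal sketch of first-order polynomial coding for a single block; the planar model, block-wise interface, and quantization step are illustrative assumptions, and the multiresolution and thresholding stages of the proposed technique are not reproduced here.

```python
# Sketch of polynomial coding's two parts: a fitted mathematical model
# and a (quantized) residual. Block size and q_step are assumptions.
import numpy as np

def _design_matrix(h, w):
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    # Columns for the first-order model: block ~ a0 + a1*x + a2*y.
    return np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])

def polynomial_encode_block(block, q_step=8):
    h, w = block.shape
    A = _design_matrix(h, w)
    # Least-squares fit gives the mathematical-model part.
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    model = (A @ coeffs).reshape(h, w)
    # The residual carries what the model misses; quantizing it is the lossy step.
    residual_q = np.round((block - model) / q_step).astype(np.int16)
    return coeffs, residual_q

def polynomial_decode_block(coeffs, residual_q, q_step=8):
    h, w = residual_q.shape
    model = (_design_matrix(h, w) @ coeffs).reshape(h, w)
    return model + residual_q.astype(np.float64) * q_step
```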
In this paper, we describe a new method for image denoising. We analyze properties of the multiwavelet coefficients of natural images and suggest a method for computing the multiwavelet transform using the first-order approximation. The paper describes a simple and effective model for noise removal, proposing a new technique that retrieves the image by estimating it from its noisy version. The proposed algorithm mixes soft-thresholding with a mean filter, applied concurrently to the noisy image after dividing it into blocks of equal size (the blocks are processed concurrently to increase the performance of the enhancement process and to decrease the time needed to implement the proposed algorithm)…
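A per-block sketch of this mixing idea is given below; note that a standard discrete wavelet (PyWavelets' db2) stands in for the paper's multiwavelet transform, and the universal threshold, 50/50 mix, and 3×3 mean window are assumptions rather than the authors' stated choices.

```python
# Sketch: soft-threshold a block in the wavelet domain, then mix the
# result with a mean-filtered version of the same block. Each block of
# the image could be handed to this function concurrently.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def denoise_block(block, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(block, wavelet, level=level)
    # Noise level estimated from the finest diagonal subband (MAD rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(block.size))
    # Soft-threshold every detail subband; keep the approximation intact.
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    soft = pywt.waverec2(new_coeffs, wavelet)[: block.shape[0], : block.shape[1]]
    # Mix the wavelet estimate with a 3x3 mean-filtered block.
    return 0.5 * soft + 0.5 * uniform_filter(block, size=3)
```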
The present study critically examines the discursive representation of Arab immigrants in selected American news channels. To achieve the aim of this study, twenty news subtitles were extracted from the ABC and NBC channels. The selected news subtitles were analyzed within van Dijk's (2000) critical discourse analysis framework. Ten discourse categories were examined to uncover the image of Arab immigrants in the American news channels. The image of Arab immigrants was examined in terms of five ideological assumptions: "us vs. them", "ingroup vs. outgroup", "victims vs. agents", "positive self-presentation vs. negative other-presentation", and "threat vs. non-threat". Analysis of the data reveals that Arab immigrants…
The growth of developments in machine learning and image processing methods, along with the availability of medical imaging data, has led to a large increase in the utilization of machine learning strategies in the medical area. Neural networks, and in recent days mainly convolutional neural networks (CNNs), provide powerful descriptors for computer-aided diagnosis systems. Even so, several issues arise when working with medical images: many medical images possess a low signal-to-noise ratio (SNR) compared to scenes obtained with a digital camera, generally exhibit a confusingly low spatial resolution, and show very low contrast between different body tissues, which makes it difficult to co…
Abstract: The goal of the current study was to identify the relationship between addiction to self-images (selfies) and narcissistic personality disorder, and the significance of the difference in this relationship, among students of Mustansiriya University. Selfie addiction is defined as follows: a photograph that one has taken of oneself, typically with a smartphone or webcam, edited and uploaded to social networking sites and shared via social media; over time, the virtual world replaces normal life, accompanied by a loss of the sense of time, and the formation of repeated patterns increases the risk of social and personal problems. To achieve the goals…
Deep learning algorithms have recently achieved considerable success, especially in the field of computer vision. This research describes a classification method applied to a dataset of multiple image types: Synthetic Aperture Radar (SAR) images and non-SAR images. For this classification, transfer learning was used, followed by fine-tuning. Pre-trained architectures trained on the well-known ImageNet database were employed; the VGG16 model was used as a feature extractor, and a new classifier was trained on the extracted features. The input dataset consists of five classes: a SAR image class (houses) and four non-SAR image classes (cats, dogs, horses, and humans). The Conv…
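The described setup, an ImageNet-pretrained VGG16 as a frozen feature extractor with a new classifier on top, can be sketched in Keras as follows; the input size, classifier layer sizes, and optimizer are assumptions not stated in the abstract.

```python
# Sketch of transfer learning with VGG16 as a feature extractor,
# followed by a newly trained classifier for the five classes.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet-pretrained convolutional features

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),  # houses (SAR), cats, dogs, horses, humans
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# For the fine-tuning step, a few top convolutional blocks could later be
# unfrozen and training resumed with a lower learning rate.
```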