Image compression is an important problem in computer storage and transmission. It works by making efficient use of the redundancy embedded within an image itself; in addition, it may exploit the limitations of human vision or perception to reduce imperceptible information. Polynomial coding is a modern image compression technique based on a modelling concept that effectively removes the spatial redundancy embedded within the image; it is composed of two parts, the mathematical model and the residual. In this paper, a two-stage technique is proposed. The first stage utilizes a lossy predictor model along with a multiresolution base and thresholding techniques. The second stage incorporates a near-lossless compression scheme on top of the first stage. The test results of both stages are promising and implicitly enhance the performance of the traditional polynomial model in terms of compression ratio while preserving image quality.
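The abstract does not spell out the predictor itself; as a minimal sketch, assuming a first-order (planar) polynomial model fitted to each block by least squares with a hypothetical residual threshold, the modelling-plus-residual idea can be illustrated as follows:

```python
import numpy as np

def polynomial_code_block(block, threshold=4):
    """Model a block as a first-order polynomial surface z = a0 + a1*x + a2*y
    (least-squares fit), then zero out residuals below `threshold`
    (a hypothetical parameter controlling the lossy/near-lossless trade-off)."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    model = (A @ coeffs).reshape(h, w)
    residual = block.astype(float) - model
    residual[np.abs(residual) < threshold] = 0   # thresholding step
    return coeffs, residual

# Example: code one synthetic 8x8 block
block = (np.arange(64).reshape(8, 8) + np.random.randint(0, 3, (8, 8))).astype(np.uint8)
coeffs, residual = polynomial_code_block(block)
print(coeffs, np.count_nonzero(residual))
```

The model coefficients and the thresholded residual are what a coder of this kind would entropy-code; a larger threshold discards more of the residual and raises the compression ratio at some cost in quality.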
In this paper, we prove some coincidence and common fixed point theorems for a pair of discontinuous weakly compatible self-mappings satisfying a generalized contractive condition in the setting of a cone b-metric space, under the assumption that the underlying cone is non-normal. Our results generalize some recent results.
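The abstract does not restate the underlying definition; for reference, a standard definition of a cone b-metric (assumed to be the setting the paper works with) is:

```latex
% Standard definition of a cone b-metric; included for reference, not quoted from the paper.
Let $E$ be a real Banach space, $P \subset E$ a cone, and $\preceq$ the partial order
induced by $P$. A mapping $d : X \times X \to E$ is a \emph{cone $b$-metric} with
coefficient $s \ge 1$ if, for all $x, y, z \in X$:
\begin{enumerate}
  \item $\theta \preceq d(x,y)$, and $d(x,y) = \theta$ if and only if $x = y$;
  \item $d(x,y) = d(y,x)$;
  \item $d(x,y) \preceq s\,[\,d(x,z) + d(z,y)\,]$.
\end{enumerate}
```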
Metal cutting processes still represent the largest class of manufacturing operations, and turning is the most commonly employed material removal process. This research focuses on the analysis of the thermal field of the oblique machining process. The finite element method (FEM) software DEFORM 3D V10.2 was used together with experimental work carried out using infrared imaging equipment, so the study includes both experimental (hardware) and simulation (software) results. The thermal experiments were conducted on AA6063-T6 using different tool obliquity angles, cutting speeds, and feed rates. The results show that the temperature relatively decreases when tool obliquity increases at different cutting speeds and feed rates, and it
The TV medium derives its formal shape from the technological developments taking place in all scientific fields, which are creatively fused in the television image, which consists mainly of various visual levels and formations. By the new decade of the second millennium, however, the television medium, and mainly (drama), began looking for a paradigm shift in the aesthetic, formal, innovative fields and in the advanced expressive, performative fields that would enable it to develop in treating what was previously impossible to visualize, while presenting what is new and innovative in the field of unprecedented, and even familiar, objective and intellectual treatments. Thus the TV medium has sought for work
Recently, the internet has enabled users to transmit digital media in the easiest manner. Despite this facility, it may lead to several threats concerning the confidentiality of transferred media contents, such as media authentication and integrity verification. For these reasons, data hiding methods and cryptography are used to protect the contents of digital media. In this paper, an enhanced method of image steganography combined with visual cryptography is proposed. A secret logo (a binary image) of size 128x128 is encrypted by applying (2 out of 2 share) visual cryptography to it to generate two secret shares. During the embedding process, a cover red, green, and blue (RGB) image of size (512
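The exact share-generation and embedding steps are not given in the truncated abstract; a minimal sketch of plain 2-out-of-2 visual cryptography, assuming a 2x2 subpixel expansion and using a random stand-in for the 128x128 logo, looks like this:

```python
import numpy as np

# Sketch of 2-out-of-2 visual cryptography: the two shares carry identical patterns
# for white secret pixels and complementary patterns for black secret pixels,
# so overlaying (OR-ing) the shares reveals the logo.
PATTERNS = np.array([[[1, 0], [0, 1]],
                     [[0, 1], [1, 0]]], dtype=np.uint8)   # 1 = black subpixel

def make_shares(secret, rng=None):
    """secret: binary array with 1 = black pixel. Returns two shares twice the size."""
    rng = rng or np.random.default_rng()
    h, w = secret.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    share2 = np.zeros_like(share1)
    for i in range(h):
        for j in range(w):
            p = rng.integers(0, 2)                 # random pattern for share 1
            q = 1 - p if secret[i, j] else p       # complement only for black pixels
            share1[2*i:2*i+2, 2*j:2*j+2] = PATTERNS[p]
            share2[2*i:2*i+2, 2*j:2*j+2] = PATTERNS[q]
    return share1, share2

# Stand-in for the 128x128 secret logo (the real logo is not available here).
logo = (np.random.rand(128, 128) > 0.5).astype(np.uint8)
s1, s2 = make_shares(logo)
stacked = s1 | s2   # black logo pixels become fully black 2x2 blocks when shares overlap
```

Each share on its own is random noise; only combining the two shares recovers the logo, which is the basic property any (2, 2) visual cryptography scheme provides.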
The present study critically examines the discursive representation of Arab immigrants in selected American news channels. To achieve the aim of this study, twenty news subtitles were extracted from the ABC and NBC channels. The selected news subtitles were analyzed within van Dijk’s (2000) critical discourse analysis framework. Ten discourse categories were examined to uncover the image of Arab immigrants in the American news channels. The image of Arab immigrants was examined in terms of five ideological assumptions, including "us vs. them", "ingroup vs. outgroup", "victims vs. agents", "positive self-presentation vs. negative other-presentation", and "threat vs. non-threat". Analysis of the data reveals that Arab immig
This study aims to observe and analyze the propaganda image discourse of Daesh, to know how it markets fear through the structure of its symbols, and to uncover the direct meanings and hidden connotations, together with the ideology that the image presents.
The study is descriptive and qualitative, and the method is an analytic survey using a semiotic approach.
The most important results of the study are the following:
- Daesh employs the image in the manufacture of fear through all of its components: the symbolism of savagery, body language, color, uniform clothing, and professional shooting.
- The indicative meaning of fear promoted by Daesh is based on the manufacture of the «Holy», meaning places that cannot be touched or insulted.
- Daesh used in its propaganda
The growth of developments in machine learning and image processing methods, along with the availability of medical imaging data, has brought a big increase in the use of machine learning strategies in the medical area. The use of neural networks, and in particular, in recent days, convolutional neural networks (CNNs), provides powerful descriptors for computer-aided diagnosis systems. Even so, there are several issues when working with medical images: many medical images have a low signal-to-noise ratio (SNR) compared to scenes obtained with a digital camera, generally a confusingly low spatial resolution, and very low contrast between different tissues of the body, which makes it difficult to co
Many approaches of different complexity already exist for edge detection in color images. Nevertheless, the question remains of how different the results are when computationally costly techniques are employed instead of simple ones. This paper presents a comparative study of two approaches to color edge detection that reduce noise in the image. The approaches are based on the Sobel operator and the Laplace operator. Furthermore, an efficient algorithm for implementing the two operators is presented. The operators have been applied to real images, and the results are presented in this paper. It is shown that the quality of the results increases when the second-derivative operator (the Laplace operator) is used, and noise is reduced in a good
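The paper's efficient implementation is not reproduced in the truncated abstract; as a rough sketch, assuming an OpenCV pipeline with a placeholder input path and a small Gaussian blur as the noise-reduction step, the two operators can be compared like this:

```python
import cv2
import numpy as np

# Minimal sketch (not the paper's algorithm): apply the Sobel (first-derivative)
# and Laplace (second-derivative) operators to a color image, channel by channel.
# "photo.png" is a placeholder path.
img = cv2.imread("photo.png")                  # BGR color image
img = cv2.GaussianBlur(img, (3, 3), 0)         # simple noise reduction before differentiation

# Sobel: gradient magnitude per channel, then the maximum over the three channels
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = np.sqrt(gx ** 2 + gy ** 2).max(axis=2)

# Laplace: second derivative per channel, then the maximum absolute response
laplace_edges = np.abs(cv2.Laplacian(img, cv2.CV_64F, ksize=3)).max(axis=2)

cv2.imwrite("sobel_edges.png", np.uint8(255 * sobel_edges / sobel_edges.max()))
cv2.imwrite("laplace_edges.png", np.uint8(255 * laplace_edges / laplace_edges.max()))
```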
The deep learning algorithm has recently achieved a lot of success, especially in the field of computer vision. This research aims to describe a classification method applied to a dataset of multiple types of images (Synthetic Aperture Radar (SAR) images and non-SAR images). For this classification, transfer learning followed by fine-tuning was used, and pre-trained architectures trained on the well-known ImageNet image database were employed. The VGG16 model was used as a feature extractor, and a new classifier was trained based on the extracted features. The input data consist of five classes, including the SAR image class (houses) and the non-SAR image classes (Cats, Dogs, Horses, and Humans). The Conv
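The abstract does not give the classifier head or training settings; a minimal sketch of the described setup, with assumed layer sizes, input resolution, and optimizer, might look like this in Keras:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Sketch of the described setup (hyperparameters are assumptions): VGG16 pre-trained on
# ImageNet serves as a frozen feature extractor, and a new classifier head is trained
# on top for the five classes (houses, cats, dogs, horses, humans).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # feature extraction; unfreeze later to fine-tune

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # assumed head size
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),  # five classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets not shown here
```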