The present study critically examines the discursive representation of Arab immigrants in selected American news channels. To achieve this aim, twenty news subtitles were extracted from the ABC and NBC channels. The selected subtitles were analyzed within van Dijk’s (2000) critical discourse analysis framework. Ten discourse categories were examined to uncover the image of Arab immigrants in the American news channels. This image was examined in terms of five ideological assumptions: "us vs. them", "ingroup vs. outgroup", "victims vs. agents", "positive self-presentation vs. negative other-presentation", and "threat vs. non-threat". Analysis of the data reveals that Arab immigrants are portrayed negatively in the channels under investigation and that the televised discourse is heavily loaded with racist ideologies and perceptions towards Arab immigrants, reflecting the standpoint of the channels' owners. Finally, a number of conclusions and implications are presented.
In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are extracted using feature detection/extraction techniques and feature description. In computer vision, features define informative data. The human eye extracts information from a raw image effortlessly, but a computer cannot recognize image information directly. This is why various feature extraction techniques have been developed and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
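As a concrete illustration of feature detection and description, one of the technique families such an overview covers, the following sketch uses OpenCV's ORB detector; the detector choice and the file name are our assumptions, not the paper's:

```python
# Minimal feature detection/description sketch using OpenCV's ORB.
# Assumes an image file "sample.jpg" exists; ORB is just one example detector.
import cv2

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)        # detector + binary descriptor
keypoints, descriptors = orb.detectAndCompute(img, None)

# Each keypoint is an image location judged "informative"; each descriptor
# is a 32-byte vector that lets the computer compare and match such locations.
print(f"{len(keypoints)} keypoints, descriptor shape: {descriptors.shape}")
```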
Steganography (text-in-image hiding) methods are still considered an important issue by researchers at the present time. Steganography methods vary in their hiding styles, from simple techniques to complex ones that are resistant to potential attacks. The current research does not consider attacks on the host's secret text; instead, it proposes and implements an improved, highly confidential method of hiding text within an image, combined with a strong password method, ensuring that no change is made to the pixel values of the host image after the text is hidden. The phrase "highly confidential" denotes the low suspicion raised by the cover image. The experimental results show that the cover…
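The abstract does not specify the embedding scheme, so the sketch below shows just one hedged construction that satisfies the stated constraint of leaving pixel values untouched: the secret text is encrypted with a password-derived keystream and appended after the image file's own bytes, which viewers ignore. The MAGIC marker, salt, and helper names are hypothetical:

```python
# Hypothetical sketch: hide password-encrypted text in an image file without
# modifying any pixel values, by appending the ciphertext after the image data.
# (The abstract does not describe the paper's actual embedding scheme.)
import hashlib

MAGIC = b"STEG"  # marker so the extractor can locate the payload

def keystream(password: str, length: int) -> bytes:
    # PBKDF2-derived keystream; a real design would use an AEAD cipher.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"demo-salt",
                               100_000, dklen=length)

def hide(image_path: str, out_path: str, text: str, password: str) -> None:
    data = text.encode()
    cipher = bytes(b ^ k for b, k in zip(data, keystream(password, len(data))))
    with open(image_path, "rb") as f:
        img = f.read()
    # Pixels are untouched: the ciphertext rides after the image's own bytes.
    with open(out_path, "wb") as f:
        f.write(img + MAGIC + len(cipher).to_bytes(4, "big") + cipher)

def reveal(stego_path: str, password: str) -> str:
    blob = open(stego_path, "rb").read()
    i = blob.rfind(MAGIC)
    n = int.from_bytes(blob[i + 4:i + 8], "big")
    cipher = blob[i + 8:i + 8 + n]
    return bytes(b ^ k for b, k in zip(cipher, keystream(password, n))).decode()
```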
Kidney tumors are of different types with different characteristics, and they remain challenging in the field of biomedicine. It is very important to detect a tumor and classify it at an early stage so that appropriate treatment can be planned. Accurate estimation of kidney tumor volume is essential for clinical diagnoses and therapeutic decisions related to renal diseases. The main objective of this research is to use Computer-Aided Diagnosis (CAD) algorithms to help the early detection of kidney tumors, addressing the challenges of accurate kidney tumor volume estimation caused by extensive variations in kidney shape, size, and orientation across subjects.
In this paper, we have tried to implement an automated segmentation…
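Since the abstract is cut off before the method is described, the following is only a generic, hedged baseline for automated segmentation and volume estimation (simple intensity gating plus connected components), not the paper's actual CAD algorithm; the HU window and helper name are hypothetical:

```python
# Hedged baseline sketch: threshold a CT volume, keep the largest connected
# component, and convert its voxel count to a physical volume estimate.
import numpy as np
from scipy import ndimage

def segment_and_volume(ct_volume: np.ndarray, voxel_mm3: float,
                       lo: float = 20.0, hi: float = 300.0) -> float:
    """ct_volume: 3-D array in Hounsfield units; returns volume in mm^3."""
    mask = (ct_volume > lo) & (ct_volume < hi)      # crude intensity gate
    labels, n = ndimage.label(mask)                 # connected components
    if n == 0:
        return 0.0
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)      # keep biggest component
    return float(largest.sum()) * voxel_mm3         # voxel count -> mm^3
```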
Although Wiener filtering is the optimal tradeoff between inverse filtering and noise smoothing, when the blurring filter is singular, Wiener filtering actually amplifies the noise. This suggests that a denoising step is needed to remove the amplified noise. A wavelet-based denoising scheme provides a natural technique for this purpose.
In this paper, a new image restoration scheme is proposed. The scheme consists of two separate steps: Fourier-domain inverse filtering and wavelet-domain image denoising. The first stage is Wiener filtering of the input image; the filtered image is then passed to adaptive-threshold wavelet denoising…
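A minimal sketch of this two-stage idea is given below: Fourier-domain Wiener deconvolution followed by soft thresholding of the wavelet detail coefficients. The regularization constant, wavelet, and VisuShrink-style threshold are illustrative choices, not parameters taken from the paper:

```python
# Two-stage restoration sketch: Fourier-domain Wiener filtering, then
# wavelet-domain denoising with a soft threshold (illustrative parameters).
import numpy as np
import pywt

def wiener_deconv(blurred: np.ndarray, psf: np.ndarray, k: float = 0.01):
    """Fourier-domain Wiener filter; k approximates the noise-to-signal ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter conj(H) / (|H|^2 + k): k keeps a singular H from blowing
    # up, but residual amplified noise remains for the wavelet stage.
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

def wavelet_denoise(img: np.ndarray, wavelet: str = "db4", level: int = 2):
    """Soft-threshold the detail coefficients (VisuShrink-style threshold)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Estimate noise sigma from the finest diagonal subband (robust MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# restored = wavelet_denoise(wiener_deconv(blurred_image, psf))
```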
Text-based image clustering (TBIC) is an insufficient approach for clustering related web images. It is a challenging task to abstract the visual features of images with the support of textual information in a database. In content-based image clustering (CBIC), image data are clustered on the basis of specific features such as texture, color, boundaries, and shape. In this paper, an effective CBIC technique is presented that uses texture and statistical features of the images. The statistical features, or moments of colors (mean, skewness, standard deviation, kurtosis, and variance), are extracted from the images. These features are collected in a one-dimensional array, and then a genetic algorithm (GA) is applied for image clustering.
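A minimal sketch of the color-moment step described above (the texture features and the GA clustering stage are omitted; the function name and the excess-kurtosis convention are our choices):

```python
# Sketch of the statistical (color-moment) feature step: mean, standard
# deviation, variance, skewness, and kurtosis per color channel, flattened
# into one 1-D feature vector per image.
import numpy as np

def color_moments(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 array; returns a 15-element feature vector."""
    feats = []
    for c in range(image.shape[2]):
        ch = image[..., c].astype(np.float64).ravel()
        mu, sd = ch.mean(), ch.std()
        var = sd ** 2
        z = (ch - mu) / (sd + 1e-12)          # guard against flat channels
        skew = (z ** 3).mean()
        kurt = (z ** 4).mean() - 3.0          # excess kurtosis
        feats += [mu, sd, var, skew, kurt]
    return np.array(feats)                    # 1-D array fed to the clusterer
```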
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital media…