Many cinematic adaptations have been produced of the Grimms' "Little Snow-White" (1812), including Mirror Mirror (2012), a contemporary version directed by Tarsem Singh. Singh's film depicts the modern reality of women and pushes back against patriarchy by embracing the feminist ideologies of fourth-wave feminism. In doing so, it challenges the ideologies of mainstream cinema, dominated by the patriarchal elite's capitalist mode of production, which still adheres to the stereotyped patriarchal image of women's 'victimization,' 'objectification,' and 'marginalization' and no longer represents women's modern reality. This paper is a qualitative study that argues these feminist ideologies could be retained only after a process of cultural transformation from the patriarchal elite culture to the popular culture of mass media after World War II, a transformation that noticeably affected women's image in the cinema. It is thus an analytical study of Mirror Mirror that uses the textual and production approaches to popular culture, along with Marxist and feminist film theories, to unfold the feminist ideologies prevailing in the movie. The study concludes that the cultural transformation from patriarchy to the popular culture of mass media led to the emergence of counter-cinema, or cinefeminism, which encouraged the reversal of traditional gender roles in cinema. It also shows that the class conflict and economic power produced by this cultural transformation helped redefine women's role and place in society, and that maintaining the fourth-wave feminist ideology of 'women's empowerment' positively affected women and girls by reflecting their modern reality.
In this paper, an adaptive polynomial compression technique based on hard and soft thresholding of the transformed residual image is introduced, one that efficiently exploits both the spatial and frequency domains. The technique first applies polynomial coding in the spatial domain, and then uses the discrete wavelet transform (DWT) in the frequency domain to decompose the residual image, to which hard and soft thresholding are applied. The results show that the adaptive techniques improve on the traditional polynomial coding technique.
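A minimal sketch of the DWT thresholding stage described above, assuming PyWavelets and a Haar wavelet; the paper's actual wavelet family, decomposition level, and threshold value are not specified here and are placeholders.

```python
import numpy as np
import pywt

def threshold_residual(residual, wavelet="haar", level=2, value=10.0, mode="soft"):
    """Decompose a residual image with the DWT and threshold its detail coefficients."""
    coeffs = pywt.wavedec2(residual, wavelet=wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # Apply hard or soft thresholding to every detail sub-band (horizontal,
    # vertical, diagonal) at every level; the approximation band is kept intact.
    thresholded = [approx]
    for (ch, cv, cd) in details:
        thresholded.append(tuple(pywt.threshold(c, value, mode=mode) for c in (ch, cv, cd)))
    return pywt.waverec2(thresholded, wavelet=wavelet)

# Example: threshold a synthetic residual (e.g., original minus polynomial prediction).
residual = np.random.randn(64, 64) * 5.0
reconstructed = threshold_residual(residual, mode="hard")
```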
Embedding identifying data into digital media such as video, audio, or images is known as digital watermarking. In this paper, a non-blind watermarking algorithm based on the Berkeley Wavelet Transform is proposed. First, the watermark image is scrambled using the Arnold transform for higher security, and the embedding is then carried out in the transform domain of the host image. The experimental results show that the embedded watermark is imperceptible and that the algorithm has good robustness against common image processing operations.
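A minimal sketch of the Arnold-transform scrambling step, assuming a square N x N watermark and the standard cat-map form (x, y) -> (x + y, x + 2y) mod N; the iteration count, which acts as part of the secret key, is an illustrative parameter.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Scramble a square image with the Arnold (cat) map: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform expects a square image"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# The map is periodic on an N x N grid, so descrambling can be done by continuing
# the iteration until the period is reached, or by applying the inverse map the
# same number of times.
watermark = np.arange(64 * 64).reshape(64, 64)
scrambled = arnold_scramble(watermark, iterations=5)
```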
Improving the performance of visual computing systems can be achieved by removing unwanted reflections from a picture captured through glass. In reflected photographs, the reflection and transmission layers are superimposed in a linear form, and decomposing an image into these layers is often a difficult task. Plenty of classical separation methods are available in the literature, working either on a single image or requiring multiple images. The major step in reflection removal is the detection of reflection and background edges, since separation of the background and reflection layers depends on the edge categorization results. In this paper, a wavelet transform is used as a prior estimation of the background edges to separate the two layers.
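A rough sketch of using wavelet detail coefficients as an edge prior, assuming PyWavelets; the paper's exact categorization rule for background versus reflection edges is not reproduced here, only a wavelet-based edge-strength estimate.

```python
import numpy as np
import pywt

def wavelet_edge_prior(gray, wavelet="haar"):
    """Estimate an edge-strength map from one level of 2-D DWT detail sub-bands."""
    _, (ch, cv, cd) = pywt.dwt2(gray, wavelet)
    # Combine horizontal, vertical, and diagonal details into a single magnitude.
    strength = np.sqrt(ch**2 + cv**2 + cd**2)
    # Upsample back to the input size by simple pixel repetition (nearest neighbor).
    strength = np.kron(strength, np.ones((2, 2)))[: gray.shape[0], : gray.shape[1]]
    return strength

# Strong responses would be candidate background edges; weaker, blurrier responses
# are typically attributed to the reflection layer in edge-categorization schemes.
gray = np.random.rand(128, 128)
prior = wavelet_edge_prior(gray)
```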
This paper presents a new and effective procedure to extract shadow regions from high-resolution color images. The method modulates the equations of the C1-C2-C3 color space, which represents the RGB bands, to discriminate shadow regions, applying detection equations in two ways: first with a Laplace filter, and second with a kernel Laplace filter, and then comparing the two results with each other. The proposed method has been successfully tested on many Google Earth, Ikonos, and Quickbird images acquired under different lighting conditions and covering both urban areas and roads. Experimental results show that the algorithm is simple and effective.
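A minimal sketch of the C1-C2-C3 conversion followed by Laplacian edge detection, assuming the standard invariant color model c_i = arctan(channel / max(other two)); the paper's specific modulation of these equations is not reproduced here.

```python
import numpy as np
from scipy.ndimage import laplace

def rgb_to_c1c2c3(rgb):
    """Convert an HxWx3 RGB float image to the C1-C2-C3 invariant color space."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6  # avoid division by zero in dark pixels
    c1 = np.arctan(r / (np.maximum(g, b) + eps))
    c2 = np.arctan(g / (np.maximum(r, b) + eps))
    c3 = np.arctan(b / (np.maximum(r, g) + eps))
    return np.stack([c1, c2, c3], axis=-1)

# Shadows tend to stand out in the C3 component; a Laplace filter then
# highlights the boundaries of the candidate shadow regions.
rgb = np.random.rand(128, 128, 3)
c3 = rgb_to_c1c2c3(rgb)[..., 2]
shadow_edges = laplace(c3)
```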
Merging images is one of the most important technologies in remote sensing applications and geographic information systems. In this study, a camera is used to simulate image fusion by resizing images with different interpolation methods (nearest, bilinear, and bicubic). Statistical techniques are employed as an efficient merging approach in the image integration process, using two models, Local Mean Matching (LMM) and Regression Variable Substitution (RVS), alongside a spatial-frequency technique, the high-pass filter additive method (HPFA). Statistical measures have then been used to check the quality of the merged images, carried out by calculating the correlation and related statistical measures.
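A compact sketch of the high-pass filter additive (HPFA) fusion step, assuming a panchromatic image and one already-resampled multispectral band; the low-pass kernel size is an illustrative choice, not the study's setting.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hpfa_fuse(ms_band, pan, size=5):
    """Inject the high-frequency detail of the PAN image into a multispectral band."""
    high_pass = pan - uniform_filter(pan, size=size)  # PAN minus its low-pass version
    return ms_band + high_pass

# In practice ms_band would be the interpolated (nearest / bilinear / bicubic)
# multispectral band at the PAN resolution; correlation with the original band
# is one of the quality checks the study mentions.
pan = np.random.rand(256, 256)
ms_band = np.random.rand(256, 256)
fused = hpfa_fuse(ms_band, pan)
quality = np.corrcoef(fused.ravel(), ms_band.ravel())[0, 1]
```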
With the rapid development of smart devices, people's lives have become easier, especially for visually disabled or special-needs people. New achievements in machine learning and deep learning allow people to identify and recognise the surrounding environment. In this study, the efficiency and high performance of deep learning architectures are used to build an image classification system for both indoor and outdoor environments. The proposed methodology starts by collecting two datasets (indoor and outdoor) from separate source datasets. In the second step, the collected data are split into training, validation, and test sets. The pre-trained GoogleNet and MobileNet-V2 models are then trained using the indoor and outdoor sets.
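A minimal transfer-learning sketch in PyTorch, assuming torchvision's MobileNet-V2 with ImageNet weights; the class count, learning rate, and dummy batch are placeholders, not the study's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # e.g., indoor vs. outdoor (illustrative)
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Replace the final classifier layer so the network predicts our classes.
model.classifier[1] = nn.Linear(model.last_channel, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (3-channel 224x224 images).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```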
A new approach is presented in this study to determine the optimal edge detection threshold value. The approach is based on extracting small homogeneous blocks from targets with unequal means. From these blocks, a small image with known edges is generated (the edges are the lines between adjacent blocks), so the simulated edges can be taken as true edges. The true simulated edges are then compared with the edges detected in the small generated image using different threshold values. The comparison is based on computing the mean square error between the simulated edge image and the edge image produced by the edge detector methods, computed for the total edge image (Er) and for the edge region.
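A schematic sketch of the threshold-selection idea, assuming a Sobel-based detector from scikit-image; the block-generation step and the separate error terms (total versus edge-region MSE) are simplified to a single MSE sweep.

```python
import numpy as np
from skimage import filters

def best_threshold(gray, true_edges, thresholds):
    """Pick the gradient threshold whose binary edge map minimizes MSE vs. the known edges."""
    gradient = filters.sobel(gray)
    errors = []
    for t in thresholds:
        detected = (gradient > t).astype(float)
        errors.append(np.mean((detected - true_edges) ** 2))
    return thresholds[int(np.argmin(errors))]

# Simulated test image: two homogeneous blocks with unequal means, so the
# true edge is the known vertical line between them.
gray = np.hstack([np.full((32, 16), 0.2), np.full((32, 16), 0.8)])
true_edges = np.zeros_like(gray)
true_edges[:, 15:17] = 1.0
t_opt = best_threshold(gray, true_edges, np.linspace(0.01, 0.5, 50))
```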
In this study, dynamic encryption techniques are explored as an image cipher method that generates S-boxes similar to AES S-boxes with the help of a private key belonging to the user, enabling images to be encrypted or decrypted using those S-boxes. The study consists of two stages: the dynamic S-box generation method and the encryption-decryption method. S-boxes should have a non-linear structure, and for this reason the Knuth-Durstenfeld shuffle algorithm (K/DSA), one of the pseudo-random techniques, is used to generate S-boxes dynamically. The biggest advantage of this approach is the production of key-dependent, dynamically generated S-boxes.
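A minimal sketch of generating a key-dependent S-box with the Durstenfeld (Fisher-Yates) shuffle; deriving the PRNG seed from the key via SHA-256 is an illustrative choice, not necessarily the scheme used in the study.

```python
import hashlib
import random

def generate_sbox(private_key: bytes):
    """Shuffle the 256 byte values with a key-seeded PRNG to get a bijective S-box."""
    seed = int.from_bytes(hashlib.sha256(private_key).digest(), "big")
    rng = random.Random(seed)
    sbox = list(range(256))
    # Durstenfeld's in-place shuffle: swap each position with a random earlier-or-equal one.
    for i in range(255, 0, -1):
        j = rng.randint(0, i)
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

sbox = generate_sbox(b"user-private-key")
inverse = [0] * 256
for i, v in enumerate(sbox):
    inverse[v] = i  # the inverse S-box, needed for decryption
```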
Digital images are open to several manipulations due to robust image editing tools and the dropped cost of compact cameras and mobile phones. Image credibility has therefore become doubtful, particularly where photos carry weight, for instance in news reports and insurance claims in a criminal court. Image forensic methods therefore measure the integrity of an image by applying the different, highly technical methods established in the literature. The present work deals with copy-move forgery images from the Media Integration and Communication Center Forgery (MICC-F2000) dataset, detecting and revealing the tampered areas in the image; the image is sectioned into non-overlapping blocks using Simple Linear Iterative Clustering (SLIC).
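A condensed sketch of a block-matching approach to copy-move detection; the hash-based matching of quantized blocks shown here is a common baseline and an assumption, not necessarily the exact pipeline of the present work.

```python
import numpy as np

def find_duplicate_blocks(gray, block=8, quant=8):
    """Flag pairs of non-overlapping blocks whose coarsely quantized content matches."""
    h, w = gray.shape
    seen = {}
    matches = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray[y : y + block, x : x + block]
            # Quantize so nearly identical (copied) regions hash to the same key.
            key = (patch // quant).astype(np.uint8).tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches  # pairs of block origins that look copied from one another

gray = np.random.randint(0, 256, (64, 64))
gray[32:40, 32:40] = gray[0:8, 0:8]  # simulate a copy-moved patch
print(find_duplicate_blocks(gray))
```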
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital media.