The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deep fakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced LLMs in identifying and differentiating manipulated imagery. We explore how these models process visual data, their effectiveness in recognizing subtle alterations, and their potential in safeguarding against misleading representations. The implications of our findings are far-reaching, impacting areas such as security, media integrity, and the trustworthiness of information on digital platforms. Moreover, the study sheds light on the limitations and strengths of current LLMs in handling complex tasks like image verification, thereby contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
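To make the evaluation setup concrete, the snippet below is a minimal sketch of how a vision-capable LLM can be queried about an image's authenticity. It assumes the OpenAI Python SDK and a vision-capable model name such as gpt-4o; the prompt wording and model choice are illustrative assumptions, not the protocol used in the study.

```python
import base64
from openai import OpenAI  # assumes the openai>=1.0 Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def authenticity_verdict(image_path: str, model: str = "gpt-4o") -> str:
    """Ask a vision-capable LLM whether an image looks authentic or manipulated."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model=model,  # assumed model name; any vision-capable model could be substituted
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Is this image authentic or a deep fake / manipulated image? "
                         "Answer 'authentic' or 'manipulated' and briefly justify."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


# Example (hypothetical file): print(authenticity_verdict("sample.jpg"))
```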
Artificial intelligence (AI) is now entering many fields of everyday life, and biometric authentication is one of them. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identifying individuals from palm print images using the Orange software, in which a hybrid of AI methods, Deep Learning (DL) and traditional Machine Learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep learning network provides the DL component of this hybrid.
H. M. Al-Dabbas, R. A. Azeez, A. E. Ali, Iraqi Journal of Computers, Communications, Control and Systems Engineering, 2023.
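As an illustration of the hybrid DL + traditional ML idea described above (the paper itself uses the Orange software), the following Python sketch extracts SqueezeNet features and feeds them to a classical SVM classifier; the pre-processing, pooling and classifier choices here are assumptions, not the authors' exact pipeline.

```python
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# SqueezeNet as a fixed feature extractor (ImageNet-pretrained weights assumed available).
squeezenet = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
squeezenet.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def extract_features(image_path: str) -> np.ndarray:
    """Run a palm print image through SqueezeNet and average-pool the feature map."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        fmap = squeezenet.features(x)                                  # (1, 512, 13, 13)
        vec = torch.nn.functional.adaptive_avg_pool2d(fmap, 1).flatten()  # 512-d descriptor
    return vec.numpy()


def train_matcher(image_paths, labels):
    """Traditional ML classifier (SVM) trained on the deep features; inputs are hypothetical lists."""
    X = np.stack([extract_features(p) for p in image_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, labels)
    return clf
```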
Color image compression is a good way to encode digital images by decreasing the number of bits needed to represent the image. The main objectives are to reduce storage space, reduce transmission cost, and maintain good quality. In the current research work, a simple and effective methodology is proposed for compressing color art digital images and obtaining a low bit rate: the matrix resulting from the scalar quantization process (reducing the number of bits from 24 to 8 bits) is compressed using displacement coding, and the remainder is then compressed using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and
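The following is a rough Python sketch of the two core steps named above, scalar quantization from 24 to 8 bits per pixel followed by LZW coding; the intermediate displacement-coding stage and the paper's exact quantizer are not reproduced, and the 3-3-2 bit allocation and file name are assumptions.

```python
import numpy as np
from PIL import Image


def quantize_24_to_8(img: np.ndarray) -> np.ndarray:
    """Scalar-quantize a 24-bit RGB image to 8 bits per pixel (assumed 3-3-2 bit allocation)."""
    r = img[..., 0] >> 5          # keep the 3 most significant bits of red
    g = img[..., 1] >> 5          # 3 bits of green
    b = img[..., 2] >> 6          # 2 bits of blue
    return ((r << 5) | (g << 2) | b).astype(np.uint8)


def lzw_encode(data: bytes) -> list[int]:
    """Plain LZW over a byte stream; returns the list of integer codes."""
    table = {bytes([i]): i for i in range(256)}
    next_code, w, out = 256, b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = next_code
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out


img = np.asarray(Image.open("art.png").convert("RGB"))   # hypothetical input file
indices = quantize_24_to_8(img)
codes = lzw_encode(indices.tobytes())
print(f"raw bytes: {img.size}, LZW codes: {len(codes)}")
```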
The hero traditionally has such admirable traits as courage, fortitude, chivalry and patriotism. In literary works, the hero is the leading character and the pivot around which all the characters and the events revolve. The characteristics of the hero usually reflect the cultural values of his time. Because, in each age, Man's attitudes towards himself and the world change, different images of the hero emerge.

In Greek mythology, the hero is frequently favoured by the gods; therefore, he is himself semi-divine. The Greek hero is of princely birth and is endowed with good physique, exceptional strength, skill in athletics and battle, energy and eloquence, like Odysseus, who is the hero of the Odyssey, long
Background: Techniques of image analysis have been used extensively to minimize interobserver variation in immunohistochemical scoring; yet image acquisition procedures are often demanding, expensive and laborious. This study aims to assess the validity of image analysis in predicting the human observer's score with a simplified image acquisition technique. Materials and methods: Formalin-fixed, paraffin-embedded tissue sections of ameloblastomas and basal cell carcinomas were immunohistochemically stained with monoclonal antibodies to MMP-2 and MMP-9. The extent of antibody positivity was quantified using an ImageJ®-based application on low-power photomicrographs obtained with a conventional camera. Results of the software were employed
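As a rough analogue of the quantification step (the study itself used an ImageJ®-based application), the sketch below estimates the antibody-positive (DAB) area fraction of a photomicrograph in Python with scikit-image; the stain-separation and Otsu-threshold choices are assumptions, not the study's protocol.

```python
from skimage import io, color, filters


def positive_area_fraction(image_path: str) -> float:
    """Estimate the fraction of DAB-positive (brown) area in an IHC photomicrograph."""
    rgb = io.imread(image_path)[..., :3]
    hed = color.rgb2hed(rgb)               # separate Hematoxylin / Eosin / DAB stains
    dab = hed[..., 2]                      # the DAB channel carries the antibody signal
    thresh = filters.threshold_otsu(dab)   # automatic threshold (assumption: Otsu suits these slides)
    return (dab > thresh).mean()           # fraction of pixels above the threshold


# Hypothetical file name:
# print(f"MMP-2 positivity: {positive_area_fraction('ameloblastoma_x10.jpg'):.1%}")
```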
A simulation study of using 2D tomography to reconstruct a 3D object is presented. The 2D Radon transform is used to create a 2D projection for each slice of the 3D object at different heights. The 2D back-projection and the Fourier slice theorem methods are used to reconstruct each 2D projection slice of the 3D object. The results showed the ability of the Fourier slice theorem method to reconstruct the general shape of the body with its internal structure, unlike the 2D back-projection method, which was able to reconstruct only the general shape of the body because of the blurring artefact. Since the Fourier slice theorem method also could not remove all of the blurring artefact, this research suggests a threshold technique to eliminate the
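A minimal Python sketch of the two reconstruction routes for a single slice is given below, using scikit-image's radon/iradon: unfiltered back-projection stands in for the blurred reconstruction, filtered back-projection (which follows from the Fourier slice theorem) for the sharper one, and the final threshold value is an assumed placeholder rather than the paper's choice.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# One 2D slice of the 3D object (a standard phantom is used here for illustration).
slice_2d = rescale(shepp_logan_phantom(), 0.5)

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(slice_2d, theta=theta)              # 2D Radon projection of the slice

# Unfiltered back-projection: recovers the outline but is heavily blurred.
bp = iradon(sinogram, theta=theta, filter_name=None)

# Filtered back-projection, derived from the Fourier slice theorem: keeps internal structure.
fbp = iradon(sinogram, theta=theta, filter_name="ramp")

# Simple threshold (assumed value) to suppress residual blurring artefacts.
threshold = 0.1 * fbp.max()
cleaned = np.where(fbp > threshold, fbp, 0.0)
```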
Caryl Churchill's Top Girls (1982) reveals how women have achieved a point of strength and independence in their battle against men's oppression throughout history. Churchill has replicated the recent transitions of the 1980s and 1990s in works that depict these movements' central concerns and contradictions as they change. Similarly, her theater is a result of the many problems and shifts in hegemonic modes of production during this time. This paper traces the achievements of the major character of Top Girls, Marlene, in her way of life and her handling of the struggles of other women around her. Because of this strength, Marlene is compared to the British Prime Minister, Thatcher. Therefore, this paper will shed light on the term of T
Steganography is the art of preventing the detection of hidden information messages. Several algorithms have been proposed for steganographic techniques, and a major portion of them is devoted to image steganography because images have a high level of redundancy. This paper proposes an image steganography technique using a dynamic threshold produced from the discrete cosine coefficients. After dividing the green and blue channels of the cover image into 1×3-pixel blocks, the method checks whether any value in a green-channel block is less than or equal to the threshold; if so, it starts to store the secret bits in the corresponding blue-channel block. To increase security, not all bits in the chosen block are used to store the secret bits. Firstly, it stores in the cente
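The fragment below is only a loose Python sketch of the embedding rule outlined in the abstract; the dynamic-threshold formula, block traversal order and the paper's additional security refinements are not reproduced, and every parameter here (mean absolute DCT coefficient as the threshold, centre-pixel LSB embedding) is an assumption.

```python
import numpy as np
from scipy.fft import dctn


def embed(cover_rgb: np.ndarray, secret_bits: list[int]) -> np.ndarray:
    """Hide bits in the blue channel, gated by a DCT-derived threshold on the green
    channel, using 1x3-pixel blocks and only the centre pixel of each chosen block."""
    stego = cover_rgb.copy()
    green = stego[..., 1].astype(float)
    blue = stego[..., 2]                    # view into stego, modified in place

    # Dynamic threshold from the green channel's DCT coefficients
    # (assumed definition: mean absolute coefficient; not the paper's exact formula).
    threshold = np.abs(dctn(green, norm="ortho")).mean()

    h, w = green.shape
    k = 0
    for row in range(h):
        for col in range(0, w - 2, 3):      # non-overlapping 1x3 blocks
            if k >= len(secret_bits):
                return stego
            if np.any(green[row, col:col + 3] <= threshold):   # block qualifies for embedding
                center = col + 1                                # centre pixel only
                blue[row, center] = (blue[row, center] & 0xFE) | secret_bits[k]
                k += 1
    return stego
```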