The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This transformation is opening new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deepfakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced LLMs in identifying and differentiating manipulated imagery. We explore how these models process visual data, their effectiveness in recognizing subtle alterations, and their potential in safeguarding against misleading representations. The implications of our findings are far-reaching, impacting areas such as security, media integrity, and the trustworthiness of information on digital platforms. Moreover, the study sheds light on the limitations and strengths of current LLMs in handling complex tasks like image verification, thereby contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
A new series of 4,4'-((2-(aryl)-1H-benzo[d]imidazole-1,3(2H)-diyl)bis(methylene))diphenols (3a-g) was successfully synthesized by cyclization of the reduction product of the bis-Schiff bases (2) with aryl aldehydes bearing a phenolic hydroxyl group in the presence of acetic acid. The structures of these compounds were identified by FT-IR, 1H NMR, 13C NMR and EI-MS. Antioxidant capacity was screened by DPPH and FRAP assays; in both assays the compounds showed greater antioxidant capacity than BHT. Compounds 3b and 3c showed antioxidant capacity slightly lower than that of ascorbic acid. A docking study of these compounds as DNA polymerase III inhibitors was carried out. The docking results demonstrated that increased hindrance around the phenolic hydroxyl
A new de-blurring technique was proposed in order to reduce or remove blur in images. The proposed filter was designed from a Lagrange interpolation calculation, adjusted by fuzzy rules and supported by a wavelet decomposition technique. The proposed Wavelet Lagrange Fuzzy filter gives good results for both fully and partially blurred regions in images.
An Auto Crop method is used for the detection and extraction of signature, logo and stamp regions from the document image. This method improves the performance of security systems based on signature, logo and stamp images; it extracts these regions from the original document image while keeping the content information of the cropped images. The Auto Crop method also reduces the time cost associated with document content recognition. The method consists of preprocessing, feature extraction and classification. The HSL color space is used to extract color features from the cropped images, and the k-Nearest Neighbors (KNN) classifier is used for classification.
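The following is a minimal sketch of the feature-extraction and classification stages described above, assuming the cropped signature/logo/stamp regions are already available as images; the function names, bin count and neighbor count are illustrative choices, not the paper's exact configuration.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hsl_histogram(crop_bgr, bins=16):
    """Concatenated per-channel histogram in the HSL (HLS in OpenCV) color space."""
    hls = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HLS)
    feats = []
    for ch in range(3):
        ranges = [0, 181] if ch == 0 else [0, 256]   # OpenCV hue is in [0, 180]
        hist = cv2.calcHist([hls], [ch], None, [bins], ranges).flatten()
        feats.append(hist / (hist.sum() + 1e-9))     # normalize each channel
    return np.concatenate(feats)

# train_crops / train_labels would come from the Auto Crop stage (hypothetical data):
# knn = KNeighborsClassifier(n_neighbors=3)
# knn.fit([hsl_histogram(c) for c in train_crops], train_labels)
# predicted = knn.predict([hsl_histogram(query_crop)])
```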
In this paper, a simple medical image compression technique is proposed, based on utilizing the residual of an autoregressive (AR) model along with bit-plane slicing (BPS) to exploit spatial redundancy efficiently. The results showed that the compression performance of the proposed technique is improved about twofold on average compared to the traditional autoregressive model, while preserving image quality by considering only the significant bit planes that contribute most to the image.
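As a hedged illustration of the bit-plane-slicing idea only (the AR prediction stage is omitted), the sketch below splits an 8-bit residual into bit planes and reconstructs it from the most significant ones; the variable names and the number of retained planes are assumptions.

```python
import numpy as np

def significant_bit_planes(residual_u8, keep=4):
    """Split an 8-bit array into bit planes and reconstruct from the top `keep` planes."""
    planes = [(residual_u8 >> b) & 1 for b in range(8)]          # LSB .. MSB
    kept = planes[8 - keep:]                                      # most significant planes only
    recon = sum(p.astype(np.uint16) << (8 - keep + i) for i, p in enumerate(kept))
    return recon.astype(np.uint8)

# residual_u8 would be the AR-model prediction error mapped to the [0, 255] range;
# only the `keep` retained planes would then be stored or entropy coded.
```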
Deepfake is a type of artificial intelligence used to create convincing image, audio, and video hoaxes; it concerns celebrities and ordinary people alike because such fakes are easy to manufacture. Deepfakes are hard for people and current approaches to recognize, especially high-quality ones. As a defense against deepfake techniques, various methods to detect deepfakes in images have been suggested. Most of them have limitations, such as working with only one face per image, or requiring the face to be forward-facing with both eyes and the mouth open, depending on which part of the face they analyze. Moreover, few focus on the impact of pre-processing steps on the detection accuracy of the models. This paper introduces a framework design focused on this aspect.
Cuneiform symbol recognition is a complicated task in pattern recognition and image analysis because of problems related to cuneiform symbols, such as distortion, and unwanted objects introduced by the binarization process, such as spots and writing lines. This paper presents new algorithms to solve these problems and reach uniform cuneiform symbol recognition results (selecting an appropriate binarization method, erasing writing lines and spots) based on the statistical skewness measure, image morphology and distance transform concepts. The experimental results show that the proposed algorithms give excellent results and can be adopted.
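A rough sketch of how these building blocks could fit together is given below; the skewness threshold, the adaptive-threshold parameters and the overall flow are assumptions made for illustration, not the paper's exact algorithm.

```python
import cv2
import numpy as np
from scipy.stats import skew

def binarize_and_clean(gray):
    """Pick a binarization method from intensity skewness, remove spots, build a distance map."""
    s = skew(gray.ravel().astype(np.float64))
    if abs(s) < 0.5:                                   # roughly symmetric histogram: global Otsu
        _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:                                              # skewed histogram: local adaptive threshold
        bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 35, 10)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)  # morphological opening removes isolated spots
    dist = cv2.distanceTransform(bw, cv2.DIST_L2, 5)   # distance map used for further symbol analysis
    return bw, dist
```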
Combining multi-focus images of the same scene captured at different focus distances can produce clearer and sharper images with a larger depth of field. Most available image fusion algorithms give good results; however, they do not take the focus of the image into account. In this paper a fusion method is proposed to increase the focus of the fused image and to achieve the highest image quality using the suggested focusing filter and the Dual-Tree Complex Wavelet Transform (DT-CWT). The focusing filter consists of a combination of two filters, a Wiener filter and a sharpening filter, and is applied to each source image before the fusion operation performed with the DT-CWT. The common fusion rules, the average-fusion rule and the maximum-fusion rule, are used.
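The focusing filter described above can be sketched as a Wiener smoothing pass followed by a sharpening convolution; the kernel, window size and helper name below are assumptions, and the DT-CWT fusion stage itself is not shown.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import convolve

# Standard 3x3 sharpening kernel (illustrative choice).
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float64)

def focusing_filter(img, wiener_size=5):
    """Denoise a grayscale image with a Wiener filter, then sharpen it."""
    smoothed = wiener(img.astype(np.float64), mysize=wiener_size)
    return convolve(smoothed, SHARPEN, mode='nearest')

# Each source image would be passed through focusing_filter() before the
# DT-CWT fusion step that combines them (fusion step not shown here).
```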
In recent years there has been a profound evolution in computer science and technology spanning several fields. Within this evolution, Content-Based Image Retrieval (CBIR) belongs to the image processing field. Several image retrieval methods can now easily extract features as a result of this progress. Finding effective image retrieval techniques has therefore become an extensive area of concern for researchers. An image retrieval technique refers to a system used to search for and retrieve images from a huge database of digital images. In this paper, the author proposes a new method for image retrieval based on multiple representations of an image in a Convolutional Neural Network (CNN).
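An illustrative CBIR sketch in the same spirit, representing each image with features from a pretrained CNN and ranking database images by cosine similarity; the ResNet-18 backbone, the preprocessing pipeline and the similarity measure are assumptions rather than the paper's exact configuration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained CNN backbone with the classifier head removed: outputs a 512-d feature vector.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def embed(path):
    """L2-normalized CNN feature vector for one image file."""
    x = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    return torch.nn.functional.normalize(model(x), dim=1).squeeze(0)

# Hypothetical usage with a list of database image paths:
# db_feats = torch.stack([embed(p) for p in database_paths])
# scores = db_feats @ embed(query_path)        # cosine similarity, since vectors are normalized
# top_k = scores.topk(5).indices
```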
Today, the use of iris recognition is expanding globally as it is among the most accurate and reliable biometric features in terms of uniqueness and robustness. The reduction or compression of large databases of iris images has therefore become an urgent requirement. In general, image compression is the process of removing insignificant or redundant information from the image details, which implicitly makes efficient use of the redundancy embedded within the image itself. In addition, it may exploit the limitations of human visual perception to discard imperceptible information.
This paper deals with reducing the size of the image, namely reducing the number of bits required to represent the image.