The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing Chat has transformed scientific inquiry, particularly through large pre-trained vision-language models. This transformation is opening new frontiers in fields such as image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field of immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deepfakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced models in identifying and differentiating manipulated imagery. We explore how they process visual data, their effectiveness in recognizing subtle alterations, and their potential for safeguarding against misleading representations. The implications of our findings reach into security, media integrity, and the trustworthiness of information on digital platforms. The study also sheds light on the limitations and strengths of current LLMs in handling complex tasks such as image verification, contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
The advancements in Information and Communication Technology (ICT) over the previous decades have significantly changed how people transmit and store their information over the Internet and other networks. One of the main challenges, therefore, is to keep this information safe against attacks. Many researchers and institutions have recognized the importance and benefits of cryptography in achieving efficient and effective secure communication. This work adopts a novel technique for a secure data cryptosystem based on chaos theory. The proposed algorithm generates a 2-dimensional key matrix with the same dimensions as the original image, containing random numbers obtained from the 1-dimensional logistic chaotic map for given con
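The abstract is truncated, so the exact key-generation and combination steps are not given. The sketch below is a minimal illustration of the general idea only: iterating the logistic map to fill a key matrix the size of the image and combining it with the pixels by XOR. The parameter names x0 and r, the byte quantization, and the XOR step are assumptions for illustration, not the paper's method.

```python
import numpy as np

def logistic_key_matrix(shape, x0=0.41, r=3.99):
    """Build a key matrix the same size as the image from the 1-D logistic map
    x_{n+1} = r * x_n * (1 - x_n). Here x0 (initial condition) and r (control
    parameter) play the role of the secret key."""
    h, w = shape
    x = x0
    keys = np.empty(h * w, dtype=np.uint8)
    for i in range(h * w):
        x = r * x * (1.0 - x)          # iterate the chaotic map
        keys[i] = int(x * 256) % 256   # quantize to a byte value
    return keys.reshape(h, w)

def encrypt(image, x0=0.41, r=3.99):
    """XOR the grayscale image with the chaotic key matrix (decryption is identical)."""
    key = logistic_key_matrix(image.shape, x0, r)
    return np.bitwise_xor(image, key)

# Example: encrypt and recover a small random grayscale image
img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(encrypt(cipher), img)   # XOR twice restores the original
```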
Structure type and disorder have become important questions in catalyst design, with the most active catalysts often noted to be “disordered” or “amorphous” in nature. To quantify the effects of disorder and structure type systematically, a test set of manganese(III,IV) oxides was developed and their reactivity as oxidants and catalysts was tested against three substrates: methylene blue, hydrogen peroxide, and water. We find that disorder destabilizes the materials thermodynamically, making them stronger chemical oxidants but not necessarily better catalysts. For the disproportionation of H2O2 and the oxidative decomposition of methylene blue, MnOx-mediated direct oxidation competes with catalytically mediated oxidation, making the most
The effect of using three different interpolation methods (nearest neighbour, linear, and non-linear) on a 3D sinogram to restore the data missing when the angular difference is greater than 1° (a 1° step being considered the optimum 3D sinogram) is presented. Two reconstruction methods are adopted in this study: the back-projection method and the Fourier slice theorem method. The results show that the second method is a promising reconstruction approach when combined with linear interpolation, provided the angular difference is less than 20°.
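As a rough illustration of the linear-interpolation step, the sketch below resamples a coarsely measured sinogram onto a 1° angular grid. The sinogram layout (rows = projection angles, columns = detector bins), the function name, and the parameter names are assumptions for this sketch; the paper's actual data layout and reconstruction code are not shown in the abstract.

```python
import numpy as np

def fill_missing_angles(sinogram, measured_step_deg, target_step_deg=1.0):
    """Linearly interpolate a coarsely sampled sinogram (rows = projection
    angles, columns = detector bins) onto a finer angular grid.

    sinogram          : 2-D array sampled every `measured_step_deg` degrees
    measured_step_deg : angular spacing of the measured projections
    target_step_deg   : desired spacing (1 degree treated as the optimum)
    """
    n_meas, n_det = sinogram.shape
    measured_angles = np.arange(n_meas) * measured_step_deg
    target_angles = np.arange(0.0, measured_angles[-1] + target_step_deg,
                              target_step_deg)

    filled = np.empty((target_angles.size, n_det))
    for d in range(n_det):  # interpolate each detector bin along the angle axis
        filled[:, d] = np.interp(target_angles, measured_angles, sinogram[:, d])
    return filled

# Example: a sinogram measured every 10 degrees, refined to 1-degree steps
coarse = np.random.rand(19, 64)          # 0..180 degrees in 10-degree steps
fine = fill_missing_angles(coarse, measured_step_deg=10.0)
print(fine.shape)                         # (181, 64)
```

The refined sinogram could then be fed to a standard filtered back-projection or Fourier-slice reconstruction routine.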
The searching process that uses a binary codebook combining the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e. a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopt a new method that rotates each binary code word in this codebook from 90° to 270° in steps of 90°. Each code word is then classified by its angle into one of four types of binary codebook (i.e. Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s
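A minimal sketch of the rotation idea is given below: each binary code word is rotated in 90° steps and an input BTC bit plane is matched against all rotations. The 4x4 block size, the Hamming-distance matching criterion, and the function names are assumptions made for illustration, since the abstract is truncated before the matching rule is described.

```python
import numpy as np

def rotations(code_word):
    """Return the 0, 90, 180, and 270-degree rotations of a square binary block."""
    return [np.rot90(code_word, k) for k in range(4)]

def best_match(bit_plane, codebook):
    """Find the codebook entry (and rotation) with the smallest Hamming
    distance to the input BTC bit plane."""
    best = (None, None, np.inf)
    for idx, word in enumerate(codebook):
        for k, rotated in enumerate(rotations(word)):
            dist = np.count_nonzero(bit_plane != rotated)
            if dist < best[2]:
                best = (idx, 90 * k, dist)
    return best  # (codebook index, rotation angle in degrees, Hamming distance)

# Example: a tiny codebook of 4x4 binary code words and a random input bit plane
codebook = [np.random.randint(0, 2, (4, 4)) for _ in range(8)]
plane = np.random.randint(0, 2, (4, 4))
print(best_match(plane, codebook))
```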
Wireless networks and communications have witnessed tremendous development and growth in recent years and up to the present. They span a diverse group of networks, from well-known wireless communication networks to those not linked to an infrastructure, such as telephone, sensor, and wireless networks. These networks are increasingly used in important applications that send and receive critical data and information in relatively unsafe environments, where cybersecurity technologies face the important challenge of protecting them, reducing cybercrime, detecting intrusions into electronic networks, and supporting penetration testing. Therefore, these environments must be monitored and protected from hacking and malicio
Background: Legionella pneumophila (L. pneumophila) is a gram-negative bacterium that causes Legionnaires’ disease as well as Pontiac fever. Objective: To determine the frequency of Legionella pneumophila in pneumonic patients, to assess the clinical utility of diagnosing Legionella pneumonia by urinary antigen testing (LPUAT) in terms of sensitivity and specificity, to compare the results obtained from patients by the urinary antigen test with quantitative Real-Time PCR (RT-PCR) on serum samples, and to determine the frequency of serogroup 1 and other serogroups of L. pneumophila. Methods: A total of 100 pneumonic patients (community-acquired pneumonia) were enrolled in this study during the period from October 2016 to April 2017; 92 sam
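Since the clinical utility of LPUAT is framed in terms of sensitivity and specificity, the standard definitions are recalled below. These are the usual epidemiological formulas, with true/false positives and negatives counted against the reference test; they are not taken from the paper itself.

```latex
% Standard definitions of diagnostic sensitivity and specificity,
% with TP/FP/TN/FN counted against the reference test (here, RT-PCR).
\[
\text{Sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{Specificity} = \frac{TN}{TN + FP}
\]
```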
Finding communities of connected individuals in complex networks is challenging, yet crucial for understanding different real-world societies and their interactions. Recently, attention has turned to discovering the dynamics of such communities. However, detecting accurate community structures that evolve over time adds additional challenges. Almost all state-of-the-art algorithms are designed around seemingly the same principle, treating the problem as a coupled optimization model that simultaneously identifies community structures and their evolution over time. Unlike these studies, the current work aims to consider three measures individually, i.e. the intra-community score, the inter-community score, and the evolution of community over
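To make the first two measures concrete, the sketch below computes a simple intra-community score (fraction of edges inside communities) and inter-community score (fraction of edges crossing communities) for a partition of an undirected graph. These particular edge-fraction definitions and the function name are illustrative assumptions; the truncated abstract does not give the paper's exact formulations.

```python
def community_scores(edges, communities):
    """Return (intra_score, inter_score) for an undirected graph given as an
    edge list and a partition given as a list of node sets."""
    membership = {node: cid for cid, nodes in enumerate(communities) for node in nodes}
    intra = sum(1 for u, v in edges if membership[u] == membership[v])
    inter = len(edges) - intra
    total = max(len(edges), 1)
    return intra / total, inter / total

# Example: two triangles joined by a single bridge edge
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
communities = [{1, 2, 3}, {4, 5, 6}]
print(community_scores(edges, communities))  # (6/7, 1/7)
```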
Cadmium is known to be harmful to human health, mainly via contaminated drinking water, food supplies, tobacco, and industrial pollutants. The aim of this study was to determine the toxicity of a new cadmium(II) complex, Bis[5-(p-nitrophenyl)-?4-phenyl-1,2,4-triazole-3-dithiocarbamatohydrazide] cadmium(II) Hydra (0.5), and compare it with the anticancer drug cyclophosphamide (CP) in female albino mice. This complex causes several alterations in the enzymatic activity of Glutamate Pyruvate Transaminase (GPT) and Alkaline Phosphatase (ALP) in three organs after the treatment of mice with different doses of the new cadmium(II) complex (0.09/0.25 ml, 0.18/0.5 ml, and 0.25 mg/0.7 ml per 30 g of mous
Image classification is the process of finding common features in images from various classes and using them to categorize and label those images. The main obstacles in the image classification process are the abundance of images, the high complexity of the data, and the shortage of labeled data. The cornerstone of image classification is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new “hybrid learning” approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class
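A minimal sketch of this hybrid pattern is shown below: VGG-16 (without its classification head) extracts convolutional features, and a classical classifier is trained on top of them. The choice of an SVM, the 224x224 input size, and the random stand-in data are assumptions for illustration, since the abstract is cut off before the seven classifiers are named.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# VGG-16 without its classification head; global average pooling yields a
# 512-dimensional feature vector per image.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg",
                  input_shape=(224, 224, 3))

def extract_features(images):
    """images: array of shape (n, 224, 224, 3) with pixel values in 0..255."""
    return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

# Illustrative stand-in data: in practice these would be real labeled images.
X_train = np.random.randint(0, 256, (8, 224, 224, 3))
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])

features = extract_features(X_train)            # deep convolutional features
clf = SVC(kernel="rbf").fit(features, y_train)  # classical ML classifier on top
print(clf.predict(extract_features(X_train[:2])))
```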