The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has transformed scientific inquiry, particularly through large pre-trained vision-language models. This transformation is opening new frontiers in fields including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field of immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deepfakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced LLMs in identifying and differentiating manipulated imagery. We explore how these models process visual data, their effectiveness in recognizing subtle alterations, and their potential for safeguarding against misleading representations. The implications of our findings are far-reaching, affecting security, media integrity, and the trustworthiness of information on digital platforms. The study also sheds light on the strengths and limitations of current LLMs in handling complex tasks such as image verification, contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
Evolutionary algorithms (EAs), as global search methods, have proved more robust than their local-heuristic counterparts for detecting protein complexes in protein-protein interaction (PPI) networks. The robustness of these EAs typically stems from their components and parameters; these components are solution representation, selection, crossover, and mutation. Unfortunately, almost all EA-based complex detection methods suggested in the literature were designed with only canonical or traditional components. Further, the topological structure of the protein network is the main information used in the design of almost all such components. The main contribution of this paper is to formulate a more robust EA.
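The four components named above (solution representation, selection, crossover, and mutation) can be sketched in a minimal, illustrative EA for complex detection. The toy network, bitstring encoding, density-based fitness, and all parameter values below are assumptions for illustration, not the paper's design.

```python
import random

# Toy PPI network as an edge set; a candidate complex is a bitstring
# over the nodes (1 = node belongs to the complex).
EDGES = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)}
N = 6

def fitness(bits):
    # Internal edge density of the selected subgraph (illustrative objective)
    nodes = [i for i, b in enumerate(bits) if b]
    if len(nodes) < 2:
        return 0.0
    inside = sum(1 for i in nodes for j in nodes
                 if i < j and ((i, j) in EDGES or (j, i) in EDGES))
    possible = len(nodes) * (len(nodes) - 1) / 2
    return inside / possible

def evolve(pop_size=20, gens=50, pm=0.1):
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        # Canonical tournament selection
        parents = [max(random.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        # Canonical one-point crossover
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, N)
            children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # Canonical bit-flip mutation
        pop = [[1 - g if random.random() < pm else g for g in c] for c in children]
    return max(pop, key=fitness)

best = evolve()
```

On this toy graph the triangle {0, 1, 2} has internal density 1.0, so a run of the loop above tends to recover it or the other dense triangle.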
In this paper, RBF-based multistage auto-encoders are used to detect IDS attacks. RBF networks have numerous applications in many real-life settings. The proposed technique involves two parts: a multistage auto-encoder and an RBF network. The multistage auto-encoder is applied to select the top, most sensitive features from the input data. The selected features from the multistage auto-encoder are fed as input to the RBF network, which is trained to classify the input data into two labels: attack or no attack. The experiment was carried out using MATLAB 2018 on a dataset comprising 175,341 cases, each of which involves 42 features, and validated using 82,332 cases. To the knowledge of the authors, the approach developed here has been applied for the first time to detect IDS attacks.
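The second stage of such a pipeline, an RBF network trained to separate attack from no-attack samples, can be sketched as follows. Here random toy features stand in for the auto-encoder's selected features, and the center count, gamma, and least-squares training are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(X, centers, gamma=1.0):
    # Gaussian RBF activations: exp(-gamma * ||x - c||^2) per sample/center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy two-class data standing in for auto-encoder features:
# "attack" samples cluster near +1, "no attack" near -1
X = np.vstack([rng.normal(1, 0.3, (50, 4)), rng.normal(-1, 0.3, (50, 4))])
y = np.array([1] * 50 + [0] * 50)

# Pick RBF centers from the training data, then solve the output
# weights by least squares (classic RBF-network training)
centers = X[rng.choice(len(X), 10, replace=False)]
Phi = rbf_features(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Threshold the network output to get the attack / no-attack label
pred = (rbf_features(X, centers) @ w > 0.5).astype(int)
acc = (pred == y).mean()
```

Because the output layer is linear, training reduces to one least-squares solve once the centers are fixed, which is why RBF networks are fast to fit compared with backpropagation-trained classifiers.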
The study includes the isolation and identification of Candida spp. causing urinary tract infections (UTIs) in patients attending Al-Yarmouk Hospital.
In this paper, a hybrid system is designed for securing transmitted or stored text messages (Arabic and English) by embedding the message in a colored image used as a cover file, based on the LSB (Least Significant Bit) algorithm applied in a dispersed way, and by employing the Hill encryption algorithm to encrypt the message before it is hidden. A 3×3 key was used for encryption, with its inverse for decryption. The system achieves good PSNR values (75-86 dB), which vary with the length of the message and the image resolution.
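The two stages of such a scheme can be sketched as follows: Hill encryption with a 3×3 key modulo 26, then LSB embedding of the ciphertext bits into pixel values. The key matrix (a standard textbook example), the padding character, and the sequential rather than dispersed embedding are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

# A 3x3 Hill key that is invertible mod 26 (textbook example, not the paper's)
KEY = np.array([[6, 24, 1], [13, 16, 10], [20, 17, 15]])

def hill_encrypt(text, key):
    # Map A-Z to 0-25, pad with 'X' to a multiple of 3, multiply blocks by the key
    nums = [ord(c) - ord('A') for c in text]
    while len(nums) % 3:
        nums.append(ord('X') - ord('A'))
    out = []
    for i in range(0, len(nums), 3):
        out += list(key @ np.array(nums[i:i + 3]) % 26)
    return ''.join(chr(int(n) + ord('A')) for n in out)

def lsb_embed(pixels, message):
    # Overwrite the least significant bit of each pixel with one message bit
    bits = [int(b) for c in message for b in format(ord(c), '08b')]
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
    return out

cipher = hill_encrypt('ATTACKATDAWN', KEY)
cover = np.arange(256, dtype=np.uint8)   # stand-in for cover-image pixels
stego = lsb_embed(cover, cipher)
```

Decryption inverts the key matrix modulo 26 and multiplies the ciphertext blocks by it; since LSB changes alter each pixel by at most 1, the high PSNR figures reported above are expected.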
Pan-sharpening (image fusion) is the procedure of merging suitable information from two or more images into a single image. Image fusion techniques allow different information sources to be combined to improve the quality of an image and increase its utility for a particular application. In this research, six pan-sharpening methods have been implemented between the panchromatic and multispectral images: Ehlers, color normalized, Gram-Schmidt, local mean and variance matching, and the Daubechies of rank two and Symlets of rank four wavelet transforms. Two images captured by two different sensors, Landsat-8 and WorldView-2, have been adopted for the fusion. Different fidelity metrics like MS
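The statistics-matching idea behind the local mean and variance matching method listed above can be sketched in its simplest, global form: the high-resolution panchromatic band is shifted and rescaled so its mean and standard deviation match those of each multispectral band. The array shapes and synthetic values are illustrative; the actual method applies this per local window.

```python
import numpy as np

def mean_variance_match(pan, ms_band):
    # Rescale pan so it carries ms_band's global mean and standard deviation
    return (pan - pan.mean()) * (ms_band.std() / pan.std()) + ms_band.mean()

rng = np.random.default_rng(0)
pan = rng.normal(100, 30, (64, 64))   # stand-in panchromatic image
ms = rng.normal(50, 10, (64, 64))     # stand-in multispectral band
fused = mean_variance_match(pan, ms)
```

After matching, the fused band keeps the spatial detail of the panchromatic image while its radiometry agrees with the multispectral band, which is what fidelity metrics then quantify.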
Some degree of noise is always present in any electronic device that transmits or receives a signal. For televisions, this signal is the broadcast data transmitted over cable or received at the antenna; for digital cameras, the signal is the light that hits the camera sensor. In any case, noise is unavoidable. In this paper, electronic noise has been generated on TV-satellite images by using variable resistors connected to the transmitting cable, and the contrast of edges has been determined. The method has been applied by capturing images from a TV-satellite channel (Al-Arabiya) with different resistors. The results show that increasing the resistance always produced higher noise.
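One simple way to quantify the edge contrast measured in this experiment is the mean gradient magnitude of the image. The sketch below applies this measure to a synthetic frame with additive noise of growing amplitude, standing in for the captured satellite frames and the increasing resistor values; the frame, noise levels, and the specific contrast definition are assumptions for illustration.

```python
import numpy as np

def edge_contrast(img):
    # Mean gradient magnitude as a simple edge-contrast figure
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean())

rng = np.random.default_rng(0)
frame = np.tile(np.linspace(0, 255, 64), (64, 1))  # smooth synthetic frame
levels = [0, 10, 30]                                # noise std per "resistor"
contrasts = [edge_contrast(frame + rng.normal(0, s, frame.shape))
             for s in levels]
```

On a smooth frame the gradient is small, so any added noise sharply raises the measured contrast, consistent with the trend reported above.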
Cyber security is a term used to describe a collection of technologies, procedures, and practices for protecting the online environment of a user or an organization. Medical images are among the most important and sensitive kinds of data in computer systems, and medical requirements dictate that all patient data, including images, be encrypted by healthcare companies before being transferred over computer networks. This paper presents a new direction in encryption research: the image is encrypted based on extracted features, which are used to generate a key for the encryption process. The encryption process starts by applying edge detection. After dividing the bits of the edge image into 3×3 windows, the diffusions
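The idea of deriving the key from the image's own edge features can be sketched as follows: a binary edge map is split into 3×3 windows, each window's bits form one key byte (only 8 of the 9 bits are used here), and the resulting key stream XOR-encrypts the pixels. The gradient-threshold edge detector, the window-to-byte mapping, and the plain XOR step are simplified stand-ins, not the paper's scheme.

```python
import numpy as np

def edge_map(img, thresh=30):
    # Crude edge detection: threshold the gradient magnitude
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def key_from_edges(edges):
    # Each non-overlapping 3x3 window of edge bits yields one key byte
    h, w = edges.shape
    keys = []
    for r in range(0, h - 2, 3):
        for c in range(0, w - 2, 3):
            bits = edges[r:r + 3, c:c + 3].flatten()[:8]
            keys.append(int(''.join(map(str, bits)), 2))
    return np.array(keys, dtype=np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (12, 12), dtype=np.uint8)  # stand-in medical image
key = key_from_edges(edge_map(img))
stream = np.resize(key, img.size).reshape(img.shape)  # repeat key over image
cipher = img ^ stream                                 # XOR encryption
recovered = cipher ^ stream                           # XOR again to decrypt
```

Because the key is a function of the image content, the receiver must be able to regenerate the same edge map to decrypt, which is the design trade-off this line of research explores.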
The research shows that the visual image plays an important role for Farzdaq in the matter of aesthetic perception: it enables him to convey artistic feeling and mental perception that raise astonishment and admiration through his ability to link the optics by means of the suggestive image, carrying us to a new imagined vision full of visual images.