The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing Chat, together with large pre-trained vision-language models, has opened new frontiers in fields including image processing and digital media verification. At the heart of this evolution, our research focuses on image authenticity verification, a rapidly growing area of immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deep fakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced models in identifying and differentiating manipulated imagery: how they process visual data, how effectively they recognize subtle alterations, and their potential for safeguarding against misleading representations. The implications of our findings reach into security, media integrity, and the trustworthiness of information on digital platforms. Moreover, the study sheds light on the strengths and limitations of current LLMs in handling complex tasks such as image verification, thereby contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
Botnet detection poses a challenging problem in numerous fields such as cybersecurity, law and order, finance, healthcare, and so on. A botnet is a group of compromised Internet-connected devices controlled by cyber criminals to launch coordinated attacks and carry out various malicious activities. Because botnets evolve continuously in response to the countermeasures deployed by both network-based and host-based detection techniques, conventional techniques fail to provide sufficient protection against botnet threats. Thus, machine learning approaches have been established for detecting and classifying botnets for cybersecurity. This article presents a novel dragonfly algorithm with multi-class support vector machines for botnet detection.
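As a rough illustration of how a swarm optimizer of this kind can be wrapped around a multi-class SVM, the sketch below tunes the SVM's C and gamma with a heavily simplified dragonfly-style update (inertia plus attraction toward the best position found so far). The synthetic dataset, search bounds, and update rule are illustrative assumptions, not the article's exact formulation:

```python
# Simplified dragonfly-style swarm search tuning a multi-class RBF SVM.
# Dataset, bounds, and the reduced update rule are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)

# Search space: log10(C) in [-2, 3], log10(gamma) in [-4, 1]
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])

def fitness(pos):
    C, gamma = 10 ** pos
    clf = SVC(C=C, gamma=gamma)          # one-vs-one multi-class by default
    return cross_val_score(clf, X, y, cv=3).mean()

n_agents, n_iter = 8, 15
pos = rng.uniform(lo, hi, size=(n_agents, 2))
step = np.zeros_like(pos)
best_pos, best_fit = None, -np.inf

for t in range(n_iter):
    fits = np.array([fitness(p) for p in pos])
    if fits.max() > best_fit:
        best_fit, best_pos = fits.max(), pos[fits.argmax()].copy()
    w = 0.9 - 0.5 * t / n_iter           # inertia decays over iterations
    # Reduced update: inertia plus attraction to the best ("food") position
    step = w * step + rng.uniform(0, 1, pos.shape) * (best_pos - pos)
    pos = np.clip(pos + step, lo, hi)

print("best CV accuracy %.3f at C=%.3g, gamma=%.3g"
      % (best_fit, 10 ** best_pos[0], 10 ** best_pos[1]))
```

The full dragonfly algorithm also includes separation, alignment, cohesion, and enemy-avoidance terms; they are omitted here to keep the sketch short.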
In this paper, the botnet detection problem is cast as a feature selection problem, and a genetic algorithm (GA) is used to search the entire space of feature subsets for the most significant combination of features. A Decision Tree (DT) classifier serves as the objective function, directing the proposed GA toward combinations of features that correctly classify activities into normal traffic and botnet attacks. Two datasets, UNSW-NB15 and the Canadian Institute for Cybersecurity Intrusion Detection System 2017 (CICIDS2017), are used for evaluation. The results reveal that the proposed DT-aware GA can effectively find the relevant features.
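A minimal sketch of this DT-guided GA is given below, assuming one-point crossover, bit-flip mutation, and truncation selection; the synthetic data and all rates are placeholders for UNSW-NB15/CICIDS2017 and the paper's actual settings:

```python
# GA feature selection with a Decision Tree as the fitness function.
# Chromosomes are binary masks over the feature columns.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=1)
n_feat, pop_size, n_gen = X.shape[1], 20, 10

def fitness(mask):
    if not mask.any():                       # empty subsets score zero
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((pop_size, n_feat)) < 0.5   # random binary chromosomes
for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]    # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.05     # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```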
In the task of intrinsic plagiarism detection, cases where a reference corpus is absent must be dealt with; the task relies entirely on inconsistencies within a given document. Detection of internal plagiarism is treated as a classification problem and can be estimated by taking into consideration self-based information from the given document.
The core contribution of the work proposed in this paper concerns the document representation: the document, along with the disjoint segments generated from it, is represented as weight vectors that capture its main content, where each element of these vectors holds its average weight rather than its raw frequency.
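To make the representation concrete, the sketch below builds such vectors with a TF-IDF weighting averaged over segments and flags segments whose similarity to the whole-document vector falls below a threshold; the weighting scheme, segment size, and threshold are illustrative assumptions rather than the paper's exact method:

```python
# Hedged sketch: segments whose average-weight vectors deviate from the
# whole-document vector are flagged as stylistic outliers.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def segment(text, size=40):
    """Split a document into disjoint fixed-size word segments."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def detect_outliers(document, threshold=0.5):
    segments = segment(document)
    seg_m = TfidfVectorizer().fit_transform(segments).toarray()
    # Average weight per term over segments stands in for raw frequency
    doc_v = seg_m.mean(axis=0)
    sims = seg_m @ doc_v / (np.linalg.norm(seg_m, axis=1)
                            * np.linalg.norm(doc_v) + 1e-12)
    return [i for i, s in enumerate(sims) if s < threshold]
```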
Nowadays, the rapid development of multimedia technology and the transmission of digital images over the Internet expose digital images to several attacks during transmission. Protecting digital images has therefore become increasingly important.
To this end, an image encryption method is proposed that adopts Rivest Cipher 4 (RC4) and Deoxyribonucleic Acid (DNA) encoding to increase the secrecy and randomness of the image without affecting its quality. Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Correlation Coefficient (CC), and histogram analysis are used as evaluation metrics to assess the performance of the proposed method. The results indicate that the proposed method is secure against attacks.
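The following sketch shows the general shape of such a pipeline: a textbook RC4 keystream XORed with the image, followed by a symbolic DNA encoding. The DNA rule used (00→A, 01→C, 10→G, 11→T) and the way the two stages are chained are assumptions, not the paper's exact design:

```python
# RC4 keystream XOR followed by DNA encoding of the cipher image.
import numpy as np

def rc4_keystream(key: bytes, n: int) -> np.ndarray:
    S = list(range(256))                 # key-scheduling algorithm (KSA)
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0                 # pseudo-random generation (PRGA)
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return np.array(out, dtype=np.uint8)

# DNA rule: 00->A, 01->C, 10->G, 11->T (one of the eight standard rules)
DNA = np.array(list("ACGT"))

def dna_encode(img: np.ndarray) -> np.ndarray:
    bits = np.unpackbits(img.reshape(-1))
    pairs = bits.reshape(-1, 2)          # two bits per nucleotide
    return DNA[pairs[:, 0] * 2 + pairs[:, 1]]

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
ks = rc4_keystream(b"secret key", img.size).reshape(img.shape)
cipher = img ^ ks                        # RC4 stage: XOR with keystream
dna = dna_encode(cipher)                 # DNA stage: symbolic encoding
```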
A series of new 4-(((4-(5-(aryl)-1,3,4-oxadiazol-2-yl)benzyl)oxy)methyl)-2,6-dimethoxyphenols (6a-i) were synthesized via cyclization of 4-(((4-hydroxy-3,5-dimethoxybenzyl)oxy)methyl)benzohydrazide with substituted carboxylic acids in the presence of phosphorus oxychloride. The resulting compounds were characterized by IR, 1H-NMR, 13C-NMR, and HRMS data. 2,2-Diphenyl-1-picrylhydrazyl (DPPH) and ferric reducing antioxidant power (FRAP) assays were used to screen their antioxidant properties. Compounds 6i and 6h exhibited significant antioxidant ability in both assays. Furthermore, the type of substituent and its position on the aryl group attached at position five of the 1,3,4-oxadiazole ring play an important role in enhancing or diminishing the antioxidant activity.
Polyimides are widely used in high-temperature plastics, adhesives, dielectrics, photoresists, nonlinear optical materials, separation membrane materials, and Langmuir-Blodgett (LB) films, and are commonly regarded as the most heat-resistant polymers. This work involved the synthesis of a new bismaleimide homopolymer and copolymer in several steps. First, compound (1) (bis[4-(aminophenyl) Schiff base]tolidine) was synthesized via condensation of o-tolidine with two moles of 4-aminoacetophenone. Secondly, compound (1) was combined with maleic anhydride to form compound (2) (4,4′-bis[4-(N-maleamic acid)phenyl Schiff base]tolidine). Thirdly, a dehydration reaction was carried out, affording compound (3) (4,4′-bis[4-(N-maleimidyl)phenyl Schiff base]tolidine).
The advancements in Information and Communication Technology (ICT) within the previous decades have significantly changed how people transmit and store their information over the Internet and networks, so one of the main challenges is keeping this information safe against attacks. Many researchers and institutions have realized the importance and benefits of cryptography in achieving efficient and effective secure communication. This work adopts a novel technique for a secure data cryptosystem based on chaos theory. The proposed algorithm generates a two-dimensional key matrix with the same dimensions as the original image, filled with random numbers obtained from the one-dimensional logistic chaotic map for given control parameters and initial values.
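A minimal sketch of this key-matrix generation is shown below, assuming the standard logistic map x_{n+1} = r·x_n·(1−x_n) and an XOR combination with the image; the parameter values and the way the key is applied are illustrative:

```python
# 2-D key matrix generated from the 1-D logistic chaotic map.
import numpy as np

def logistic_key_matrix(shape, r=3.99, x0=0.61):
    n = shape[0] * shape[1]
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)            # chaotic regime for r near 4
        xs[i] = x
    return (xs * 255).astype(np.uint8).reshape(shape)

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in
key = logistic_key_matrix(img.shape)     # same dimensions as the image
cipher = img ^ key                       # one simple way to apply the key
plain = cipher ^ key                     # XOR is self-inverse
assert np.array_equal(plain, img)
```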
The effect of using three different interpolation methods (nearest neighbour, linear, and non-linear) on a 3D sinogram to restore the data missing when the angular difference exceeds 1° (a 1° sinogram being considered the optimum) is presented. Two reconstruction methods are adopted in this study: the back-projection method and the Fourier slice theorem method. The results show that the second reconstruction method, combined with linear interpolation, is promising when the angular difference is less than 20°.
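As a rough illustration of the restoration step, the sketch below fills the skipped projection angles by 1-D interpolation along the angle axis, per detector bin; the angle grids and the stand-in data are assumptions:

```python
# Restoring skipped projection angles of a sinogram by interpolation
# along the angle axis, one detector bin at a time.
import numpy as np

def fill_angles(sino_sparse, theta_sparse, theta_full, kind="linear"):
    """sino_sparse: (n_sparse_angles, n_bins); angles in degrees."""
    n_bins = sino_sparse.shape[1]
    out = np.empty((len(theta_full), n_bins))
    for b in range(n_bins):
        if kind == "nearest":
            idx = np.abs(theta_full[:, None] - theta_sparse[None, :]).argmin(1)
            out[:, b] = sino_sparse[idx, b]
        else:  # linear; a spline here would be the non-linear variant
            out[:, b] = np.interp(theta_full, theta_sparse, sino_sparse[:, b])
    return out

theta_full = np.arange(0.0, 180.0, 1.0)      # the "optimum" 1-degree grid
theta_sparse = np.arange(0.0, 180.0, 10.0)   # e.g. measured every 10 degrees
sino_sparse = np.random.rand(len(theta_sparse), 64)  # stand-in data
sino_full = fill_angles(sino_sparse, theta_sparse, theta_full)
```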
The searching process using a binary codebook combining the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e. a full codebook search for each input image vector to find the best-matched code word, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method that rotates each binary code word in this codebook from 90° to 270° in steps of 90°. Each code word was then systematized according to its angle into four types of binary codebooks (Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing small codebooks.
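The sketch below illustrates the rotation idea: each binary code word is expanded into its 0°/90°/180°/270° variants and an input block is matched by Hamming distance. The block size and the matching rule here are assumed for illustration, not the paper's exact classification:

```python
# Rotated-variant matching of binary BTC code words.
import numpy as np

def rotations(word):
    """word: (4, 4) binary block; returns the 0/90/180/270-degree variants."""
    return {angle: np.rot90(word, k=angle // 90) for angle in (0, 90, 180, 270)}

def best_match(block, codebook):
    """Match against every code word and its rotations (Hamming distance)."""
    best = (None, None, np.inf)
    for idx, word in enumerate(codebook):
        for angle, rot in rotations(word).items():
            d = np.count_nonzero(block != rot)
            if d < best[2]:
                best = (idx, angle, d)
    return best                          # (codeword index, angle, distance)

codebook = [np.random.randint(0, 2, (4, 4)) for _ in range(8)]  # stand-in
block = np.random.randint(0, 2, (4, 4))
idx, angle, dist = best_match(block, codebook)
```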