The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing Chat has transformed scientific inquiry, particularly through large pre-trained vision-language models. This transformation is opening new frontiers in fields including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field of immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing authentic images from deep fakes, a task that has become critically important in a world increasingly reliant on digital media. Our investigation rigorously assesses the capabilities of these advanced LLMs in identifying and differentiating manipulated imagery. We examine how these models process visual data, how effectively they recognize subtle alterations, and their potential for safeguarding against misleading representations. The implications of our findings are far-reaching, touching on security, media integrity, and the trustworthiness of information on digital platforms. The study also sheds light on the strengths and limitations of current LLMs in handling complex tasks such as image verification, thereby contributing valuable insights to the ongoing discourse on AI ethics and digital media reliability.
Background: The main objective was to compare the outcome of single-layer interrupted extra-mucosal sutures with that of double-layer suturing in the closure of colostomies.
Subjects and Methods: Sixty-seven patients undergoing closure of colostomy were assigned in a prospective, randomized fashion to either single-layer extra-mucosal anastomosis (Group A) or double-layer anastomosis (Group B). Primary outcome measures included the mean time taken for anastomosis, immediate postoperative complications, and the mean duration of hospital stay. Secondary outcome measures assessed the postoperative return of bowel function and the overall mean cost. Statistical analysis was performed using the chi-square test and Student's t-test.
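The statistical comparison described above can be sketched as follows. All values below are illustrative, not the study's data: Student's t-test compares a continuous outcome (e.g., anastomosis time) between the two groups, and the chi-square test compares categorical outcomes (e.g., complication counts).

```python
import numpy as np
from scipy import stats

# Hypothetical anastomosis times in minutes (illustrative only).
group_a = np.array([18, 20, 17, 19, 21, 18, 20])  # single-layer
group_b = np.array([27, 30, 28, 26, 29, 31, 27])  # double-layer

# Student's t-test compares the mean anastomosis times.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Chi-square test on a hypothetical 2x2 table:
# rows = groups, columns = complication yes/no.
table = np.array([[3, 30], [6, 28]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"chi2 = {chi2:.2f}, p = {chi_p:.4f}, dof = {dof}")
```

With a 2x2 contingency table the chi-square test has one degree of freedom; `chi2_contingency` applies Yates' continuity correction by default.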
A quantitative description of the microstructure governs the characteristics of a material. Micro-structures are revealed by the various heat treatments applied when the material is prepared. Depending on the microstructure, mechanical properties such as hardness, ductility, strength, toughness, and corrosion resistance also vary. Microstructures are characterized by morphological features such as the volume fractions of the different phases and the particle size. The relative volume fractions of the phases must be known in order to correlate them with the mechanical properties. In this work, an automated scheme based on image processing techniques is presented to calculate the relative volume fractions of the phases, namely Ferrite, Martensite, and Bainite, present in the
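A minimal sketch of such a volume-fraction calculation is intensity thresholding of a grayscale micrograph: each phase is assumed to occupy a distinct intensity band, and its area (volume) fraction is the share of pixels in that band. The phase-to-intensity mapping and the threshold values below are illustrative assumptions, not the paper's calibrated scheme.

```python
import numpy as np

def phase_volume_fractions(gray, thresholds):
    """Estimate relative volume fractions of phases from a grayscale
    micrograph by intensity thresholding. The mapping of phases to
    intensity bands (dark -> Ferrite, mid -> Martensite, bright ->
    Bainite) is an assumption for illustration only."""
    gray = np.asarray(gray)
    total = gray.size
    fractions = {}
    lo = 0
    edges = list(thresholds) + [256]
    for name, hi in zip(["Ferrite", "Martensite", "Bainite"], edges):
        mask = (gray >= lo) & (gray < hi)
        fractions[name] = mask.sum() / total
        lo = hi
    return fractions

# Synthetic 2x2 "micrograph": two dark pixels, one mid, one bright.
img = np.array([[10, 20], [120, 240]], dtype=np.uint8)
fr = phase_volume_fractions(img, thresholds=[100, 200])
print(fr)  # Ferrite 0.5, Martensite 0.25, Bainite 0.25
```

The fractions sum to one by construction, so they can be compared directly across micrographs of different sizes.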
Features are descriptions of image content, which may be corners, blobs, or edges. Scale-Invariant Feature Transform (SIFT) is a widely used, patented feature extraction and description algorithm in computer vision; it is divided into four main stages. This paper performs image feature extraction using SIFT and chooses the most descriptive features by blurring the image with a Gaussian function, applying the Otsu segmentation algorithm to the image, and then running the Scale-Invariant Feature Transform feature extraction algorithm on the segmented portions. Alternatively, the SIFT feature extraction algorithm is preceded by gray-image normalization and binary thresholding as another preprocessing step. SIFT is a robust algorithm and gives more accura
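The Otsu segmentation step in the pipeline above can be sketched in a few lines: Otsu's method picks the grayscale threshold that maximizes the between-class variance of the histogram. This is a self-contained numpy sketch; in practice the subsequent SIFT stage would use a library implementation such as OpenCV's `cv2.SIFT_create()`, which is not assumed here.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the grayscale histogram. Pixels below
    the returned value form one class, the rest the other."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # all pixels on one side: no valid split
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal toy image: a dark region (value 20) and a bright region (200).
img = np.concatenate([np.full(50, 20), np.full(50, 200)]).astype(np.uint8)
t = otsu_threshold(img)
print(t)  # → 21 (the first threshold separating the two modes)
```

Any threshold between the two modes separates them equally well here; the implementation returns the first one that attains the maximum variance.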
The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method with Vector Quantization (VQ), i.e., a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopted a new method that rotates each binary code word in the codebook from 90° to 270° in steps of 90°. We then grouped each code word according to its angle into four types of binary codebooks (Pour, Flat, Vertical, or Zigzag). The proposed scheme was used to decrease the time of the coding pro
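The rotation step described above can be sketched with `np.rot90`, which rotates a 2-D binary code word counter-clockwise in 90° increments. The dictionary layout and the idea of indexing rotations by angle are illustrative assumptions; the paper's Pour/Flat/Vertical/Zigzag grouping criteria are not reproduced here.

```python
import numpy as np

def rotated_codebook(codebook):
    """Expand a small binary codebook with the 90°, 180°, and 270°
    rotations of each code word, keyed by rotation angle. A sketch of
    the rotation idea only; the angle-based grouping into the four
    codebook types is not implemented here."""
    expanded = {}
    for idx, word in enumerate(codebook):
        word = np.asarray(word)
        expanded[idx] = {angle: np.rot90(word, k)
                         for k, angle in enumerate((0, 90, 180, 270))}
    return expanded

# A single 2x2 binary code word and its four rotations.
cb = rotated_codebook([[[1, 0], [0, 0]]])
print(cb[0][90])   # the 90° counter-clockwise rotation
```

Because rotations map code words onto each other, only one representative per rotation class needs to be searched, which is what shortens the coding time.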
Alzheimer's disease (AD) is an age-related, progressive neurodegenerative disorder characterized by loss of memory and cognitive decline. It is the main cause of disability among older people. The rapid increase in the number of people living with AD and other forms of dementia, due to the aging population, represents a major challenge to health and social care systems worldwide. Degeneration of brain cells due to AD starts many years before the clinical manifestations become apparent. Early diagnosis of AD will contribute to the development of effective treatments that could slow, stop, or prevent significant cognitive decline. Consequently, early diagnosis of AD may also be valuable in detecting patients with dementia who have n
... Show MoreBeen Antkhav three isolates of soil classified as follows: Bacillus G3 consists of spores, G12, G27 led Pal NTG treatment to kill part of the cells of the three isolates varying degrees treatment also led to mutations urged resistance to streptomycin and rifampicin and double mutations
Glaucoma is a visual disorder and one of the leading causes of visual impairment. Glaucoma disrupts the transmission of visual information to the brain. Unlike other eye illnesses such as myopia and cataracts, the damage caused by glaucoma cannot be cured. The Disc Damage Likelihood Scale (DDLS) can be used to assess glaucoma. The proposed methodology suggests a simple method to extract the neuroretinal rim (NRM) region, divide the region into four sectors, calculate the width of each sector, and select the minimum value for use in the DDLS factor. This feature was fed to an SVM classification algorithm, and the DDLS successfully classified Glaucoma d
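The sector-width step described above can be sketched as follows, assuming the rim width has already been measured at each of 360 angular positions. The sector layout (four contiguous 90° sectors) and the use of per-sector mean width are illustrative assumptions about the method, and the units are arbitrary.

```python
import numpy as np

def min_sector_width(rim_widths_deg):
    """Divide per-degree neuroretinal-rim width measurements into four
    90° sectors, take each sector's mean width, and return the minimum
    value, which would then feed into the DDLS factor. Sector layout
    and averaging are assumptions for illustration."""
    w = np.asarray(rim_widths_deg, dtype=float)  # 360 samples
    sectors = w.reshape(4, -1)                   # 4 sectors x 90 degrees
    return sectors.mean(axis=1).min()

# Toy rim profile: uniform width 4.0, thinned to 1.0 in one sector
# (arbitrary units). The thinned sector determines the result.
widths = np.full(360, 4.0)
widths[180:270] = 1.0
print(min_sector_width(widths))  # → 1.0
```

Taking the minimum over sectors matches the DDLS idea that glaucomatous damage is judged by the narrowest part of the rim.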
Diabetic retinopathy is an eye disease caused by pressure on the nerve fibers of the eye. It is a major cause of blindness in middle-aged as well as older age groups; therefore, it is essential to diagnose it early. One of the challenges in diagnosing the disease is detecting the edges of the image: some important edges may be missed as a result of the noise around the corners.
Therefore, in order to reduce these effects, this paper proposes a new technique for edge detection using traditional operators in combination with fuzzy logic based on a fuzzy inference system. The results show that the proposed fuzzy edge detection technique performs better than traditional techniques, with the blood vessels markedly better detected than in the original image.
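A minimal sketch of combining a traditional gradient operator with a fuzzy inference step is shown below: gradient magnitude is mapped to an edge-membership degree through a linear ramp between two breakpoints. The central-difference operator and the membership breakpoints are illustrative assumptions, not the paper's operators or rule base.

```python
import numpy as np

def fuzzy_edge_map(gray, low=20.0, high=60.0):
    """Traditional gradient operator plus a simple fuzzy step:
    gradient magnitude below `low` gets membership 0 (not an edge),
    above `high` gets 1 (definitely an edge), with a linear ramp in
    between. Breakpoint values are illustrative assumptions."""
    g = np.asarray(gray, dtype=float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # horizontal central difference
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # vertical central difference
    mag = np.hypot(gx, gy)               # gradient magnitude
    return np.clip((mag - low) / (high - low), 0.0, 1.0)

# A vertical step edge: membership is 1 along the step, 0 elsewhere.
img = np.zeros((5, 6))
img[:, 3:] = 100
edges = fuzzy_edge_map(img)
print(edges[2])  # → [0. 0. 1. 1. 0. 0.]
```

The soft membership map, rather than a hard binary decision, is what lets weak but genuine vessel edges survive thresholding.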
Epileptic seizures are risky because they happen randomly and, in some cases, lead to death. The standard epileptic seizure monitoring system involves video/EEG (electro-encephalography), which bothers the patient, as EEG electrodes are attached to the patient's head.
Consequently, helping or alerting the patient before a seizure is one of the issues that attract the attention of researchers and designers, and a spectrum of portable seizure detection systems based on non-EEG signals is available on the market.
The aim of this article is to provide a literature survey of the latest articles that cover many issues in the field of designing portable real-time seizure detection systems, including the use of multiple