Background/Objectives: This research aims to develop a modified image representation framework for Content-Based Image Retrieval (CBIR) based on grayscale input images, Zernike Moments (ZMs), Local Binary Patterns (LBP), the Y color space, the Slantlet Transform (SLT), and the Discrete Wavelet Transform (DWT). Methods/Statistical analysis: This study surveyed and analysed three standard datasets: WANG V1.0, WANG V2.0, and Caltech 101. The Caltech 101 set contains images of objects belonging to 101 classes, with approximately 40-800 images per category. The framework suggested in this study seeks to describe and operationalize a CBIR system built on an automated feature extraction system based on a CNN architecture. Findings: The results obtained with the investigated CBIR system, compared against the benchmarked results, clearly indicate that the suggested technique performed best, with an overall accuracy of 88.29%, as opposed to the results on the other datasets adopted in the experiments. These outstanding results indicate clearly that the suggested method was effective for all of the datasets. Improvements/Applications: The study found that the multiple image representation was redundant for extraction accuracy, and the findings indicated that automatically extracted features are capable of reliably generating accurate outcomes.
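Among the hand-crafted descriptors listed above, the LBP texture feature is simple enough to sketch directly. The following is a minimal, assumed implementation of basic 3x3 LBP on a grayscale image (the paper's full pipeline also combines ZMs, SLT/DWT coefficients and CNN features, which are not shown here):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: each pixel is compared with its
    8 neighbours; every neighbour >= centre contributes one bit to an
    8-bit code. (A minimal sketch, not the paper's exact variant.)"""
    img = np.asarray(img, dtype=np.int32)
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neigh >= centre).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised LBP histogram used as a texture descriptor for retrieval."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=bins).astype(float)
    return hist / hist.sum()
```

In a CBIR setting, histograms like this are compared between the query and database images with a distance measure such as chi-square or Euclidean distance.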
Although the Wiener filter is the optimal trade-off between inverse filtering and noise smoothing, when the blurring filter is singular the Wiener filter actually amplifies the noise. This suggests that a denoising step is needed to remove the amplified noise, and wavelet-based denoising provides a natural technique for this purpose.
In this paper a new image restoration scheme is proposed. The scheme consists of two separate steps: Fourier-domain inverse filtering and wavelet-domain image denoising. The first stage is Wiener filtering of the input image; the filtered image is then passed to an adaptive-threshold wavelet denoising stage.
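The two-stage scheme can be sketched as follows. The noise-to-signal constant and the Haar basis below are illustrative assumptions, not the paper's exact filter or wavelet:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Fourier-domain Wiener filter F = H* G / (|H|^2 + NSR).
    `nsr` (an assumed noise-to-signal constant) regularizes the
    division where the blurring filter H is near-singular."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))

def haar_denoise(img, thresh):
    """One-level 2-D Haar transform, soft-threshold the detail
    subbands, then inverse transform (a sketch of the wavelet-domain
    denoising stage; image dimensions must be even)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4
    LH = (a + b - c - d) / 4
    HL = (a - b + c - d) / 4
    HH = (a - b - c + d) / 4
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thresh, 0)
    LH, HL, HH = soft(LH), soft(HL), soft(HH)
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out
```

The restoration pipeline is then `haar_denoise(wiener_deconvolve(observed, psf), thresh)`, with the threshold chosen adaptively from the noise level in practice.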
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is opening new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study specifically addresses the emerging challenge of distinguishing between authentic images and deepfakes, a task that has become critically important in a world increasingly reliant on digital media.
One of the most difficult issues in the history of communication technology is the transmission of secure images. On the internet, photos are used and shared by millions of individuals for both private and business purposes. Utilizing encryption methods to change the original image into an unintelligible or scrambled version is one way to achieve safe image transfer over the network. Cryptographic approaches based on chaotic logistic theory provide several new and promising options for developing secure image encryption methods. The main aim of this paper is to build a secure system for encrypting gray and color images. The proposed system consists of two stages; the first stage is the encryption process, in which the keys are generated.
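A minimal sketch of a logistic-map-based image cipher of the kind described follows. The map parameters `x0` and `r` act as the secret key; the values below are illustrative assumptions, not the paper's actual key schedule:

```python
import numpy as np

def logistic_keystream(n, x0=0.4567, r=3.99):
    """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k), quantized to
    bytes. x0 and r serve as the key (values here are illustrative)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_cipher(img, x0=0.4567, r=3.99):
    """Encrypt (or decrypt -- XOR is its own inverse) a grayscale image
    by XORing its pixels with the chaotic keystream."""
    arr = np.asarray(img, dtype=np.uint8)
    ks = logistic_keystream(arr.size, x0, r)
    return (arr.ravel() ^ ks).reshape(arr.shape)
```

For color images the same keystream approach can be applied per channel; sensitivity to tiny changes in `x0` or `r` is what gives chaotic ciphers their key-dependence.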
The concealment of data has emerged as an area of deep and wide research interest, endeavouring to conceal data covertly and stealthily so as to avoid detection, by embedding the secret data into cover media that appear inconspicuous. These cover media may be images or videos used for concealment of the messages, while still retaining their visual quality. Over the past ten years, there has been extensive research on various image steganographic methods that emphasise payload and image quality. Nevertheless, a trade-off exists between these two indicators, and mediating a more favourable reconciliation between them is a daunting and problematic task.
Information hiding strategies have recently gained popularity in a variety of fields. Digital audio, video, and images are increasingly being labelled with distinct but undetectable marks that may contain a hidden copyright notice or serial number, or even directly help to prevent unauthorized duplication. This approach is extended to medical images by hiding secret information in them using the structure of a different file format; the hidden information may be related to the patient. In this paper, a method for hiding secret information in DICOM images is proposed based on the Discrete Wavelet Transform (DWT). First, all slices of a 3D image are segmented into blocks of a specific size, and the host image is assembled depending on a generated key.
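The key-driven block selection step can be sketched as follows. The key value, block size and selection fraction are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def key_selected_blocks(slice_shape, block=8, key=2024, frac=0.25):
    """Pseudo-randomly pick which `block` x `block` tiles of a slice
    will carry payload, driven by a shared secret key. Both sender and
    receiver derive the same tile list from the same key.
    (key, block size and fraction are illustrative assumptions.)"""
    h, w = slice_shape
    tiles_per_row = w // block
    n_blocks = (h // block) * tiles_per_row
    rng = np.random.default_rng(key)
    chosen = rng.choice(n_blocks, size=max(1, int(frac * n_blocks)),
                        replace=False)
    # convert flat block indices back to (row, col) tile coordinates
    return [(idx // tiles_per_row, idx % tiles_per_row)
            for idx in np.sort(chosen)]
```

The embedding itself would then modify DWT coefficients inside the selected tiles only, so an attacker without the key cannot locate the payload.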
Steganography is a means of hiding information within a more obvious form of communication. It exploits the use of host data to hide a piece of information in such a way that it is imperceptible to a human observer. The major goals of effective steganography are high embedding capacity, imperceptibility, and robustness. This paper introduces a scheme for hiding secret images that may constitute as much as 25% of the host image data. The proposed algorithm applies the orthogonal discrete cosine transform to the host image. A scaling factor (a) in the frequency domain controls the quality of the stego images. Experimental results of secret image recovery after applying JPEG coding to the stego images are included.
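A minimal sketch of additive DCT-domain embedding with a scaling factor follows. This is a generic, assumed formulation (square images, non-blind extraction with the original host available), not the paper's exact algorithm:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so D @ x applies the transform and
    D.T @ x inverts it."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] *= np.sqrt(1 / n)
    D[1:] *= np.sqrt(2 / n)
    return D

def embed(host, secret, a=0.1):
    """Additive embedding in the DCT domain: C_stego = C_host + a * C_secret.
    The scaling factor `a` trades stego-image quality against robustness.
    Assumes square images of equal size."""
    D = dct_matrix(host.shape[0])
    C = D @ host @ D.T + a * (D @ secret @ D.T)
    return D.T @ C @ D  # inverse of the orthonormal transform

def extract(stego, host, a=0.1):
    """Recover the secret given the original host (non-blind scheme)."""
    D = dct_matrix(host.shape[0])
    Cs = D @ (stego - host) @ D.T
    return D.T @ (Cs / a) @ D
```

A smaller `a` makes the stego image closer to the host (better imperceptibility) but makes the hidden data more fragile under JPEG quantization.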
A new technique for embedding image data into another BMP image is presented. The image data to be embedded is referred to as the signature image, while the image into which it is embedded is referred to as the host image. The host and signature images are first partitioned into 8x8 blocks and discrete cosine transformed (DCT); only significant coefficients are retained, and the retained coefficients are then inserted into the transformed block in forward and backward zigzag scan directions. The result is then inversely transformed and saved as a BMP image file. The peak signal-to-noise ratio (PSNR) is used to evaluate the objective visual quality of the host image compared with the original image.
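The two preparatory steps of this technique, 8x8 block partitioning and the zigzag coefficient ordering, can be sketched as follows (the backward scan is simply the reversed list; the coefficient-selection rule itself is not specified here):

```python
import numpy as np

def zigzag_indices(n=8):
    """Forward zigzag ordering of an n x n block: walk the
    anti-diagonals, alternating direction, as in JPEG-style coefficient
    scans. The backward scan is list(reversed(zigzag_indices(n)))."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def blocks_8x8(img):
    """Partition an image (dimensions divisible by 8) into a stack of
    8x8 blocks, ready for per-block DCT."""
    h, w = img.shape
    return (img.reshape(h // 8, 8, w // 8, 8)
               .swapaxes(1, 2).reshape(-1, 8, 8))
```

Zigzag ordering concentrates the significant low-frequency DCT coefficients at the front of the scan, which is why it is a natural order for retaining and re-inserting coefficients.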
A median filter is adopted to match the noise statistics of the degradation, seeking good-quality smoothed images. Two methods are suggested in this paper: a Pentagonal-Hexagonal mask and a Scan Window mask. The study involved a modified median filter for improving noise suppression, with the modification aimed at more reliable results. The modified median filter (Pentagonal-Hexagonal mask) was found to give better results, both qualitatively and quantitatively, than classical median filters and the other suggested method (Scan Window mask), but at the expense of the time required. However, when the noise is of line type, the cross 3x3 filter is preferred to the Pentagonal-Hexagonal mask, with only slight variation. The Scan Window mask gave better results
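A median filter with an arbitrary boolean mask can be sketched as follows, so that cross, pentagonal or hexagonal shapes are all just different footprints. Only the cross 3x3 mask named in the text is shown; the pentagonal/hexagonal patterns would be built the same way as wider boolean arrays:

```python
import numpy as np

def masked_median_filter(img, footprint):
    """Median filter over an arbitrary boolean footprint. Border pixels
    are left unchanged in this sketch; a real implementation would pad."""
    fh, fw = footprint.shape
    ry, rx = fh // 2, fw // 2
    out = img.astype(float).copy()
    ys, xs = np.nonzero(footprint)
    for i in range(ry, img.shape[0] - ry):
        for j in range(rx, img.shape[1] - rx):
            win = img[i - ry:i + ry + 1, j - rx:j + rx + 1]
            out[i, j] = np.median(win[ys, xs])
    return out

# The cross 3x3 mask mentioned in the text.
CROSS_3x3 = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)
```

Larger footprints suppress noise more strongly but cost more time per pixel, which matches the accuracy/time trade-off reported for the Pentagonal-Hexagonal mask.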