Steganography conceals information by embedding data within cover media, and it is commonly divided into two main domains: spatial and frequency. This paper presents two distinct methods. The first operates in the spatial domain, concealing a secret message in the least significant bits (LSBs) of the cover image. The second operates in the frequency domain, hiding the secret message within the LSBs of the middle-frequency band of the discrete cosine transform (DCT) coefficients. Both methods enhance obfuscation through two layers of randomness: random pixel selection and random bit-position embedding within each pixel. Unlike other available methods, which embed data sequentially in fixed amounts, these methods embed the data at random locations in random amounts, further raising the level of obfuscation. A pseudo-random binary key, generated by a nonlinear combination of eight Linear Feedback Shift Registers (LFSRs), controls this randomness. The experiments use various 512x512 cover images. The first method achieves an average PSNR of 43.5292 dB with a payload capacity of up to 16% of the cover image, while the second yields an average PSNR of 38.4092 dB with a payload capacity of up to 8%. The performance analysis shows that the LSB-based method can conceal more data with less visible distortion but is vulnerable to simple image manipulation, whereas the DCT-based method offers lower capacity and more visible distortion but greater robustness.
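The spatial-domain scheme can be sketched as follows. This is a simplified illustration, not the paper's implementation: a single 16-bit Fibonacci LFSR stands in for the nonlinear combination of eight LFSRs, and all function names are assumptions. The keystream supplies both layers of randomness: whether a pixel carries a message bit, and which of its two lowest bit positions is used.

```python
def lfsr_bits(seed, taps=(0, 2, 3, 5), n=16):
    """Yield a pseudo-random bit stream from a Fibonacci LFSR."""
    state = seed & ((1 << n) - 1)
    while True:
        out = state & 1
        fb = 0
        for t in taps:                      # XOR the tapped stages
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (n - 1))
        yield out

def embed_lsb(pixels, bits, seed=0xACE1):
    """Embed message bits at key-selected pixels and bit positions (0 or 1)."""
    out, key, j = list(pixels), lfsr_bits(seed), 0
    for i in range(len(out)):
        if j == len(bits):
            break
        if next(key):                       # layer 1: random pixel selection
            pos = next(key)                 # layer 2: random bit position
            out[i] = (out[i] & ~(1 << pos)) | (bits[j] << pos)
            j += 1
    return out

def extract_lsb(pixels, n_bits, seed=0xACE1):
    """Recover the message by replaying the same keystream."""
    key, bits = lfsr_bits(seed), []
    for p in pixels:
        if len(bits) == n_bits:
            break
        if next(key):
            pos = next(key)
            bits.append((p >> pos) & 1)
    return bits
```

Because only bit positions 0 or 1 are ever modified, each stego pixel differs from its cover pixel by at most 2, which is what keeps the distortion (and hence the PSNR penalty) small.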
Computer-aided disease diagnosis has been studied extensively and applied to diagnosing and monitoring several chronic diseases. Early detection and risk assessment of breast disease from clinical data help doctors make an early diagnosis and monitor disease progression. The purpose of this study is to exploit a Convolutional Neural Network (CNN) to discriminate breast MRI scans into pathological and healthy classes. In this study, a fully automated and efficient deep feature extraction algorithm is presented that exploits the spatial information obtained from both T2W-TSE and STIR MRI sequences to discriminate between pathological and healthy breast MRI scans. The breast MRI scans are preprocessed prior to the feature
In this paper, three main generators are discussed: the linear generator, the Geffe generator, and the Bruer generator. The Geffe and Bruer generators are improved; the autocorrelation postulate of the randomness tests is then calculated for each generator and the obtained results are compared. These properties can be measured deterministically and then compared to statistical expectations using a chi-square test.
In this paper, membrane-computing-based image segmentation, both region-based and edge-based, is proposed for medical images, involving two types of neighborhood relations between pixels. These neighborhood relations, namely the 4-adjacency and 8-adjacency of a membrane computing approach, construct a family of tissue-like P systems that segment actual 2D medical images in a constant number of steps; the two types of adjacency were compared on different hardware platforms. The process involves generating membrane-based segmentation rules for 2D medical images. The rules are written in the P-Lingua format and appended to the input image for visualization. The findings show that the neighborhood relations between pixels o
Many image processing and machine learning applications require sufficient image feature selection and representation. This can be achieved by imitating the human ability to process visual information. One such ability is that the human eye is far more sensitive to changes in intensity (luminance) than to color information. In this paper, we present how to exploit luminance information, organized in a pyramid structure, to transfer properties between two images. Two applications demonstrate the results of using the luminance channel in the similarity metric of two images: image generation, where a target image is generated from a source one, and image colorization, where color information is to be browsed from o
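A minimal sketch of the two building blocks named above: extracting a luminance channel (here with the common Rec. 601 weights, an assumption since the abstract does not name the conversion) and stacking it into a coarse-to-fine pyramid. Function names are illustrative.

```python
def luminance(rgb):
    """Convert an RGB image (rows of (r, g, b) tuples) to luminance (Rec. 601)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]

def downsample(gray):
    """Halve the resolution by averaging non-overlapping 2x2 blocks."""
    h, w = len(gray) // 2 * 2, len(gray[0]) // 2 * 2
    return [[(gray[y][x] + gray[y][x + 1] +
              gray[y + 1][x] + gray[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def pyramid(gray, levels):
    """Build a pyramid; level 0 is full resolution, each level is half-size."""
    out = [gray]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out
```

Comparing images level by level, coarse to fine, lets a similarity metric match large-scale luminance structure first before refining on detail.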
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital med
In this paper, a simple medical image compression technique is proposed, based on combining the residual of an autoregressive (AR) model with bit-plane slicing (BPS) to exploit spatial redundancy efficiently. The results showed that the compression performance of the proposed technique improves by about a factor of two on average compared to the traditional autoregressive approach, while preserving image quality by considering only the significant layers that contribute most to the image.
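The bit-plane slicing component can be sketched as below; this illustrates BPS only, under the assumption of 8-bit pixels, and does not reproduce the paper's AR-residual modeling. Function names are illustrative.

```python
def slice_planes(img):
    """Split a list of 8-bit pixel values into 8 binary planes (LSB first)."""
    return [[(p >> b) & 1 for p in img] for b in range(8)]

def merge_planes(planes, keep_msb=8):
    """Rebuild pixels keeping only the `keep_msb` most significant planes."""
    out = [0] * len(planes[0])
    for b in range(8 - keep_msb, 8):
        for i, bit in enumerate(planes[b]):
            out[i] |= bit << b
    return out
```

Dropping the k least significant planes bounds the per-pixel reconstruction error by 2^k - 1 (e.g. at most 15 when keeping the four most significant planes), which is why retaining only the high-contribution layers preserves image quality.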
This paper studies a novel technique based on two effective methods, the modified Laplace variational iteration method (MLVIM) and a modified variational iteration method (MVIM), to solve PDEs with variable coefficients. The current modification in the MLVIM is based on coupling the variational iteration method (VIM) with the Laplace transform (LT). In our proposal there is no need to calculate the Lagrange multiplier. We apply the Laplace transform to the problem, and the nonlinear terms are handled using the homotopy perturbation method (HPM). Several examples are given to compare the results of the two methods and to verify the reliability of the present methods.
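For context, the standard VIM correction functional takes the textbook form below (this is the generic formulation, not necessarily the exact one used in the paper):

$$u_{n+1}(x,t) \;=\; u_n(x,t) \;+\; \int_0^{t} \lambda(s)\,\bigl(\mathcal{L}\,u_n(x,s) + \mathcal{N}\,\tilde{u}_n(x,s) - g(x,s)\bigr)\,ds,$$

where $\mathcal{L}$ and $\mathcal{N}$ are the linear and nonlinear parts of the equation, $g$ is the source term, $\tilde{u}_n$ denotes a restricted variation, and $\lambda$ is the Lagrange multiplier that classical VIM must determine variationally. Coupling VIM with the Laplace transform is what removes this step: the transform converts the linear part into an algebraic expression, so the multiplier falls out without a separate variational calculation.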
The huge amount of information on the internet creates a pressing need for text summarization. Text summarization is the process of selecting important sentences from documents while keeping the main idea of the original documents. This paper proposes a method based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The first step in our model extracts seven features for each sentence in the document set. Multiple Linear Regression (MLR) is then used to assign a weight to each selected feature. The TOPSIS method is then applied to rank the sentences, and the sentences with the highest scores are selected for the generated summary. The proposed model is evaluated using dataset
In this paper, the modified trapezoidal rule is presented for solving linear Volterra integral equations (V.I.E.) of the second kind, and we find that this procedure is effective in solving such equations. Two examples are given, with comparison tables, to demonstrate the validity of the procedure.