The continued growth of cyber-attacks calls for stronger mechanisms to protect images. This paper presents DGEN, a Dynamic Generative Encryption Network that combines Generative Adversarial Networks with a context-adaptive key-generation scheme. Unlike static ciphers such as AES, DGEN is designed to adapt as new threats emerge and to resist brute-force, statistical, and quantum attacks. The design injects randomness, exploits learning, and derives keys that depend on each image, aiming for strong security, flexibility, and low computational cost. Experiments on several public image datasets show that DGEN outperforms AES, chaos-based schemes, and other GAN-based approaches: entropy reached 7.99 bits per pixel, adjacent-pixel correlation dropped to 0.002, and the avalanche effect was 95.4%. Encrypting a surveillance frame took 7.5 ms while preserving image quality (PSNR 39.7 dB, SSIM 99.2%), suggesting the method can operate in real time and scale well. The study also examines how DGEN could integrate with quantum computing and federated learning, positioning it as a promising step forward for secure image handling.
Drought is a natural phenomenon that occurs in arid, semi-arid, and even wet regions, showing that no region worldwide is excluded from its occurrence. Extreme droughts have been driven by global warming and climate change. It is therefore essential to review the studies conducted on drought and to draw on the recommendations researchers have made. Drought is commonly classified as meteorological, agricultural, hydrological, or socio-economic. In addition, researchers have described drought severity using various indices that require different input data, including the Joint Deficit Index (JDI), Effective Drought Index (EDI), Streamflow Drought Index (SDI), Sta
The objective of this study is to apply an Artificial Neural Network (ANN) to the heat-transfer analysis of shell-and-tube heat exchangers, which are widely used in power plants and refineries. Practical data were obtained from an industrial heat exchanger operating in the power generation department of the Dura refinery. The commonly used Back Propagation (BP) algorithm was used to train and test the networks, with the data divided into three samples (training, validation, and testing) to better approximate the actual case. The network inputs are the inlet water temperature, inlet air temperature, and mass flow rate of air; the two outputs are the exit water temperature to the cooling tower and the exit air temperature to the second stage of the air compressor.
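As a hedged illustration of the setup described above (a back-propagation network with three inputs and two outputs), the sketch below trains a small one-hidden-layer network on synthetic data. The data, hidden-layer size, and learning rate are illustrative assumptions, not the paper's values, which came from real refinery measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic samples: [inlet water T, inlet air T, air mass flow] -> 2 outputs
# (illustrative stand-in for the industrial data used in the study)
X = rng.uniform(0.0, 1.0, size=(200, 3))
Y = np.stack([0.6 * X[:, 0] + 0.2 * X[:, 2],
              0.5 * X[:, 1] + 0.3 * X[:, 2]], axis=1)

# 60/20/20 split into training, validation, and testing samples
n_tr, n_va = 120, 40
X_tr, Y_tr = X[:n_tr], Y[:n_tr]
X_te, Y_te = X[n_tr + n_va:], Y[n_tr + n_va:]

# One hidden layer of 8 tanh units, linear output layer
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

lr = 0.2
for _ in range(5000):
    H = np.tanh(X_tr @ W1 + b1)            # hidden activations
    P = H @ W2 + b2                        # network predictions
    err = P - Y_tr
    # Back-propagate the mean-squared-error gradient
    gW2 = H.T @ err / n_tr; gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)       # tanh derivative
    gW1 = X_tr.T @ dH / n_tr; gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

P_te = np.tanh(X_te @ W1 + b1) @ W2 + b2
mse = float(((P_te - Y_te) ** 2).mean())   # test-sample error
```

The validation sample is held out here only to mirror the paper's three-way split; a fuller implementation would use it for early stopping.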
In this paper, a method for hiding cipher text in an image file is introduced. The proposed method hides the cipher-text message in the frequency domain of the image and consists of two phases: an embedding phase and an extraction phase. In the embedding phase, the image is transformed from the spatial domain to the frequency domain using discrete wavelet decomposition (Haar). The text message is encrypted using the RSA algorithm, and the Least Significant Bit (LSB) algorithm is then used to hide the secret message in the high-frequency coefficients. The proposed method was tested on different images and succeeded in hiding information, as measured by the Peak Signal to Noise Ratio (PSNR) relative to the original image.
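A minimal sketch of the embedding and extraction phases is given below, assuming an integer-reversible (lifting-style) Haar decomposition so that LSB changes in the high-frequency band survive reconstruction. The RSA step is represented by a pre-encrypted bit list; function names are illustrative, not the paper's.

```python
import numpy as np

def fwd_pairs(a):
    # Lifting-style Haar on column pairs: difference (high) and floored average (low)
    e, o = a[:, 0::2].astype(np.int64), a[:, 1::2].astype(np.int64)
    h = o - e
    l = e + h // 2
    return l, h

def inv_pairs(l, h):
    # Exact integer inverse of fwd_pairs
    e = l - h // 2
    o = h + e
    out = np.zeros((l.shape[0], 2 * l.shape[1]), np.int64)
    out[:, 0::2], out[:, 1::2] = e, o
    return out

def haar2d(img):
    L, H = fwd_pairs(img)                       # transform rows' column pairs
    LLt, LHt = fwd_pairs(L.T); HLt, HHt = fwd_pairs(H.T)
    return LLt.T, LHt.T, HLt.T, HHt.T           # four one-level sub-bands

def ihaar2d(LL, LH, HL, HH):
    L = inv_pairs(LL.T, LH.T).T
    H = inv_pairs(HL.T, HH.T).T
    return inv_pairs(L, H)

def embed(img, cipher_bits):
    # Embedding phase: overwrite LSBs of the high-frequency (HH) band
    LL, LH, HL, HH = haar2d(img)
    flat = HH.ravel().copy()
    for i, bit in enumerate(cipher_bits):
        flat[i] = (flat[i] & ~1) | bit
    return ihaar2d(LL, LH, HL, flat.reshape(HH.shape))

def extract(stego, n_bits):
    # Extraction phase: re-decompose and read the LSBs back
    _, _, _, HH = haar2d(stego)
    return [int(c & 1) for c in HH.ravel()[:n_bits]]
```

Because the lifting transform is a bijection on integers, extraction recovers the embedded bits exactly; a practical system would also clamp reconstructed pixels to the valid intensity range.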
In this paper, we introduce a DCT-based steganographic method for gray-scale images. The embedding approach is designed to reach an efficient tradeoff among three conflicting goals: maximizing the amount of hidden message, minimizing distortion between the cover image and stego-image, and maximizing the robustness of embedding. The main idea of the method is to create a safe embedding area in the middle- and high-frequency region of the DCT domain using a magnitude-modulation technique, applied through uniform quantization with magnitude Adder/Subtractor modules. The conducted test results indicate that the proposed method achieves high capacity and high preservation of the perceptual and statistical properties of the stego-image.
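The magnitude-modulation idea can be sketched as a quantization-index scheme on mid-frequency coefficients of one 8x8 DCT block: the coefficient magnitude is added to or subtracted from until it sits on an even or odd multiple of the quantization step. The step size and coefficient positions below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix (rows indexed by frequency k, columns by sample n)
C = np.cos(np.pi * (2 * np.arange(N)[None, :] + 1) * np.arange(N)[:, None] / (2 * N))
C[0] *= 1 / np.sqrt(2)
C *= np.sqrt(2 / N)

def dct2(b):  return C @ b @ C.T
def idct2(b): return C.T @ b @ C

STEP = 12.0                        # uniform quantization step (assumed)
POS = [(4, 3), (3, 4), (5, 2)]     # mid-frequency positions (assumed)

def embed_bit(coef, bit):
    # Snap |coef| to an even or odd multiple of STEP to encode the bit
    q = int(np.round(abs(coef) / STEP))
    if q % 2 != bit:
        q += 1                     # magnitude Adder step
    s = 1.0 if coef >= 0 else -1.0
    return s * q * STEP

def embed_block(block, bits):
    D = dct2(block.astype(float))
    for (u, v), bit in zip(POS, bits):
        D[u, v] = embed_bit(D[u, v], bit)
    return idct2(D)                # stego block (floats, for the sketch)

def extract_block(block, n):
    D = dct2(block.astype(float))
    return [int(np.round(abs(D[u, v]) / STEP)) % 2 for (u, v) in POS[:n]]
```

Choosing mid-frequency positions keeps the distortion perceptually small while avoiding the fragile highest frequencies, which is the tradeoff the abstract describes.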
Image segmentation can be defined as the process of partitioning a digital image into meaningful regions (segments) whose pixels share certain attributes that distinguish them from the pixels of other regions. The researcher followed two phases of image processing in this paper. In the first phase, the images were pre-processed before segmentation using statistical confidence intervals for estimating unknown observations, as suggested by Acho & Buenestado in 2018. In the second phase, the images were segmented using Bernsen's thresholding technique. The researcher drew the conclusion that in case of utilizing
Image-enhancement techniques have recently become one of the most significant topics in digital image processing. The basic problem in enhancement is how to remove noise or improve the details of a digital image. In the current research, a method for digital-image de-noising and detail sharpening/highlighting is proposed. The proposed approach uses fuzzy logic to process each pixel in the image and then decide whether it is noisy or needs further processing for highlighting; this decision is made by examining the pixel's degree of association with its neighboring elements using a fuzzy algorithm. The proposed de-noising approach was evaluated on standard images after corrupting them with impulse noise.
This paper introduces an innovative method for image encryption called "Two-Fold Cryptography," which leverages the Henon map in a dual-layer encryption framework. By applying two distinct encryption processes, this approach offers enhanced security for images. Key parameters generated by the Henon map dynamically shape both stages of encryption, creating a sophisticated and robust security system. The findings reveal that Two-Fold Cryptography provides a notable improvement in image protection, outperforming traditional single-layer encryption techniques.
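One way to realize the dual-layer idea is to XOR the image with two independent keystreams, each drawn from a Henon-map orbit. The sketch below assumes this construction; the seed parameters and byte quantization are our illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def henon_stream(n, x=0.1, y=0.3, a=1.4, b=0.3):
    # Generate n keystream bytes from the classic Henon iteration:
    #   x_{k+1} = 1 - a*x_k^2 + y_k,  y_{k+1} = b*x_k
    out = np.empty(n, np.uint8)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out[i] = int(abs(x) * 1e6) % 256    # quantize chaotic state to a byte
    return out

def twofold_encrypt(img, key1=(0.1, 0.3), key2=(0.2, 0.25)):
    flat = img.astype(np.uint8).ravel()
    s1 = henon_stream(flat.size, *key1)     # first encryption layer
    s2 = henon_stream(flat.size, *key2)     # second encryption layer
    return ((flat ^ s1) ^ s2).reshape(img.shape)

twofold_decrypt = twofold_encrypt           # XOR layers are involutions
```

Because each XOR layer is its own inverse, running the same routine with the same seeds decrypts the image; distinct initial conditions for the two layers play the role of the two keys.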