In this paper, three techniques for image compression are implemented. The proposed techniques consist of the three-dimensional (3-D) two-level discrete wavelet transform (DWT), the 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) technique. Daubechies and Haar wavelets are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the level of the 3-D transformation increases, so the compression ratio is measured at each level. To assess compression quality, image data properties were measured, such as image entropy (He), percent root-mean-square difference (PRD%), energy retained (Er), and peak signal-to-noise ratio (PSNR). Based on the test results, a comparison between the three techniques is presented. The CR is the same in all three techniques and has its largest value at the 2nd level of the 3-D transform. The hybrid technique has the highest PSNR values at the 1st and 2nd levels of the 3-D transform and the lowest PRD% values, so the 3-D two-level hybrid is the best technique for image compression.
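A minimal sketch of the kind of multilevel wavelet compression and quality measurement described above, assuming the PyWavelets library; the test volume, the hard-thresholding rule, and the way CR is counted here are illustrative assumptions rather than the paper's exact procedure:

```python
# Minimal sketch: 2-level 3-D wavelet compression with PyWavelets, plus the
# quality metrics named in the abstract (CR, PSNR, PRD%, Er).
# The volume, wavelet, and thresholding rule are illustrative assumptions.
import numpy as np
import pywt

volume = np.random.rand(64, 64, 64)          # stand-in for a 3-D image volume

# 2-level 3-D discrete wavelet transform (Haar, one of the wavelets used)
coeffs = pywt.wavedecn(volume, wavelet='haar', level=2)
arr, slices = pywt.coeffs_to_array(coeffs)

# Hard-threshold small coefficients to create sparsity (simple illustrative rule)
threshold = 0.05 * np.abs(arr).max()
arr_thr = np.where(np.abs(arr) >= threshold, arr, 0.0)

# Crude compression ratio: non-zero coefficients before vs. after thresholding
cr = np.count_nonzero(arr) / max(np.count_nonzero(arr_thr), 1)

# Reconstruct and compute the quality metrics
rec = pywt.waverecn(pywt.array_to_coeffs(arr_thr, slices, output_format='wavedecn'),
                    wavelet='haar')
rec = rec[:volume.shape[0], :volume.shape[1], :volume.shape[2]]

mse = np.mean((volume - rec) ** 2)
psnr = 10 * np.log10(volume.max() ** 2 / mse)                            # PSNR (dB)
prd = 100 * np.sqrt(np.sum((volume - rec) ** 2) / np.sum(volume ** 2))   # PRD %
er = 100 * np.sum(rec ** 2) / np.sum(volume ** 2)                        # energy retained

print(f"CR={cr:.2f}, PSNR={psnr:.2f} dB, PRD={prd:.2f}%, Er={er:.2f}%")
```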
Information security in data storage and transmission is increasingly important. At the same time, images are used in many procedures, so preventing unauthorized access to image data by encrypting images is crucial for protecting sensitive data and privacy. The methods and algorithms for masking or encoding images range from simple spatial-domain methods to frequency-domain methods, which are the most complex and reliable. In this paper, a new cryptographic system is proposed, based on a random-key-generator hybridization methodology that takes advantage of the properties of the Discrete Cosine Transform (DCT) to generate an indefinite set of random keys and of the low-frequency region coefficients after the DCT stage to pass them to
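The excerpt does not spell out the full key-generation pipeline, so the following is only one possible reading of the idea: derive pseudo-random key material from the low-frequency DCT coefficients and use it to drive a key stream. The block size, coefficient region, hashing step, and XOR cipher are assumptions for illustration, not the proposed algorithm:

```python
# Illustrative sketch only: derive a pseudo-random key stream from the
# low-frequency DCT coefficients of an image and XOR it with the image data.
# The coefficient region, hashing step, and cipher are assumptions.
import numpy as np
from scipy.fft import dctn
import hashlib

image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in image

# 2-D DCT of the image; keep only the low-frequency (top-left) region
coeffs = dctn(image.astype(np.float64), norm='ortho')
low_freq = coeffs[:16, :16]

# Hash the quantized low-frequency coefficients to seed a key generator
seed_bytes = hashlib.sha256(np.round(low_freq, 2).tobytes()).digest()
rng = np.random.default_rng(int.from_bytes(seed_bytes[:8], 'big'))

# Generate a key stream the size of the image and encrypt by XOR
# (the key stream must be shared or reproducible on the receiving side)
key_stream = rng.integers(0, 256, image.shape, dtype=np.uint8)
cipher = image ^ key_stream
recovered = cipher ^ key_stream
assert np.array_equal(recovered, image)
```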
So far, the syntheses of gonadotropin-releasing hormone (GnRH) analogues reported in the literature have clarified some aspects of the structure-activity relationship of the naturally released GnRH. As part of continuing efforts to further understand this relationship, the present investigation was undertaken, involving the synthesis and biological evaluation of two GnRH analogues: first, by replacing the amino acid L-arginine at position 8 of the backbone structure of the natural hormone with the amino acid D-alanine; and second, by replacing glycine at position 10 with D-alanine, also in the backbone structure of the natural hormone, to obtain the following analogues, respectively:
Electrocardiogram (ECG) is an important physiological signal for cardiac disease diagnosis. Modern ECG monitoring devices are increasingly widely used and generate vast amounts of data that require huge storage capacity. To decrease storage costs and to make ECG signals suitable and ready for transmission over common communication channels, the ECG data volume must be reduced, so an effective data compression method is required. This paper presents an efficient technique for the compression of ECG signals. In this technique, different transforms have been used to compress the ECG signals. First, the 1-D ECG data were segmented and aligned into a 2-D data array; then a 2-D mixed transform was implemented to compress the
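A sketch of the 1-D to 2-D rearrangement step followed by a 2-D transform. A plain 2-D DCT is used as a stand-in for the "mixed transform", and the segment length and thresholding rule are illustrative assumptions:

```python
# Sketch: segment a 1-D ECG into fixed-length rows of a 2-D array, apply a
# 2-D transform, keep only large coefficients, and reconstruct.
# The 2-D DCT here is a stand-in for the paper's "mixed transform".
import numpy as np
from scipy.fft import dctn, idctn

ecg = np.random.randn(2048)          # stand-in for a 1-D ECG record
seg_len = 128                        # assumed fixed segment length

# Segment and align the 1-D signal into a 2-D array (one segment per row)
n_segs = len(ecg) // seg_len
ecg_2d = ecg[:n_segs * seg_len].reshape(n_segs, seg_len)

# 2-D transform, keep only the largest coefficients, inverse transform
coeffs = dctn(ecg_2d, norm='ortho')
keep = np.abs(coeffs) >= 0.1 * np.abs(coeffs).max()
rec_2d = idctn(np.where(keep, coeffs, 0.0), norm='ortho')

cr = coeffs.size / max(np.count_nonzero(keep), 1)     # crude compression ratio
prd = 100 * np.sqrt(np.sum((ecg_2d - rec_2d) ** 2) / np.sum(ecg_2d ** 2))
print(f"CR={cr:.1f}, PRD={prd:.2f}%")
```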
Image segmentation using bi-level thresholds works well for straightforward scenarios; however, dealing with complex images that contain multiple objects or colors presents considerable computational difficulties. Multi-level thresholding is crucial for these situations, but it also introduces a challenging optimization problem. This paper presents an improved Reptile Search Algorithm (RSA) that includes a Gbest operator to enhance its performance. The proposed method determines optimal threshold values for both grayscale and color images, utilizing entropy-based objective functions derived from the Otsu and Kapur techniques. Experiments were carried out on 16 benchmark images, which inclu
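The following sketch shows only the Kapur-entropy objective that such an optimizer maximizes; the improved Reptile Search Algorithm and its Gbest operator are not reproduced, and a brute-force search over two thresholds stands in for the optimizer purely to show how the objective is evaluated:

```python
# Kapur's entropy objective for multi-level thresholding: the quantity a
# metaheuristic such as the improved RSA would maximize.
import numpy as np
from itertools import combinations

def kapur_entropy(hist, thresholds):
    """Sum of class entropies for the classes induced by the thresholds."""
    p = hist / hist.sum()
    edges = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -np.sum(q * np.log(q))
    return total

# Stand-in grayscale image and its 256-bin histogram
image = np.random.randint(0, 256, (128, 128))
hist, _ = np.histogram(image, bins=256, range=(0, 256))

# Exhaustive search over pairs of thresholds (illustration only; an optimizer
# such as the improved RSA replaces this loop for more thresholds)
best = max(combinations(range(1, 256), 2), key=lambda t: kapur_entropy(hist, list(t)))
print("best thresholds:", best)
```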
This study explores the challenges that Artificial Intelligence (AI) systems face in generating image captions, a task that requires effective integration of computer vision and natural language processing techniques. A comparative analysis is presented between traditional approaches (such as retrieval-based methods and linguistic templates) and modern deep-learning approaches (such as encoder-decoder models, attention mechanisms, and transformers). Theoretical results show that modern models achieve better accuracy and can generate more complex descriptions, while traditional methods are superior in speed and simplicity. The paper proposes a hybrid framework that combines the advantages of both approaches, where conventional methods prod
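A minimal encoder-decoder captioning skeleton of the modern kind discussed above, written in PyTorch for illustration; the dimensions, vocabulary size, and use of precomputed image features are placeholder assumptions and not the hybrid framework proposed here:

```python
# Minimal encoder-decoder captioning skeleton (illustrative only): image
# features are projected by a linear "encoder" and a recurrent decoder emits
# word logits at each step.
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, feat_dim=2048, embed_dim=256, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.encode = nn.Linear(feat_dim, embed_dim)      # project image features
        self.embed = nn.Embedding(vocab_size, embed_dim)  # word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)      # per-step word logits

    def forward(self, features, captions):
        # Prepend the encoded image as the first "token" of the sequence
        img_tok = self.encode(features).unsqueeze(1)          # (B, 1, E)
        words = self.embed(captions)                          # (B, T, E)
        seq = torch.cat([img_tok, words], dim=1)              # (B, T+1, E)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                               # (B, T+1, V)

# Toy forward pass with random features and token ids
model = CaptionDecoder()
feats = torch.randn(4, 2048)
caps = torch.randint(0, 10000, (4, 12))
print(model(feats, caps).shape)   # torch.Size([4, 13, 10000])
```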
Attacks on data transferred over a network happen millions of times a day. To address this problem, a secure scheme is proposed for protecting data transferred over a network. The proposed scheme uses two techniques to guarantee the secure transfer of a message: the message is first encrypted and then hidden in a video cover. The encryption technique is the RC4 stream cipher algorithm, used to increase the message's confidentiality, and the least significant bit (LSB) embedding algorithm is improved by adding an additional layer of security. The improvement of the LSB method comes from replacing the usual sequential selection with a random selection of the frames and the pixels wit
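A sketch of the two steps described above: RC4 encryption of the message and LSB embedding at randomly (rather than sequentially) selected pixel positions. Only a single frame is used here, the shared seed is a placeholder, and the paper's exact frame/pixel selection scheme is not reproduced:

```python
# RC4 stream cipher (KSA + PRGA) followed by LSB embedding at key-seeded
# random pixel positions of one stand-in frame.
import numpy as np

def rc4_keystream(key: bytes, length: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = [], 0, 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

message = b"secret message"
key = b"shared key"
cipher = bytes(m ^ k for m, k in zip(message, rc4_keystream(key, len(message))))

# Embed the cipher bits into the LSBs of randomly selected pixels of one frame
frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
bits = np.unpackbits(np.frombuffer(cipher, dtype=np.uint8))
rng = np.random.default_rng(12345)                 # seed shared with the receiver
positions = rng.permutation(frame.size)[:bits.size]
flat = frame.flatten()
flat[positions] = (flat[positions] & 0xFE) | bits  # overwrite the LSBs
stego_frame = flat.reshape(frame.shape)

# Receiver side: same seed -> same positions; extract bits and decrypt with RC4
extracted = np.packbits(stego_frame.flatten()[positions] & 1).tobytes()
recovered = bytes(c ^ k for c, k in zip(extracted, rc4_keystream(key, len(extracted))))
assert recovered == message
```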
In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are extracted using feature detection/extraction techniques and feature description. Features in computer vision represent informative data. The human eye readily extracts information from a raw image, but a computer cannot recognize image information directly. This is why various feature extraction techniques have been proposed and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
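As one representative of the detector/descriptor techniques covered by such an overview, the short example below extracts ORB keypoints and binary descriptors with OpenCV; the image path is a placeholder:

```python
# ORB keypoint detection and binary descriptor extraction with OpenCV,
# as one example of turning raw pixels into informative feature vectors.
import cv2

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path

orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(gray, None)

print(f"{len(keypoints)} keypoints, descriptor shape: {descriptors.shape}")
# Each row of `descriptors` is a 32-byte binary descriptor of the patch around
# one keypoint; such vectors are the "image information" used for matching,
# retrieval, or recognition.
```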