In this paper, we describe a new method for image denoising. We analyze the properties of the multiwavelet coefficients of natural images and suggest a method for computing the multiwavelet transform using the first-order approximation. The paper describes a simple and effective model for noise removal by proposing a new technique for restoring the image, allowing it to be estimated from the noisy image. The proposed algorithm combines soft thresholding with a mean filter, applied concurrently to the noisy image after dividing it into four blocks of equal size; processing all four blocks concurrently increases the performance of the enhancement process and decreases the time needed for implementation. The proposed method of image restoration and smoothing outperforms the conventional methods used for image enhancement. The suggested algorithm and the evaluation tests were carried out using the Delphi V.5 package.
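As a rough illustration of the block-wise idea, the sketch below applies wavelet soft-thresholding followed by a 3x3 mean filter to each of four equal blocks of a grayscale image. It is a minimal sketch only: it uses an ordinary discrete wavelet transform from PyWavelets in place of the multiwavelet transform described above, and the wavelet, threshold rule, and filter size are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def soft_threshold(c, t):
    # Soft-thresholding: shrink coefficients toward zero by t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_block(block, wavelet="db2", level=2):
    # Wavelet soft-thresholding (universal threshold) followed by a 3x3 mean filter.
    coeffs = pywt.wavedec2(block, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # noise estimate from finest detail band
    t = sigma * np.sqrt(2.0 * np.log(block.size))          # universal threshold
    new_coeffs = [coeffs[0]] + [tuple(soft_threshold(d, t) for d in detail)
                                for detail in coeffs[1:]]
    rec = pywt.waverec2(new_coeffs, wavelet)[:block.shape[0], :block.shape[1]]
    return uniform_filter(rec, size=3)                     # mean-filter smoothing

def denoise_image(img):
    # Split the image into four equal blocks, process each independently
    # (the blocks could be handled in parallel), then reassemble the result.
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    out = np.empty_like(img, dtype=float)
    for r in (slice(0, h2), slice(h2, h)):
        for c in (slice(0, w2), slice(w2, w)):
            out[r, c] = denoise_block(img[r, c].astype(float))
    return out
```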
Vitamin D3 deficiency is regarded as a public health issue in Iraq, particularly during the winter. Sun exposure is the main source of vitamin D3, and surface ultraviolet (UV) radiation plays an important role in human health. The amount of time that must be spent in the sun each day was determined for the amount of exposed skin, for all skin types, with and without sunscreen, under clear-sky conditions in the city of Baghdad (Long 44.375, Lat 33.375). UV index data were obtained from the TEMIS satellite service for the year 2021. From the data analysis, we found that most days of the year fell within the high range of UV index values in the city of Baghdad, most of them during the summer, where the person n
In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from the image's features, which are extracted using feature detection/extraction techniques and feature description. In computer vision, features define informative data. The human eye can readily extract information from a raw image, but a computer cannot recognize image information directly. This is why various feature extraction techniques have been presented and have progressed rapidly. This paper presents a general overview of the feature extraction categories for images.
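As a concrete example of one such technique, the short sketch below detects keypoints and computes descriptors with ORB from OpenCV; the choice of detector, the parameter values, and the input path are illustrative assumptions and are not tied to any specific category surveyed in the paper.

```python
import cv2

# Load a grayscale image (the path is a placeholder).
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# ORB: a fast keypoint detector plus binary descriptor, one common
# feature detection/description pipeline.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

print(f"{len(keypoints)} keypoints, descriptor shape: "
      f"{None if descriptors is None else descriptors.shape}")
```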
Image compression plays an important role in reducing the size and storage of data while significantly increasing the speed of its transmission through the Internet. Image compression has been an important research topic for several decades, and recently, with the great success achieved by deep learning in many areas of image processing, its use in image compression has been increasing gradually. Deep learning neural networks have also achieved great success in processing and compressing images of different sizes. In this paper, we present a structure for image compression based on a Convolutional AutoEncoder (CAE) for deep learning, inspired by the diversity of human eye
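A minimal convolutional autoencoder sketch in PyTorch is given below to make the CAE idea concrete; the layer sizes, latent width, and loss are illustrative assumptions and do not reproduce the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    # The encoder compresses the image to a small feature map; the decoder reconstructs it.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # H/2 x W/2
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # H/4 x W/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoEncoder()
x = torch.rand(1, 1, 64, 64)               # dummy grayscale image batch
recon = model(x)
loss = nn.functional.mse_loss(recon, x)    # reconstruction loss used during training
print(recon.shape, loss.item())
```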
In this paper, a new modification is proposed to enhance the security level of the Blowfish algorithm by increasing the difficulty of cracking the original message, making it safer against unauthorized attacks. The algorithm is a symmetric, variable-length-key, 64-bit block cipher, and it is implemented using grayscale images of different sizes. Instead of using a single key in the cipher operation, another one-byte key (KEY2) is used in the proposed algorithm; it is applied in the Feistel function of the first round in both the encryption and decryption processes. In addition, the proposed modified Blowfish algorithm uses five S-boxes instead of four; the additional key (KEY2) is selected randomly from the additional S-box
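The sketch below shows, schematically, how a one-byte second key drawn from a fifth S-box might be folded into the Blowfish Feistel F-function in the first round. The placeholder S-box contents and the exact mixing rule are illustrative assumptions only; they do not reproduce the published modification.

```python
import random

MASK32 = 0xFFFFFFFF

def feistel_f(x, sboxes, key2=None, first_round=False):
    # Standard Blowfish F: F(x) = ((S1[a] + S2[b]) ^ S3[c]) + S4[d]  (mod 2^32).
    a, b, c, d = (x >> 24) & 0xFF, (x >> 16) & 0xFF, (x >> 8) & 0xFF, x & 0xFF
    out = (((sboxes[0][a] + sboxes[1][b]) & MASK32) ^ sboxes[2][c]) + sboxes[3][d]
    out &= MASK32
    if first_round and key2 is not None:
        # Hypothetical modification: fold the extra one-byte key, looked up in a
        # fifth S-box, into the round output (illustrative mixing rule only).
        out ^= sboxes[4][key2 & 0xFF]
        out &= MASK32
    return out

# Placeholder S-boxes filled with random 32-bit words (not the real Blowfish tables).
random.seed(0)
sboxes = [[random.getrandbits(32) for _ in range(256)] for _ in range(5)]
print(hex(feistel_f(0x12345678, sboxes, key2=0xAB, first_round=True)))
```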
The advancements in Information and Communication Technology (ICT) over the previous decades have significantly changed how people transmit or store their information over the Internet or networks, so one of the main challenges is to keep this information safe against attacks. Many researchers and institutions have realized the importance and benefits of cryptography in achieving efficient and effective secure communication. This work adopts a novel technique for a secure data cryptosystem based on chaos theory. The proposed algorithm generates a 2-dimensional key matrix with the same dimensions as the original image, containing random numbers obtained from the 1-dimensional logistic chaotic map for given con
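A minimal sketch of the chaotic key-matrix idea is shown below: the 1-dimensional logistic map x_{n+1} = r·x_n·(1 − x_n) is iterated to fill a byte-valued matrix matching the image size, which is then combined with the image by XOR. The initial value, control parameter, and XOR combination are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def logistic_key_matrix(shape, x0=0.61, r=3.99):
    # Iterate the 1-D logistic map x_{n+1} = r * x_n * (1 - x_n) and reshape the
    # resulting stream into a byte-valued key matrix matching the image dimensions.
    n = shape[0] * shape[1]
    x = x0
    stream = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        stream[i] = x
    return (stream * 255).astype(np.uint8).reshape(shape)

# Example: encrypt/decrypt a dummy grayscale image by XOR with the key matrix.
img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
key = logistic_key_matrix(img.shape)
cipher = img ^ key
assert np.array_equal(cipher ^ key, img)   # XOR is its own inverse
```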
In this paper, we employ a combination of three techniques to reduce the computational complexity and bit rate of compressed images. These techniques are bit-plane coding based on two absolute values, a vector quantization (VQ) technique using a cache codebook, and Weber's law condition. The experimental results show that the proposed techniques reduce the storage size of the bit plane and achieve low computational complexity.
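One common reading of bit-plane coding based on two absolute values is an AMBTC-style scheme, sketched below: each block is reduced to a binary bit plane plus two reconstruction values. The exact rule used in the paper may differ; this sketch is illustrative only.

```python
import numpy as np

def two_value_bitplane(block):
    # AMBTC-style coding: a binary bit plane plus two reconstruction values
    # (the means of the pixels below and above the block mean).
    m = block.mean()
    bitplane = block >= m
    low = block[~bitplane].mean() if (~bitplane).any() else m
    high = block[bitplane].mean() if bitplane.any() else m
    return bitplane, float(low), float(high)

def reconstruct(bitplane, low, high):
    return np.where(bitplane, high, low)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 198, 205],
                  [ 9, 14, 202, 208],
                  [12, 10, 199, 207]], dtype=float)
bp, lo, hi = two_value_bitplane(block)
print(reconstruct(bp, lo, hi))
```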
This research studies the influence of increasing the number of Gaussian points, and the style of their distribution over a circular exit pupil, on the accuracy of the numerical calculation of the point spread function for an ideal optical system and for a system having a focus error of (0.25 λ and 0.5 λ). It was shown that the accuracy of the results depends on the way the points are distributed over the exit pupil. The accuracy also increases with the number of points (N) and with increasing aberration, which requires increasing (N).
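A minimal sketch of the underlying numerical calculation is given below: the diffraction integral over the circular exit pupil is evaluated as a plain sum over sample points, with a defocus term w20·(x² + y²) in wavelength units (w20 = 0 gives the ideal system). A uniform grid clipped to the pupil is used here for brevity, rather than the Gaussian point sets studied in the research.

```python
import numpy as np

def psf_point(u, v, xs, ys, w20=0.0):
    # Diffraction integral over the unit-circle exit pupil, evaluated as a plain
    # sum over sample points (x_j, y_j).  W = w20 * (x^2 + y^2) is the focus-error
    # term in wavelength units; w20 = 0 gives the aberration-free system.
    phase = 2.0 * np.pi * (w20 * (xs**2 + ys**2) - (u * xs + v * ys))
    amp = np.mean(np.exp(1j * phase))
    return np.abs(amp) ** 2

def disk_grid(n):
    # Simple n x n square grid, keeping only the points inside the unit pupil.
    t = np.linspace(-1.0, 1.0, n)
    xs, ys = np.meshgrid(t, t)
    inside = xs**2 + ys**2 <= 1.0
    return xs[inside], ys[inside]

xs, ys = disk_grid(101)
print("ideal, centre of PSF :", psf_point(0.0, 0.0, xs, ys, w20=0.0))   # normalised to 1
print("0.25-wave focus error:", psf_point(0.0, 0.0, xs, ys, w20=0.25))
print("0.5-wave focus error :", psf_point(0.0, 0.0, xs, ys, w20=0.5))
```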
Digital multimedia systems have become standard at this time because of their strong sensory effects and the advanced development of the corresponding technology. Recently, biological techniques have been applied to several varieties of applications such as authentication protocols, organic chemistry, and cryptography. Deoxyribonucleic Acid (DNA) is a tool for hiding key information in multimedia platforms.
In this paper, an embedding algorithm is introduced. First, the image is divided into equally sized blocks, and these blocks are checked for a small amount of color across all the separated blocks. The selected blocks are used to localize the necessary image information. In the second stage, a comparison is made between the initial image pixel
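As an illustration of the DNA side of such schemes, the sketch below converts bytes to a DNA base string using the conventional 2-bit mapping 00→A, 01→C, 10→G, 11→T, and back again. The paper's actual encoding rule and block-selection criterion are not reproduced here.

```python
BASES = "ACGT"   # 00 -> A, 01 -> C, 10 -> G, 11 -> T (one conventional mapping)

def bytes_to_dna(data: bytes) -> str:
    # Encode each byte as four DNA bases, two bits per base.
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    # Inverse mapping: four bases back into one byte.
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return bytes(out)

key = b"secret"
encoded = bytes_to_dna(key)
assert dna_to_bytes(encoded) == key
print(encoded)
```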
Image quality has been estimated and predicted using the signal-to-noise ratio (SNR). The purpose of this study is to investigate the relationship between body mass index (BMI) and SNR measurements in PET imaging using patient studies with liver cancer. A total of 59 patients (24 males and 35 females) were divided into three groups according to BMI. After intravenous injection of 0.1 mCi of 18F-FDG per kilogram of body weight, PET emission scans were acquired for 1, 1.5, and 3 min/bed position according to the weight of the patient. Because the liver is an organ of homogeneous metabolism, five regions of interest (ROIs) were drawn at the same location in five successive slices of the PET/CT scans to determine the mean uptake (signal) values and its standard deviat
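A minimal sketch of the SNR computation from the liver ROIs is given below, taking SNR as the mean uptake across the ROIs divided by the standard deviation of those ROI means; the study's exact formula may differ, and the numbers are illustrative only.

```python
import numpy as np

def liver_snr(roi_means):
    # SNR taken as the mean uptake across the ROIs divided by the standard
    # deviation of the ROI means (one common definition; the exact formula
    # used in the study may differ).
    roi_means = np.asarray(roi_means, dtype=float)
    return roi_means.mean() / roi_means.std(ddof=1)

# Example: mean uptake in five liver ROIs from five successive slices
# (values are illustrative only).
print(round(liver_snr([2.31, 2.28, 2.35, 2.30, 2.33]), 2))
```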