The purpose of this paper is to investigate religious human rights violations. Religious liberty is essential for everyone, everywhere, because religion is significant to everyone, everywhere: humans are religious by nature. Our nature drives us to seek answers to deep questions about ultimate things, and we cannot live a fully human life unless we are free to seek those answers and live according to the truths we discover. In this study, the researcher used the Marsh and White model to analyze a sample offensive image of an Islamic concept from the book "Mohammad believes it or else," a comic book written under the pseudonym Abdullah Aziz and published by Crescent Moon Publishing. The book is marked by anti-Muhammad prejudice expressed in willful attempts to devalue Prophet Muhammad in order to turn the urban public away from Islam; stories from the Noble Qur'an, such as the story of the People of the Cave, are misused to vilify Islam and its Prophet, which demonstrates a violation of religious human rights. The researcher chose topic one, dubbed "Allah's monkey men," and employed the Marsh and White model (2003) to analyze the image and refute the selected claims.
In this study, data in the form of X-ray images in the Flexible Image Transport System (FITS) format were analyzed. Energy is collected from the object by several sensors, each receiving energy within a specific range; once energy has been collected from all sensors, an image is formed that carries information about that object, and such images can be transferred and stored easily. The images were analyzed using the DS9 program to obtain a spectrum for each object, i.e., the energy corresponding to the photons collected per second. This study analyzed images of two types of objects (globular and open clusters). The results showed that the five open star clusters contain roughly t
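As a rough illustration of the kind of processing described above, the sketch below loads an X-ray event list from a FITS file and bins photon energies into a simple counts spectrum. It assumes the astropy package; the extension name "EVENTS", the column name "ENERGY", and the file name are hypothetical placeholders, and the DS9-based workflow itself is not reproduced.

```python
# Minimal sketch: read an X-ray event-list FITS file and bin photon energies
# into a rough spectrum (counts per energy bin). The extension name "EVENTS"
# and column name "ENERGY" are assumptions, not taken from the paper.
import numpy as np
from astropy.io import fits

def rough_spectrum(path, n_bins=64):
    with fits.open(path) as hdul:
        events = hdul["EVENTS"].data                 # event table (assumed extension name)
        energy = np.asarray(events["ENERGY"], dtype=float)
    counts, edges = np.histogram(energy, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts                           # energy bin centers vs. photon counts

# centers, counts = rough_spectrum("cluster_obs.fits")   # hypothetical file name
```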
Image denoising is a technique for removing unwanted signals, called noise, that couple with the original signal during transmission; many denoising methods are used to remove this noise from the original signal. In this paper, the Multiwavelet Transform (MWT) is used to denoise the corrupted image by choosing the HH coefficients for processing, based on two different filters: the Tri-State Median filter and the Switching Median filter. With each filter, various shrinkage rules are used, such as NormalShrink, SureShrink, VisuShrink, and BivariateShrink. The proposed algorithm is applied to salt-and-pepper noise at different levels on grayscale test images. The quality of the denoised image is evaluated by usi
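A minimal sketch of the general denoising idea, using a standard single-wavelet DWT (pywt) and a plain median filter as stand-ins for the paper's multiwavelet transform and tri-state/switching median filters, with a VisuShrink-style threshold applied only to the HH band:

```python
# Simplified sketch: median-filter the noisy image, take a wavelet transform,
# soft-threshold only the HH (diagonal detail) band with a universal
# (VisuShrink-style) threshold, and invert. The standard DWT and plain median
# filter are stand-ins for the multiwavelet transform and adaptive median
# filters described in the abstract.
import numpy as np
import pywt
from scipy.ndimage import median_filter

def denoise_hh(image, wavelet="haar"):
    pre = median_filter(image.astype(float), size=3)     # impulse-noise pre-filter
    LL, (LH, HL, HH) = pywt.dwt2(pre, wavelet)
    sigma = np.median(np.abs(HH)) / 0.6745               # noise estimate from HH band
    t = sigma * np.sqrt(2.0 * np.log(HH.size))           # universal threshold
    HH = pywt.threshold(HH, t, mode="soft")
    return pywt.idwt2((LL, (LH, HL, HH)), wavelet)
```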
General Background: Deep image matting is a fundamental task in computer vision, enabling precise foreground extraction from complex backgrounds, with applications in augmented reality, computer graphics, and video processing. Specific Background: Despite advancements in deep learning-based methods, preserving fine details such as hair and transparency remains a challenge. Knowledge Gap: Existing approaches struggle with accuracy and efficiency, necessitating novel techniques to enhance matting precision. Aims: This study integrates deep learning with fusion techniques to improve alpha matte estimation, proposing a lightweight U-Net model incorporating color-space fusion and preprocessing. Results: Experiments using the AdobeComposition-1k
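A minimal sketch of a lightweight U-Net-style matting network, assuming PyTorch; the layer widths, the number of fused input channels, and the single skip connection are illustrative and not the paper's architecture:

```python
# Illustrative lightweight U-Net-style model: maps an image (e.g. RGB plus
# extra fused color-space channels) to a single-channel alpha matte in [0, 1].
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=6):                  # e.g. RGB + 3 fused color channels (assumed)
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)                  # 32 upsampled + 32 skip channels
        self.head = nn.Conv2d(32, 1, 1)
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d))        # predicted alpha matte

# alpha = TinyUNet()(torch.rand(1, 6, 128, 128))  # -> (1, 1, 128, 128)
```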
This paper proposes a new method for object detection in skin cancer images: the minimum spanning tree (MST) detection descriptor. This object detection descriptor builds on the structure of the minimum spanning tree constructed on the target training set of skin cancer images only; detection of test objects relies on their distances to the closest edge of that tree. Our experiments show that the MST descriptor performs especially well on foggy images and in high-noise settings for skin cancer imagery. The proposed object detection method was implemented and tested on different skin cancer images, and we obtained very good results. The experiment showed that
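A minimal sketch of the MST-based detection idea under the assumption that each image has already been reduced to a feature vector: build a minimum spanning tree over the training vectors and score a test vector by its distance to the closest tree edge (larger distances suggest non-target objects). It uses SciPy; feature extraction is not shown.

```python
# Sketch: MST over training feature vectors, then point-to-segment distance of
# a test vector to the nearest MST edge as a detection / outlier score.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edges(train):
    dense = cdist(train, train)                       # pairwise distances
    mst = minimum_spanning_tree(dense).tocoo()
    return list(zip(mst.row, mst.col))                # (i, j) index pairs of tree edges

def dist_to_edge(x, a, b):
    ab, ax = b - a, x - a
    t = np.clip(np.dot(ax, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(x - (a + t * ab))           # distance to segment [a, b]

def mst_score(x, train, edges):
    return min(dist_to_edge(x, train[i], train[j]) for i, j in edges)

# edges = mst_edges(train_features); score = mst_score(test_vec, train_features, edges)
```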
Embedding identifying data into digital media such as video, audio, or images is known as digital watermarking. In this paper, a non-blind watermarking algorithm based on the Berkeley Wavelet Transform is proposed. First, the embedded image is scrambled using the Arnold transform for higher security, and then the embedding process is applied in the transform domain of the host image. The experimental results show that the watermark is invisible and that the algorithm has good robustness against common image processing operations.
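A rough sketch of two pieces of such a pipeline, Arnold (cat-map) scrambling of a square watermark and additive embedding into a transform band of the host; a standard DWT from pywt stands in for the Berkeley Wavelet Transform, and the embedding strength alpha is illustrative:

```python
# Sketch: Arnold scrambling of a square N x N watermark, then non-blind
# additive embedding into the approximation band of a standard DWT (used here
# as a stand-in for the Berkeley Wavelet Transform).
import numpy as np
import pywt

def arnold_scramble(img, iterations=5):
    n = img.shape[0]                                  # expects a square N x N watermark
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n         # Arnold cat-map coordinates
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out

def embed(host, watermark, alpha=0.05, wavelet="haar"):
    LL, details = pywt.dwt2(host.astype(float), wavelet)
    h, w = watermark.shape                            # assumes watermark fits inside LL band
    LL[:h, :w] = LL[:h, :w] + alpha * arnold_scramble(watermark)
    return pywt.idwt2((LL, details), wavelet)
```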
Steganography is the art of preventing the detection of hidden messages. Several algorithms have been proposed for steganographic techniques, and a major portion of them address image steganography because images have a high level of redundancy. This paper proposes an image steganography technique using a dynamic threshold produced from the discrete cosine coefficients. After dividing the green and blue channels of the cover image into 1x3-pixel blocks, the method checks whether any bits of the green-channel block are less than or equal to the threshold and, if so, begins storing the secret bits in the corresponding blue-channel block; to increase security, not all bits in the chosen block are used to store the secret bits. Firstly, store in the cente
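A simplified sketch of one possible reading of this block-based embedding scheme: derive a threshold from the cover's DCT coefficients, scan matching 1x3 blocks of the green and blue channels, and hide one secret bit in the centre pixel of a qualifying blue block. The exact threshold rule and bit layout in the paper differ; this is only illustrative.

```python
# Illustrative block-based embedding: the threshold formula and the choice of
# embedding only the centre-pixel LSB are assumptions, not the paper's scheme.
import numpy as np
from scipy.fft import dctn

def embed_bits(cover_rgb, bits):
    img = cover_rgb.copy()
    green, blue = img[:, :, 1], img[:, :, 2]
    threshold = int(np.abs(dctn(green.astype(float), norm="ortho")).mean())  # dynamic threshold (illustrative)
    k = 0
    for r in range(green.shape[0]):
        for c in range(0, green.shape[1] - 2, 3):         # non-overlapping 1x3 blocks
            if k >= len(bits):
                return img
            if (green[r, c:c + 3] <= threshold).any():    # green block qualifies
                centre = c + 1
                blue[r, centre] = (blue[r, centre] & 0xFE) | bits[k]   # LSB of centre pixel
                k += 1
    return img
```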
In this paper, a simple color image compression system is proposed using image signal decomposition. The RGB color bands of the image are converted to the less correlated YUV color model, and the pixel value (magnitude) in each band is decomposed into two values: a most significant and a least significant value. Because the most significant value (MSV) is strongly affected by even simple modifications, an adaptive lossless compression system is proposed for it using bit-plane (BP) slicing, delta pulse code modulation (Delta PCM), and adaptive quadtree (QT) partitioning, followed by an adaptive shift encoder. On the other hand, a lossy compression system is introduced to handle the least significant value (LSV); it is based on
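A small sketch of the magnitude-decomposition step, assuming an equal 4/4 bit split and a textbook RGB-to-YUV matrix (both illustrative): convert to YUV, split each sample into most and least significant values, and delta-code the MSV plane that feeds the lossless path.

```python
# Sketch: RGB -> YUV, split each 8-bit sample into a most-significant value
# (high 4 bits) and a least-significant value (low 4 bits), and delta-code the
# MSV plane. The 4/4 split and the conversion matrix are illustrative choices.
import numpy as np

RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def decompose(rgb):
    yuv = np.tensordot(rgb.astype(float), RGB2YUV.T, axes=1)
    yuv = np.clip(np.round(yuv + np.array([0, 128, 128])), 0, 255).astype(np.uint8)
    msv, lsv = yuv >> 4, yuv & 0x0F                    # most / least significant 4 bits
    delta = np.diff(msv.astype(np.int16), axis=1,
                    prepend=msv[:, :1, :].astype(np.int16))   # simple delta PCM on MSV
    return delta, lsv
```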
Color image compression is a good way to encode digital images by decreasing the number of bits needed to represent the image. The main objective is to reduce storage space, reduce transmission costs, and maintain good quality. In the current work, a simple and effective methodology is proposed for compressing color art digital images and obtaining a low bit rate by compressing the matrix resulting from the scalar quantization process (reducing the number of bits from 24 to 8 bits) using displacement coding and then compressing the remainder using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image. Macroscopic and
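A minimal sketch of the quantization-plus-dictionary-coding idea: a 3-3-2 scalar quantizer reduces each 24-bit pixel to 8 bits, and a plain LZW coder compresses the resulting index stream. The 3-3-2 split and the simple coder are illustrative stand-ins for the paper's quantizer and displacement/LZW stage.

```python
# Sketch: 24-bit -> 8-bit scalar quantization (3-3-2 bit allocation) followed by
# a textbook LZW encoder over the raw index bytes.
import numpy as np

def quantize_332(rgb):
    r, g, b = rgb[:, :, 0] >> 5, rgb[:, :, 1] >> 5, rgb[:, :, 2] >> 6
    return ((r << 5) | (g << 2) | b).astype(np.uint8)      # one byte per pixel

def lzw_encode(data):
    table = {bytes([i]): i for i in range(256)}             # initial single-byte dictionary
    w, out, nxt = b"", [], 256
    for b in bytes(data):
        wb = w + bytes([b])
        if wb in table:
            w = wb
        else:
            out.append(table[w])
            table[wb] = nxt                                  # grow dictionary
            nxt += 1
            w = bytes([b])
    if w:
        out.append(table[w])
    return out                                               # list of integer codes

# codes = lzw_encode(quantize_332(image).tobytes())
```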
A number of compression schemes have been put forward to achieve high compression factors with high image quality at low computational cost. In this paper, a combined transform coding scheme is proposed, based on the discrete wavelet transform (DWT) and the discrete cosine transform (DCT) with an added enhancement, the sliding run-length encoding (SRLE) technique, to further improve compression. The advantages of the wavelet and discrete cosine transforms are utilized to encode the image. The first step involves transforming the color components of the image from RGB to YUV planes to exploit the existing spectral correlation and consequently gain more compression. DWT is then applied to the Y, U and V col
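A compact sketch of the combined-transform idea on one channel: a DWT splits the channel into subbands, a DCT and coarse quantization are applied to the approximation band, and a plain run-length encoder stands in for the paper's sliding run-length encoding (SRLE) stage.

```python
# Sketch: DWT -> DCT on the approximation band -> quantization -> simple
# run-length encoding. The ordinary RLE here is a stand-in for SRLE.
import numpy as np
import pywt
from scipy.fft import dctn

def encode_channel(chan, wavelet="haar", q=16):
    LL, details = pywt.dwt2(chan.astype(float), wavelet)
    coeffs = np.round(dctn(LL, norm="ortho") / q).astype(int).ravel()
    rle, run = [], 1
    for prev, cur in zip(coeffs[:-1], coeffs[1:]):          # (value, run-length) pairs
        if cur == prev:
            run += 1
        else:
            rle.append((int(prev), run))
            run = 1
    rle.append((int(coeffs[-1]), run))
    return rle, details                                      # details kept for the decoder
```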
Artificial intelligence (AI) is entering many fields of life nowadays, one of which is biometric authentication. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identify individuals from palm print images using the Orange software, in which a hybrid of AI methods, deep learning (DL) and traditional machine learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep le
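A minimal sketch of the hybrid DL + ML idea, assuming PyTorch/torchvision and scikit-learn rather than the Orange workflow: a pretrained SqueezeNet serves as a fixed feature extractor and a classical SVM is trained on the pooled features. Dataset loading, pre-processing details, and the matching stage are not reproduced.

```python
# Sketch: pretrained SqueezeNet as a frozen feature extractor, SVM classifier
# on global-average-pooled features. Input images are assumed to be RGB PIL
# images; train_imgs / train_labels / test_imgs are hypothetical inputs.
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
backbone = models.squeezenet1_1(weights="DEFAULT").features.eval()   # convolutional trunk only

def extract(pil_images):
    with torch.no_grad():
        batch = torch.stack([prep(im) for im in pil_images])
        fmap = backbone(batch)                      # (N, 512, 13, 13) feature maps
        return fmap.mean(dim=(2, 3)).numpy()        # (N, 512) pooled features

# clf = SVC(kernel="rbf").fit(extract(train_imgs), train_labels)
# pred = clf.predict(extract(test_imgs))
```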