In this paper, a membrane-computing approach to image segmentation, both region-based and edge-based, is proposed for medical images, involving two types of neighborhood relations between pixels. These neighborhood relations, namely 4-adjacency and 8-adjacency, are used to construct a family of tissue-like P systems that segments real 2D medical images in a constant number of steps; the two adjacency types were compared on different hardware platforms. The process generates membrane-based segmentation rules for 2D medical images; the rules are written in the P-Lingua format and applied to the input image for visualization. The findings show that the 8-adjacency neighborhood relation gives better results than the 4-adjacency relation, because 8-adjacency considers all eight pixels around the center pixel, which reduces the number of communication rules required to reach the final segmentation. The experimental results show that the proposed approach is superior in terms of the number of computational steps and processing time. To the best of our knowledge, this is the first time an evaluation procedure has been conducted to assess the efficiency of real-image segmentation using membrane computing.
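The two adjacency relations are standard notions from digital image topology. As a minimal illustration of the neighborhoods only (not of the paper's P-Lingua rules), the following Python sketch enumerates a pixel's neighbors under each relation:

```python
import numpy as np

# Offsets defining the two neighborhood relations.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # 4-adjacency: edge-sharing pixels
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # 8-adjacency: adds the diagonals

def neighbors(img, r, c, offsets):
    """Return the in-bounds neighbors of pixel (r, c) under the given adjacency."""
    h, w = img.shape
    return [(r + dr, c + dc) for dr, dc in offsets
            if 0 <= r + dr < h and 0 <= c + dc < w]

img = np.zeros((5, 5))
print(neighbors(img, 2, 2, N4))  # 4 neighbors of the center pixel
print(neighbors(img, 2, 2, N8))  # 8 neighbors of the center pixel
```

Because every 8-adjacent neighbor is reachable in one step rather than two, fewer communication rules are needed to propagate region labels, which is consistent with the reported reduction in computational steps.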
Artificial intelligence (AI) is entering many fields of life nowadays, one of which is biometric authentication. Palm print recognition is considered a fundamental aspect of biometric identification systems due to the inherent stability, reliability, and uniqueness of palm print features, coupled with their non-invasive nature. In this paper, we develop an approach to identifying individuals from palm print images using Orange software, in which a hybrid of AI methods, Deep Learning (DL) and traditional Machine Learning (ML), is used to enhance the overall performance metrics. The system comprises three stages: pre-processing, feature extraction, and feature classification or matching. The SqueezeNet deep learning network is used for feature extraction.
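The abstract is truncated, so the exact pipeline is not fully specified; the following sketch assumes a pretrained SqueezeNet as the DL feature extractor and an SVM as the traditional ML classifier, with file paths and label handling as hypothetical placeholders:

```python
# Hedged sketch of a hybrid DL/ML palm print pipeline: SqueezeNet features
# feeding a classical SVM classifier. Paths and labels are placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC

# Pre-processing: resize and normalize as expected by ImageNet-trained models.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Feature extraction: SqueezeNet's convolutional trunk, globally pooled.
net = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
net.eval()

def extract_features(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = net.features(x)                       # shape (1, 512, 13, 13)
    return fmap.mean(dim=(2, 3)).squeeze(0).numpy()  # 512-d descriptor

# Classification/matching: a traditional ML classifier on the DL features.
# train_paths and train_labels are assumed to come from a labeled palm print set.
# X = [extract_features(p) for p in train_paths]
# clf = SVC(kernel="linear").fit(X, train_labels)
```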
Background: Techniques of image analysis have been used extensively to minimize interobserver variation in immunohistochemical scoring; yet, image acquisition procedures are often demanding, expensive, and laborious. This study aims to assess the validity of image analysis for predicting the human observer's score with a simplified image acquisition technique. Materials and methods: Formalin-fixed, paraffin-embedded tissue sections of ameloblastomas and basal cell carcinomas were immunohistochemically stained with monoclonal antibodies to MMP-2 and MMP-9. The extent of antibody positivity was quantified using an ImageJ®-based application on low-power photomicrographs obtained with a conventional camera. The results of the software were then employed to predict the human observers' scores.
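The study itself used an ImageJ®-based application; as a rough analogous illustration only, the Python sketch below estimates the fraction of immunopositive area by thresholding a photomicrograph, where the threshold value and file name are arbitrary assumptions rather than the study's settings:

```python
# Analogous illustration of positive-area quantification (the study used ImageJ).
# The threshold and file name are arbitrary assumptions for demonstration.
import numpy as np
from skimage import io, color

def positivity_fraction(path, threshold=0.55):
    rgb = io.imread(path)
    gray = color.rgb2gray(rgb[..., :3])  # drop alpha if present; 0.0 dark .. 1.0 bright
    positive = gray < threshold          # stained (positive) regions appear dark
    return positive.mean()               # fraction of positive pixels in the field

# fraction = positivity_fraction("photomicrograph.tif")
# print(f"{fraction:.1%} of the field is antibody-positive")
```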
Color image compression is an effective way to encode digital images by decreasing the number of bits needed to represent the image. The main objectives are to reduce storage space, reduce transmission costs, and maintain good quality. In the current work, a simple and effective methodology is proposed for compressing digital color art images and obtaining a low bit rate: the matrix resulting from the scalar quantization process (which reduces the pixel depth from 24 to 8 bits) is compressed using displacement coding, and the remainder is then compressed using the Lempel-Ziv-Welch (LZW) algorithm. The proposed methodology maintains the quality of the reconstructed image.
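As a minimal sketch of this coding chain, under two stated assumptions (plain run-length encoding stands in for the paper's "displacement coding", and a textbook LZW coder stands in for its LZW stage):

```python
# Sketch of the two coding stages applied to the 8-bit quantized index matrix.
# Run-length encoding is a stand-in for the paper's displacement coding.
def run_length_encode(indices):
    """Encode a sequence of 8-bit palette indices as (value, count) pairs."""
    runs, prev, count = [], indices[0], 1
    for v in indices[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

def lzw_encode(data):
    """Textbook LZW over a byte sequence; returns a list of dictionary codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out
```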
Embedding identifying data into digital media such as video, audio, or images is known as digital watermarking. In this paper, a non-blind watermarking algorithm based on the Berkeley Wavelet Transform is proposed. First, the embedded image is scrambled using the Arnold transform for higher security, and then the embedding process is applied in the transform domain of the host image. The experimental results show that this algorithm is invisible and has good robustness against several common image processing operations.
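The Arnold transform (cat map) scrambling step is standard for square images; the sketch below shows only that step, not the Berkeley Wavelet Transform embedding itself:

```python
# Minimal sketch of Arnold (cat map) scrambling applied before embedding;
# the Berkeley Wavelet Transform embedding stage is not reproduced here.
import numpy as np

def arnold_scramble(img, iterations=1):
    """Apply the Arnold transform (x, y) -> (x + y, x + 2y) mod N to a square image."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

The map is periodic, so the watermark is recovered at extraction by iterating the transform until the original arrangement returns.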
The study focuses on assessing the quality of several image enhancement methods applied to renal X-ray images. The enhancement methods included Imadjust, Histogram Equalization (HE), and Contrast Limited Adaptive Histogram Equalization (CLAHE). Image quality measures were calculated to compare the input images with the output images of these three enhancement techniques. Eight renal X-ray images were collected to evaluate these methods. X-ray images generally lack contrast and are acquired at low radiation dosage; this lack of image quality can be amended by an enhancement process. Three image quality metrics were used to assess the resulting images, including the Naturalness Image Quality Evaluator (NIQE) and the Perception-based Image Quality Evaluator (PIQE).
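The three enhancement methods correspond to standard operations. A sketch using scikit-image equivalents is shown below, where rescale_intensity plays the role of MATLAB's imadjust, and the file name is a hypothetical placeholder; the NIQE/PIQE scoring used in the study is available in MATLAB and is not reproduced here:

```python
# Sketch of the three enhancement methods via scikit-image equivalents.
import numpy as np
from skimage import io, exposure, img_as_float

xray = img_as_float(io.imread("renal_xray.png", as_gray=True))  # hypothetical file

# Imadjust-style contrast stretch between the 2nd and 98th percentiles.
p2, p98 = np.percentile(xray, (2, 98))
adjusted = exposure.rescale_intensity(xray, in_range=(p2, p98))

# Global Histogram Equalization (HE).
equalized = exposure.equalize_hist(xray)

# Contrast Limited Adaptive Histogram Equalization (CLAHE).
clahe = exposure.equalize_adapthist(xray, clip_limit=0.03)
```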
A number of compression schemes have been put forward to achieve high compression factors with high image quality at low computational cost. In this paper, a combined transform coding scheme is proposed, based on the discrete wavelet transform (DWT) and the discrete cosine transform (DCT) with an added enhancement method, the sliding run-length encoding (SRLE) technique, to further improve compression. The advantages of the wavelet and discrete cosine transforms are utilized to encode the image. The first step transforms the color components of the image from the RGB to the YUV planes to exploit the existing spectral correlation and consequently gain more compression. DWT is then applied to the Y, U, and V color planes.
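A sketch of these transform stages using PyWavelets and SciPy is given below; the SRLE stage is the paper's own contribution and is not reproduced, and applying the DCT to the DWT approximation subband is an assumption about how the two transforms are combined:

```python
# Sketch of the RGB->YUV, DWT, and DCT stages; the SRLE stage is not reproduced.
import numpy as np
import pywt
from scipy.fft import dctn

def rgb_to_yuv(rgb):
    """BT.601 RGB -> YUV conversion for a float image in [0, 1]."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb @ m.T

def transform_plane(plane):
    """One combined coding step: DWT, then DCT on the approximation subband (assumed)."""
    approx, details = pywt.dwt2(plane, "haar")
    return dctn(approx, norm="ortho"), details

# yuv = rgb_to_yuv(rgb_image)
# coeffs = [transform_plane(yuv[..., k]) for k in range(3)]  # Y, U, V planes
```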
A simulation study of using 2D tomography to reconstruct a 3D object is presented. The 2D Radon transform is used to create a 2D projection of each slice of the 3D object at different heights. The 2D back-projection and Fourier slice theorem methods are then used to reconstruct each 2D projection slice of the 3D object. The results showed the ability of the Fourier slice theorem method to reconstruct the general shape of the body along with its internal structure, unlike the 2D back-projection method, which could reconstruct only the general shape of the body because of the blurring artefact. Since the Fourier slice theorem method could not remove all of the blurring artefact either, this research suggests a threshold technique to eliminate the remaining artefact.
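A slice-wise sketch of this simulation using scikit-image's Radon tools is shown below; iradon with filter_name=None gives unfiltered back-projection, the ramp-filtered reconstruction corresponds to the Fourier slice theorem, and the final threshold value is an assumption for illustration:

```python
# Slice-wise tomography sketch: projection, two reconstructions, thresholding.
import numpy as np
from skimage.transform import radon, iradon

def reconstruct_slice(slice_2d, n_angles=180, threshold=0.1):
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(slice_2d, theta=theta)            # 2D projection of the slice

    # Unfiltered back-projection: recovers the general shape, but blurred.
    blurred = iradon(sinogram, theta=theta, filter_name=None)

    # Ramp-filtered reconstruction (Fourier slice theorem).
    sharp = iradon(sinogram, theta=theta, filter_name="ramp")

    # Threshold step to suppress the residual blurring artefact (value assumed).
    return np.where(sharp > threshold * sharp.max(), sharp, 0.0)
```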