In this paper, a membrane computing approach to image segmentation, both region-based and edge-based, is proposed for medical images, involving two types of neighborhood relations between pixels: 4-adjacency and 8-adjacency. These relations are used to construct a family of tissue-like P systems that segment real 2D medical images in a constant number of steps, and the two adjacency types were compared on different hardware platforms. The process generates membrane-based segmentation rules for 2D medical images; the rules are written in the P-Lingua format and applied to the input image for visualization. The findings show that the 8-adjacency neighborhood relation gives better results than 4-adjacency because it considers all eight pixels around the center pixel, which reduces the communication rules required to obtain the final segmentation. The experimental results show that the proposed approach is superior in terms of the number of computational steps and processing time. To the best of our knowledge, this is the first evaluation procedure conducted to assess the efficiency of real-image segmentation using membrane computing.
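The 4- versus 8-adjacency distinction discussed above can be illustrated with a minimal sketch (this is generic pixel-neighborhood code, not the authors' P-Lingua rules; the function name is illustrative):

```python
def neighbors(pixel, adjacency=8):
    """Return the coordinates adjacent to `pixel` under 4- or 8-adjacency.

    4-adjacency: the horizontal and vertical neighbors only.
    8-adjacency: those four plus the four diagonal neighbors.
    """
    x, y = pixel
    offsets4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    offsets8 = offsets4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    offsets = offsets4 if adjacency == 4 else offsets8
    return [(x + dx, y + dy) for dx, dy in offsets]

print(len(neighbors((5, 5), adjacency=4)))  # 4
print(len(neighbors((5, 5), adjacency=8)))  # 8
```

Because every pixel sees twice as many neighbors under 8-adjacency, region membership can propagate diagonally in a single step, which is consistent with the abstract's claim that fewer communication rules are needed.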
Fatty acid methyl ester (FAME) produced from biomass offers several advantages, such as renewability and sustainability. The typical FAME production process is accompanied by various impurities, such as alcohol, soap, glycerol, and spent catalyst; the most challenging part of FAME production is therefore purification. In this work, a novel application of a bulk liquid membrane (BLM), developed from conventional solvent-extraction methods, was investigated for the removal of glycerol from FAME. The extraction and stripping processes are combined into a single system, allowing simultaneous solvent recovery, whereby a low-cost quaternary ammonium salt-glycerol-based deep eutectic solvent (DES) is used as the membrane phase.
Two molecularly imprinted polymer (MIP) membranes for levofloxacin (LEV) were prepared based on a PVC matrix. The imprinted polymers were prepared by polymerization of styrene (STY) as the monomer, N,N-methylene diacrylamide as the cross-linker, benzoyl peroxide (BPO) as the initiator, and levofloxacin as the template. Dimethyl adipate (DMA) and acetophenone (AOPH) were used as plasticizers, and both molecularly imprinted and non-imprinted membranes were prepared. The slopes and detection limits of the liquid electrodes ranged from -21.96 to -19.38 mV/decade and from 2×10⁻⁴ M to 4×10⁻⁴ M, respectively, and the response time was around 1 minute. The liquid electrodes were packed with 0.1 M standar
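The mV/decade slope quoted above is the gradient of the electrode's potential versus the base-10 logarithm of concentration. A minimal sketch of that calculation (the concentrations and potentials below are illustrative values, not the paper's measured data):

```python
import math

def electrode_slope(c1, e1, c2, e2):
    """Calibration slope in mV per decade of concentration, i.e. the
    gradient of E (mV) versus log10 of concentration (M)."""
    return (e2 - e1) / (math.log10(c2) - math.log10(c1))

# A tenfold concentration increase that lowers E by 21.96 mV reproduces
# the slope at the lower end of the reported range:
print(round(electrode_slope(2e-4, 100.0, 2e-3, 78.04), 2))  # -21.96
```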
Internet image retrieval is an interesting task that requires effort in both image processing and relationship-structure analysis. In this paper, a compression method is proposed for cases where more than one photo must be sent via the internet, based on image retrieval. First, face detection is implemented based on local binary patterns. The background is identified by matching global self-similarities and comparing it with the backgrounds of the other images. The proposed algorithm bridges the gap between present image-indexing technology, developed in the pixel domain, and the fact that an increasing number of images stored on computers are already compressed by JPEG at the source. Similar images are found, and a few images are sent inst
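The local binary pattern (LBP) feature mentioned above encodes each pixel by thresholding its 3×3 neighborhood against the center value. A minimal sketch of the basic 8-bit LBP code (generic illustration, not the paper's face-detection pipeline; the bit ordering is one common convention):

```python
def lbp_code(window):
    """8-bit local binary pattern of a 3x3 window: each neighbor that is
    >= the center contributes one bit, read clockwise from top-left."""
    center = window[1][1]
    ring = [window[0][0], window[0][1], window[0][2], window[1][2],
            window[2][2], window[2][1], window[2][0], window[1][0]]
    return sum(1 << i for i, p in enumerate(ring) if p >= center)

# Only the top row (9, 8, 7) is >= the center value 6, setting bits 0-2:
print(lbp_code([[9, 8, 7],
                [5, 6, 4],
                [3, 2, 1]]))  # 7
```

Histograms of such codes over image blocks are what LBP-based detectors typically compare.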
Data transmission in an orthogonal frequency division multiplexing (OFDM) system needs source and channel coding, and the transmitted data suffers from the detrimental effect of a large peak-to-average power ratio (PAPR). Source codes and channel codes can be combined using different joint codes; variable length error correcting code (VLEC) is one such joint code. VLEC is used in a MATLAB simulation of image transmission in an OFDM system; different VLEC code lengths are used and compared, showing that PAPR decreases as the code length increases. Several techniques are used and compared for PAPR reduction. The PAPR of the OFDM signal is measured for image coding with VLEC and compared with images coded by Huffman source coding and Bose-
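The PAPR metric central to the abstract above is the ratio of the peak to the mean power of the time-domain OFDM signal, i.e. of the inverse DFT of the subcarrier symbols. A minimal sketch (generic definition, not the paper's VLEC simulation chain):

```python
import cmath
import math

def papr_db(symbols):
    """PAPR in dB of the OFDM time-domain signal obtained by an inverse
    DFT of the frequency-domain subcarrier symbols."""
    n = len(symbols)
    # inverse DFT: x[t] = (1/n) * sum_k X[k] * exp(2*pi*i*k*t/n)
    x = [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
             for k in range(n)) / n
         for t in range(n)]
    powers = [abs(v) ** 2 for v in x]
    peak, avg = max(powers), sum(powers) / n
    return 10 * math.log10(peak / avg)

# Identical symbols on all subcarriers add coherently in one sample,
# giving the worst-case PAPR of 10*log10(N):
print(round(papr_db([1] * 8), 2))  # 9.03
```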
A digital camera that contains an internal light unit is useful at low illumination but not at high illumination. At different intensities, image quality does not remain good; the image becomes dark or of low intensity, and the contrast and intensity cannot be adjusted, which increases the information lost in the bright and dark regions. In this study, we examine regular illumination of images using tungsten light at varying intensities. The results show that the tungsten light gives nearly equal intensity for the three color bands (RGB) and the illumination band (L). The results depend on the statistical properties represented by the voltage, power, and intensities, and on the effect of these parameters on the digital
Cyber-attacks continue to grow, creating a need for stronger image-protection methods. This paper presents DGEN, a Dynamic Generative Encryption Network that combines generative adversarial networks with a context-adaptive key system. Unlike a fixed scheme such as AES, the method can potentially adjust itself when new threats appear, aiming to resist brute-force, statistical, and quantum attacks. The design adds randomness, uses learning, and derives keys that depend on each image, which should give strong security and flexibility while keeping computational cost low. Tests were run on several public image sets, and the results show DGEN outperforming AES, chaos-based schemes, and other GAN-based approaches. Entropy reached 7.99 bits per pix
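The entropy figure quoted above is Shannon entropy over the ciphertext's pixel-value histogram; a well-encrypted 8-bit image approaches the maximum of 8 bits per pixel. A minimal sketch of that measurement (a generic metric, not DGEN's evaluation code):

```python
import math
from collections import Counter

def entropy_bits(pixels):
    """Shannon entropy in bits per symbol of a sequence of pixel values:
    -sum(p * log2(p)) over the empirical value distribution."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform spread over all 256 byte values reaches the 8-bit maximum:
print(entropy_bits(list(range(256)) * 4))  # 8.0
```

Values such as 7.99 therefore indicate a ciphertext whose pixel histogram is nearly indistinguishable from uniform noise.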
This study deals with segmenting the industrial market as an independent variable and targeting the industrial market as a dependent variable. Since the industrial sector represents one of the most important fundamental pillars for building the economies of countries and their development, the Iraqi industrial sector was chosen as the population for the study. Based on measuring the study variables, identifying them, and testing the correlation and effect on each other, the study reached a group of findings:
1- Increasing the level of availability of the study variables within the companies of the study sample.
2- There is a correlation between the independent v
In many scientific fields, Bayesian models are commonly used in recent research. This research presents a new Bayesian model for estimating parameters and forecasting using the Gibbs sampler algorithm. Posterior distributions are generated using the inverse gamma distribution and the multivariate normal distribution as prior distributions. The new method was used to investigate and summarize the posterior distribution in Bayesian statistics. The theory and derivation of the posterior distribution are explained in detail in this paper. The proposed approach is applied to three simulated datasets of 100, 300, and 500 sample sizes. The procedure was also extended to a real dataset, the rock intensity dataset. The actual dataset is collecte
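The Gibbs sampler with normal and inverse-gamma priors described above alternates draws from each parameter's full conditional. A minimal univariate sketch (a generic conjugate normal model with a Normal prior on the mean and an InverseGamma prior on the variance; the function and hyperparameter names are illustrative, not the paper's multivariate model):

```python
import random
import statistics

def gibbs_normal(data, iters=2000, mu0=0.0, tau2=100.0, a0=2.0, b0=2.0, seed=1):
    """Gibbs sampler for (mu, sigma2) of i.i.d. Normal(mu, sigma2) data
    with mu ~ Normal(mu0, tau2) and sigma2 ~ InverseGamma(a0, b0)."""
    rng = random.Random(seed)
    n, xbar = len(data), statistics.fmean(data)
    mu, sigma2 = xbar, statistics.pvariance(data)
    draws = []
    for _ in range(iters):
        # mu | sigma2, data ~ Normal(m, v): precision-weighted combination
        v = 1.0 / (n / sigma2 + 1.0 / tau2)
        m = v * (n * xbar / sigma2 + mu0 / tau2)
        mu = rng.gauss(m, v ** 0.5)
        # sigma2 | mu, data ~ InverseGamma(a0 + n/2, b0 + sse/2),
        # sampled as scale / Gamma(shape, 1)
        sse = sum((x - mu) ** 2 for x in data)
        sigma2 = (b0 + sse / 2.0) / rng.gammavariate(a0 + n / 2.0, 1.0)
        draws.append((mu, sigma2))
    return draws
```

After discarding a burn-in, the retained draws summarize the joint posterior; with a diffuse prior the posterior mean of mu stays close to the sample mean.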