This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method comprised five enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization, and Fusion. The Normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. The Histogram Equalization technique then increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one
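The normalization and binarization stages described above can be sketched as follows, assuming a mean/variance-style normalization and a global-mean threshold. These are illustrative choices; the paper's exact parameters and any block-wise or orientation-aware refinements are not specified here.

```python
import statistics

def normalize(pixels, target_mean=128.0, target_var=100.0):
    """Standardize pixel intensities toward a desired mean and variance
    (a common fingerprint pre-processing step; target values are illustrative)."""
    mean = statistics.fmean(pixels)
    var = statistics.pvariance(pixels)
    out = []
    for p in pixels:
        # Deviation scaled so the output variance approaches target_var.
        dev = (target_var * (p - mean) ** 2 / var) ** 0.5 if var else 0.0
        out.append(target_mean + dev if p > mean else target_mean - dev)
    return out

def binarize(pixels, threshold=None):
    """Map each pixel to ridge (1) or valley (0) using a global threshold;
    by default the mean intensity is used as the threshold."""
    t = statistics.fmean(pixels) if threshold is None else threshold
    return [1 if p > t else 0 for p in pixels]

raw = [10, 200, 15, 180, 20, 220, 30, 190]   # toy 8-pixel strip
norm = normalize(raw)
print(round(statistics.fmean(norm)))          # 128: mean pulled to the target
print(binarize(norm))                         # [0, 1, 0, 1, 0, 1, 0, 1]
```

In practice these operations run on 2-D images, often per block rather than globally, so the threshold adapts to local ridge contrast.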
This research aims to identify the mental image that displaced people have formed of aid organizations and to determine whether it is positive or negative. The researchers used a survey instrument to study a population of displaced people living in Baghdad camps, comprising Shiites, Sunnis, Shabak, Turkmen, Christians, and Ezidis.
The researchers reached important results, the most notable being that the displaced people living in the camps covered by this survey hold a positive opinion of the organizations working to meet their needs, although they complain of a shortfall in health care.
The research also found that displaced people from the (Shabak, Turkmen, and Ezidi) minorities see that internati
In this paper, membrane computing-based image segmentation, both region-based and edge-based, is proposed for medical images, involving two types of neighborhood relations between pixels. These neighborhood relations, namely the 4-adjacency and 8-adjacency of a membrane computing approach, construct a family of tissue-like P systems for segmenting actual 2D medical images in a constant number of steps; the two types of adjacency were compared on different hardware platforms. The process involves the generation of membrane-based segmentation rules for 2D medical images. The rules are written in the P-Lingua format and appended to the input image for visualization. The findings show that the neighborhood relations between pixels o
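For readers unfamiliar with the two pixel-adjacency relations on which the segmentation rules are built, the following sketch enumerates 4- and 8-neighbourhoods of a pixel. This is a plain procedural illustration; the paper encodes these relations as tissue-like P-system communication rules, not as explicit loops.

```python
def neighbours(pixel, width, height, adjacency=4):
    """Return the in-bounds neighbours of pixel (x, y) under 4- or 8-adjacency."""
    x, y = pixel
    offsets4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # edge-sharing
    offsets8 = offsets4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # + diagonals
    offsets = offsets4 if adjacency == 4 else offsets8
    return [(x + dx, y + dy) for dx, dy in offsets
            if 0 <= x + dx < width and 0 <= y + dy < height]

# Interior pixel of a 3x3 image:
print(len(neighbours((1, 1), 3, 3, adjacency=4)))  # 4
print(len(neighbours((1, 1), 3, 3, adjacency=8)))  # 8
# Corner pixel: boundary clipping leaves only 3 of the 8 candidates.
print(len(neighbours((0, 0), 3, 3, adjacency=8)))  # 3
```

8-adjacency connects diagonal ridges that 4-adjacency treats as separate regions, which is why the two relations can produce different segmentations of the same image.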
This article proposes a new technique for determining the rate of contamination. First, a generative adversarial network (GAN) parallel-processing technique is constructed and trained using real and secret images. Then, after the model stabilizes, the real image is passed to the generator. Finally, the generator creates an image that is visually similar to the secret image, thus achieving the same effect as transmitting the secret image. Experimental results show that this technique is effective for securing secret information transmission and increases the information-hiding capacity. Signal-to-noise metrics and the structural similarity index measure were used to determine the success of colour image-hiding t
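As a reference for the signal-to-noise metric mentioned, a minimal peak signal-to-noise ratio (PSNR) computation is sketched below, assuming 8-bit pixels given as flat lists. The pixel values are hypothetical, not the paper's data.

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel lists;
    a 2-D image would be flattened first. Higher is closer to the original."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")        # identical images
    return 10 * math.log10(max_val ** 2 / mse)

a = [52, 55, 61, 66]               # illustrative original pixels
b = [54, 55, 60, 66]               # illustrative stego/reconstructed pixels
print(round(psnr(a, b), 2))        # 47.16
```

Values above roughly 40 dB are conventionally read as visually indistinguishable differences, which is the regime an image-hiding scheme targets.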
The recent emergence of sophisticated Large Language Models (LLMs) such as GPT-4, Bard, and Bing has revolutionized the domain of scientific inquiry, particularly in the realm of large pre-trained vision-language models. This pivotal transformation is driving new frontiers in various fields, including image processing and digital media verification. At the heart of this evolution, our research focuses on the rapidly growing area of image authenticity verification, a field gaining immense relevance in the digital era. The study is specifically geared towards addressing the emerging challenge of distinguishing between authentic images and deep fakes, a task that has become critically important in a world increasingly reliant on digital med
Huge numbers of medical images are generated, creating a need for more storage capacity and bandwidth for transfer over networks. A hybrid DWT-DCT compression algorithm is applied to compress medical images by exploiting the features of both techniques. Discrete Wavelet Transform (DWT) coding is applied to the YCbCr colour model of the image, decomposing each image band into four subbands (LL, HL, LH, and HH). The LL subband is transformed into low- and high-frequency components using the Discrete Cosine Transform (DCT) and then quantized by scalar quantization, which was applied to all image bands; the quantization parameters were reduced by half for the luminance band and kept the same for the chrominance bands to preserve the image quality, and the zig
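The subband decomposition step can be illustrated with one level of a 1-D Haar wavelet transform; applying it along the rows and then the columns of a 2-D image produces the LL, HL, LH, and HH subbands mentioned above. An unnormalized averaging/differencing Haar filter is assumed here for simplicity.

```python
def haar_step(signal):
    """One level of the 1-D Haar DWT: split an even-length signal into a
    low-pass (approximation) half and a high-pass (detail) half.
    Applied along rows then columns of an image, this yields LL/HL/LH/HH."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

low, high = haar_step([100, 102, 50, 48, 200, 202, 10, 10])
print(low)    # [101.0, 49.0, 201.0, 10.0]  -- smooth trend, kept at full precision
print(high)   # [-1.0, 1.0, -1.0, 0.0]      -- small details, quantized aggressively
```

The detail coefficients are near zero for smooth regions, which is why quantizing the high-frequency subbands harder than the LL subband preserves most of the visible image quality.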
This article describes how to predict different types of multiple reflections in pre-stack seismic data. The characteristics of multiple reflections can be expressed as a combination of the characteristics of primary reflections. Multiples always have velocities of lower magnitude than the primaries, and this is the basis for separating them during Normal Move Out (NMO) correction. The muting procedure is applied in the time-velocity analysis domain. A semblance plot is used to diagnose the presence of multiples and to judge the muting dimensions. This processing procedure is used to eliminate internal multiples from real 2D seismic data from southern Iraq in two stages. The first is conventional Normal Move Out correction and velocity auto-picking and
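The velocity-based separation rests on the hyperbolic NMO traveltime relation t(x) = sqrt(t0^2 + x^2 / v^2): at the same zero-offset time, an event with a lower stacking velocity arrives later at far offsets, so after correcting with the primary velocity the multiple remains under-corrected and can be muted. A minimal sketch with illustrative velocities (not values from the paper's data):

```python
import math

def nmo_time(t0, offset, velocity):
    """Hyperbolic reflection traveltime t(x) = sqrt(t0^2 + (x / v)^2),
    with t0 in seconds, offset in metres, velocity in m/s."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

t0, offset = 1.0, 2000.0                  # 1 s zero-offset time, 2 km offset
primary = nmo_time(t0, offset, 3000.0)    # faster primary velocity (assumed)
multiple = nmo_time(t0, offset, 2000.0)   # slower multiple velocity (assumed)
print(multiple > primary)                 # True: multiple lags at far offset
```

That residual moveout is what the semblance panel makes visible: the multiple's energy peaks at a lower velocity, so a velocity-domain mute can remove it while keeping the primaries.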
Cadastral maps are the main documents of ownership for plots of land, as they contribute to preserving the property rights of individuals and institutions. They indicate the size and shape of each parcel and reveal geographic relationships that affect property value. The Iraqi cadastral maps use the old Al-Nahrwan 1934 coordinate system and the Lambert conformal conic projection; these maps are therefore outdated and unfit for use. The main objective of this paper is to investigate the effect of cartographic properties on updating cadastral maps. This depends on studying the effect of converting the projection and the datum of the cadastral maps of the study area from (datum: nahrwan34, projection: lambert confo
The design of sampling plans was, and remains, one of the most important subjects because it yields the lowest cost compared with other approaches. The statistical distribution of lifetimes should be known in order to obtain the best estimators for the parameters of the sampling plan and hence the best sampling plan.
The research deals with designing a sampling plan when the lifetime distribution follows a Logistic distribution with () as location and shape parameters; using this information helps in obtaining the number of groups and the sample size associated with rejecting or accepting the lot.
Experimental results for simulated data show the least number of groups and the sample size needed to reject or accept the lot with a certain probability of
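A group acceptance sampling plan of the kind described can be sketched as follows, assuming a logistic lifetime distribution, a truncated life test of duration t, and an acceptance number of zero failures per group. All parameter values below are illustrative, since the paper's estimates are not given here.

```python
import math

def logistic_cdf(t, mu, s):
    """CDF of the logistic distribution with location mu and scale s:
    the probability that an item fails by time t."""
    return 1.0 / (1.0 + math.exp(-(t - mu) / s))

def acceptance_probability(t, mu, s, groups, group_size, c=0):
    """Probability of accepting the lot: every one of `groups` groups of
    `group_size` items must show at most c failures by truncation time t
    (binomial within a group, independence across groups assumed)."""
    p_fail = logistic_cdf(t, mu, s)
    p_group = sum(math.comb(group_size, k)
                  * p_fail ** k * (1 - p_fail) ** (group_size - k)
                  for k in range(c + 1))
    return p_group ** groups

# Illustrative plan: 3 groups of 5 items, test truncated at t = 0.5.
print(round(acceptance_probability(t=0.5, mu=2.0, s=1.0, groups=3, group_size=5), 4))
```

Lengthening the test time raises the per-item failure probability and therefore lowers the chance of acceptance; the design problem is choosing the smallest number of groups and sample size that still meet the specified acceptance and rejection probabilities.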