In the fields of image processing and computer vision, it is important to represent an image by its information. Image information comes from features extracted from the image using feature detection/extraction and feature description techniques. In computer vision, features denote informative data. The human eye extracts information from a raw image with ease, but a computer cannot recognize image information directly. This is why various feature extraction techniques have been proposed and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction.
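As a minimal illustration of one such category (keypoint detection and description), the sketch below extracts ORB features with OpenCV; the library choice and the input filename are illustrative assumptions, not methods proposed by this paper.

```python
# Illustrative sketch only: ORB keypoint detection plus binary descriptors.
import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
orb = cv2.ORB_create(nfeatures=500)                   # detector/descriptor
keypoints, descriptors = orb.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints, descriptor shape: {descriptors.shape}")
```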
A collection of 118 specimens of Iraqi phasianid birds belonging to four species was examined for haematozoa. The results show that 21.2% of them were infected with one or more of four species of blood parasites: Haemoproteus danilewskyi, H. santosdiasi, Plasmodium sp., and microfilaria. Haemoproteus danilewskyi is reported here for the first time in Iraq.
Liquid electrodes of a domperidone maleate (DOMP) imprinted polymer were synthesized based on a precipitation polymerization mechanism. The molecularly imprinted (MIP) and non-imprinted (NIP) polymers were synthesized using DOMP as a template, methyl methacrylate (MMA) as the monomer, N,N-methylenebisacrylamide (NMAA) and ethylene glycol dimethacrylate (EGDMA) as cross-linkers, and benzoyl peroxide (BP) as the initiator. The molecularly imprinted membranes were prepared using acetophenone (APH), dibutyl sebacate (DBS), dioctyl phthalate (DOPH), and tritolyl phosphate (TP) as plasticizers in a PVC matrix. The slopes and limits of detection of l…
This investigation recorded 31 species belonging to 15 genera under five families and two orders. Among the leafminer dipteran families (Agromyzidae, Anthomyiidae, Drosophilidae), the agromyzid flies attacked the highest number of the investigated host plants, while the other families had the fewest host plants. Species synonyms were provided from the GBIF Secretariat. The dates and localities of sample collection were recorded.
In this paper, a visible image watermarking algorithm based on the biorthogonal wavelet transform is proposed. A watermark (logo) of binary image type can be embedded in a host gray image using the coefficient bands of the host image transformed into the biorthogonal domain. The logo image can be embedded in the top-left corner or spread over the whole host image. A scaling value (α) in the frequency domain is introduced to control the perceptibility of the watermark in the watermarked image. Experimental results show that this watermarking algorithm gives a visible logo with no losses in the recovery process of the original image, as the calculated PSNR values support. Good robustness against attempts to remove the watermark was also shown.
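A minimal sketch of this kind of embedding is given below, assuming PyWavelets with the "bior2.2" filter bank and a simple additive blend into the approximation band; the paper's exact filters, band choice, and embedding rule are not specified here.

```python
# Sketch: visible logo embedding in a biorthogonal wavelet band (assumed scheme).
import numpy as np
import pywt

def embed_visible_watermark(host, logo, alpha=0.1, wavelet="bior2.2"):
    """Blend a binary logo into the top-left of the host's approximation band."""
    LL, details = pywt.dwt2(host.astype(float), wavelet)  # (LL, (LH, HL, HH))
    h = min(logo.shape[0], LL.shape[0])
    w = min(logo.shape[1], LL.shape[1])
    # Scaled additive blend; alpha controls how strongly the logo shows.
    LL[:h, :w] += alpha * LL.mean() * logo[:h, :w]
    return pywt.idwt2((LL, details), wavelet)
```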
WA Shukur, FA Abdullatif, Ibn Al-Haitham Journal For Pure and Applied Sciences, 2011. With the wide spread of the internet and the rising value of information, steganography has become very important to communication. Over many years, different types of digital cover have been used to hide information as a covert channel; the image is an important digital cover for steganography because it is widely used on the internet without arousing suspicion.
JPEG is the most popular image compression and encoding technique, and it is widely used in many applications (images, videos, and 3D animations). Researchers are therefore very interested in developing this widely deployed technique to compress images at higher compression ratios while keeping image quality as high as possible. For this reason, this paper introduces a developed JPEG codec based on a fast DCT that discards the zero coefficients and keeps only the nonzero coefficients together with their positions in the transformed block. Additionally, arithmetic coding is applied rather than Huffman coding. The results show that the proposed developed JPEG algorithm gives better image quality than the traditional JPEG technique.
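A minimal sketch of the block stage described above is shown here, assuming SciPy's orthonormal 2-D DCT, a placeholder uniform quantization table, and a (positions, values) layout for the surviving coefficients; the entropy-coding stage (arithmetic coding) is omitted.

```python
# Sketch: 8x8 DCT block coding that keeps only nonzero coefficients and positions.
import numpy as np
from scipy.fft import dctn, idctn

Q = np.full((8, 8), 16.0)  # placeholder quantization table (assumption)

def compress_block(block):
    coeffs = np.round(dctn(block - 128.0, norm="ortho") / Q).astype(int)
    pos = np.flatnonzero(coeffs)         # where the nonzero coefficients sit
    return pos, coeffs.ravel()[pos]      # store values plus their positions

def decompress_block(pos, vals):
    coeffs = np.zeros(64)
    coeffs[pos] = vals
    return idctn(coeffs.reshape(8, 8) * Q, norm="ortho") + 128.0
```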
This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method included five different enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization, and Fusion. The Normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. Subsequently, the Histogram Equalization technique increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were implemented to differentiate between the ridge and valley structures and to obtain one-pixel-wide ridge lines.
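A minimal sketch of the first four stages is given below, assuming OpenCV and scikit-image with Otsu thresholding as the binarization rule; the paper's exact operators and the Fusion stage are not reproduced.

```python
# Sketch: fingerprint enhancement (normalize -> equalize -> binarize -> skeletonize).
import cv2
import numpy as np
from skimage.morphology import skeletonize

def enhance_fingerprint(gray):
    # Normalization: standardize pixel intensities to the full 0-255 range.
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Histogram Equalization: raise global contrast.
    eq = cv2.equalizeHist(norm)
    # Binarization: separate dark ridges from light valleys (Otsu, inverted).
    _, binary = cv2.threshold(eq, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Skeletonization: thin the ridges to one-pixel-wide lines.
    return skeletonize(binary > 0).astype(np.uint8) * 255
```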
Many image processing and machine learning applications require sufficient image feature selection and representation. This can be achieved by imitating the human ability to process visual information. One such ability is that human eyes are much more sensitive to changes in intensity (luminance) than to color information. In this paper, we present how to exploit luminance information, organized in a pyramid structure, to transfer properties between two images. Two applications are presented to demonstrate the results of using the luminance channel in the similarity metric of two images: image generation, where a target image is to be generated from a source one, and image colorization, where color information is borrowed from one image to colorize the other.
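A minimal sketch of the luminance pyramid itself is shown below, assuming a Lab color split and a Gaussian pyramid via OpenCV; the paper's similarity metric and property-transfer step are not reproduced here.

```python
# Sketch: Gaussian pyramid over the luminance channel (assumed Lab + pyrDown).
import cv2

def luminance_pyramid(bgr_image, levels=4):
    """Return a fine-to-coarse Gaussian pyramid of the L (luminance) channel."""
    L = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2Lab)[:, :, 0]  # luminance only
    pyramid = [L]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))  # halve resolution per level
    return pyramid
```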