A common approach to color image compression starts by transforming the red, green, and blue (RGB) color model into a desired color model, then applying compression techniques, and finally transforming the results back into the RGB model. In this paper, a new color image compression method based on multilevel block truncation coding (MBTC) and vector quantization is presented. By exploiting the human visual system's response to color, a bit allocation process is implemented to distribute the encoding bits more effectively.
To improve the performance of vector quantization (VQ), modifications have been implemented. To combine the computational simplicity and edge-preservation properties of MBTC with the high compression ratio and good subjective performance of the modified VQ, a hybrid MBTC-modified-VQ color image compression method is presented. The analysis results indicate that the suggested method performs better: the reconstructed images are less distorted and are compressed at a higher ratio (59:1).
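The abstract does not detail the multilevel extension or the VQ modifications, so the sketch below shows only the classic single-level BTC encoder that MBTC builds on; it is a minimal illustration assuming NumPy, with the block size and the multilevel quantizer left out.

```python
import numpy as np

def btc_encode_block(block):
    """Classic single-level BTC: map a block's pixels to two output
    levels a and b chosen to preserve the block mean and variance."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean                  # one bit per pixel
    q, m = int(bitmap.sum()), block.size    # pixels at/above the mean
    if q == 0 or q == m:                    # flat block: one level suffices
        return bitmap, mean, mean
    a = mean - std * np.sqrt(q / (m - q))   # level assigned to 0-bits
    b = mean + std * np.sqrt((m - q) / q)   # level assigned to 1-bits
    return bitmap, a, b

def btc_decode_block(bitmap, a, b):
    return np.where(bitmap, b, a)

# example on one hypothetical 4x4 block of a single color channel
block = np.array([[12, 200, 30, 40],
                  [15, 210, 35, 45],
                  [14, 205, 32, 44],
                  [13, 198, 31, 42]], dtype=float)
print(btc_decode_block(*btc_encode_block(block)))
```

In a hybrid scheme like the one described above, the BTC bitmaps preserve edges cheaply, while VQ is applied to compress the remaining level information.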
There are many methods for estimating permeability. In this paper, permeability has been estimated by two methods: the conventional and the modified method, both used to calculate the flow zone indicator (FZI). Hydraulic flow units (HUs) were identified by the FZI technique, which is effective in predicting permeability in uncored intervals and wells. The HU is related to the FZI and the rock quality index (RQI). All available cores from seven wells (Su-4, Su-5, Su-7, Su-8, Su-9, Su-12, and Su-14) were used as the database for HU classification. The cumulative probability plot of FZI was used; the plot of core-derived FZI probability for both the modified and the conventional method indicates four HUs (A, B, C, and D) for the Nahr Umr formation.
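For reference, the conventional FZI calculation follows the standard RQI and normalized-porosity definitions; the paper's modified method is not specified in the abstract, so the sketch below covers only the conventional form, assuming core permeability in millidarcies and fractional porosity.

```python
import numpy as np

def flow_zone_indicator(k_md, phi):
    """Conventional FZI from core data:
    RQI   = 0.0314 * sqrt(k / phi)   rock quality index (micrometres)
    phi_z = phi / (1 - phi)          normalized porosity
    FZI   = RQI / phi_z
    """
    rqi = 0.0314 * np.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

# hypothetical core plug: k = 120 mD, porosity = 0.18
print(flow_zone_indicator(120.0, 0.18))   # ~3.7 micrometres
```

Samples with similar FZI values would then be grouped into the same hydraulic flow unit.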
The primary objective of this paper is to improve a biometric authentication and classification model using the ear as a distinctive part of the face, since it is unchanged over time and unaffected by facial expressions. The proposed model is a new scenario for enhancing ear recognition accuracy by modifying the AdaBoost algorithm to optimize adaptive learning. To overcome the limitations of image illumination, occlusion, and image registration, the scale-invariant feature transform (SIFT) technique was used to extract features. Several consecutive phases were used to improve classification accuracy: image acquisition, preprocessing, filtering, smoothing, and feature extraction.
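The paper's AdaBoost modification is not described in the abstract, so the sketch below only wires up the off-the-shelf pieces it names: SIFT features (OpenCV) feeding a stock AdaBoost classifier (scikit-learn), with simple average pooling of descriptors as an assumed, hypothetical aggregation step.

```python
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def sift_feature(path, n_keypoints=64):
    """Extract SIFT descriptors and average-pool them into one
    fixed-length 128-d vector a standard classifier can consume."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create(nfeatures=n_keypoints)
    _, desc = sift.detectAndCompute(img, None)
    if desc is None:                  # no keypoints found
        return np.zeros(128)
    return desc.mean(axis=0)

# train_paths / train_labels are placeholders for a labeled ear dataset
# X = np.array([sift_feature(p) for p in train_paths])
# clf = AdaBoostClassifier(n_estimators=100).fit(X, train_labels)
# prediction = clf.predict([sift_feature(test_path)])
```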
The aim of this study is to investigate the presence of some heavy metals (lead, cadmium, and chromium) in colored plastic table dishes and to study the migration of these metals to food meals and the factors affecting migration, such as storage period and food temperature. Six kinds of colored plastic table dishes were collected from Baghdad markets. The heavy metals in the table dishes and in the prepared food meals placed in them were estimated using an atomic absorption spectrophotometer (Shimadzu A5000). The results indicated the presence of lead in all samples (1.00-1.61 mg/kg) and chromium in three samples (0.85-0.97 mg/kg), while the other samples were free of chromium and cadmium. The migration of these metals to food at different storage periods and temperatures was also investigated.
In the reverse engineering approach, a massive amount of point data is gathered during data acquisition, which leads to larger file sizes and longer data handling times. In addition, fitting surfaces to these data points is time-consuming and demands particular skills. In the present work, a method for obtaining the control points of any profile is presented: several image modification steps are performed using the SolidWorks program, and a parametric equation of the proposed profile is derived using the Bezier technique with the adopted control points. Finally, the proposed profile was machined using a 3-axis CNC milling machine, and a dimensional comparison was made between the machined profile and the proposed one.
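To make the Bezier step concrete, here is a minimal evaluation of a parametric Bezier curve from a set of control points using de Casteljau's algorithm; the four control points are hypothetical, standing in for those extracted from the profile image.

```python
import numpy as np

def bezier_curve(control_points, n_samples=100):
    """Evaluate a Bezier curve of arbitrary degree by repeated
    linear interpolation between control points (de Casteljau)."""
    pts = np.asarray(control_points, dtype=float)
    curve = []
    for t in np.linspace(0.0, 1.0, n_samples):
        p = pts.copy()
        while len(p) > 1:
            p = (1.0 - t) * p[:-1] + t * p[1:]
        curve.append(p[0])
    return np.array(curve)            # (n_samples, 2) x-y points

# hypothetical control points for a profile cross-section
profile = bezier_curve([(0, 0), (1, 2), (3, 2), (4, 0)])
```

The sampled points could then serve as the profile geometry for the CNC machining step.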
Groupwise non-rigid image alignment is a difficult non-linear optimization problem involving many parameters and often large datasets. Previous methods have explored various metrics and optimization strategies. Good results have previously been achieved with simple metrics requiring complex optimization, often with many unintuitive parameters that need careful tuning for each dataset. In this chapter, the problem is restructured to use a simpler, iterative optimization algorithm with very few free parameters. The warps are refined using an iterative Levenberg-Marquardt minimization to the mean, based on updating the locations of a small number of points and incorporating a stiffness constraint. This optimization approach is efficient.
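As a toy stand-in for the chapter's scheme (which warps each image toward the evolving mean), the sketch below collapses the data term to a distance-to-target on 1-D point locations, so only the Levenberg-Marquardt mechanics and the stiffness term are visible; the damping factor is held fixed rather than adapted, and all names are illustrative.

```python
import numpy as np

def lm_refine(points, target, stiffness=0.5, lam=1e-2, n_iters=20):
    """Levenberg-Marquardt refinement of point locations with a
    stiffness penalty on differences between neighbouring points."""
    p = points.copy()
    n = len(p)
    L = (np.eye(n) - np.eye(n, k=1))[:-1]       # neighbour differences

    def residuals(p):
        data = p - target                       # stand-in data term
        smooth = stiffness * (L @ p)            # stiffness term
        return np.concatenate([data, smooth])

    J = np.vstack([np.eye(n), stiffness * L])   # constant Jacobian here
    for _ in range(n_iters):
        r = residuals(p)
        # LM step: (J^T J + lam I) dp = -J^T r
        dp = np.linalg.solve(J.T @ J + lam * np.eye(n), -J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < 1e-8:
            break
    return p

# noisy point locations pulled toward a smooth target configuration
print(lm_refine(np.array([0.0, 1.3, 1.9, 3.2, 4.1]),
                np.array([0.0, 1.0, 2.0, 3.0, 4.0])))
```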
In this paper, two main stages for image classification are presented. The training stage consists of collecting images of interest and applying BOVW to them (feature extraction and description using SIFT, and vocabulary generation), while the testing stage classifies a new unlabeled image by applying nearest-neighbor classification to its feature descriptors. The supervised bag of visual words gives good results, presented clearly in the experimental part, where unlabeled images are classified correctly even though only a small number of images are used in the training process.
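The pipeline described above maps directly onto standard components; a hedged sketch follows, assuming OpenCV SIFT, a k-means vocabulary, and a 1-nearest-neighbour classifier, with the dataset paths and the vocabulary size (200) as placeholders.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def bovw_histogram(img_path, kmeans):
    """Quantize an image's SIFT descriptors against the learned
    vocabulary and return a normalized visual-word histogram."""
    img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create().detectAndCompute(img, None)
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# training stage (paths/labels are placeholders for the labeled set):
# all_desc = np.vstack([...SIFT descriptors of every training image...])
# kmeans = KMeans(n_clusters=200).fit(all_desc)        # vocabulary
# X = [bovw_histogram(p, kmeans) for p in train_paths]
# clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
# testing stage:
# label = clf.predict([bovw_histogram(test_path, kmeans)])
```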
Lowpass spatial filters are adopted to match the noise statistics of the degradation, seeking good-quality smoothed images. This study employs different sizes and shapes of smoothing windows. It shows that a square-frame window shape gives good-quality smoothing while preserving a certain level of high-frequency components, in comparison with standard smoothing filters.
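The exact frame geometry is not given in the abstract; the sketch below assumes one plausible square-frame kernel (uniform weights on the window border, zeros inside) and contrasts it with the standard box filter of the same size.

```python
import numpy as np
from scipy.ndimage import convolve

def frame_kernel(size=5):
    """Square-frame smoothing window: equal weights on the border
    of the window, zeros inside, normalized to sum to one."""
    k = np.ones((size, size))
    k[1:-1, 1:-1] = 0.0
    return k / k.sum()

# 'noisy' is a placeholder 2-D grayscale image array
# frame_smoothed = convolve(noisy, frame_kernel(5), mode='reflect')
# box_smoothed   = convolve(noisy, np.full((5, 5), 1 / 25.0), mode='reflect')
```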