A common approach to color image compression starts by transforming the red, green, and blue (RGB) color model into a desired color model, then applying compression techniques, and finally transforming the results back into the RGB model. In this paper, a new color image compression method based on multilevel block truncation coding (MBTC) and vector quantization is presented. By exploiting the human visual system's response to color, a bit allocation process is implemented to distribute the encoding bits more effectively.
To improve the performance of vector quantization (VQ), several modifications have been implemented. To combine the low computational cost and edge-preservation properties of MBTC with the high compression ratio and good subjective performance of the modified VQ, a hybrid MBTC-modified VQ color image compression method is presented. The analysis results indicate that the suggested method performs better: the reconstructed images are less distorted and are compressed at a higher ratio (59:1).
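As a minimal illustration of the block-truncation idea that MBTC builds on, the sketch below encodes one grayscale block with classic two-level BTC (thresholding at the block mean while preserving the block mean and standard deviation). It is not the paper's multilevel method or its bit-allocation scheme; the block size and function names are assumptions.

```python
import numpy as np

def btc_encode(block):
    """Two-level block truncation coding of one grayscale block.

    Returns a bit plane plus two reconstruction levels chosen so that
    the block mean and standard deviation are preserved."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean            # 1 where the pixel is above the block mean
    q, m = bitmap.sum(), block.size   # number of "high" pixels / total pixels
    if q == 0 or q == m:              # flat block: both levels equal the mean
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (m - q))
    high = mean + std * np.sqrt((m - q) / q)
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    """Reconstruct the block from the bit plane and the two levels."""
    return np.where(bitmap, high, low)

# Example on a single 4x4 block
block = np.random.randint(0, 256, (4, 4)).astype(float)
bitmap, low, high = btc_encode(block)
reconstructed = btc_decode(bitmap, low, high)
```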
Fingerprints are commonly used as a key technique for personal recognition and in identification systems for personal security. The most widely used fingerprint systems rely on the distribution of minutiae points for fingerprint representation and matching. These techniques fail when only partial fingerprint images are captured, or when the finger ridges suffer from many cuts, injuries, or skin diseases. This paper suggests a fingerprint recognition technique that uses local features for fingerprint representation and matching. The adopted local features are determined using Haar wavelet subbands. The system was tested experimentally using the FVC2004 databases, which consist of four datasets, each set holds
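As a rough sketch of deriving local features from Haar wavelet subbands (the abstract does not specify the exact feature computation, so the per-subband statistics and the use of the PyWavelets library below are assumptions, not the paper's method):

```python
import numpy as np
import pywt  # PyWavelets; assumed dependency for the Haar transform

def haar_subband_features(image, levels=2):
    """Decompose a grayscale fingerprint image with the Haar wavelet and
    collect simple per-subband statistics as a feature vector."""
    coeffs = pywt.wavedec2(image, 'haar', level=levels)
    approx = coeffs[0]                        # coarse approximation subband
    features = [approx.mean(), approx.std()]
    for detail_level in coeffs[1:]:           # each level holds (LH, HL, HH)
        for subband in detail_level:
            features += [np.abs(subband).mean(), subband.std()]
    return np.asarray(features)

def match_score(f1, f2):
    """Smaller Euclidean distance means a closer match (illustrative only)."""
    return np.linalg.norm(f1 - f2)
```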
With the rapid development of computers and network technologies, the security of information on the internet is compromised and many threats may affect the integrity of such information. Many researchers have focused their work on providing solutions to this threat. Machine learning and data mining are widely used in anomaly-detection schemes to decide whether or not a malicious activity is taking place on a network. In this paper, a hierarchical classification scheme for anomaly-based intrusion detection is proposed. Two levels of feature selection and classification are used. In the first level, a global feature vector for detecting the basic attacks (DoS, U2R, R2L, and Probe) is selected. In the second level, four local feature vectors
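The abstract does not state which classifiers or feature-selection methods are used at each level, so the two-stage structure below (a global model that routes samples to per-category models trained on locally selected features) is only an assumed illustration of hierarchical anomaly classification, sketched with scikit-learn; the class name, feature counts, and choice of random forests are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

class HierarchicalIDS:
    """Level 1: global features decide the coarse label (normal, DoS, U2R, R2L, Probe).
    Level 2: one classifier per attack category, trained on its own local feature subset."""

    def __init__(self, k_global=20, k_local=10):
        self.global_selector = SelectKBest(f_classif, k=k_global)
        self.global_clf = RandomForestClassifier(n_estimators=100)
        self.local = {}              # category -> (selector, classifier)
        self.k_local = k_local

    def fit(self, X, coarse_y, fine_y):
        # X, coarse_y, fine_y are numpy arrays (samples x features, coarse labels, fine labels)
        Xg = self.global_selector.fit_transform(X, coarse_y)
        self.global_clf.fit(Xg, coarse_y)
        for category in np.unique(coarse_y):
            if category == "normal":
                continue
            mask = coarse_y == category
            sel = SelectKBest(f_classif, k=self.k_local)
            Xl = sel.fit_transform(X[mask], fine_y[mask])
            clf = RandomForestClassifier(n_estimators=100).fit(Xl, fine_y[mask])
            self.local[category] = (sel, clf)

    def predict(self, X):
        coarse = self.global_clf.predict(self.global_selector.transform(X))
        fine = coarse.astype(object)
        for category, (sel, clf) in self.local.items():
            mask = coarse == category
            if mask.any():
                fine[mask] = clf.predict(sel.transform(X[mask]))
        return coarse, fine
```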
Ultimate oil recovery and displacement efficiency at the pore scale are controlled by rock wettability; thus there is a growing interest in the wetting behaviour of reservoir rocks, as production from fractured oil-wet or mixed-wet limestone formations remains a key challenge. Conventional waterflooding methods are inefficient in such formations due to poor spontaneous imbibition of water into the oil-wet rock capillaries. However, altering the wettability to water-wet could yield significant amounts of additional oil; thus this study investigates the influence of nanoparticles on wettability alteration. The efficiency of various formulated zirconium-oxide (ZrO2) based nanofluids at different nanoparticle concentrations (0
Abstract
β-thalassemia major is a genetic disease that causes a severe defect in normal hemoglobin synthesis. Patients with β-thalassemia major need periodic blood transfusions, which can result in accumulation of body iron, so treatment with an iron-chelating agent is required. Complications of this iron overload affect many vital organs, including the liver. The aim of this work was to evaluate liver enzymes in β-thalassemia major patients with deferasirox versus without it. Two groups of β-thalassemia major patients were involved in this study: group A, 40 transfusion-dependent β-thalassemia patients without deferasirox, and group B, 40 transfusion-dependent β-thalassemia patients on deferasirox.
We consider the problem of calibrating the range measurements of a Light Detection and Ranging (lidar) sensor while dealing with the sensor's nonlinearity and heteroskedastic, range-dependent measurement error. We solve the calibration problem without using additional hardware, instead exploiting assumptions on the environment surrounding the sensor during the calibration procedure. More specifically, we assume the sensor is calibrated by placing it in an environment such that its measurements lie in a 2D plane parallel to the ground. Its measurements then come from fixed objects that develop orthogonally w.r.t. the ground, so that they may be considered fixed points in an inertial reference frame. Moreover
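The abstract does not give the estimator, but one common way to handle a nonlinear range response with range-dependent (heteroskedastic) noise is a weighted least-squares fit of a polynomial correction, where each sample is down-weighted according to an assumed noise model. The sketch below is such an assumed illustration only; the noise-model coefficients, polynomial degree, and function names are hypothetical and not the paper's method.

```python
import numpy as np

def calibrate_ranges(measured, true_range, degree=2):
    """Fit a polynomial correction r_true ~ p(r_measured) by weighted least
    squares, assuming a range-dependent noise std sigma(r) = a + b*r."""
    a, b = 0.01, 0.005                  # assumed noise-model coefficients (illustrative)
    sigma = a + b * measured
    # numpy.polyfit weights multiply the unsquared residuals, so pass 1/sigma
    coeffs = np.polyfit(measured, true_range, deg=degree, w=1.0 / sigma)
    return np.poly1d(coeffs)

# Usage: correction = calibrate_ranges(raw_ranges, reference_ranges)
#        corrected = correction(raw_ranges)
```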
One of the principal concepts for understanding any hydrocarbon field is the heterogeneity scale; this becomes particularly challenging in supergiant oil fields with medium to low lateral connectivity and carbonate reservoir rocks.
The main objective of this study is to quantify the heterogeneity for any well in question and to propagate it to the full reservoir. This is quite useful, specifically prior to conducting detailed water-flooding or full-field development studies, in order to prepare a design and exploitation plan that fits the level of heterogeneity of this formation.
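The abstract does not name the heterogeneity measure used; the Lorenz coefficient is one standard way to reduce a well's core permeability, porosity, and thickness profile to a single heterogeneity number, and the sketch below is offered only as an assumed illustration of that kind of per-well quantification (input names are hypothetical).

```python
import numpy as np

def lorenz_coefficient(perm, poro, thickness):
    """Lorenz coefficient: 0 for a homogeneous interval, approaching 1 for
    a maximally heterogeneous one."""
    perm, poro, thickness = map(np.asarray, (perm, poro, thickness))
    order = np.argsort(perm / poro)[::-1]            # best flow units first
    k, phi, h = perm[order], poro[order], thickness[order]
    flow = np.cumsum(k * h) / np.sum(k * h)          # cumulative flow capacity
    storage = np.cumsum(phi * h) / np.sum(phi * h)   # cumulative storage capacity
    flow = np.insert(flow, 0, 0.0)
    storage = np.insert(storage, 0, 0.0)
    # Twice the area between the flow-capacity curve and the 45-degree line
    return 2.0 * (np.trapz(flow, storage) - 0.5)
```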
Microalgae have been increasingly used for wastewater treatment due to their capacity to assimilate nutrients. Samples of wastewater were taken from the Erbil wastewater channel near Dhahibha village in northern Iraq. The microalga Coelastrella sp. was used at three doses (0.2, 1, and 2 g L-1) in this experiment for 21 days; samples were periodically (every 3 days) analyzed for physicochemical parameters such as pH, EC, phosphate, nitrate, and BOD5, in addition to chlorophyll a concentration. Results showed that the highest dose, 2 g L-1, was the most effective for removing nutrients, confirmed by significant differences (p ≤ 0.05) between all doses. The highest removal percentage was
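For reference, the nutrient removal percentage reported in such experiments is conventionally computed as

\[
\text{Removal}(\%) = \frac{C_0 - C_t}{C_0} \times 100
\]

where \(C_0\) is the initial concentration of the parameter (e.g., phosphate or nitrate) and \(C_t\) its concentration at sampling time \(t\); the abstract does not state the formula explicitly, so this is the standard definition rather than a detail taken from the paper.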
Computer vision seeks to mimic the human visual system and plays an essential role in artificial intelligence. It is based on different signal-preprocessing techniques; therefore, developing efficient techniques becomes essential to achieving fast and reliable processing. Various signal preprocessing operations have been used for computer vision, including smoothing, signal analysis, resizing, sharpening, and enhancement, to reduce unwanted distortions and to support segmentation and image feature improvement. For example, to reduce the noise in a disturbed signal, smoothing kernels can be used effectively. This is achieved by convolving the disturbed signal with smoothing kernels. In addition, orthogonal moments (OMs) are a crucial
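As a small illustration of the smoothing-by-convolution step mentioned above (a sketch only; the box kernel and the use of SciPy are assumptions, not details from the paper):

```python
import numpy as np
from scipy.signal import convolve2d  # assumed dependency for 2D convolution

def smooth(image, size=3):
    """Reduce noise by convolving the image with a normalized box (averaging) kernel."""
    kernel = np.ones((size, size)) / (size * size)
    return convolve2d(image, kernel, mode='same', boundary='symm')

noisy = np.random.rand(64, 64)   # stand-in for a noisy grayscale image
smoothed = smooth(noisy, size=5)
```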