This paper presents a combination of enhancement techniques for fingerprint images affected by different types of noise. These techniques were applied to improve image quality and achieve acceptable image contrast. The proposed method includes five enhancement techniques: Normalization, Histogram Equalization, Binarization, Skeletonization and Fusion. The Normalization process standardized the pixel intensities, which facilitated the subsequent image enhancement stages. The Histogram Equalization technique then increased the contrast of the images. Furthermore, the Binarization and Skeletonization techniques were applied to differentiate between the ridge and valley structures and to obtain one-pixel-wide lines. Finally, the Fusion technique merged the results of the Histogram Equalization process with those of the Skeletonization process to obtain the new high-contrast images. The proposed method was tested on images of varying quality from the National Institute of Standards and Technology (NIST) Special Database 14. The experimental results are very encouraging, and the enhancement method proved effective in improving images of different quality.
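The first three stages of such a pipeline can be sketched as follows. This is a minimal illustration, not the authors' exact implementation; the target mean/variance and the global threshold are assumed parameter choices for demonstration, and the Skeletonization and Fusion stages are omitted:

```python
import numpy as np

def normalize(img, target_mean=100.0, target_var=100.0):
    """Standardize pixel intensities to a desired mean and variance,
    a common fingerprint pre-processing step."""
    mean, var = img.mean(), img.var()
    dev = np.sqrt(target_var * (img - mean) ** 2 / var)
    return np.where(img > mean, target_mean + dev, target_mean - dev)

def histogram_equalize(img):
    """Spread the intensity histogram over [0, 255] to raise global contrast."""
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[img]

def binarize(img, threshold=None):
    """Separate ridges (dark) from valleys (bright) with a global threshold."""
    t = img.mean() if threshold is None else threshold
    return (img < t).astype(np.uint8)  # 1 = ridge, 0 = valley

# Synthetic grayscale patch standing in for a fingerprint image
rng = np.random.default_rng(0)
fp = rng.integers(0, 256, size=(64, 64)).astype(float)
ridges = binarize(histogram_equalize(normalize(fp)))
print(ridges.shape)
```

In a real pipeline the binary ridge map would then be thinned to one-pixel-wide lines (skeletonization) before fusion with the equalized image.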
The aim of this paper is to propose an efficient three-step iterative method for finding the zeros of the nonlinear equation f(x)=0. Starting with a suitably chosen initial approximation, the method generates a sequence of iterates converging to the root. The convergence analysis establishes its fifth order of convergence. Several examples are given to illustrate the efficiency of the proposed new method and to compare it with other methods.
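The paper's specific three-step fifth-order scheme is not reproduced here; as a minimal illustration of the general iterative framework for solving f(x) = 0, the classical Newton iteration (one step per iteration, second-order convergence) can be sketched as:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{k+1} = x_k - f(x_k)/f'(x_k) until |f(x_k)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x

# Example: root of f(x) = x^3 - 2x - 5 (Wallis' classic test equation)
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
print(round(root, 6))  # ≈ 2.094551
```

Higher-order multi-step methods combine several such sub-steps per iteration, reusing function and derivative evaluations to raise the convergence order.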
The Web service security challenge is to understand and assess the risk involved in securing a web-based service today, based on our existing security technology, and at the same time track emerging standards and understand how they will be used to offset the risk in new web services. Any security model must illustrate how data can flow through an application and network topology to meet the requirements defined by the business without exposing the data to undue risk. In this paper we propose
This manuscript presents a new approach to accurately calculating the exponential integral function, which arises in many applications such as contamination, groundwater flow, hydrological problems and mathematical physics. The calculation is obtained with easily computed components and without any restrictive assumptions.
A detailed comparison of the execution times is performed. The results calculated by the suggested approach are more accurate and converge faster than those calculated by other methods. Error analysis of the calculations is carried out using the absolute error, and high convergence is achieved. The suggested approach outperforms all previous methods used to calculate this function, and this decision is
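One standard way to compute the exponential integral with easily computed components is a power-series expansion. The sketch below uses the classical series for E1(x), valid for small-to-moderate x > 0; it illustrates the function being computed, not necessarily the paper's own approach:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def exp_integral_E1(x, terms=60):
    """Series expansion E1(x) = -γ - ln(x) - Σ_{n≥1} (-x)^n / (n · n!)."""
    s = 0.0
    term = 1.0
    for n in range(1, terms + 1):
        term *= -x / n        # term now holds (-x)^n / n!
        s += term / n
    return -EULER_GAMMA - math.log(x) - s

print(round(exp_integral_E1(1.0), 9))  # ≈ 0.219383934
```

For large x the series converges slowly, which is why alternative representations (continued fractions, asymptotic expansions) are commonly used in that regime.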
In this study, we have created a new Arabic dataset annotated according to Ekman’s basic emotions (Anger, Disgust, Fear, Happiness, Sadness and Surprise). This dataset is composed of Facebook posts written in the Iraqi dialect. We evaluated the quality of this dataset using four external judges, which resulted in an average inter-annotator agreement of 0.751. We then explored six different supervised machine learning methods to test the new dataset: the Weka standard classifiers ZeroR, J48, Naïve Bayes, Multinomial Naïve Bayes for Text, and SMO, together with a further compression-based classifier called PPM not included in Weka. Our study reveals that the PPM classifier significantly outperforms other classifiers such as SVM and N
The accurate localization of the basic components of human faces (i.e., eyebrows, eyes, nose, mouth, etc.) from images is an important step in face processing techniques such as face tracking, facial expression recognition and face recognition. However, it is a challenging task due to variations in scale, orientation, pose, facial expression, partial occlusion and lighting conditions. In the current paper, a scheme comprising three hierarchical stages for facial component extraction is presented; it works regardless of illumination variance. Adaptive contrast enhancement methods like gamma correction and contrast stretching are used to simulate the variance in lighting conditions among images. As testing material
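Gamma correction and contrast stretching, as used here to simulate lighting variation, can be sketched as follows; the percentile limits and the gamma value are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gamma_correction(img, gamma):
    """Power-law remap I_out = 255 * (I_in/255)^gamma.
    gamma < 1 brightens the image, gamma > 1 darkens it."""
    return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Linearly stretch intensities between two percentiles onto [0, 255]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = np.clip((img - lo) * 255.0 / (hi - lo), 0, 255)
    return out.astype(np.uint8)

# Synthetic low-contrast patch standing in for a face image
rng = np.random.default_rng(1)
face = rng.integers(60, 180, size=(32, 32)).astype(float)
bright = gamma_correction(face, 0.5)
stretched = contrast_stretch(face)
print(stretched.min(), stretched.max())  # stretched spans the full range
```

Varying the gamma value up and down from 1.0 yields darker and brighter versions of the same image, which is one simple way to emulate different lighting conditions during testing.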
Steganography is the art of secret communication. Its purpose is to hide the presence of information, using, for example, images as covers. The frequency domain is well suited for embedding in images, since hiding in frequency-domain coefficients is robust to many attacks. This paper proposes hiding a secret image of size equal to one quarter of the cover image. The Set Partitioning in Hierarchical Trees (SPIHT) codec is used to code the secret image to achieve security. The proposed method applies the Discrete Multiwavelet Transform (DMWT) to the cover image. The coded bit stream of the secret image is embedded in the high-frequency subbands of the transformed cover image. Two scaling factors in the frequency domain control the quality of the stego
In this paper, a robust invisible watermarking system for digital video encoded by MPEG-4 is presented. The proposed scheme hides the watermark by embedding a secret message in the sprite area allocated in the reference frame (I-frame). The proposed system consists of two main units: (i) an embedding unit and (ii) an extraction unit. In the embedding unit, the system locates the sprite blocks using motion compensation information. The allocated sprite area in each I-frame is used as the hosting area for embedding the watermark data. In the extraction unit, the system extracts the watermark data in order to check the authentication and ownership of the video. The watermark data embedding method is block-average modulation applied on the RGB dom
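Block-average modulation can be illustrated with a simple quantization-style scheme in which a bit is encoded in the quantized mean of a pixel block. The step size DELTA and the uniform-shift embedding are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

DELTA = 4.0  # quantization step (assumed parameter)

def embed_bit(block, bit):
    """Move the block mean onto an even (bit 0) or odd (bit 1)
    multiple of DELTA by shifting every pixel uniformly."""
    m = block.mean()
    k = np.round((m - bit * DELTA) / (2 * DELTA))
    target = (2 * k + bit) * DELTA
    return block + (target - m)  # uniform shift sets the mean to target

def extract_bit(block):
    """Recover the bit from the parity of the quantized block mean."""
    return int(np.round(block.mean() / DELTA)) % 2

# Embed and recover one bit in a synthetic 8x8 host block
rng = np.random.default_rng(2)
host = rng.integers(0, 256, size=(8, 8)).astype(float)
marked = embed_bit(host, 1)
print(extract_bit(marked))  # 1
```

Larger DELTA values survive stronger distortion (e.g. recompression) at the cost of a more visible mean shift, which is the usual robustness/invisibility trade-off in such schemes.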
The present paper is concerned with the minimax shrinkage estimator technique for estimating the shape parameter of the Burr Type X distribution, when prior information about the true shape parameter is available as an initial estimate and the scale parameter is known. Expressions for the bias ratio, mean squared error and relative efficiency are derived. Numerical results and conclusions for these expressions are presented, and the proposed estimator is compared with the most recent works.
This work presents the development and implementation of a programmable model for evaluating the pumping technique and spectroscopic properties of solid-state lasers, as well as the design and construction of a suitable software program to simulate these techniques. A new approach for Diode-Pumped Solid-State Laser (DPSSL) systems is studied in order to build the optimum path technology and to manufacture a new solid-state laser gain medium. From this model, the threshold input power, output power, optimum transmission, slope efficiency and available power were predicted. Different system configurations of diode-pumped solid-state lasers for side pumping and end pumping using different gain-medium shapes (rod, slab, disk) were considered; the three main parameters are (energy transfer efficie