In Automatic Speech Recognition (ASR), the non-linear data projection provided by a one-hidden-layer Multilayer Perceptron (MLP) trained to recognize phonemes has, in previous experiments, provided feature enhancement that substantially increased ASR performance, especially in noise. Previous attempts to apply an analogous approach to speaker identification have not succeeded in improving performance, except by combining MLP-processed features with other features. We present test results for the TIMIT database which show that the advantage of MLP preprocessing for open-set speaker identification increases with the number of speakers used to train the MLP, and that improved identification is obtained as this number increases beyond sixty. We also present a method for selecting the speakers used for MLP training which further improves identification performance.
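A minimal sketch of the idea described above, under illustrative assumptions (synthetic MFCC-like frames, hypothetical phoneme labels, scikit-learn as the toolkit): a one-hidden-layer MLP is trained on phoneme labels, and its hidden activations then serve as the non-linear "MLP-processed" features for a downstream speaker-identification model. This is not the paper's implementation.

```python
# Sketch: hidden-layer activations of a phoneme-trained MLP as features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 13))            # e.g. 13 cepstral coefficients/frame
phonemes = rng.integers(0, 40, size=1000)  # hypothetical phoneme labels

mlp = MLPClassifier(hidden_layer_sizes=(100,), activation="logistic",
                    max_iter=300).fit(X, phonemes)

# The non-linear projection: logistic hidden units computed from the weights.
W, b = mlp.coefs_[0], mlp.intercepts_[0]
hidden_features = 1.0 / (1.0 + np.exp(-(X @ W + b)))
print(hidden_features.shape)               # (1000, 100): new feature vectors
```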
Compressing speech reduces data storage requirements and thus the time needed to transmit digitized speech over long-haul links such as the Internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2-D linear convolution. The fast computation algorithms introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (one- and two-dimensional) on speech compression. DWT and MCT performance in terms of compression…
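An illustrative sketch of the DWT side of this pipeline (the MCT itself is paper-specific and not shown), assuming PyWavelets is available and using a synthetic stand-in signal: decompose, discard small coefficients, and measure the resulting compression ratio and reconstruction quality.

```python
# Sketch: 1-D multilevel DWT speech compression via coefficient thresholding.
import numpy as np
import pywt

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t)    # stand-in "speech"

coeffs = pywt.wavedec(speech, "db4", level=4)            # multilevel DWT
flat, slices = pywt.coeffs_to_array(coeffs)

threshold = 0.05 * np.abs(flat).max()                    # keep large coeffs
compressed = np.where(np.abs(flat) >= threshold, flat, 0.0)

cr = flat.size / np.count_nonzero(compressed)            # compression ratio
rec = pywt.waverec(
    pywt.array_to_coeffs(compressed, slices, output_format="wavedec"), "db4")
snr = 10 * np.log10(np.sum(speech**2)
                    / np.sum((speech - rec[:speech.size])**2))
print(f"CR = {cr:.1f}, SNR = {snr:.1f} dB")
```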
Eye detection is used in many applications such as pattern recognition, biometrics, surveillance systems, and many other systems. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image, based on two principles: Helmholtz and Gestalt. According to the Helmholtz principle of perception, an observed geometric shape is perceptually "meaningful" if its expected number of occurrences in an image with a random distribution is very small. To achieve this goal, the Gestalt principle states that humans perceive things either by grouping similar elements or by recognizing patterns. In general, according to the Gestalt principle, humans see things through genera…
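A toy illustration of the Helmholtz meaningfulness test described above (an assumption-laden sketch, not the paper's detector): a configuration is declared meaningful when its expected number of occurrences under a random model, often called the number of false alarms (NFA), falls below a small threshold. All numbers here are invented.

```python
# Sketch: number of false alarms (NFA) for a candidate geometric configuration.
from math import comb

def nfa(n_tests: int, k: int, n: int, p: float) -> float:
    """Expected count, over n_tests candidates, of configurations where at
    least k of n points agree by chance, each with probability p."""
    tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    return n_tests * tail

# e.g. 10,000 candidate eye-shaped regions, 15 of 20 boundary points aligned,
# each aligning by chance with probability 0.2:
print(nfa(10_000, 15, 20, 0.2))   # result << 1  =>  perceptually meaningful
```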
The penalized least squares method is a popular approach to high-dimensional data, where the number of explanatory variables is larger than the sample size. Its appealing properties are high prediction accuracy and the ability to perform estimation and variable selection at once. The penalized least squares method yields a sparse model, that is, a model with few variables, which can be interpreted easily. However, penalized least squares is not robust: it is very sensitive to the presence of outlying observations. To deal with this problem, a robust loss function can be used to obtain a robust penalized least squares method, and hence a robust penalized estimator and…
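A minimal sketch of the robust penalized idea, assuming one concrete combination (Huber loss with an L1 penalty, via scikit-learn's SGDRegressor); this is an illustration of the general technique, not the paper's specific estimator.

```python
# Sketch: robust (Huber) loss + L1 penalty in a high-dimensional setting.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)
n, p = 50, 200                       # n << p: more variables than samples
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 3.0   # sparse true coefficient vector
y = X @ beta + rng.normal(size=n)
y[:3] += 30.0                        # inject outlying observations

model = SGDRegressor(loss="huber", penalty="l1", alpha=0.01,
                     max_iter=5000, tol=1e-4).fit(X, y)
print(np.count_nonzero(model.coef_), "nonzero coefficients")  # sparse fit
```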
A new distribution, the Epsilon Skew Gamma (ESΓ) distribution, first introduced by Abdulah [1], is applied to near-Gamma data. We first restate the ESΓ distribution, its properties, and its characteristics; we then estimate its parameters using the maximum likelihood and moment estimators. Finally, we use these estimators to fit the data with the ESΓ distribution.
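A generic maximum-likelihood sketch of the kind of estimation used above. The ESΓ density is defined in the paper, so a plain Gamma stands in here as an assumed placeholder; substituting the ESΓ log-density into neg_loglik() would give the ESΓ MLE.

```python
# Sketch: numerical MLE by minimizing the negative log-likelihood.
import numpy as np
from scipy import stats, optimize

data = stats.gamma.rvs(a=2.0, scale=1.5, size=500, random_state=0)

def neg_loglik(theta):
    shape, scale = theta
    if shape <= 0 or scale <= 0:       # keep the optimizer in-bounds
        return np.inf
    return -np.sum(stats.gamma.logpdf(data, a=shape, scale=scale))

result = optimize.minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
print("MLE (shape, scale):", result.x)
```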
In recent years, social media has grown widely and visibly as a medium through which users express their emotions and feelings in thousands of posts and comments about tourism companies. As a consequence, it has become difficult for tourists to read all the comments to determine whether these opinions are positive or negative when assessing the success of a tourism company. In this paper, a modest model is proposed to assess e-tourism companies using Iraqi-dialect reviews collected from Facebook. The reviews are analyzed using text mining techniques for sentiment classification. The generated sentiment words are classified into positive, negative, and neutral comments by utilizing Rough Set Theory, Naïve Bayes, and K-Nearest Neighbor.
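A compact sketch of the classification stage described above, using a bag-of-words Naive Bayes pipeline from scikit-learn; the reviews below are invented English stand-ins for the Iraqi-dialect Facebook comments, and the pipeline is an illustration rather than the paper's exact system.

```python
# Sketch: sentiment classification of short reviews with Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["excellent trip, great service", "terrible hotel, never again",
           "average experience", "wonderful tour guide",
           "awful booking process"]
labels = ["positive", "negative", "neutral", "positive", "negative"]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(reviews, labels)
print(clf.predict(["great service but awful hotel"]))
```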
A newly developed FIA merging-zones spectrophotometric system is presented. The method is rapid, accurate, and sensitive for the determination of metformin hydrochloride through the oxidation of 1-naphthol by sodium hypochlorite and coupling with metformin HCl in the presence of sodium hydroxide to form a blue soluble ion pair; this product was determined at 580 nm using a homemade CFIA merging-zones technique. Data treatment shows that the linear range is 0.5-35 µg/mL. The optimum values of the various chemical and physical conditions of the [MTF-NaOCl-α-naphthol-NaOH] system were investigated. The LOD was 0.01 µg/mL and the LOQ 0.1 µg/mL, from the lowest concentration of the calibration graph, with an r² of 99.18%, and the RSD% did…
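Illustrative calibration-curve arithmetic for a method of this kind: fit absorbance against concentration over the linear range, then estimate LOD and LOQ from the blank noise and the slope. The absorbance values and blank standard deviation below are invented for demonstration, not the paper's data.

```python
# Sketch: linear calibration fit and LOD/LOQ from the 3.3*sigma/S convention.
import numpy as np

conc = np.array([0.5, 5, 10, 20, 35])                        # µg/mL
absorbance = np.array([0.012, 0.118, 0.239, 0.475, 0.832])   # at 580 nm

slope, intercept = np.polyfit(conc, absorbance, 1)
r2 = np.corrcoef(conc, absorbance)[0, 1] ** 2

sigma_blank = 0.001                   # assumed blank standard deviation
lod = 3.3 * sigma_blank / slope       # limit of detection
loq = 10 * sigma_blank / slope        # limit of quantification
print(f"r² = {r2:.4f}, LOD = {lod:.3f} µg/mL, LOQ = {loq:.3f} µg/mL")
```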
The research comprises five sections. The first section contains the introduction to the research and its importance; it addresses the importance of gymnastics, of the parallel bars event and its skills, and of learning them. The research problem is that there is difficulty in learning this skill; among the most important reasons may be fear and apprehension of falling and injury, and a weak kinesthetic sense of the movement, which is one of the obstacles to completing the skill. The goal of the research is to design a device that helps in learning the skill of the forward dismount with a half turn, according to the typical movement path, on the parallel bars apparatus in technical men's…
Among metaheuristic algorithms, population-based algorithms are explorative search algorithms superior to local search algorithms in terms of exploring the search space to find globally optimal solutions. However, the primary downside of such algorithms is their low exploitative capability, which prevents the refinement of the search-space neighborhood toward more optimal solutions. The firefly algorithm (FA) is a population-based algorithm that has been widely used in clustering problems. However, FA is limited by premature convergence when no neighborhood search strategies are employed to improve the quality of clustering solutions in the neighborhood region and to explore the global regions of the search space. On the…
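A bare-bones sketch of the standard firefly algorithm referenced above (not the paper's hybrid variant): each firefly moves toward brighter, i.e. better, fireflies, with attractiveness decaying with squared distance and a small random perturbation providing exploration. All parameter values are illustrative.

```python
# Sketch: minimal firefly algorithm minimizing an objective function f.
import numpy as np

def firefly_minimize(f, dim=2, n=15, iters=100,
                     beta0=1.0, gamma=1.0, alpha=0.2):
    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, size=(n, dim))          # initial population
    for _ in range(iters):
        brightness = np.array([f(x) for x in X])   # lower value = brighter
        for i in range(n):
            for j in range(n):
                if brightness[j] < brightness[i]:  # move i toward brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
        alpha *= 0.97                              # cool the random walk
    return X[np.argmin([f(x) for x in X])]

print(firefly_minimize(lambda x: np.sum(x ** 2)))  # should approach the origin
```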
Human Interactive Proofs (HIPs) are automatic inverse Turing tests intended to differentiate between people and malicious computer programs. Building a good HIP system is a challenging task, since the resulting HIP must be secure against attacks while at the same time remaining practical for humans. Text-based HIPs are one of the most popular HIP types; they exploit the capability of humans to read text images better than Optical Character Recognition (OCR). However, current text-based HIPs are not well matched to the rapid development of computer vision techniques, since they are either very easily passed or very hard to solve; this motivates…
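A toy text-based HIP generator in the spirit described above, assuming Pillow is available; real HIPs use far stronger distortions, so this is only a sketch of the render-then-degrade idea.

```python
# Sketch: render a random challenge string, then add occluding noise lines.
import random, string
from PIL import Image, ImageDraw, ImageFont

def make_hip(length=6, size=(200, 60)):
    text = "".join(random.choices(string.ascii_uppercase + string.digits,
                                  k=length))
    img = Image.new("L", size, color=255)          # white grayscale canvas
    draw = ImageDraw.Draw(img)
    draw.text((20, 20), text, fill=0, font=ImageFont.load_default())
    for _ in range(8):                             # clutter that hinders OCR
        x1, y1 = random.randint(0, size[0]), random.randint(0, size[1])
        x2, y2 = random.randint(0, size[0]), random.randint(0, size[1])
        draw.line((x1, y1, x2, y2), fill=0, width=1)
    return text, img

answer, challenge = make_hip()
challenge.save("hip.png")    # show the image to the user; check against answer
```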
In this paper, three techniques for image compression are implemented. The proposed techniques consist of a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multiwavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform technique. Daubechies and Haar filters are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multiwavelet transform. The aim is to increase the compression ratio (CR) as the level of the transformation increases in the 3-D case, so the compression ratio is measured for each level. To obtain good compression, the image data properties were measured, such as the image entropy (He) and the percent r…
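A sketch of the 3-D two-level DWT step described above using PyWavelets (the multiwavelet and hybrid variants are paper-specific and not shown): decompose a volume, zero out small coefficients, and report the level's compression ratio alongside the entropy He used to characterize the data. The volume and threshold are assumptions for demonstration.

```python
# Sketch: 3-D two-level DWT compression plus a simple entropy measure.
import numpy as np
import pywt

volume = np.random.default_rng(0).normal(size=(32, 32, 32))  # stand-in data

coeffs = pywt.wavedecn(volume, "haar", level=2)   # 3-D two-level DWT
flat, slices = pywt.coeffs_to_array(coeffs)
kept = np.where(np.abs(flat) >= 0.5, flat, 0.0)   # threshold small coeffs
print("CR at level 2:", flat.size / np.count_nonzero(kept))

# Shannon entropy He of the quantized input, in bits per sample.
vals, counts = np.unique(np.round(volume, 1), return_counts=True)
p = counts / counts.sum()
print("He =", -(p * np.log2(p)).sum())
```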