Methods of speech recognition have been the subject of several studies over the past decade. Speech recognition has been one of the most exciting areas of signal processing. The mixed transform is a useful tool for speech signal processing; it was developed for its ability to improve feature extraction. Speech recognition includes three important stages: preprocessing, feature extraction, and classification. Recognition accuracy is strongly affected by the feature extraction stage; therefore, different mixed-transform models for feature extraction were proposed. The recorded isolated words are 1-D signals, so each 1-D word is first converted into a 2-D form. In the second step of the word recognizer, the 2-D FFT, the Radon transform, the 1-D IFFT, and the 1-D discrete wavelet transform were applied in the first proposed model, while the discrete multicircularlet transform was used in the second proposed model. The final stage of the proposed models uses the dynamic time warping algorithm for the recognition task. The performance of the proposed systems was evaluated on forty different isolated Arabic words, each recorded fifteen times in a studio, in a speaker-dependent setting. The results show recognition accuracies of 91% and 89% using the discrete wavelet transform with Daubechies wavelets Db1 and Db4, respectively, and accuracies between 87% and 93% using the discrete multicircularlet transform with 9 sub-bands.
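As a rough illustration of the final recognition stage, the sketch below computes a dynamic time warping (DTW) distance between two feature sequences; the function name, the Euclidean local cost, and the nearest-template usage are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences.

    seq_a: (n, d) array of feature vectors for one word
    seq_b: (m, d) array of feature vectors for another word
    Returns the accumulated alignment cost (lower means more similar).
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # Euclidean local cost (assumed)
            cost[i, j] = local + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
    return cost[n, m]

# Usage sketch: classify a test word by the training template with the smallest DTW cost.
# templates = {"word_label": feature_sequence, ...}
# predicted = min(templates, key=lambda w: dtw_distance(test_features, templates[w]))
```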
Automatic recognition of individuals is very important in the modern era. Biometric techniques have emerged as an answer to the problem of automatic individual recognition. This paper presents a pupil detection technique that combines simple morphological operations with the Hough Transform (HT). The circular region of the eye and pupil is segmented by the morphological filter together with the Hough Transform (HT), and the local iris area is converted into a rectangular block for the purpose of calculating inconsistencies in the image. This method is implemented and tested on the Chinese Academy of Sciences (CASIA V4) iris image database of 249 persons and the IIT Delhi (IITD) iris ...
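A minimal sketch of the kind of pipeline described, combining morphological cleanup with the circular Hough Transform via OpenCV; the threshold value, kernel size, and Hough parameters here are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def detect_pupil(eye_gray):
    """Rough pupil localization: morphological cleanup followed by a circular Hough Transform."""
    # The pupil is the darkest region; a low threshold isolates it (threshold value is an assumption).
    _, mask = cv2.threshold(eye_gray, 50, 255, cv2.THRESH_BINARY_INV)
    # Morphological opening/closing removes eyelash noise and fills specular highlights.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Circular Hough Transform on the cleaned mask to find the pupil boundary.
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=eye_gray.shape[0],
                               param1=100, param2=10, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r  # pupil centre and radius

# The local iris ring around (x, y) can then be unwrapped into a rectangular block
# (e.g. with a polar-coordinate remap) before texture features are computed.
```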
Audio Compression Using Transform Coding with LZW and Double Shift Coding (Zainab J. Ahmed & Loay E. George, Communications in Computer and Information Science, vol. 1511, 2022). The need for audio compression is still a vital issue because of its significance in reducing the data size of one of the most common digital media exchanged between distant parties. In this paper, the efficiencies of two audio compression modules were investigated; the first module is based on the discrete cosine transform and the second module is based on the discrete wavelet transform ...
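A minimal sketch of the transform-coding idea behind the first module, assuming a frame-wise DCT followed by uniform quantization and a plain LZW pass over the quantized coefficients; the frame length, quantization step, and helper names are illustrative assumptions, and the double shift coding stage is omitted.

```python
import numpy as np
from scipy.fft import dct

def transform_and_quantize(signal, frame_len=1024, q_step=0.02):
    """DCT transform-coding stage: frame the signal, transform, and coarsely quantize.
    Frame length and quantization step are illustrative assumptions."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    coeffs = dct(frames, type=2, norm='ortho', axis=1)
    return np.round(coeffs / q_step).astype(np.int16)  # small integers compress well

def lzw_encode(symbols):
    """Plain LZW over a sequence of hashable symbols; returns a list of dictionary codes."""
    dictionary = {(s,): i for i, s in enumerate(sorted(set(symbols)))}
    w, out = (), []
    for s in symbols:
        ws = w + (s,)
        if ws in dictionary:
            w = ws
        else:
            out.append(dictionary[w])
            dictionary[ws] = len(dictionary)
            w = (s,)
    if w:
        out.append(dictionary[w])
    return out

# Usage sketch: codes = lzw_encode(transform_and_quantize(audio).ravel().tolist())
# A real codec must also transmit the initial symbol table (or use a fixed byte
# alphabet) and apply the entropy stage described in the paper; both are omitted here.
```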
This paper suggests two methods of recognition; both depend on feature extraction by principal component analysis applied in the wavelet domain (multi-wavelet). In the first method, the idea of increasing the recognition space by calculating the eigenstructure of the diagonal sub-image details at five depths of the wavelet transform is introduced. The effective eigen range selected here represents the basis for image recognition. In the second method, the idea of obtaining a wavelet space that is invariant under all projections is presented. A new recursive form that represents an invariant space for any image resolution obtained from the wavelet transform is adopted. In this way, all the major problems that affect the image and ...
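The sketch below illustrates the first method's general idea: concatenate the diagonal detail sub-bands from a five-level 2-D wavelet decomposition and compute their PCA eigenstructure. The Haar wavelet, the concatenation scheme, and the SVD-based PCA are assumptions for illustration; the paper uses a multi-wavelet basis.

```python
import numpy as np
import pywt

def diagonal_detail_features(image, wavelet='haar', levels=5):
    """Concatenate the diagonal detail sub-bands of a 5-level 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    diagonals = [d[2].ravel() for d in coeffs[1:]]  # each level yields (cH, cV, cD); take cD
    return np.concatenate(diagonals)

def pca_eigenstructure(feature_matrix, n_components=20):
    """Classical PCA of the feature set.
    feature_matrix: (n_samples, n_features). Returns the mean and the top eigenvectors."""
    mean = feature_matrix.mean(axis=0)
    centered = feature_matrix - mean
    # SVD of the centered data avoids forming the large covariance matrix explicitly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

# Recognition can then compare images by the distance between their projections:
# projection = (feature_vector - mean) @ eigvecs.T
```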
Recently, biometric technologies have been used widely due to their improved security, which decreases cases of deception and theft. Biometric technologies use physical features and characteristics in the identification of individuals. The most common biometric technologies are iris, voice, fingerprint, handwriting, and hand print. In this paper, two biometric recognition technologies are analyzed and compared: the iris and voice recognition techniques. The iris recognition technique recognizes persons by analyzing the main patterns in the iris structure, while the voice recognition technique identifies individuals depending on their unique voice characteristics, also called the voice print. The comparison results show that the results ...
This paper is concerned with introducing and studying the M-space by using mixed degree systems, which are the core concept in this paper. The necessary and sufficient condition for the equivalence of two reflexive M-spaces is superimposed. In addition, the m-derived graphs, m-open graphs, m-closed graphs, m-interior operators, m-closure operators, and M-subspaces are introduced. From an M-space, a unique supratopological space is obtained. Furthermore, the m-continuous (m-open and m-closed) functions are defined and the fundamental theorem of m-continuity is provided. Finally, the m-homeomorphism is defined and some of its properties are investigated.
AA Abbass, HL Hussein, WA Shukur, J Kaabi, R Tornai, Webology, 2022. An individual's eye recognition is an important issue in applications such as security systems, credit card control, and identification of guilty parties. Using video images removes the limitation of fixed images and makes it possible to capture the user's image under any condition and then perform eye recognition. There are several challenges in these systems: changes in individual gestures, changes in lighting, face coverage, low quality of video images, and changes in personal characteristics in each frame. Eye recognition from images requires two phases, detection and recognition, which are used in security systems to identify persons. The main ...
It is well known that understanding human facial expressions, a key component in understanding emotions with broad applications in the field of human-computer interaction (HCI), has been a long-standing issue. In this paper, we shed light on the use of a deep convolutional neural network (DCNN) for facial emotion recognition from videos using the TensorFlow machine-learning library from Google. This work was applied to ten emotions from the Amsterdam Dynamic Facial Expression Set - Bath Intensity Variations (ADFES-BIV) dataset and tested using two datasets.
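A minimal Keras sketch of a frame-level DCNN classifier for ten emotion classes is shown below; the layer sizes, input resolution, and training configuration are assumptions for illustration, not the architecture reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_emotion_dcnn(input_shape=(64, 64, 1), n_classes=10):
    """Small frame-level DCNN for emotion classification (all sizes are illustrative)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation='softmax'),  # one output unit per emotion class
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# A video-level prediction can be obtained by averaging the per-frame softmax outputs.
```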