Speech is the essential means of interaction between humans and between humans and machines. However, it is always contaminated by different types of environmental noise. Therefore, speech enhancement algorithms (SEA) have emerged as a significant approach in the speech processing field to suppress background noise and recover the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed in the minimum mean square error sense. The clean signal is estimated by exploiting Laplacian modeling of the speech and noise coefficients in an orthogonal transform domain, the Discrete Krawtchouk-Tchebichef transform (DKTT). The DKTT offers high energy compaction, and its coefficient distribution closely matches the Laplacian density, which helps reduce residual noise without sacrificing speech components. Moreover, a hybrid cascaded speech estimator is proposed that uses two filtering stages (non-linear and linear) in the DKTT domain to lessen the residual noise effectively without distorting the speech signal. The linear estimator serves as a post-processing filter that reinforces noise suppression by regenerating speech components. To this end, the output results have been compared with existing work in terms of different quality and intelligibility measures. The comparative evaluation confirms the superior performance of the proposed SEA in various noisy environments. The improvement ratios of the presented algorithm in terms of the PESQ measure are 5.8% and 1.8% for white and babble noise environments, respectively. In addition, the improvement ratios in terms of the OVL measure are 15.7% and 9.8% for white and babble noise environments, respectively.
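As a rough illustration of this two-stage, transform-domain structure, the following minimal sketch uses the DCT as a stand-in for the DKTT, soft thresholding (the MAP rule under a Laplacian prior with Gaussian noise) as the non-linear stage, and a Wiener-type gain as the linear post filter. The noise variance, frame length and threshold scale `alpha` are assumed; the paper's actual MMSE-derived estimators are not reproduced here.

```python
import numpy as np
from scipy.fft import dct, idct

def enhance_frame(noisy_frame, noise_var, alpha=1.0):
    """Illustrative two-stage transform-domain enhancement of one frame."""
    # Orthogonal transform of the frame (DCT used as a stand-in for the DKTT).
    Y = dct(noisy_frame, norm='ortho')

    # Stage 1 (non-linear): soft thresholding, the MAP estimate under a
    # Laplacian speech prior with Gaussian noise, threshold = alpha * sigma.
    thr = alpha * np.sqrt(noise_var)
    S1 = np.sign(Y) * np.maximum(np.abs(Y) - thr, 0.0)

    # Stage 2 (linear post filter): Wiener-type gain built from the stage-1
    # estimate, applied to the noisy coefficients to regenerate weak components.
    speech_var = np.maximum(S1 ** 2, 1e-12)
    gain = speech_var / (speech_var + noise_var)
    S2 = gain * Y

    # Back to the time domain; frames would normally be overlap-added.
    return idct(S2, norm='ortho')
```

In a full system, the noise variance would be tracked frame by frame (e.g., during speech pauses) and the enhanced frames overlap-added to reconstruct the signal.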
Non-uniform channelization is a crucial task in cognitive radio receivers for obtaining separate channels from the digitized wideband input signal at different intervals of time. The two main requirements of the channelizer are reconfigurability and low complexity. In this paper, a reconfigurable architecture based on a combination of the Improved Coefficient Decimation Method (ICDM) and the Coefficient Interpolation Method (CIM) is proposed. The proposed Hybrid Coefficient Decimation-Interpolation Method (HCDIM) based filter bank (FB) is able to realize the same number of channels as ICDM, but with the maximum decimation factor divided by the interpolation factor (L), which leads to less deterioration in the stop band at …
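For background, the basic coefficient decimation and interpolation operations that such filter banks build on can be sketched as follows. This is a generic illustration, not the HCDIM architecture itself, and the prototype filter length, cutoff and factors are hypothetical.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Hypothetical low-pass prototype filter (121 taps, normalized cutoff 0.05).
h = firwin(121, 0.05)

def cdm_multiband(h, D):
    """Keep every D-th coefficient and zero the rest: the response becomes
    multi-band, with passband images at multiples of 2*pi/D (CDM type I)."""
    g = np.zeros_like(h)
    g[::D] = h[::D]
    return g

def cdm_stretch(h, D):
    """Decimate the coefficients by D: the passband is widened by roughly
    a factor of D (CDM type II)."""
    return h[::D]

def cim_compress(h, L):
    """Insert L-1 zeros between coefficients: the response is compressed by L
    and replicated across the band (coefficient interpolation)."""
    g = np.zeros(len(h) * L)
    g[::L] = h
    return g

# Inspect how each operation reshapes the prototype's frequency response.
for name, g in [("CDM-I, D=4", cdm_multiband(h, 4)),
                ("CDM-II, D=4", cdm_stretch(h, 4)),
                ("CIM, L=2", cim_compress(h, 2))]:
    w, H = freqz(g, worN=2048)
    print(f"{name}: taps={len(g)}, peak |H|={np.max(np.abs(H)):.2f}")
```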
Computer-aided disease diagnosis has been extensively studied and applied in diagnosing and monitoring several chronic diseases. Early detection and risk assessment of breast diseases based on clinical data help doctors make an early diagnosis and monitor disease progression. The purpose of this study is to exploit a Convolutional Neural Network (CNN) to discriminate breast MRI scans into pathological and healthy. In this study, a fully automated and efficient deep feature extraction algorithm is presented that exploits the spatial information obtained from both T2W-TSE and STIR MRI sequences to discriminate between pathological and healthy breast MRI scans. The breast MRI scans are preprocessed prior to the feature …
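A minimal sketch of CNN-based deep feature extraction is shown below; it is not the study's network. A pretrained ResNet-18 from torchvision serves as a generic backbone, and the input is assumed to be preprocessed MRI slices (e.g., T2W-TSE/STIR slices replicated to three channels, resized to 224×224 and normalized).

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic pretrained backbone used only as a deep feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()      # drop the classifier head, keep 512-d features
backbone.eval()

@torch.no_grad()
def extract_features(mri_batch: torch.Tensor) -> torch.Tensor:
    """mri_batch: (N, 3, 224, 224) preprocessed MRI slices -> (N, 512) features."""
    return backbone(mri_batch)

# Hypothetical usage: random tensors stand in for preprocessed slices.
feats = extract_features(torch.randn(4, 3, 224, 224))
print(feats.shape)               # torch.Size([4, 512])
```

The resulting feature vectors could then be passed to a classifier (e.g., an SVM or a fully connected layer) to separate pathological from healthy scans.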
Online communication on social networks has become an indispensable way of expressing and sharing views and opinions on virtually every topic. An essential basis in this is the limit beyond which "freedom of expression" should not be trespassed, so as not to lapse into "hate speech". These two ends form a basis in the UN regulations pertaining to human rights: one is free to express, but not to hate by expression. Hereunder, a Critical Discourse Analysis in terms of Fairclough's dialectical-relational approach (2001) is made of Facebook posts (made by ordinary people, not of an official nature) targeting Islam and Muslims. This is done so as to recognize these instances of "speech" a…
Image retrieval is used to search for images in an image database. In this paper, content-based image retrieval (CBIR) using four feature extraction techniques has been achieved. The four techniques are the colored histogram features technique, the properties features technique, the gray level co-occurrence matrix (GLCM) statistical features technique, and a hybrid technique. The features are extracted from the database images and the query (test) images in order to find the similarity measure. Similarity-based matching is very important in CBIR, so three types of similarity measure are used: normalized Mahalanobis distance, Euclidean distance and Manhattan distance. A comparison between them has been implemented. From the results, it is concluded …
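The three similarity measures named above are standard; a short sketch of how a query image could be ranked against database feature vectors is given below. The feature vectors here are random placeholders, and the normalized Mahalanobis distance is taken in its common diagonal-covariance form, where each feature is scaled by its standard deviation over the database.

```python
import numpy as np

def euclidean(q, x):
    return np.sqrt(np.sum((q - x) ** 2))

def manhattan(q, x):
    return np.sum(np.abs(q - x))

def normalized_mahalanobis(q, x, feature_std):
    # Diagonal-covariance form: scale each feature difference by its
    # standard deviation over the database before taking the norm.
    d = (q - x) / feature_std
    return np.sqrt(np.sum(d ** 2))

# Placeholder features (e.g., histogram + GLCM statistics concatenated).
database_feats = np.random.rand(100, 32)          # 100 images, 32-d features
query_feat = np.random.rand(32)
std = database_feats.std(axis=0) + 1e-12

# Rank database images by distance to the query (smaller = more similar).
dists = [normalized_mahalanobis(query_feat, f, std) for f in database_feats]
print("Top-5 matches:", np.argsort(dists)[:5])
```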
The concept of the active contour model has been extensively utilized in the segmentation and analysis of images. This technique has been effectively employed for identifying contours in object recognition, computer graphics and vision, and biomedical image processing, whether of ordinary images or medical images such as Magnetic Resonance Imaging (MRI), X-ray and ultrasound images. Kass, Witkin and Terzopoulos developed these energy-minimizing "Active Contour Models" (also known as snakes) back in 1987. Being curves in nature, snakes are defined within an image field and can be set in motion by external forces derived from the image data and internal forces arising from the curve itself. The present s…
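For reference, the classical energy functional minimized by a snake v(s) = (x(s), y(s)), s ∈ [0, 1], in the 1987 formulation can be written as below, where α(s) weights elasticity, β(s) weights rigidity, and the constraint and image terms are grouped into an external term E_ext (e.g., proportional to −|∇I(v(s))|²):

```latex
E_{\text{snake}} = \int_{0}^{1} \left[ \tfrac{1}{2}\left( \alpha(s)\,\lvert v_{s}(s)\rvert^{2} + \beta(s)\,\lvert v_{ss}(s)\rvert^{2} \right) + E_{\text{ext}}\bigl(v(s)\bigr) \right] ds
```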