Compressing speech reduces storage requirements and shortens the time needed to transmit digitized speech over long-haul links such as the Internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine several desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithms introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression. The performance of the DWT and the MCT is assessed in terms of compression ratio (CR), mean square error (MSE), and peak signal-to-noise ratio (PSNR). Computer simulation results indicate that the two-dimensional MCT offers a better compression ratio, MSE, and PSNR than the DWT.
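The three evaluation metrics named above are standard; a minimal sketch of how they are typically computed follows (the function names and the 8-bit peak value of 255 are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mse(original, reconstructed):
    # Mean square error between the original and reconstructed signals.
    return np.mean((np.asarray(original) - np.asarray(reconstructed)) ** 2)

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB; `peak` is the maximum sample value.
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(original_bits, compressed_bits):
    # CR = size before compression / size after compression.
    return original_bits / compressed_bits
```

Higher PSNR and CR with lower MSE indicate better compression performance.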
Data-driven models perform poorly on part-of-speech tagging for the square Hmong language, a low-resource corpus. This paper designs a weight evaluation function to reduce the influence of unknown words and proposes an improved harmony search algorithm that uses roulette selection and a local evaluation strategy to handle the square Hmong part-of-speech tagging problem. Experiments show that the average accuracy of the proposed model is 6% and 8% higher than that of the HMM and BiLSTM-CRF models, respectively; its average F1 score is likewise 6% and 3% higher than that of the HMM and BiLSTM-CRF models, respectively.
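The abstract does not spell out its roulette strategy; a generic fitness-proportional (roulette-wheel) selection step, as commonly used inside metaheuristics such as harmony search, might look like the following sketch (all names here are illustrative assumptions):

```python
import random

def roulette_select(candidates, fitness):
    # Fitness-proportional (roulette-wheel) selection: a candidate's chance
    # of being picked is fitness[i] / sum(fitness).
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for cand, f in zip(candidates, fitness):
        acc += f
        if r <= acc:
            return cand
    return candidates[-1]  # guard against floating-point round-off
```

In a harmony search loop, this step would bias memory consideration toward harmonies that score well under the weight evaluation function.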
In this work, a novel technique for obtaining accurate solutions to nonlinear problems, a multi-step combination of the Laplace transform with the variational iteration method (MSLVIM), is introduced. Compared with the traditional variational approach, it overcomes several difficulties and provides more accurate solutions with an extended convergence region, covering larger intervals: it yields a continuous representation of the approximate analytic solution and gives better information about the solution over the whole time interval. The technique also simplifies obtaining the general Lagrange multiplier, reducing both time and calculations, and it converges rapidly to the exact solution with simply computable terms.
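For context, the correction functional at the heart of the variational iteration method referenced above is usually written as follows (a standard sketch; the operators $L$, $N$ and the source term $g$ are generic placeholders, not necessarily the paper's notation):

```latex
u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(s)\,\bigl[\, L u_n(s) + N \tilde{u}_n(s) - g(s) \,\bigr]\, ds ,
```

where $\lambda(s)$ is the general Lagrange multiplier identified via variational theory and $\tilde{u}_n$ denotes a restricted variation. Applying the Laplace transform, as in the MSLVIM, is a common way to determine $\lambda$ without the usual integration by parts.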
This study develops several axes of research, including: an examination of "deliberative" language both linguistically and idiomatically; a description of language as a social phenomenon; a definition of speech act theory; and a discussion of the tension between tradition and innovation. Its stated objective is to revive deliberative thought among Arab scholars and to strike a balance between Arabic rhetorical practice and Western rhetoric, which meet out of intellectual necessity: a sober reading that preserves the prestige of the Arabic language and its position in light of the growing linguistic sciences, given the unique minds and vast heritage we have inherited, which are capable of consolidating an Arabic linguistic theory within linguistics.
This paper considers a new double integral transform called the double Sumudu-Elzaki transform (DSET). The DSET is combined with a semi-analytical method, the variational iteration method (DSETVIM), to obtain numerical solutions of nonlinear PDEs with fractional-order derivatives. The proposed dual method decreases the number of calculations required, so combining the two methods speeds up computation of the solution. The suggested technique is tested on four problems, and the results demonstrate that solving these types of equations with the DSETVIM is more advantageous and efficient.
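For reference, the two single transforms that the DSET combines are commonly defined as below (the pairing of Sumudu in $x$ and Elzaki in $t$ is an assumption for illustration; the paper may pair the variables the other way):

```latex
S_x\!\left[f(x)\right]\!(u) = \int_0^\infty e^{-\tau}\, f(u\tau)\, d\tau , \qquad
E_t\!\left[g(t)\right]\!(v) = v \int_0^\infty e^{-t/v}\, g(t)\, dt .
```

The double transform applies one transform per independent variable, which is what lets it absorb mixed fractional derivatives before the variational iteration step.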
Entropy, defined as a measure of uncertainty, has been transformed using the cumulative distribution function and the reliability function of the Burr type-XII distribution. For data that suffer from volatility, a probability distribution model is built for each failure in a sample once the conditions of a probability distribution function are satisfied. A formula for the probability distribution of the new transformation is derived by applying entropy to the continuous Burr type-XII distribution; the new function is tested and found to satisfy the conditions of a probability density function. The mean and the cumulative distribution function are then derived so that they can be used to generate data for the purpose of running the simulation.
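For reference, the Burr type-XII distribution used above has well-known closed forms (with shape parameters $c, k > 0$; this is the standard parameterization, which the paper may extend with a scale parameter):

```latex
f(x) = c\,k\, x^{c-1}\, (1 + x^{c})^{-k-1}, \qquad
F(x) = 1 - (1 + x^{c})^{-k}, \qquad
R(x) = 1 - F(x) = (1 + x^{c})^{-k}, \qquad x > 0 .
```

The CDF $F$ and the reliability function $R$ are the two ingredients fed into the entropy transformation described in the abstract.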
News headlines are key elements in spreading news. They are unique texts written in a special language that enables readers to grasp the overall nature and importance of a topic. However, this special language can make headlines difficult for readers to understand. To address this difficulty, it is argued that a pragmatic analysis from a speech act theory perspective is a plausible tool for headline analysis. The main objective of the study is to pragmatically analyze the types of speech acts most frequently employed in news headlines covering COVID-19 on the Aljazeera English website. To this end, Bach and Harnish's (1979) taxonomy of speech acts has been adopted to analyze the data. Thirty headlines have been collected
In this work, the fractional damped Burger's equation (FDBE)
$$\frac{\partial^{\alpha} u}{\partial t^{\alpha}} + u\,\frac{\partial u}{\partial x} - \nu\,\frac{\partial^{2} u}{\partial x^{2}} + \lambda u = 0, \qquad 0 < \alpha \le 1,$$
is considered.
Social media platforms are known as detectors that can be used to measure users' activities in the real world. However, the huge, unfiltered feed of messages posted on social media triggers social warnings, particularly when these messages contain hate speech directed at a specific individual or community. The negative effect of such messages on individuals or society at large is of great concern to governments and non-governmental organizations. Word clouds provide a simple and efficient means of visually conveying the most common words in text documents. This research aims to develop a word cloud model based on hateful words in online social media environments such as Google News. Several steps are involved, including data acquisition
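The word-frequency core of a word cloud can be sketched with the Python standard library (the tokenization rule and the stop-word list here are illustrative assumptions, not the paper's pipeline):

```python
import re
from collections import Counter

def word_frequencies(text, stop_words=frozenset({"the", "a", "an", "to", "of"})):
    # Lowercase, split on non-letter characters, drop stop words, and count
    # occurrences; the counts drive the font size of each word in the cloud.
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in stop_words)
```

A hate-speech variant would additionally intersect the counted tokens with a lexicon of hateful terms before rendering.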
Image compression is one of the data compression types applied to digital images in order to reduce their high cost of storage and/or transmission. Image compression algorithms may exploit the visual sensitivity and statistical properties of image data to deliver superior results compared with generic data compression schemes used for other digital data. In the first approach, the input image is divided into blocks of 16 x 16, 32 x 32, or 64 x 64 pixels. The blocks are first converted into a string and then encoded using a lossless entropy-coding algorithm known as arithmetic coding: the more frequently occurring pixel values are coded in fewer bits than pixel values of less occurrence
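The block-division step described above can be sketched with NumPy (the 16 x 16 block size and the flattening into a string of pixel values follow the abstract; the edge-padding behavior and function names are assumptions):

```python
import numpy as np

def to_blocks(image, size=16):
    # Pad the image so both dimensions divide evenly, then cut it into
    # non-overlapping size x size blocks.
    h, w = image.shape
    ph, pw = (-h) % size, (-w) % size
    padded = np.pad(image, ((0, ph), (0, pw)), mode="edge")
    H, W = padded.shape
    return (padded.reshape(H // size, size, W // size, size)
                  .swapaxes(1, 2)
                  .reshape(-1, size, size))

def serialize(blocks):
    # Flatten each block into a 1-D sequence of pixel values, the "string"
    # that would then be fed to the arithmetic coder.
    return blocks.reshape(len(blocks), -1)
```

The arithmetic coder itself then assigns shorter code intervals to the more frequent pixel values in each serialized block.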
Recognizing speech emotions is an important subject in pattern recognition. This work studies the effect of extracting the minimum possible number of features on a speech emotion recognition (SER) system. In this paper, three experiments are performed to find the approach that gives the best accuracy. The first extracts only three features from the emotional speech samples: zero crossing rate (ZCR), mean, and standard deviation (SD); the second extracts only the first 12 Mel-frequency cepstral coefficient (MFCC) features; and the last applies feature fusion between the two feature sets. In all experiments, the features are classified using five classification techniques, including Random Forest (RF),
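The three features of the first experiment are simple to compute per frame; a minimal NumPy sketch (the sign-based ZCR definition is one common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def basic_features(frame):
    # Zero crossing rate: fraction of adjacent sample pairs whose signs differ.
    frame = np.asarray(frame, dtype=float)
    zcr = np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:]))
    # Mean and standard deviation of the frame's amplitude.
    return np.array([zcr, frame.mean(), frame.std()])
```

The resulting 3-dimensional vector (or its fusion with the 12 MFCCs) is what the classifiers receive.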