A new algorithm is proposed to compress speech signals using the wavelet transform and linear predictive coding (LPC). The compression scheme selects a small number of approximation coefficients produced by wavelet decomposition (Haar and db4) at a suitably chosen level and discards the detail coefficients; the approximation coefficients are then windowed by a rectangular window and fed to the linear predictor. The Levinson-Durbin algorithm is used to compute the LP coefficients, reflection coefficients, and prediction error. The compressed files contain the LP coefficients and the previous sample, and are very small compared to the original signals. The compression ratio is calculated as the size of the compressed signal relative to the size of the uncompressed signal. The proposed algorithms were implemented with the Matlab package.
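The pipeline above (wavelet approximation coefficients, then Levinson-Durbin on their autocorrelation) can be sketched as follows. This is an illustrative Python version, not the paper's Matlab code; the test signal, decomposition depth, and predictor order are assumptions for the example.

```python
import numpy as np

def haar_approx(x):
    """One level of Haar decomposition; returns only the approximation
    coefficients (the detail coefficients are discarded, as in the scheme)."""
    x = x[:len(x) - len(x) % 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2)

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: returns LP coefficients, reflection
    coefficients, and the final prediction error from autocorrelation r."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    k = np.zeros(order)
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k_i = -acc / e
        k[i - 1] = k_i
        new_a = a.copy()
        new_a[i] = k_i
        for j in range(1, i):
            new_a[j] = a[j] + k_i * a[i - j]
        a = new_a
        e *= (1.0 - k_i ** 2)
    return a, k, e

# Hypothetical noisy sinusoid standing in for a speech frame.
rng = np.random.default_rng(0)
n = 1024
x = np.sin(2 * np.pi * 5 * np.arange(n) / n) + 0.1 * rng.standard_normal(n)
approx = haar_approx(haar_approx(x))              # two decomposition levels
# Biased autocorrelation estimate (positive semidefinite by construction).
r = np.array([approx[:len(approx) - l] @ approx[l:] for l in range(9)]) / len(approx)
lp, refl, err = levinson_durbin(r, 8)
```

Each decomposition level halves the number of coefficients to predict, which is where the size reduction before LPC comes from.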
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples to moments. A Sparse Filter (SF) reduces the number of converted moments to increase the classification accuracy. A Support Vector Machine (SVM) classifies the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method outperforms the other methods in accuracy, achieving best accuracies of 95.6% and 99.5% on the two datasets, respectively.
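The moment-conversion step can be sketched as a projection of each EEG window onto an orthogonal polynomial basis. The specific basis used by the paper is not stated here, so this sketch uses Legendre polynomials as a stand-in; the window length, degree, and test signal are assumptions, and the sparse-filter and SVM stages are omitted.

```python
import numpy as np
from numpy.polynomial import legendre

def op_moments(signal, degree):
    """Project a signal window onto Legendre polynomials sampled on
    [-1, 1]; returns one moment per polynomial order 0..degree."""
    x = np.linspace(-1.0, 1.0, len(signal))
    V = legendre.legvander(x, degree)   # columns: P_0(x) .. P_degree(x)
    return V.T @ signal

# Hypothetical EEG-like window: a slow ramp plus a little noise.
rng = np.random.default_rng(1)
window = np.linspace(0.0, 1.0, 200) + 0.01 * rng.standard_normal(200)
m = op_moments(window, 10)
```

Low-order moments capture the slow trend of the window, so a slowly varying signal concentrates its energy in the first few moments, which is what makes the subsequent dimensionality reduction effective.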
Care and attention shifted from structure to the sign in the sixties of the last century: if structure had been the favoured lady of research and studies, the sign has now become a favoured lady as well. Yet the relationship between structure and sign was not one of rupture and break, but one of integration; its themes are those of structural analysis, intellectual themes that cannot be bypassed in contemporary research, especially since semiotics emerged from the linguistic turn.
We have tried to distinguish between text and discourse, a daunting task: whenever the difference between them seems clear and settled, we return to wonder whether the text is the same as the discourse.
Many production companies suffer heavy losses because of high production costs and low profits, for several reasons: high raw-material prices, the absence of taxes on imported goods, and the deactivation of the consumer-protection, national-product, and customs laws. Most consumers therefore buy imported goods, which offer modern specifications at low prices.
The production company also faces uncertainty in cost, production volume, sales, availability of raw materials, and the number of workers, because these vary with the seasons of the year.
In this research, a fuzzy linear programming model with fuzzy figures was adopted.
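One common way to handle fuzzy figures in a linear program is to represent each uncertain coefficient as a triangular fuzzy number and defuzzify it before solving. The sketch below is illustrative only: the fuzzy profit figures, the constraints, and the tiny vertex-enumeration solver are all invented for the example and are not the paper's model.

```python
import itertools
import numpy as np

def defuzzify(tri):
    """Centroid of a triangular fuzzy number (low, mode, high)."""
    return sum(tri) / 3.0

# Hypothetical fuzzy unit profits for two products.
fuzzy_profit = [(2.0, 3.0, 4.0), (4.0, 5.0, 6.0)]
c = np.array([defuzzify(t) for t in fuzzy_profit])   # crisp objective

# Crisp constraints a1*x + a2*y <= b, including x >= 0 and y >= 0.
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])

# Two-variable LP: the optimum lies at a vertex, so enumerate
# pairwise constraint-line intersections and keep the best feasible one.
best_val, best_x = -np.inf, None
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                      # parallel lines, no vertex
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9) and c @ x > best_val:
        best_val, best_x = c @ x, x
```

In practice a fuzzy LP is solved with a proper LP solver and often keeps the membership functions in the formulation (e.g. via alpha-cuts); centroid defuzzification is just the simplest entry point.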
This paper deals with non-polynomial ("generalized") spline functions for finding the approximate solution of linear Volterra integro-differential equations of the second kind, and extends this work to systems of such equations. The performance of the generalized spline functions is illustrated with test examples.
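To make the problem class concrete, the sketch below numerically solves one linear Volterra integro-differential equation of the second kind with a known exact solution. It uses plain forward Euler stepping with trapezoidal quadrature for the integral term, not the paper's generalized-spline method; the test equation is chosen for the example.

```python
import math

def solve_vide(h, x_end):
    """Solve u'(x) = 1 + integral_0^x u(t) dt with u(0) = 0, whose exact
    solution is u(x) = sinh(x): forward Euler on u, trapezoidal rule
    for the accumulated integral."""
    n = int(round(x_end / h))
    u = [0.0] * (n + 1)
    integral = 0.0
    for i in range(n):
        deriv = 1.0 + integral
        u[i + 1] = u[i] + h * deriv
        integral += h * (u[i] + u[i + 1]) / 2.0   # extend integral one step
    return u

u = solve_vide(1e-3, 1.0)
```

Spline-based methods like the paper's aim at higher-order accuracy than this first-order scheme, but the structure (march in x, accumulate the Volterra integral as you go) is the same.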
The aerodynamic characteristics of general three-dimensional rectangular wings are computed using a non-linear interaction between a two-dimensional viscous-inviscid panel method and a vortex ring method. The potential flow over a two-dimensional airfoil, computed by the pioneering Hess & Smith method, is coupled with viscous laminar, transitional, and turbulent boundary-layer models to solve the flow about complex airfoil configurations, including stall effects. The Viterna method is used to extend the aerodynamic characteristics of the specified airfoil to high angles of attack. A modified vortex ring method finds the circulation values along the spanwise direction of the wing, which are then interacted with the sectional circulation obtained from the Kutta-Joukowsky theorem.
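The final step, turning a spanwise circulation distribution into lift via the Kutta-Joukowsky theorem, can be illustrated in a few lines. The elliptic circulation distribution, span, speed, and density below are assumptions for the example, not values from the paper.

```python
import numpy as np

rho, V = 1.225, 30.0   # air density (kg/m^3) and freestream speed (m/s), illustrative
b, gamma0 = 8.0, 5.0   # span (m) and root circulation (m^2/s), illustrative

# Assumed elliptic circulation distribution along the span.
y = np.linspace(-b / 2.0, b / 2.0, 2001)
gamma = gamma0 * np.sqrt(1.0 - (2.0 * y / b) ** 2)

# Kutta-Joukowsky: sectional lift L'(y) = rho * V * Gamma(y);
# total lift is the spanwise integral (trapezoidal rule).
lift = rho * V * np.sum((gamma[:-1] + gamma[1:]) / 2.0 * np.diff(y))
analytic = rho * V * gamma0 * np.pi * b / 4.0   # closed form for the ellipse
```

In the paper's method the circulation comes from the vortex ring solution rather than an assumed ellipse, but the lift recovery step is the same integral.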
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages: the first preprocesses the data, the second extracts features using the Discrete Wavelet Transform (DWT), and the third performs classification with a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: MADBase and MNIST. The proposed system achieved high classification accuracy rates of 99.1% on the MADBase database and 99.9% on the MNIST database.
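The classification stage relies on spiking neurons, whose basic unit can be sketched as a rate-coded leaky integrate-and-fire (LIF) neuron. This is a minimal generic model, not the paper's SNN architecture; the time constant, threshold, and simulation length are illustrative.

```python
def lif_spike_count(current, steps=200, dt=1.0, tau=20.0, v_th=1.0):
    """Leaky integrate-and-fire neuron: integrates a constant input
    current, emits a spike and resets when the membrane potential
    crosses threshold. Returns the spike count (a rate code)."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt / tau * (current - v)   # leaky integration toward the input
        if v >= v_th:
            spikes += 1
            v = 0.0                     # reset after each spike
    return spikes
```

Stronger input features drive higher firing rates, and a downstream layer reads the digit class out of those rates; inputs below threshold produce no spikes at all.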
In this paper, a fast lossless compression method for medical images is introduced. It splits the image into blocks according to their nature, uses polynomial approximation to decompose the image signal, and applies run-length coding to the residual part of the image, which represents the error introduced by the polynomial approximation. Huffman coding is then applied as a last stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method delivers promising performance.
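The core idea, predicting with a low-order polynomial and run-length coding the residual, can be sketched on a single image row. The degree-1 (linear-extrapolation) predictor and the sample row below are illustrative choices, and the final Huffman stage is omitted for brevity.

```python
def linear_residual(samples):
    """Residual after degree-1 polynomial (linear-extrapolation)
    prediction: r[i] = x[i] - (2*x[i-1] - x[i-2]).
    The first two samples are stored verbatim."""
    res = list(samples[:2])
    for i in range(2, len(samples)):
        res.append(samples[i] - (2 * samples[i - 1] - samples[i - 2]))
    return res

def rle_encode(seq):
    """Run-length coding: list of (value, run length) pairs."""
    out, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        out.append((seq[i], j - i))
        i = j
    return out

row = list(range(0, 20, 2))   # a smooth image row: linear intensity ramp
encoded = rle_encode(linear_residual(row))
```

On smooth regions the predictor is nearly exact, so the residual collapses to long runs of zeros, exactly the structure run-length coding exploits, and the remaining symbols have a skewed distribution well suited to Huffman coding.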
In this paper, we present a multiple-bit error-correction coding scheme for on-chip interconnect, based on an extended Hamming product code combined with type-II HARQ using shared resources. The shared resources reduce the hardware complexity of the encoder and decoder compared with the existing three-stage iterative decoding method for on-chip interconnects. The proposed decoding method achieves 20% and 28% reductions in area and power consumption, respectively, with only a small increase in decoder delay compared with the existing three-stage iterative decoding scheme for multiple-bit error correction. The proposed code also markedly improves the residual flit error rate and saves up to 58% of total power consumption compared with the other error-correction schemes.
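The building block of such a scheme is the extended Hamming code. The sketch below implements a single extended Hamming(8,4) SEC-DED codeword, which corrects any single-bit error and detects double errors (the detection event is what would trigger a HARQ retransmission). It is a generic textbook construction, not the paper's product-code encoder or HARQ controller.

```python
def ham84_encode(d1, d2, d3, d4):
    """Extended Hamming(8,4): three Hamming parity bits plus an
    overall parity bit over the (7,4) codeword."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    cw = [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7
    overall = 0
    for bit in cw:
        overall ^= bit
    return cw + [overall]

def ham84_decode(r):
    """Return the 4 data bits, correcting one error; returns None on a
    detected double error (uncorrectable -> request retransmission)."""
    cw, p = list(r[:7]), r[7]
    syndrome = 0
    for pos, bit in enumerate(cw, start=1):
        if bit:
            syndrome ^= pos            # XOR of the positions of 1-bits
    parity = p
    for bit in cw:
        parity ^= bit
    if syndrome and parity == 0:
        return None                    # double error: detect only
    if syndrome:
        cw[syndrome - 1] ^= 1          # single error: flip flagged bit
    return [cw[2], cw[4], cw[5], cw[6]]
```

A product code arranges data in a grid and applies such a component code along both rows and columns, which is what lifts the scheme from single-error correction per word to multiple-bit correction per flit.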
Abstract
The phenomenon of extreme values (maximum or rare values) is an important one. Two sampling techniques are used to deal with this extremism: the peaks-over-threshold (POT) technique and the annual maximum (AM) technique. The extreme-value (Gumbel) distribution is fitted to the AM sample, and the generalized Pareto and exponential distributions to the POT sample. The cross-entropy algorithm was applied in two of its variants: the first estimates using order statistics, and the second using order statistics and the likelihood ratio; a third method is proposed by the researcher. The mean squared error (MSE) of the estimated parameters and of the probability density function was used as the comparison criterion for each of the distributions.
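As a concrete baseline for the AM branch, the sketch below fits a Gumbel distribution to a synthetic annual-maximum sample by the method of moments. This is a standard textbook estimator used here only for illustration, not the paper's cross-entropy estimators; the true parameters and sample size are assumptions.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def fit_gumbel_moments(sample):
    """Method-of-moments Gumbel fit, using mean = mu + gamma*beta and
    std = beta * pi / sqrt(6); returns (mu, beta)."""
    beta = np.std(sample) * np.sqrt(6.0) / np.pi
    mu = np.mean(sample) - EULER_GAMMA * beta
    return mu, beta

# Synthetic annual-maximum sample via inverse-CDF sampling:
# X = mu - beta * ln(-ln(U)) has a Gumbel(mu, beta) distribution.
rng = np.random.default_rng(42)
true_mu, true_beta = 10.0, 2.0
u = rng.random(50_000)
sample = true_mu - true_beta * np.log(-np.log(u))
mu_hat, beta_hat = fit_gumbel_moments(sample)
```

Any proposed estimator (such as the cross-entropy variants above) can be benchmarked against this kind of synthetic experiment by comparing the MSE of the recovered parameters.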
Abstract
The catalytic cracking conversion of Iraqi vacuum gas oil was studied over large- and medium-pore zeolite catalysts (HY, HX, ZSM-22, and ZSM-11). These catalysts were prepared locally and used in the present work. The catalytic conversion was performed in a continuous fixed-bed laboratory reaction unit. Experiments covered a temperature range of 673 to 823 K, a pressure range of 3 to 15 bar, and an LHSV range of 0.5 to 3 h-1. The results show that the catalytic conversion of vacuum gas oil increases with reaction temperature and decreases with LHSV. The catalytic activity of the proposed catalysts is ranked in the following order:
HY>H