Digital audio requires transmitting large volumes of audio information through the most common communication systems, which in turn creates challenges in both storage and archiving. In this paper, an efficient audio compression scheme is proposed based on combined transform coding. It consists of: i) a bi-orthogonal (tap 9/7) wavelet transform to decompose the audio signal into low and multiple high sub-bands; ii) a DCT applied to the produced sub-bands to de-correlate the signal; iii) progressive hierarchical quantization of the combined-transform output followed by traditional run-length encoding (RLE); and iv) LZW coding to generate the output bitstream. Peak signal-to-noise ratio (PSNR) and compression ratio (CR) were used to conduct a comparative analysis of the whole system's performance. Many audio test samples of various sizes and differing features were used to evaluate the system's behavior. The simulation results show the efficiency of these combined transforms when LZW is used within the domain of data compression. The compression results are encouraging and show a remarkable reduction in audio file size with good fidelity.
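The RLE and LZW stages of the pipeline above can be sketched in a few lines. This is a minimal illustrative sketch of generic RLE and LZW coding applied to an already-quantized coefficient stream, not the authors' implementation; the wavelet, DCT, and quantization stages are omitted.

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    if not data:
        return []
    out = []
    prev, count = data[0], 1
    for x in data[1:]:
        if x == prev:
            count += 1
        else:
            out.append((prev, count))
            prev, count = x, 1
    out.append((prev, count))
    return out

def lzw_encode(symbols):
    """LZW-encode a list of hashable symbols into integer codes."""
    # Initial dictionary: one entry per distinct input symbol.
    alphabet = sorted(set(symbols), key=repr)
    table = {(s,): i for i, s in enumerate(alphabet)}
    w, codes = (), []
    for s in symbols:
        wc = w + (s,)
        if wc in table:
            w = wc                      # extend the current phrase
        else:
            codes.append(table[w])      # emit code for the known prefix
            table[wc] = len(table)      # learn the new phrase
            w = (s,)
    if w:
        codes.append(table[w])
    return codes, alphabet
```

For example, `rle_encode([0, 0, 0, 1, 1, 2, 0, 0])` yields `[(0, 3), (1, 2), (2, 1), (0, 2)]`, and the resulting pairs can then be fed symbol-by-symbol into `lzw_encode`.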
This research is concerned with the re-analysis of optical data (the imaginary part of the dielectric function as a function of photon energy E) of a-Si:H films prepared by Jackson et al. and Ferlauto et al. Using nonlinear regression fitting, we estimated the optical energy gap and the deviation from the Tauc model by treating the exponent p, which describes the photon-energy dependence of the momentum matrix element, as a free parameter, and by assuming a square-root density-of-states distribution. It is observed for the films prepared by Jackson et al. that the value of the parameter p over the studied photon energy range is close to the value assumed by the Cody model, and the optical gap energy is also close to the value
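Under the square-root density-of-states assumption, a common parametrization of this family of models (conventions for p vary in the literature, and the exact form used by the authors may differ) is:

```latex
\varepsilon_2(E) \;=\; C\, E^{\,p-2}\,\left(E - E_g\right)^{2}, \qquad E > E_g ,
```

where C is a constant, E_g is the optical gap, and p encodes the photon-energy dependence of the squared momentum matrix element; under this convention p = 0 recovers the Tauc form, ε₂ ∝ (E − E_g)²/E², and p = 2 the Cody form, ε₂ ∝ (E − E_g)².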
This research aims to study dimension-reduction methods that overcome the curse of dimensionality, which arises when traditional methods fail to provide good parameter estimates, so the problem must be dealt with directly. Two approaches were used to handle high-dimensional data: the non-classical sliced inverse regression (SIR) method together with the proposed weighted standard SIR (WSIR) method, and principal component analysis (PCA), the general method used for dimension reduction. Both SIR and PCA are based on forming linear combinations of a subset of the original explanatory variables, which may suffer from the problem of heterogeneity and the problem of linear
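The linear-combination step shared by PCA (and, once slice means replace raw observations, by SIR) can be sketched in pure Python as projection onto the leading eigenvector of the sample covariance, found here by power iteration. This is an illustrative sketch only, not the paper's WSIR procedure; the data are made up.

```python
import math

def center(X):
    """Subtract the column means from a list-of-rows data matrix."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    return [[row[j] - mu[j] for j in range(d)] for row in X]

def covariance(Xc):
    """Sample covariance matrix of centered data."""
    n, d = len(Xc), len(Xc[0])
    return [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
             for b in range(d)] for a in range(d)]

def leading_eigvec(S, iters=200):
    """Leading eigenvector of a symmetric matrix via power iteration."""
    d = len(S)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(S[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Project each observation onto the first principal direction.
X = [[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]]
Xc = center(X)
v = leading_eigvec(covariance(Xc))
scores = [sum(r[j] * v[j] for j in range(2)) for r in Xc]
```

The one-dimensional `scores` then replace the original two explanatory variables in downstream estimation.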
The esterification of oleic acid with 2-ethylhexanol in the presence of sulfuric acid as a homogeneous catalyst was investigated in this work to produce 2-ethylhexyl oleate (biodiesel) using semi-batch reactive distillation. The effects of reaction temperature (100 to 130°C), 2-ethylhexanol:oleic acid molar ratio (1:1 to 1:3), and catalyst concentration (0.2 to 1 wt%) were studied. A high conversion of 97% was achieved at a reaction temperature of 130°C, a free fatty acid to alcohol molar ratio of 1:2, and a catalyst concentration of 1 wt%. A simulation based on first principles of reactive distillation was implemented in MATLAB to describe the process, and good agreement was achieved.
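The kinetic core of such a simulation can be sketched as a simple explicit-Euler integration of assumed second-order esterification kinetics, with water taken to be removed overhead by the distillation so the reverse reaction is neglected. The rate constant and concentrations below are illustrative placeholders, not the paper's fitted values.

```python
def simulate(ca0=1.0, cb0=2.0, k=0.05, dt=0.1, t_end=200.0):
    """Return acid conversion for A + B -> ester + water (irreversible)."""
    ca, cb = ca0, cb0        # acid and alcohol concentrations
    t = 0.0
    while t < t_end:
        r = k * ca * cb      # assumed second-order forward rate
        ca = max(ca - r * dt, 0.0)
        cb = max(cb - r * dt, 0.0)
        t += dt
    return 1.0 - ca / ca0    # fractional conversion of the acid

conversion = simulate()
```

With an alcohol excess (cb0 > ca0), the acid conversion approaches completion at long times, consistent with the favorable effect of excess alcohol reported above.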
The integration of nanomaterials in asphalt modification has emerged as a promising approach to enhance the performance of asphalt pavements, particularly under high-temperature conditions. Owing to their unique properties, such as high surface area, exceptional mechanical strength, and thermal stability, nanomaterials offer significant improvements in the rheological properties, durability, and deformation resistance of asphalt binders. This research reviewed the application of various nanomaterials, including nano-silica, nano-alumina, nano-titanium, nano-zinc, and carbon nanotubes, in asphalt modification. The incorporation of these nanomaterials into asphalt mixtures has shown potential to increase stiffness and high-temperature performance
This paper studies a novel technique based on two effective methods, a modified Laplace variational iteration method (MLVIM) and a new variational iteration method (MVIM), for solving PDEs with variable coefficients. The present modification in the MLVIM is based on coupling the variational iteration method (VIM) with the Laplace transform (LT). In our proposal there is no need to calculate the Lagrange multiplier. We apply the Laplace transform to the problem; furthermore, the nonlinear terms of the problem are handled using the homotopy perturbation method (HPM). Some examples are given to compare the results of the two methods and to verify the reliability of the proposed methods.
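For context, the standard VIM correction functional (a generic sketch, not the paper's specific modified scheme) for an equation Lu + Nu = g reads:

```latex
u_{n+1}(t) \;=\; u_n(t) + \int_{0}^{t} \lambda(\tau)\,\bigl( L u_n(\tau) + N\tilde{u}_n(\tau) - g(\tau)\bigr)\, d\tau ,
```

where λ is the Lagrange multiplier and ũ_n denotes a restricted variation; the Laplace-based modification described above is designed precisely to bypass the determination of λ.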
In the present paper, by making use of a new generalized operator, some results on third-order differential subordination and differential superordination for analytic functions are obtained. Some sandwich-type theorems are also presented.
In this paper, our aim is to study the variational formulation and solutions of two-dimensional integro-differential equations of fractional order. We give a summary representation of the variational formulation of linear nonhomogeneous two-dimensional Volterra integro-differential equations of the second kind with fractional order. An example is discussed and solved using the MathCAD software package where needed.
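As a generic illustration (an assumed model form, not necessarily the exact equations treated in the paper), a linear nonhomogeneous two-dimensional Volterra integro-differential equation of the second kind with a Caputo fractional derivative of order 0 < α < 1 may be written as:

```latex
{}^{C}D^{\alpha}_{t}\, u(x,t) \;=\; f(x,t) + \int_{0}^{t}\!\!\int_{0}^{x} k(x,t,s,r)\, u(s,r)\, \mathrm{d}s\, \mathrm{d}r ,
```

with a given kernel k and source f; the variational formulation seeks stationary points of an associated functional whose Euler equation recovers this equation.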
In this paper, first- and second-order sliding mode controllers are designed for a single-link robotic arm actuated by two Pneumatic Artificial Muscles (PAMs). A new mathematical model for the arm has been developed based on the model of a large-scale pneumatic muscle actuator. Parameter uncertainty has been introduced and tested for both controllers. The simulation results show that the second-order sliding mode controller achieves lower tracking error and less chattering than the first-order one. Verification was carried out using MATLAB and Simulink.
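A first-order sliding mode law of the kind compared here can be sketched on a toy double-integrator plant, an illustrative stand-in for the PAM arm model; the gains `lam` and `K` are arbitrary placeholders, and the discontinuous sign-function law is what produces the chattering discussed above.

```python
import math

def smc_regulate(x0=1.0, v0=0.0, lam=2.0, K=5.0, dt=0.001, t_end=5.0):
    """Regulate x to zero for the toy plant x'' = u via first-order SMC."""
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        s = v + lam * x                   # sliding surface s = e' + lam*e
        u = -K * math.copysign(1.0, s)    # discontinuous switching law
        v += u * dt                       # plant dynamics x'' = u (Euler)
        x += v * dt
    return x, v

x_final, v_final = smc_regulate()
```

Once the state reaches the surface s = 0, it slides along x' = -lam*x toward the origin, while the finite switching period leaves a small residual chatter around it.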
The searching process using a binary codebook that combines the Block Truncation Coding (BTC) method and Vector Quantization (VQ), i.e., a full codebook search for each input image vector to find the best-matched code word in the codebook, requires a long time. Therefore, in this paper, after designing a small binary codebook, we adopt a new method that rotates each binary code word in this codebook from 90° to 270° in steps of 90°. Each code word is then classified according to its angle into four types of binary codebooks (i.e., Pour when , Flat when , Vertical when , or Zigzag). The proposed scheme was used to decrease the time of the coding procedure, with very small distortion per block, by designing s
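The rotation step can be sketched as follows; `rotate90` and `rotations` are hypothetical helper names (not from the paper), operating on a binary block represented as a list of lists.

```python
def rotate90(block):
    """Rotate a square binary block 90 degrees clockwise."""
    n = len(block)
    return [[block[n - 1 - r][c] for r in range(n)] for c in range(n)]

def rotations(block):
    """Return the block with its 90-, 180-, and 270-degree rotations."""
    out = [block]
    for _ in range(3):
        out.append(rotate90(out[-1]))
    return out

# A 2x2 bit-plane block; real BTC blocks are typically 4x4.
b = [[1, 0],
     [0, 0]]
variants = rotations(b)
```

Grouping code words by which rotation variant matches a canonical orientation lets the encoder search only one of the four sub-codebooks instead of the full codebook.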