In this paper, a method of audio steganography is introduced for hiding secret data in an audio media file (WAV). Hiding in audio is a challenging discipline, since the Human Auditory System is extremely sensitive. The proposed method embeds a secret text message in the frequency domain of the audio file and consists of two phases: an embedding phase and an extraction phase. In the embedding phase, the audio file is transformed from the time domain to the frequency domain using a 1-level linear wavelet decomposition, and only the high-frequency subband is used for hiding the secret message. The text message is encrypted using the Data Encryption Standard (DES) algorithm. Finally, the Least Significant Bit (LSB) algorithm is used to hide the secret message in the high-frequency coefficients. The proposed approach was tested on audio files of different sizes and showed successful hiding according to the Peak Signal-to-Noise Ratio (PSNR) equation.
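The embedding pipeline described above (1-level wavelet decomposition, DES encryption of the text, LSB replacement in the high-frequency coefficients) can be outlined roughly as follows. This is an illustrative sketch, not the paper's implementation; it assumes PyWavelets and PyCryptodome, and the integer quantization of the detail coefficients before LSB replacement is an added assumption the abstract does not specify.

import numpy as np
import pywt
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad

def embed(samples, secret_text, key8):
    # 1-level wavelet decomposition: cA holds the low band, cD the high band
    cA, cD = pywt.dwt(samples.astype(float), 'haar')
    # encrypt the secret text with DES before hiding it
    cipher = DES.new(key8, DES.MODE_ECB)
    bits = np.unpackbits(np.frombuffer(cipher.encrypt(pad(secret_text, 8)), dtype=np.uint8))
    # hide each cipher bit in the LSB of a quantized high-frequency coefficient
    q = np.round(cD).astype(np.int64)
    q[:bits.size] = (q[:bits.size] & ~1) | bits
    # rebuild the stego audio from the modified coefficients
    return pywt.idwt(cA, q.astype(float), 'haar')

The extraction phase would mirror these steps: decompose the stego audio, read the LSBs of the quantized high-frequency coefficients, and decrypt the recovered bytes with the same DES key.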
Over the last two decades, audio compression has become the topic of much research due to the importance of this field, which bears on storage capacity and transmission requirements. The rapid development of the computer industry increases the demand for high-quality audio data, and accordingly there is great importance in developing audio compression technologies; lossy and lossless are the two categories of compression. This paper aims to review lossy audio compression methods and to summarize the importance and uses of each method.
Attacks on data transferred over a network happen millions of times a day. To address this problem, a secure scheme is proposed for securing data transferred over a network. The proposed scheme uses two techniques to guarantee secure transfer of a message: the message is encrypted as a first step, and then it is hidden in a video cover. The proposed encryption technique is the RC4 stream cipher algorithm, used to increase the message's confidentiality, together with an improved least significant bit (LSB) embedding algorithm that adds an additional layer of security. The improvement of the LSB method comes from replacing the usual sequential selection with a random selection of the frames and the pixels wit
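A minimal sketch of the two layers described above is shown below; the function and parameter names are illustrative, PyCryptodome's ARC4 stands in for the RC4 cipher, and the random (rather than sequential) selection of frames and pixels is modelled by a PRNG seeded with a value shared between sender and receiver.

import random
import numpy as np
from Crypto.Cipher import ARC4

def embed_in_video(frames, message, rc4_key, stego_seed):
    # layer 1: encrypt the message with RC4 for confidentiality
    bits = np.unpackbits(np.frombuffer(ARC4.new(rc4_key).encrypt(message), dtype=np.uint8))
    # layer 2: pick frame/pixel/channel positions in a random order, not sequentially
    h, w, _ = frames[0].shape
    positions = [(f, y, x, c) for f in range(len(frames))
                 for y in range(h) for x in range(w) for c in range(3)]
    chosen = random.Random(stego_seed).sample(positions, bits.size)
    for bit, (f, y, x, c) in zip(bits, chosen):
        frames[f][y, x, c] = (frames[f][y, x, c] & 0xFE) | bit   # LSB replacement
    return frames

The receiver, knowing rc4_key and stego_seed, regenerates the same position sequence, reads the LSBs, and decrypts with RC4.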
In the post-reunification period, numerous works by German authors appeared on the German book market that may mark a trend or tendency. The present work focuses on an analysis of the structure and content of three epistolary novels published only after the Wende: Alles, alles Liebe (2000) by Barbara Honigmann (b. 1949), Die Liebenden (2002) by Gerhard Henschel (born 1962), and Neue Leben (2005) by Ingo Schulze, born in 1962. This contribution sets itself the task of clarifying whether the epistolary novel is a new trend in post-Wende German literature.
Abstract
In the post-Wende period, a large collection of literary works by German writers has been p
Abstract
The wavelet shrinkage estimator is an attractive technique for estimating nonparametric regression functions, but it is very sensitive to correlation in the errors. In this research, a low-degree polynomial model was used to address the boundary problem in wavelet shrinkage, in addition to using flexible threshold values in the case of correlated errors, since such thresholds treat the coefficients at each level separately, unlike global threshold values that treat all levels simultaneously, such as the VisuShrink, False Discovery Rate (FDR), Improvement Thresholding, and SureShrink methods. The study was conducted on real monthly data representing the rates of theft crimes f
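A minimal sketch of the level-by-level ("flexible") thresholding idea, as opposed to a single global threshold applied to all levels at once, might look like the following; it is illustrative only, assumes PyWavelets, and uses a VisuShrink-style universal rule computed separately at each level rather than the specific estimators studied above.

import numpy as np
import pywt

def levelwise_shrink(y, wavelet='sym8', levels=4):
    coeffs = pywt.wavedec(y, wavelet, level=levels)
    out = [coeffs[0]]                                  # keep the approximation coefficients
    for d in coeffs[1:]:                               # detail coefficients, one level at a time
        sigma = np.median(np.abs(d)) / 0.6745          # level-specific noise scale
        thr = sigma * np.sqrt(2 * np.log(d.size))      # universal threshold, but per level
        out.append(pywt.threshold(d, thr, mode='soft'))
    return pywt.waverec(out, wavelet)

Treating each level separately in this way is what allows the threshold to adapt when the errors are correlated, since correlation inflates the coefficient variance differently at different scales.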
... Show MoreNeighShrink is an efficient image denoising algorithm based on the discrete wavelet
transform (DWT). Its disadvantage is to use a suboptimal universal threshold and identical
neighbouring window size in all wavelet subbands. Dengwen and Wengang proposed an
improved method, which can determine an optimal threshold and neighbouring window size
for every subband by the Stein’s unbiased risk estimate (SURE). Its denoising performance is
considerably superior to NeighShrink and also outperforms SURE-LET, which is an up-todate
denoising algorithm based on the SURE. In this paper different wavelet transform
families are used with this improved method, the results show that Haar wavelet has the
lowest performance among
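The core NeighShrink shrinkage rule on a single subband can be sketched compactly; the SURE-based improvement described above would wrap a routine like this, evaluating Stein's unbiased risk for each candidate threshold and window size and keeping the pair with the smallest estimated risk. SciPy and NumPy are assumed.

import numpy as np
from scipy.ndimage import uniform_filter

def neighshrink_subband(d, threshold, win=3):
    # energy of the win x win neighbouring window around every coefficient
    s2 = uniform_filter(d * d, size=win) * win * win
    # shrinkage factor beta = max(0, 1 - lambda^2 / S^2), applied coefficient-wise
    beta = np.clip(1.0 - threshold ** 2 / np.maximum(s2, 1e-12), 0.0, None)
    return beta * d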
This paper presents the application of a framework for fast and efficient compressive sampling based on the concept of random sampling of a sparse audio signal. It provides four important features: (i) it is universal for a variety of sparse signals; (ii) the number of measurements required for exact reconstruction is nearly optimal and much less than the sampling frequency, below the Nyquist rate; (iii) it has very low complexity and fast computation; (iv) it is developed on a provable mathematical model from which we are able to quantify trade-offs among streaming capability, computation/memory requirements, and quality of reconstruction of the audio signal. Compressed sensing (CS) is an attractive compression scheme due to its uni
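The measurement-and-recovery idea can be illustrated with a toy example (not the paper's framework): a k-sparse signal is observed through far fewer random projections than Nyquist sampling would require and is then recovered by a sparse solver, with scikit-learn's orthogonal matching pursuit standing in for the reconstruction algorithm.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n, m, k = 1024, 128, 10                    # signal length, measurements (m << n), sparsity
rng = np.random.default_rng(0)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse test signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random measurement matrix
y = Phi @ x                                                   # m random measurements
x_hat = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y).coef_
print(np.allclose(x, x_hat, atol=1e-6))                       # exact recovery when m is large enough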
Fingerprint recognition is one of the oldest identification procedures. An important step in automatic fingerprint matching is to extract features automatically and reliably. The quality of the input fingerprint image has a major impact on the performance of a feature extraction algorithm. The aim of this paper is to present a fingerprint recognition technique that utilizes local features for fingerprint representation and matching. The adopted local features are: (i) the energy of the Haar wavelet subbands, (ii) the normalized energy of the Haar wavelet subbands. Experiments have been made on three completely different sets of features, which are used when partitioning the fingerprint into overlapped blocks. Experiments are conducted on
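A sketch of the block-wise local features named above (the energy of the Haar wavelet subbands and a normalized variant) is given below; the block size, step, and exact normalization are assumptions for illustration, with PyWavelets providing the 2-D Haar transform.

import numpy as np
import pywt

def block_features(block):
    cA, (cH, cV, cD) = pywt.dwt2(block.astype(float), 'haar')
    energies = np.array([np.sum(s ** 2) for s in (cA, cH, cV, cD)])  # subband energies
    normalized = energies / energies.sum()                           # normalized energies
    return np.concatenate((energies, normalized))

def fingerprint_features(img, block=32, step=16):
    # slide an overlapped window over the fingerprint and collect the local features
    feats = [block_features(img[y:y + block, x:x + block])
             for y in range(0, img.shape[0] - block + 1, step)
             for x in range(0, img.shape[1] - block + 1, step)]
    return np.array(feats)

Matching would then compare these feature vectors (for example with a distance measure) between the query and enrolled fingerprints.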
The Dirichlet process is an important fundamental object in nonparametric Bayesian modelling, applied to a wide range of problems in machine learning, statistics, and bioinformatics, among other fields. This flexible stochastic process models rich data structures with an unknown or evolving number of clusters. It is a valuable tool for encoding the true complexity of real-world data in computer models. Our results show that the Dirichlet process improves, both in distribution density and in signal-to-noise ratio, with larger sample size; achieves a slow decay rate to its base distribution; has improved convergence and stability; and thrives with a Gaussian base distribution, which performs much better than the Gamma distribution. The performance depen
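For reference, a draw from a Dirichlet process with a Gaussian base distribution can be sketched via the stick-breaking construction; this is only an illustration of the object evaluated above, not the study's simulation code, and the truncation level is an arbitrary choice.

import numpy as np

def dp_sample(alpha=1.0, n=500, truncation=200, seed=0):
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=truncation)             # stick-breaking proportions
    sticks = np.cumprod(np.concatenate(([1.0], 1.0 - betas[:-1])))
    weights = betas * sticks                                   # mixture weights
    atoms = rng.normal(0.0, 1.0, size=truncation)              # atoms from the N(0, 1) base
    return rng.choice(atoms, size=n, p=weights / weights.sum())  # n observations from G ~ DP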