The main goal of this investigation is the disposal of waste materials by transforming them into high-fineness powders and producing self-consolidating concrete (SCC) that is cheaper and more eco-friendly through a reduction in cement content, while taking the fresh and strength properties into consideration. The reference mix design was prepared according to the European guidelines. Five waste materials (clay brick, ceramic, granite tile, marble tile, and thermostone block) were ground to a high-fineness particle size distribution and then used to replace 5, 10, and 15% of the cement by weight. The improvement in strength properties is more significant with clay brick powder than with the activated ceramic and granite tile wastes; at a 10% replacement of cement weight, the 28-day compressive strength increases by 11.59%. The production of eco-SCC with lower cement content and cost is therefore encouraged, even though the strength enhancement is modest, since the waste can be disposed of. Because the percentage reduction in strength of SCC mixes containing marble tile or thermostone block powder grows with the level of cement replacement, and a greater superplasticizer dosage is needed, we recommend a 5% replacement by weight of cement, which causes only insignificant strength retardation. Finally, there is a good relationship between compressive strength and ultrasonic pulse velocity, and between tensile and flexural strength, with a high correlation coefficient.
LiF (TLD-700) PTFE discs with a diameter of 13 mm and a thickness of 0.4 mm were used to study the response and sensitivity of this material to gamma and beta rays, using the TOLEDO reader system from Pitman. To calibrate the system and study the calibration factor, the discs were irradiated with gamma and beta rays and the readings were compared with the theoretical doses. The exposure range is between 15×10⁻² mGy and 1000×10⁻² mGy; these doses are within the range of the normal radiation field for workers.
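As a minimal sketch of how a calibration factor can be derived from paired readings and known doses, assuming a linear TLD response; the numbers below are hypothetical stand-ins, not the paper's measurements:

```python
import numpy as np

# Hypothetical paired data (not the paper's measurements): delivered doses
# in mGy spanning roughly 15x10^-2 to 1000x10^-2 mGy, and raw TLD counts.
dose = np.array([0.15, 0.50, 1.00, 5.00, 10.00])
reading = np.array([310.0, 1040.0, 2050.0, 10300.0, 20400.0])

# Assuming a linear dose response, a least-squares fit through the origin
# gives a single calibration factor (dose per unit reader count).
factor = np.sum(dose * reading) / np.sum(reading ** 2)
print(f"calibration factor: {factor:.3e} mGy/count")
print("reconstructed doses (mGy):", np.round(factor * reading, 3))
```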
In Automatic Speech Recognition (ASR), the non-linear data projection provided by a one-hidden-layer Multilayer Perceptron (MLP) trained to recognize phonemes has been shown in previous experiments to provide feature enhancement that substantially increases ASR performance, especially in noise. Previous attempts to apply an analogous approach to speaker identification have not succeeded in improving performance, except by combining MLP-processed features with other features. We present test results for the TIMIT database which show that the advantage of MLP preprocessing for open-set speaker identification increases with the number of speakers used to train the MLP, and that identification continues to improve as this number increases beyond sixty.
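A minimal sketch of the idea, with synthetic stand-ins for the acoustic frames (TIMIT itself is not loaded, and all shapes and labels here are assumptions): an MLP is trained on phoneme labels, and its hidden-layer activations are then used as features for a separate speaker classifier.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical frame data: 13-dim features, 39 phoneme classes, 10 speakers.
X_phone = rng.normal(size=(2000, 13))
y_phone = rng.integers(0, 39, 2000)
X_spk = rng.normal(size=(500, 13))
y_spk = rng.integers(0, 10, 500)

# One-hidden-layer MLP trained to recognize phonemes.
mlp = MLPClassifier(hidden_layer_sizes=(100,), activation='tanh',
                    max_iter=200).fit(X_phone, y_phone)

def hidden_features(X):
    # The non-linear projection: activations of the trained hidden layer.
    return np.tanh(X @ mlp.coefs_[0] + mlp.intercepts_[0])

# Speaker identification runs on the MLP-projected features.
spk_clf = LogisticRegression(max_iter=1000).fit(hidden_features(X_spk), y_spk)
print(spk_clf.predict(hidden_features(X_spk[:5])))
```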
Steganography is a means of hiding information within a more obvious form of communication. It exploits the use of host data to hide a piece of information in such a way that it is imperceptible to a human observer. The major goals of effective steganography are high embedding capacity, imperceptibility, and robustness. This paper introduces a scheme for hiding secret images that can be as large as 25% of the host image data. The proposed algorithm applies the orthogonal discrete cosine transform to the host image, and a scaling factor (a) in the frequency domain controls the quality of the stego images. Experimental results of secret image recovery after applying JPEG coding to the stego images are included.
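A minimal sketch of DCT-domain embedding in the spirit of this scheme, assuming non-blind recovery and an embedding layout (the high-frequency corner of the coefficient array) of our own choosing; `a` plays the role of the paper's scaling factor:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(host, secret, a=0.05):
    """Add the scaled secret image to DCT coefficients of the host."""
    H = dctn(host.astype(float), norm='ortho')
    h, w = secret.shape
    H[-h:, -w:] += a * secret      # assumption: high-frequency corner
    return idctn(H, norm='ortho')

def extract(stego, host, shape, a=0.05):
    """Non-blind recovery: subtract the host's DCT coefficients."""
    S = dctn(stego.astype(float), norm='ortho')
    H = dctn(host.astype(float), norm='ortho')
    h, w = shape
    return (S[-h:, -w:] - H[-h:, -w:]) / a

rng = np.random.default_rng(0)
host = rng.integers(0, 256, (128, 128)).astype(float)
secret = rng.integers(0, 256, (64, 64)).astype(float)  # 25% of host data
stego = embed(host, secret)
print(np.allclose(extract(stego, host, secret.shape), secret))
```

The scaling factor trades off imperceptibility against robustness: a smaller `a` distorts the stego image less, but lossy coding such as JPEG then perturbs the recovered secret more.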
Data hiding is the process of encoding extra information in an image by making small modifications to its pixels. To be practical, the hidden data must be perceptually invisible yet robust to common signal processing operations. This paper introduces a scheme for hiding a signature image that can be as large as 25% of the host image data and hence can be used for digital watermarking as well as image/data hiding. The proposed algorithm uses an orthogonal discrete wavelet transform with two zero moments and improved time localization, called the discrete slantlet transform, for both the host and signature images. A scaling factor in the frequency domain controls the quality of the watermarked images. Experimental results of signature image recovery are included.
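A minimal sketch of the wavelet-domain variant, with two explicit assumptions: pywt does not implement the slantlet transform, so 'db2' (which also has two vanishing moments) stands in for it, and the embedding layout (diagonal detail subband) is our own choice:

```python
import numpy as np
import pywt

def embed(host, signature, scale=0.1):
    # 'db2' stands in for the slantlet filters, which pywt lacks.
    cA, (cH, cV, cD) = pywt.dwt2(host, 'db2', mode='periodization')
    return pywt.idwt2((cA, (cH, cV, cD + scale * signature)),
                      'db2', mode='periodization')

def extract(stego, host, scale=0.1):
    _, (_, _, cDs) = pywt.dwt2(stego, 'db2', mode='periodization')
    _, (_, _, cDh) = pywt.dwt2(host, 'db2', mode='periodization')
    return (cDs - cDh) / scale

rng = np.random.default_rng(1)
host = rng.integers(0, 256, (128, 128)).astype(float)
signature = rng.integers(0, 256, (64, 64)).astype(float)  # 25% of host
stego = embed(host, signature)
print(np.allclose(extract(stego, host), signature))
```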
In this paper, a subspace identification method for bilinear systems is used, wherein "three-block" and "four-block" subspace algorithms are applied. In these algorithms the input signal to the system does not have to be white. Simulation of the algorithms shows that the "four-block" algorithm converges faster and that the dimensions of the matrices involved are significantly smaller, so its computational complexity is lower than that of the "three-block" algorithm.
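The bilinear three-/four-block algorithms are too involved for a short sketch, but the core subspace step can be illustrated on a linear system: stack Markov parameters into block-Hankel matrices and read a state-space realization off an SVD (the classical Ho-Kalman construction). This is a simplified linear stand-in for illustration, not the paper's bilinear algorithm, and the example system is hypothetical:

```python
import numpy as np

# Hypothetical SISO system (not from the paper).
A = np.array([[0.8, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])

# Markov parameters h_k = C A^k B, k = 0..2N-1.
N = 10
h = [C @ np.linalg.matrix_power(A, k) @ B for k in range(2 * N)]

# Block-Hankel matrices built from the impulse response.
H0 = np.block([[h[i + j] for j in range(N)] for i in range(N)])
H1 = np.block([[h[i + j + 1] for j in range(N)] for i in range(N)])

# The SVD reveals the system order via the singular-value gap.
U, s, Vt = np.linalg.svd(H0)
n = 2
Un, sn, Vn = U[:, :n], s[:n], Vt[:n, :]
S_half = np.diag(np.sqrt(sn))
S_half_inv = np.diag(1.0 / np.sqrt(sn))

# Balanced realization: observability O = Un S^1/2, controllability S^1/2 Vn.
A_est = S_half_inv @ Un.T @ H1 @ Vn.T @ S_half_inv
B_est = (S_half @ Vn)[:, :1]   # first column of the controllability matrix
C_est = (Un @ S_half)[:1, :]   # first row of the observability matrix

# Check: the estimated Markov parameters match the true ones.
h_est = [C_est @ np.linalg.matrix_power(A_est, k) @ B_est for k in range(5)]
print(np.allclose([x.item() for x in h[:5]], [x.item() for x in h_est]))
```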
The objective of this work is to design and implement a cryptography system that enables the sender to send a message through any channel (even an insecure one) and the receiver to decrypt the received message without allowing any intruder to break the system and extract the secret information. In this work, we implement an interaction between a feedforward neural network and a stream cipher, so the secret message is encrypted by an unsupervised neural network method in addition to the first encryption stage, which is performed by the stream cipher. The security of any cipher system depends on the security of the related keys (those used by the encryption and decryption processes) and their corresponding lengths.
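A minimal sketch of the stream-cipher stage only, assuming a SHA-256 counter-mode keystream (the paper's keystream generator and the neural-network stage are not specified, so the second stage is omitted here):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 (illustrative generator)."""
    out, ctr = b'', 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, 'big')).digest()
        ctr += 1
    return out[:n]

def xor_stream(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; the same call decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

msg = b'secret message'
ct = xor_stream(msg, b'shared key')          # first stage: stream cipher
# The paper's second stage, an unsupervised feedforward-network
# re-encoding of ct, would be applied here.
print(xor_stream(ct, b'shared key') == msg)  # round-trip check
```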
In this paper, ways of detecting edges locally are adopted: the classical Prewitt operators and a modification of them are used to perform edge detection, and the study compares their results with those of the Sobel operators.
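A minimal sketch of the classical Prewitt operator beside SciPy's Sobel filter; the paper's modification of the Prewitt kernels is not specified, so only the classical 3x3 kernels are shown:

```python
import numpy as np
from scipy.ndimage import convolve, sobel

# Classical 3x3 Prewitt kernel for the horizontal gradient.
PX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)

def prewitt_edges(img):
    gx = convolve(img, PX)        # horizontal gradient
    gy = convolve(img, PX.T)      # vertical gradient
    return np.hypot(gx, gy)       # gradient magnitude

def sobel_edges(img):
    return np.hypot(sobel(img, axis=1), sobel(img, axis=0))

# Toy image with a vertical step edge, for a side-by-side comparison.
img = np.zeros((16, 16))
img[:, 8:] = 255.0
print(prewitt_edges(img).max(), sobel_edges(img).max())
```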
In this paper we investigate the automatic recognition of emotion in text. We propose a new method for emotion recognition based on the PPM (Prediction by Partial Matching) character-based text compression scheme, used to recognize Ekman's six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise). Experimental results with three datasets show that the new method is very effective when compared with traditional word-based text classification methods. We have also found that our method works best if the sizes of the texts in all classes used for training are similar, and that performance improves significantly as the amount of training data increases.
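A minimal sketch of compression-based classification, with zlib standing in for PPM (a PPM coder is not in the standard library) and toy training texts of our own: a sample is assigned to the class whose training text makes it cheapest to compress.

```python
import zlib

def compressed_size(text: str) -> int:
    return len(zlib.compress(text.encode('utf-8'), 9))

def classify(sample: str, training: dict) -> str:
    """Pick the class minimizing the extra compressed bytes needed to
    encode the sample after that class's training text."""
    def cost(cls):
        base = training[cls]
        return compressed_size(base + sample) - compressed_size(base)
    return min(training, key=cost)

# Tiny hypothetical training corpora, one per emotion class.
training = {
    'happiness': 'what a wonderful joyful day full of smiles and laughter',
    'anger': 'this is outrageous I am furious at this infuriating mess',
}
print(classify('so joyful and full of laughter', training))
```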