Data steganography is a technique used to hide data (a secret message) within other data (a cover carrier), and it is considered a part of information security. Audio steganography is a type of data steganography in which the secret message is hidden in an audio carrier. This paper proposes an efficient audio steganography method based on the LSB technique. The proposed method enhances steganography performance by exploiting all carrier samples and balancing hiding capacity against the distortion ratio: it assigns an adaptive number of hiding bits to each audio sample depending on the secret message size, the cover carrier size, and the signal-to-noise ratio (SNR). Comparison results show that the proposed method outperforms state-of-the-art methods in terms of average segmental SNR, number of failing samples, and Czekanowski Distance (CZD). In addition, the proposed method can operate with large message sizes (up to half of the carrier size) with graceful degradation, whereas the other methods fail at large message sizes. The proposed method therefore provides more flexibility in message and carrier sizes while preserving high efficiency.
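As an illustration of the embedding idea, the sketch below hides message bits in the least significant bits of 16-bit PCM samples and adapts the number of LSBs per sample to the ratio of message size to carrier size. It is a minimal generic LSB sketch, not the paper's exact adaptive scheme (which also takes the SNR into account); the `embed_lsb` helper and its 4-bit cap are assumptions.

```python
# Minimal LSB audio-embedding sketch (generic, not the paper's exact adaptive method).
# Assumes 16-bit PCM samples in a NumPy array.
import numpy as np

def embed_lsb(samples: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bits of int16 audio samples."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    # Adapt the number of LSBs per sample to the message/carrier size ratio.
    bits_per_sample = max(1, int(np.ceil(len(bits) / len(samples))))
    if bits_per_sample > 4:
        raise ValueError("message too large for this carrier at acceptable distortion")
    stego = samples.astype(np.int16).copy()
    mask = ~np.int16((1 << bits_per_sample) - 1)        # clears the LSB region
    for i in range(0, len(bits), bits_per_sample):
        chunk = bits[i:i + bits_per_sample]
        value = int("".join(map(str, chunk)), 2) << (bits_per_sample - len(chunk))
        idx = i // bits_per_sample
        stego[idx] = (stego[idx] & mask) | value        # write message bits into LSBs
    return stego
```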
Features are descriptions of image content and may be corners, blobs, or edges. Corners are among the most important features for describing an image, and many algorithms exist to detect them, such as Harris, FAST, and SUSAN. Harris is an efficient and accurate corner detection method; it is rotation invariant but not scale invariant. This paper presents a Harris corner detector that is also invariant to scale, an improvement achieved by applying the Gaussian function at different scales. The experimental results illustrate that using Gaussian smoothing across scales is effective in overcoming this weakness of the Harris detector.
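A minimal sketch of the multi-scale idea, assuming OpenCV: run the standard Harris response on Gaussian-smoothed copies of the image at several scales and keep the strongest responses. The scale set, threshold, and the `multiscale_harris` helper are illustrative assumptions, not the paper's tuned detector.

```python
# Scale-aware Harris sketch: Harris response at several Gaussian scales.
import cv2
import numpy as np

def multiscale_harris(gray: np.ndarray, sigmas=(1.0, 2.0, 4.0), k: float = 0.04):
    gray32 = np.float32(gray)
    response = np.zeros_like(gray32)
    for sigma in sigmas:
        smoothed = cv2.GaussianBlur(gray32, (0, 0), sigma)   # kernel size from sigma
        r = cv2.cornerHarris(smoothed, 2, 3, k)              # blockSize=2, ksize=3
        response = np.maximum(response, r / r.max())         # normalise per scale
    corners = response > 0.01 * response.max()
    return np.argwhere(corners)                              # (row, col) corner positions
```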
In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique improves the image before applying contour detection in order to reduce heavy noise and obtain better image quality. To do so, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to counter the unclear edges and low contrast of echocardiograph images. After applying these techniques, heart boundaries and valve movement can be detected legibly with traditional edge detection methods.
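The sketch below shows one plausible version of such a pre-processing chain, assuming OpenCV: median filtering, morphological opening, CLAHE contrast adjustment, and Canny edge detection. The specific operators and parameters are assumptions; the paper specifies only the combination of filtering, morphology, and contrast adjustment followed by a traditional edge detector.

```python
# Illustrative pre-processing chain for a noisy 8-bit grayscale echocardiograph frame.
import cv2
import numpy as np

def preprocess_and_detect(frame: np.ndarray) -> np.ndarray:
    denoised = cv2.medianBlur(frame, 5)                          # suppress heavy speckle noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(denoised, cv2.MORPH_OPEN, kernel)  # remove small artefacts
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(opened)                               # lift the low contrast
    edges = cv2.Canny(enhanced, 50, 150)                         # traditional edge detection
    return edges
```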
In this paper, a fast lossless image compression method is introduced for compressing medical images. It is based on splitting the image into blocks according to their nature, using polynomial approximation to decompose the image signal, and then applying run-length coding to the residue part of the image, which represents the error introduced by the polynomial approximation. Huffman coding is applied as a final stage to encode the polynomial coefficients and the run-length codes. The test results indicate that the suggested method achieves promising performance.
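A small sketch of the decomposition step, under the assumption of a first-order (plane) polynomial per 8x8 block: fit the polynomial, keep its coefficients, and form an integer residue image that would then be run-length and Huffman coded. Block size, polynomial order, and the toy run-length coder are assumptions, not the paper's exact scheme.

```python
# Block-wise polynomial approximation and residue extraction (sketch).
import numpy as np

def block_poly_residue(img: np.ndarray, block: int = 8):
    """Fit a plane to each block; return coefficients and the exact integer residue."""
    h, w = img.shape
    coeffs = []
    residue = np.zeros((h, w), dtype=np.int16)
    ys, xs = np.mgrid[0:block, 0:block]
    A = np.column_stack([np.ones(block * block), xs.ravel(), ys.ravel()])
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            b = img[y:y + block, x:x + block].astype(np.float64)
            c, *_ = np.linalg.lstsq(A, b.ravel(), rcond=None)   # plane fit per block
            pred = np.round(A @ c).reshape(block, block)        # integer prediction
            coeffs.append(c)
            residue[y:y + block, x:x + block] = (b - pred).astype(np.int16)
    return np.array(coeffs), residue     # residue feeds run-length + Huffman coding

def run_length(values):
    """Tiny run-length coder for the (mostly near-zero) residue values."""
    out, prev, count = [], None, 0
    for v in values:
        if v == prev:
            count += 1
        else:
            if prev is not None:
                out.append((int(prev), count))
            prev, count = v, 1
    if prev is not None:
        out.append((int(prev), count))
    return out                           # pairs would then be Huffman coded
```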
In the present study, the effect of new cross-section fin geometries on overall thermal/fluid performance has been investigated. The cross-sections consist of the original base geometries (triangular, square, circular, and elliptical pin fins) with extra exterior fins added along the sides of the original fins. The extra fins considered here are a rectangular extra fin of 2 mm (height) by 4 mm (width) and a triangular extra fin of 2 mm (base) by 4 mm (height). The entropy generation minimization (EGM) method allows the combined effect of thermal resistance and pressure drop to be assessed through their simultaneous interaction with the heat sink. A general dimensionless expression for the entropy generation rate is obtained by con…
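For context, a commonly used (dimensional) form of the entropy generation rate in EGM analysis of pin-fin heat sinks is shown below; it is an assumed reference form, not necessarily the dimensionless expression derived in this paper.

```latex
% Commonly used EGM expression for a pin-fin heat sink (assumed reference form)
\dot{S}_{\mathrm{gen}} = \frac{Q^{2}\, R_{\mathrm{hs}}}{T_{a}^{2}} + \frac{F_{D}\, U_{\infty}}{T_{a}}
```

Here Q is the dissipated heat, R_hs the heat-sink thermal resistance, F_D the drag force, U_inf the approach velocity, and T_a the ambient temperature; the first term captures the thermal-resistance contribution and the second the pressure-drop (fluid friction) contribution.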
Fingerprints are the most widely used biometric feature for person identification and verification. A fingerprint is easier to interpret than other biometric modalities such as voice or face, and it can produce a very high recognition rate for human recognition. In this paper, a geometric rotation transform is applied to the fingerprint image to obtain a new level of features that represent the finger characteristics and are used for personal identification; local features are used for their ability to reflect the statistical behavior of fingerprint variation across the fingerprint image. The proposed fingerprint system contains three main stages: (i) preprocessing, (ii) feature extraction, and (iii) matching. The preprocessi…
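As a hypothetical illustration of the idea (rotate geometrically, then describe with local statistics), the sketch below rotates the fingerprint image with OpenCV and collects per-block mean and standard deviation features. The angles, block size, and chosen statistics are assumptions rather than the paper's exact feature set.

```python
# Rotation-transform + local statistical features for a fingerprint image (sketch).
import cv2
import numpy as np

def rotated_local_features(finger: np.ndarray, angles=(0, 45, 90), block: int = 16):
    h, w = finger.shape
    centre = (w / 2, h / 2)
    features = []
    for angle in angles:
        M = cv2.getRotationMatrix2D(centre, angle, 1.0)
        rotated = cv2.warpAffine(finger, M, (w, h))           # geometric rotation transform
        for y in range(0, h - h % block, block):
            for x in range(0, w - w % block, block):
                patch = rotated[y:y + block, x:x + block]
                features.extend([patch.mean(), patch.std()])  # local statistical behaviour
    return np.array(features, dtype=np.float32)
```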
In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages: the first preprocesses the data, the second extracts features using the Discrete Wavelet Transform (DWT), and the third performs classification using a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved high classification accuracy rates of 99.1% on the MADBase database and 99.9% on the MNIST database.
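A minimal sketch of the feature-extraction stage, assuming PyWavelets and a single-level Haar 2-D DWT (the abstract does not fix the wavelet family); the SNN classifier itself is not sketched here.

```python
# DWT feature extraction for a handwritten digit image (feature stage only).
import numpy as np
import pywt

def dwt_features(digit: np.ndarray) -> np.ndarray:
    """digit: 2-D grayscale array, e.g. a 28x28 MNIST image."""
    cA, (cH, cV, cD) = pywt.dwt2(digit.astype(np.float32), "haar")
    # Keep the approximation band plus simple energies of the detail bands.
    details = [np.sum(c ** 2) for c in (cH, cV, cD)]
    return np.concatenate([cA.ravel(), details])
```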
Background: Obesity is increasingly common in modern societies and constitutes a significant public health problem with an increased risk of cardiovascular disease.
Objective: This study aims to determine the agreement between actual and perceived body image in the general population.
Methods: A descriptive cross-sectional study was conducted with a sample size of 300. The data were collected from eight major populated areas of the Northern district of Karachi, Sindh, over a period of six months (10 January 2020 to 21 June 2020). The Figure Rating Scale (FRS) questionnaire was used to collect demographic data and perceptions of body weight. Body mass index (BMI) was used for ass…
This work implements an Electroencephalogram (EEG) signal classifier. The implemented method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments. A Sparse Filter (SF) then reduces the number of moments to increase classification accuracy, and a Support Vector Machine (SVM) classifies the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets. Each dataset is divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that this method exceeds the accuracy of the other methods; its best accuracies on the two datasets are 95.6% and 99.5%, respectively. Finally, from the results, it…
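A minimal sketch of this pipeline, assuming Legendre polynomials for the OP stage and scikit-learn's SVM; the sparse-filter stage is omitted for brevity, and the data below are random placeholders.

```python
# Orthogonal-polynomial moments of EEG segments, classified with an SVM (sketch).
import numpy as np
from numpy.polynomial import legendre
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def op_moments(segment: np.ndarray, order: int = 10) -> np.ndarray:
    """Legendre-moment features of one EEG segment (1-D array of samples)."""
    t = np.linspace(-1.0, 1.0, len(segment))
    basis = legendre.legvander(t, order)          # shape (n_samples, order + 1)
    return basis.T @ segment / len(segment)       # projection coefficients (moments)

# X: (n_segments, n_samples) EEG matrix, y: labels in {0, 1} -- placeholder data
X = np.random.randn(100, 256)
y = np.random.randint(0, 2, 100)
moments = np.vstack([op_moments(seg) for seg in X])
scores = cross_val_score(SVC(kernel="rbf"), moments, y, cv=5)   # 5-fold cross-validation
print("mean CV accuracy:", scores.mean())
```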