Data steganography is a technique for hiding data (a secret message) within other data (a cover carrier). It is considered a part of information security. Audio steganography is a type of data steganography in which the secret message is hidden in an audio carrier. This paper proposes an efficient audio steganography method based on the least significant bit (LSB) technique. The proposed method improves steganography performance by exploiting all carrier samples and balancing hiding capacity against the distortion ratio. It suggests an adaptive number of hiding bits for each audio sample, determined by the secret message size, the cover carrier size, and the signal-to-noise ratio (SNR). Comparison results show that the proposed method outperforms state-of-the-art methods in terms of average segmental SNR, number of failing samples, and Czekanowski Distance (CZD). In addition, the proposed method can operate with large message sizes (up to half the carrier size) with graceful degradation, whereas the other methods fail at large message sizes. The proposed method therefore offers more flexibility in message and carrier sizes while preserving high efficiency.
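A minimal sketch of plain LSB embedding in 16-bit audio samples, with a simple capacity rule in the spirit of the adaptive bit count described above; the function names, the fixed per-call k, and the capacity rule are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def choose_bits_per_sample(message_bits: int, carrier_samples: int, max_k: int = 4) -> int:
    """Illustrative capacity rule: just enough bits per sample to fit the message."""
    k = int(np.ceil(message_bits / carrier_samples))
    return min(max(k, 1), max_k)

def embed_lsb(samples: np.ndarray, message: np.ndarray, k: int) -> np.ndarray:
    """Hide a 0/1 bit array `message` in the k least significant bits of int16 samples.

    The paper adapts the number of hiding bits per sample from the message size,
    carrier size, and SNR; here k is simply fixed for the whole carrier.
    """
    n = len(samples)
    if len(message) > n * k:
        raise ValueError("message does not fit in the carrier at this bit depth")
    padded = np.zeros(n * k, dtype=np.int32)
    padded[: len(message)] = message
    chunks = padded.reshape(n, k)
    values = (chunks * (1 << np.arange(k))).sum(axis=1)   # pack k bits per sample
    mask = ~((1 << k) - 1)                                 # clears the k lowest bits
    stego = (samples.astype(np.int32) & mask) | values
    return stego.astype(np.int16)
```

Extraction would simply read back the k low bits of each stego sample in the same order.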
The study examined the toxicity of some heavy metals, individually and in combination, and the effect of plant nutrients in the growth medium on the growth of the blue-green alga Agirenk and on Askhaddm biomass as study indicators, in addition to the ability of the alga to accumulate these metals.
Products’ quality inspection is an important stage in every production route, in which the quality of the produced goods is estimated and compared with the desired specifications. With traditional inspection, the process relies on manual methods that generate various costs and consume a large amount of time. In contrast, today’s inspection systems that use modern techniques such as computer vision are more accurate and efficient. However, the amount of work needed to build a computer vision system based on classic techniques is relatively large, owing to the need to manually select and extract features from digital images, which also incurs labor costs for the system engineers.
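As a hypothetical illustration of the manual feature engineering that such classic pipelines require (OpenCV-based; the blur size, Canny thresholds, and choice of statistics are assumptions, not the system developed in this research):

```python
import cv2
import numpy as np

def handcrafted_features(image_path: str) -> np.ndarray:
    """Hand-picked features for a hypothetical surface-defect check.

    Every choice below (blur size, edge thresholds, which statistics to keep)
    is manual engineering work -- the labor cost referred to above.
    """
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    areas = [cv2.contourArea(c) for c in contours] or [0.0]
    return np.array([
        edges.mean() / 255.0,    # fraction of edge pixels
        float(len(contours)),    # number of detected blobs
        float(max(areas)),       # largest blob area
        gray.std(),              # texture roughness proxy
    ])
```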
 
Improving students’ use of argumentation is front and center in the increasing emphasis on scientific practice in K-12 Science and STEM programs. We explore the construct validity of scenario-based assessments of claim-evidence-reasoning (CER) and the structure of the CER construct with respect to a learning progression framework. We also seek to understand how middle school students progress. Establishing the purpose of an argument is a competency that a majority of middle school students meet, whereas quantitative reasoning is the most difficult, and the Rasch model indicates that the competencies form a unidimensional hierarchy of skills. We also find no evidence of differential item functioning between the different scenarios.
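For context, the dichotomous Rasch model that underlies this kind of analysis (a standard measurement model, not a result of this study) gives the probability that a student of ability θ succeeds on an item of difficulty b_i; ordering the estimated item difficulties along the single θ scale is what supports reading the competencies as a unidimensional hierarchy:

```latex
P(X_i = 1 \mid \theta) = \frac{e^{\theta - b_i}}{1 + e^{\theta - b_i}}
```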
The aim of a human lower-limb rehabilitation robot is to restore the ability of motion and to strengthen weak muscles. This paper proposes the design of a force-position control for a four-degree-of-freedom (4-DOF) lower-limb wearable rehabilitation robot. The robot consists of hip, knee, and ankle joints that enable the patient to move and turn in both directions. The joints are actuated by Pneumatic Muscle Actuators (PMAs), which have great potential in medical applications because of their similarity to biological muscles. The force-position control incorporates Takagi-Sugeno-Kang three-Proportional-Derivative-like Fuzzy Logic (TSK-3-PD) controllers for position control and three Proportional (3-P) controllers for force control.
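A greatly simplified single-joint sketch of the force-position structure, with a fixed-gain PD term standing in for the TSK fuzzy PD-like position controller and a proportional term for the force loop; all gains, names, and the additive blending of the two terms are illustrative assumptions, not the controllers designed in the paper:

```python
from dataclasses import dataclass

@dataclass
class JointController:
    """One joint of a force-position scheme, greatly simplified."""
    kp: float = 40.0   # position proportional gain (assumed)
    kd: float = 2.0    # position derivative gain (assumed)
    kf: float = 0.5    # force proportional gain (assumed)
    prev_error: float = 0.0

    def update(self, q_ref, q, f_ref, f, dt):
        e = q_ref - q                      # joint angle error
        de = (e - self.prev_error) / dt    # error derivative
        self.prev_error = e
        u_position = self.kp * e + self.kd * de   # stand-in for the fuzzy PD-like term
        u_force = self.kf * (f_ref - f)           # interaction-force correction
        return u_position + u_force               # command sent to the PMA pair

# Example call for one control step of 10 ms:
# ctrl = JointController(); u = ctrl.update(0.50, 0.45, 10.0, 9.2, 0.01)
```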
This paper presents a new algorithm in an important research field, namely semantic word similarity estimation. A new feature-based algorithm is proposed for measuring word semantic similarity for the Arabic language, a highly systematic language whose words exhibit elegant and rigorous logic. The score of semantic similarity between two Arabic words is calculated as a function of their common and total taxonomical features. An Arabic knowledge source is employed for extracting the taxonomical features as the set of all concepts that subsume the concepts containing the compared words. Previously developed Arabic word benchmark datasets are used for optimizing and evaluating the proposed algorithm.
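A small sketch of the feature-based idea described above: the taxonomical features of a word are taken as all concepts subsuming the concepts that contain it, and similarity is a common-over-total feature ratio. The Jaccard-style ratio, the dictionary-based taxonomy representation, and all names are assumptions standing in for the paper's actual function and Arabic knowledge source:

```python
def ancestors(concept, parents):
    """All concepts that subsume `concept` in the taxonomy (including itself)."""
    seen, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(parents.get(c, []))
    return seen

def taxonomic_features(word, word_to_concepts, parents):
    """Union of the subsumers of every concept that contains the word."""
    feats = set()
    for c in word_to_concepts.get(word, []):
        feats |= ancestors(c, parents)
    return feats

def similarity(w1, w2, word_to_concepts, parents):
    """Common-over-total feature ratio (a Jaccard-style stand-in for the paper's function)."""
    f1 = taxonomic_features(w1, word_to_concepts, parents)
    f2 = taxonomic_features(w2, word_to_concepts, parents)
    if not f1 or not f2:
        return 0.0
    return len(f1 & f2) / len(f1 | f2)
```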
Electromyogram (EMG)-based Pattern Recognition (PR) systems for upper-limb prosthesis control provide promising ways to enable intuitive control of prostheses with multiple degrees of freedom and fast reaction times. However, the lack of robustness of PR systems may limit their usability. In this paper, a novel adaptive time-windowing framework is proposed to enhance the performance of PR systems by focusing on their windowing and classification steps. The proposed framework estimates the output probabilities of each class and outputs a movement only if a decision with a probability above a certain threshold is achieved. Otherwise (i.e., when all probability values are below the threshold), the window size of the EMG signal is adapted.
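A minimal sketch of this decision logic, assuming a hypothetical classifier object with a predict_proba method and illustrative window lengths and threshold; growing the window when no class is confident enough is an assumption, since the paper's exact adaptation rule is not given in this excerpt:

```python
import numpy as np

def adaptive_decision(emg, classifier, base_len=150, step=50, max_len=400, threshold=0.8):
    """Grow the analysis window until the classifier is confident enough.

    `classifier.predict_proba(window)` is assumed to return one probability per
    movement class; window lengths are in samples. All constants are illustrative,
    not the values used in the paper.
    """
    length = base_len
    while length <= max_len and length <= len(emg):
        probs = classifier.predict_proba(emg[-length:])
        best = int(np.argmax(probs))
        if probs[best] >= threshold:
            return best              # confident decision: output this movement
        length += step               # otherwise enlarge the window and retry
    return None                      # no confident class: keep the previous output
```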
An optical fiber chemical sensor based on surface plasmon resonance for sensing and measuring the refractive index and concentration of acetic acid was designed and implemented in this work. Optical-grade plastic optical fibers with a diameter of 1000 μm were used, with a core diameter of 980 μm and a cladding of 20 μm. The sensor was fabricated by embedding a small part (10 mm) of the middle of the fiber in a resin block and polishing it; a gold layer about 40 nm thick was then deposited, and the acetic acid was placed on the sensing probe.