Steganography conceals information by embedding data within cover media and is commonly divided into two domains: spatial and frequency. This paper presents two distinct methods. The first operates in the spatial domain and utilizes the least significant bits (LSBs) of pixel values to conceal a secret message. The second operates in the frequency domain and hides the secret message within the LSBs of the middle-frequency band of the discrete cosine transform (DCT) coefficients. Both methods enhance obfuscation by utilizing two layers of randomness: random pixel embedding and random bit embedding within each pixel. Unlike other available methods that embed data in sequential order and in fixed amounts, these methods embed the data at random locations and in random amounts, further raising the level of obfuscation. A pseudo-random binary key generated through a nonlinear combination of eight Linear Feedback Shift Registers (LFSRs) controls this randomness. The experiments use various 512x512 cover images. The first method achieves an average PSNR of 43.5292 dB with a payload capacity of up to 16% of the cover image, whereas the second yields an average PSNR of 38.4092 dB with a payload capacity of up to 8%. The performance analysis demonstrates that the LSB-based method can conceal more data with less visible distortion but is vulnerable to simple image manipulation, while the DCT-based method offers lower capacity and more visible distortion but is more robust.
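As an illustration of the spatial-domain idea, the following is a minimal Python sketch of LSB embedding at pixel positions selected by an LFSR-driven keystream. The single 16-bit LFSR, its tap positions, and the rejection-sampling position picker are illustrative assumptions; the paper's scheme combines eight LFSRs nonlinearly and additionally randomizes the bit position and the amount embedded per pixel, which is not reproduced here.

    # Minimal sketch of spatial-domain LSB embedding at pseudo-randomly chosen
    # pixel positions. A single Fibonacci LFSR (illustrative taps) stands in for
    # the paper's nonlinear combination of eight LFSRs.
    import numpy as np

    def lfsr_bits(seed, taps=(16, 14, 13, 11), nbits=16):
        """Yield a pseudo-random bit stream from a Fibonacci LFSR."""
        state = seed & ((1 << nbits) - 1)
        while True:
            fb = 0
            for t in taps:
                fb ^= (state >> (nbits - t)) & 1
            state = ((state << 1) | fb) & ((1 << nbits) - 1)
            yield fb

    def embed_lsb(cover, message_bits, seed=0xACE1):
        """Hide message_bits in the LSBs of key-selected pixels of a uint8 image."""
        flat = cover.copy().ravel()
        keystream = lfsr_bits(seed)
        used, positions = set(), []
        # Rejection sampling keeps the sketch simple; it assumes the payload is
        # well below the cover capacity so index collisions stay rare.
        while len(positions) < len(message_bits):
            idx = sum(next(keystream) << k for k in range(18)) % flat.size
            if idx not in used:
                used.add(idx)
                positions.append(idx)
        for bit, idx in zip(message_bits, positions):
            flat[idx] = (int(flat[idx]) & 0xFE) | bit
        return flat.reshape(cover.shape), positions

Extraction would regenerate the same positions from the shared seed and read the LSBs back in that order.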
Image classification is the process of finding common features in images from various classes and using them to categorize and label those images. The abundance of images, the high complexity of the data, and the shortage of labeled data present the key obstacles in image classification. The cornerstone of the approach is evaluating the convolutional features retrieved from deep learning models and training machine learning classifiers on them. This study proposes a new “hybrid learning” approach that combines deep learning with machine learning for image classification, based on convolutional feature extraction using the VGG-16 deep learning model and seven class ...
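As a concrete illustration of the hybrid pipeline, here is a minimal sketch that uses a pre-trained VGG-16 from Keras as a frozen convolutional feature extractor and feeds the pooled features to a scikit-learn classifier. The SVC classifier, the 224x224 input size, and the average-pooling head are assumptions standing in for whichever of the study's seven classifiers and preprocessing choices apply.

    # Sketch of hybrid learning: convolutional features from a pre-trained
    # VGG-16 are used to train a classical machine-learning classifier.
    import numpy as np
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
    from sklearn.svm import SVC

    # Frozen VGG-16 without its dense head; global average pooling gives 512-d features.
    extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

    def convolutional_features(images):
        """images: array of shape (n, 224, 224, 3), RGB, values in 0..255."""
        return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

    # Hypothetical usage, assuming X_train, y_train, X_test are prepared elsewhere:
    # clf = SVC(kernel="rbf").fit(convolutional_features(X_train), y_train)
    # y_pred = clf.predict(convolutional_features(X_test))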
Metal cutting processes still represent the largest class of manufacturing operations, and turning is the most commonly employed material removal process. This research focuses on analysis of the thermal field of the oblique machining process. The finite element method (FEM) software DEFORM 3D V10.2 was used together with experimental work carried out using infrared imaging equipment, so the study combines both software simulation and hardware experimentation. The thermal experiments were conducted on AA6063-T6 using different tool obliquity angles, cutting speeds, and feed rates. The results show that the temperature decreased relatively as tool obliquity increased at different cutting speeds and feed rates; also it ...
The issue of image captioning, which comprises automatically generating text to describe an image’s visual information, has become feasible with the developments in object recognition and image classification. Deep learning has received much interest from the scientific community and can be very useful in real-world applications. The proposed image captioning approach combines pre-trained Convolutional Neural Network (CNN) models with Long Short-Term Memory (LSTM) networks to generate image captions. The process includes two stages: the first trains the CNN-LSTM models using baseline hyper-parameters, and the second trains the CNN-LSTM models while optimizing and adjusting the hyper-parameters of ...
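For orientation, the following is a minimal Keras sketch of a common CNN-LSTM "merge" wiring for captioning, where a pre-extracted CNN feature vector is combined with an LSTM encoding of the partial caption to predict the next word. The vocabulary size, sequence length, feature dimension, and layer widths are placeholder hyper-parameters, and the paper's two-stage hyper-parameter optimization is not shown.

    # Sketch of a CNN-LSTM captioning model: pre-extracted CNN image features
    # are merged with an LSTM over the partial caption to predict the next word.
    # VOCAB_SIZE, MAX_LEN and FEAT_DIM are placeholder hyper-parameters.
    from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, add
    from tensorflow.keras.models import Model

    VOCAB_SIZE, MAX_LEN, FEAT_DIM = 5000, 30, 2048

    img_in = Input(shape=(FEAT_DIM,))            # CNN feature vector from a pre-trained model
    img_vec = Dense(256, activation="relu")(Dropout(0.5)(img_in))

    seq_in = Input(shape=(MAX_LEN,))             # caption so far, as word indices
    seq_vec = LSTM(256)(Embedding(VOCAB_SIZE, 256, mask_zero=True)(seq_in))

    decoder = Dense(256, activation="relu")(add([img_vec, seq_vec]))
    output = Dense(VOCAB_SIZE, activation="softmax")(decoder)

    model = Model([img_in, seq_in], output)
    model.compile(loss="categorical_crossentropy", optimizer="adam")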
... Show MoreDerivative spectrophotometry is one of the analytical chemistry techniques used
in the analysis and determination of chemicals and pharmaceuticals. This method is
characterized by simplicity, sensitivity and speed. Derivative of Spectra conducted
in several ways, including optical, electronic and mathematical. This operation
usually be done within spectrophotometer. The paper is based on form of a new
program. The program construction is written in Visual Basic language within
Microsoft Excel. The program is able to transform the first, second, third and fourth
derivatives of data and the return of these derivatives to zero order (normal plot).
The program was applied on experimental (trial) and reals values of su
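The abstract's description maps onto a short numerical routine; below is a hedged Python sketch of the two operations the spreadsheet program performs, computing successive derivatives of a spectrum and returning a derivative to the zero-order plot by cumulative integration. The original program is written in Visual Basic inside Microsoft Excel, and any smoothing it applies is not described in the abstract, so this is only an approximation of its behaviour.

    # Sketch: numerical derivatives of an absorbance spectrum and recovery of
    # the zero-order plot by repeated cumulative integration (re-anchored with
    # the known values of the lower-order derivatives at the first wavelength).
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def derivative(wavelength, absorbance, order=1):
        """Return the order-th numerical derivative of the spectrum."""
        d = np.asarray(absorbance, dtype=float)
        for _ in range(order):
            d = np.gradient(d, wavelength)
        return d

    def to_zero_order(wavelength, deriv, order, initial_values):
        """Integrate an order-th derivative back to the zero-order spectrum.
        initial_values[k] is the k-th derivative's value at wavelength[0]."""
        y = np.asarray(deriv, dtype=float)
        for k in reversed(range(order)):
            y = cumulative_trapezoid(y, wavelength, initial=0.0) + initial_values[k]
        return y

Returning to zero order requires the value of each lower-order derivative at the first wavelength as an integration constant, which is why initial_values is passed explicitly.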
Elliptic Curve Cryptography (ECC) is one of the public key cryptosystems that works based on algebraic models in the form of elliptic curves. In ECC, to implement encryption the data must usually first be encoded onto the elliptic curve, which serves as a preprocessing step. Similarly, after decryption a post-processing step must be conducted for decoding, mapping the exact point on the elliptic curve back to the corresponding data. Memory Mapping (MM) and Koblitz Encoding (KE) are the commonly used encoding models, but both have drawbacks: MM needs more memory for processing and KE needs more computational resources. To overcome these issues, the proposed enhanced Koblitz encoding ...
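To make the encoding step concrete, here is a minimal Python sketch of the classical Koblitz encoding the abstract refers to: an integer message m is mapped to candidate x values m*k + j until x^3 + ax + b is a quadratic residue modulo p. The toy curve parameters are illustrative only, and the paper's proposed enhancement is not reproduced here.

    # Sketch of classical Koblitz encoding on y^2 = x^3 + a*x + b over F_p.
    # Toy parameters; a real system would use a standardized curve.
    def koblitz_encode(m, a, b, p, k=20):
        """Map integer m to a point (x, y) by trying x = m*k + j for j = 0..k-1."""
        for j in range(k):
            x = (m * k + j) % p
            rhs = (x * x * x + a * x + b) % p
            if pow(rhs, (p - 1) // 2, p) == 1:      # rhs is a quadratic residue
                y = pow(rhs, (p + 1) // 4, p)       # square root, valid when p % 4 == 3
                return x, y
        raise ValueError("encoding failed; increase k")

    def koblitz_decode(x, k=20):
        """Recover the message from the x-coordinate."""
        return x // k

    # Toy example (p % 4 == 3, and m*k + j stays below p):
    point = koblitz_encode(42, a=2, b=3, p=7919)
    assert koblitz_decode(point[0]) == 42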
Power-electronic converters are essential elements for the effective interconnection of renewable energy sources to the power grid, as well as for the inclusion of energy storage units, vehicle charging stations, microgrids, etc. Converter models that provide an accurate representation of their wideband operation and of their interconnection with other active and passive grid components and systems are necessary for reliable steady-state and transient analyses under normal or abnormal grid operating conditions. This paper introduces two Laplace-domain approaches to model buck and boost DC-DC converters for electromagnetic transient studies. The first approach is an analytical one, where the converter is represented by a two-port admittance model via mo ...
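The paper's two-port admittance formulation is not reproduced here; as a simpler illustration of Laplace-domain converter modelling, the sketch below evaluates the standard averaged small-signal control-to-output transfer function of an ideal buck converter in continuous conduction, G(s) = Vin / (L C s^2 + (L/R) s + 1), with SciPy. The component values are arbitrary placeholders.

    # Sketch: Laplace-domain averaged model of an ideal buck converter,
    # control-to-output transfer function G(s) = Vin / (L*C*s^2 + (L/R)*s + 1).
    # Component values below are arbitrary placeholders.
    import numpy as np
    from scipy import signal

    Vin, L, C, R = 48.0, 100e-6, 220e-6, 5.0   # input voltage, inductance, capacitance, load

    G = signal.TransferFunction([Vin], [L * C, L / R, 1.0])

    # Frequency response up to 100 kHz for a Bode-style inspection.
    w = 2 * np.pi * np.logspace(1, 5, 400)
    w, mag_db, phase_deg = G.bode(w)

    # Step response of the averaged model to a unit duty-cycle step.
    t, y = signal.step(G)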
The current study aimed to measure the attitudes of female teachers towards the use of digital learning and the degree to which they possess digital education skills. The study sample consisted of (180) workers with disabilities (mental disability, auditory impairment, visual disability, and hyperactivity and distraction). To achieve the goals of the study, the measure of the transformation towards digital education for people with disabilities was used. The study reached the following results: digital learning skills are available among workers with disabilities. The study concluded with a series of recommendations, including holding training courses to keep up with the challenges of educational trends and modern technology in this area.