Steganography conceals information by embedding data within cover media and can be categorized into two main domains: spatial and frequency. This paper presents two distinct methods. The first operates in the spatial domain, utilizing the least significant bits (LSBs) to conceal a secret message. The second operates in the frequency domain, hiding the secret message within the LSBs of the middle-frequency band of the discrete cosine transform (DCT) coefficients. Both methods enhance obfuscation through two layers of randomness: random pixel embedding and random bit embedding within each pixel. Unlike other available methods, which embed data in sequential order in fixed amounts, these methods embed the data at random locations in random amounts, further enhancing the level of obfuscation. A pseudo-random binary key generated through a nonlinear combination of eight Linear Feedback Shift Registers (LFSRs) controls this randomness. The experiments use various 512x512 cover images. The first method achieves an average PSNR of 43.5292 dB with a payload capacity of up to 16% of the cover image, whereas the second yields an average PSNR of 38.4092 dB with a payload capacity of up to 8%. The performance analysis demonstrates that the LSB-based method can conceal more data with less visible distortion but is vulnerable to simple image manipulation, while the DCT-based method offers lower capacity and more visible distortion but greater robustness.
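As a rough illustration of the keyed randomness described above, the following Python sketch embeds message bits at pseudo-randomly chosen pixel positions driven by a single 16-bit LFSR. The abstract's actual key combines eight LFSRs nonlinearly; the tap positions, seed, and helper names here are illustrative assumptions.

```python
import numpy as np

def lfsr_bits(seed, taps=(16, 14, 13, 11), n=16):
    """Fibonacci LFSR keystream: yields one pseudo-random bit per step."""
    state = seed
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1       # XOR the tapped bits
        state = ((state << 1) | fb) & ((1 << n) - 1)
        yield fb

def embed_lsb(cover, message_bits, seed=0xACE1):
    """Spatial-domain LSB embedding at keystream-selected pixel positions."""
    stego = cover.copy().ravel()
    key = lfsr_bits(seed)
    nbits = (stego.size - 1).bit_length()       # bits needed to index a pixel
    used = set()
    for bit in message_bits:
        # rejection-sample a fresh pseudo-random position from the keystream
        while True:
            pos = sum(next(key) << i for i in range(nbits))
            if pos < stego.size and pos not in used:
                used.add(pos)
                break
        stego[pos] = (stego[pos] & 0xFE) | bit  # overwrite the pixel's LSB
    return stego.reshape(cover.shape)

cover = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
stego = embed_lsb(cover, [1, 0, 1, 1, 0, 0, 1, 0])
```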
The problem of the high peak-to-average power ratio (PAPR) in OFDM signals is investigated, with a brief presentation of the various methods used to reduce the PAPR and special attention to the clipping method. An alternative clipping approach is presented in which clipping is performed immediately after the IFFT stage, unlike conventional clipping performed at the power amplifier stage, which causes undesirable out-of-band spectral growth. Because the proposed method clips discrete samples rather than the continuous waveform, spectral distortion is avoided. Coding is required to correct the errors introduced by clipping, and the overall system is tested for two modulation types: QPSK as a constant-amplitude modulation …
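A minimal NumPy sketch of the idea, assuming QPSK on 64 subcarriers and an illustrative clipping threshold: the samples produced by the IFFT are clipped directly, preserving their phase.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# QPSK symbols on N subcarriers -> one OFDM symbol via the IFFT
N = 64
bits = np.random.randint(0, 2, (N, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)              # time-domain samples

# Clip the *samples* right after the IFFT, keeping each sample's phase
clip_level = 1.2 * np.sqrt(np.mean(np.abs(x) ** 2))   # illustrative threshold
mag = np.abs(x)
x_clipped = np.where(mag > clip_level,
                     x * (clip_level / np.maximum(mag, 1e-12)), x)

print(papr_db(x), papr_db(x_clipped))           # PAPR drops after clipping
```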
A global pandemic has emerged as a result of the widespread coronavirus disease (COVID-19). Deep learning (DL) techniques are used to diagnose COVID-19 from chest X-ray images. Due to the scarcity of available X-ray images, DL performance for COVID-19 detection lags, remains underdeveloped, and suffers from overfitting. Overfitting happens when a network learns a function with excessively high variance that represents the training data perfectly. Medical imaging lacks large labeled datasets, and the annotation of medical images is expensive and time-consuming for experts. As COVID-19 is an infectious disease, these datasets are scarce, and it is difficult to obtain large datasets …
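The truncated abstract does not name its remedy, but a standard mitigation for small labeled X-ray sets is geometric data augmentation. A minimal sketch using tf.keras preprocessing layers follows; the specific transforms and magnitudes are illustrative assumptions, not the paper's method.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical augmentation pipeline for a small chest X-ray dataset;
# random geometric perturbations multiply the effective training data.
augment = tf.keras.Sequential([
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.05, 0.05),
])

xray_batch = tf.random.uniform((8, 224, 224, 1))   # stand-in images
augmented = augment(xray_batch, training=True)     # fresh variants each epoch
```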
Estimating the unknown parameters of a two-dimensional sinusoidal signal model is an important and difficult problem; the model is significant for modeling symmetric gray-scale texture images. In this paper, we propose employing the Differential Evolution algorithm together with a sequential approach to estimate the unknown frequencies and amplitudes of the 2-D sinusoidal components when the signal is corrupted by noise. Numerical simulations are performed for different sample sizes and various levels of standard deviation to observe how well this method estimates the parameters of the 2-D sinusoidal signal model. The model was then used for modeling the symmetric gray-scale texture image and estimating it using …
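As an illustration of this kind of estimator, the sketch below fits a single 2-D sinusoidal component by minimizing the residual sum of squares with SciPy's differential_evolution. The model form, parameter bounds, and noise level are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic 2-D sinusoid observed in Gaussian noise
M = N = 32
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
clean = 2.0 * np.cos(0.7 * m + 1.1 * n) + 1.0 * np.sin(0.7 * m + 1.1 * n)
y = clean + np.random.normal(0, 0.5, clean.shape)

def sse(p):
    """Residual sum of squares for one component (A, B, lambda, mu)."""
    A, B, lam, mu = p
    model = A * np.cos(lam * m + mu * n) + B * np.sin(lam * m + mu * n)
    return np.sum((y - model) ** 2)

res = differential_evolution(
    sse, bounds=[(-5, 5), (-5, 5), (0, np.pi), (0, np.pi)], seed=1)
print(res.x)   # estimated amplitudes and frequencies
```

In a sequential scheme, the fitted component would be subtracted from y and the same search repeated for the next component.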
Offline handwritten signature is a type of image-based behavioral biometric. Its main challenge is verification accuracy, because an individual seldom signs the same signature twice; this is referred to as intra-user variability. This research aims to improve the recognition accuracy of offline signatures. The proposed method uses both signature length normalization and the histogram of oriented gradients (HOG) to improve accuracy. For verification, a deep-learning technique using a convolutional neural network (CNN) is exploited to build the reference model for future prediction. Experiments are conducted utilizing 4,000 genuine as well as 2,000 skilled forged signatures …
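A minimal sketch of the feature-extraction stage, assuming scikit-image's hog and a fixed resize standing in for length normalization; the target size and HOG parameters are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def signature_features(img, size=(128, 256)):
    """Normalize a signature image to a fixed size, then extract HOG features."""
    norm = resize(img, size, anti_aliasing=True)   # length/size normalization
    return hog(norm, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

img = np.random.rand(200, 400)        # stand-in for a grayscale signature scan
feats = signature_features(img)       # fixed-length vector for the CNN/classifier
```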
Automated medical diagnosis is an important topic, especially for the detection and classification of diseases. Malaria is one of the most widespread diseases, with more than 200 million cases according to the 2016 WHO report. Malaria is usually diagnosed using thin and thick blood smears under a microscope. However, proper diagnosis is difficult, especially in poor countries where the disease is most widespread. Automatic diagnosis therefore helps identify the disease from images of red blood cells, using machine learning techniques and digital image processing. This paper presents an accurate model using a deep convolutional neural network built from scratch. The paper also proposes three CNN …
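For orientation, a from-scratch CNN for binary cell classification might look like the tf.keras sketch below. The layer sizes and input shape are illustrative assumptions, not the paper's reported architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical minimal CNN for infected vs. uninfected red-blood-cell patches
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # binary diagnosis
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```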
With the widespread use of the internet, especially social media, an enormous quantity of information is available spanning fields of study such as psychology, entertainment, sociology, business, news, politics, and other cultural domains. Data mining methodologies applied to social media can produce interesting insights into human behaviour and interaction. This paper demonstrates the application and accuracy of sentiment analysis using a traditional feedforward network and two recurrent neural networks (the gated recurrent unit (GRU) and long short-term memory (LSTM)) to find the differences between them. To test the system's performance, a set of tests is applied to two public datasets. The first …
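A compact way to compare the two recurrent cells is to hold the rest of the pipeline fixed and swap only the cell, as in this tf.keras sketch; vocabulary size, sequence length, and layer widths are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def sentiment_model(cell, vocab=10_000, seq_len=100):
    """Identical pipeline with a swappable recurrent cell (GRU or LSTM)."""
    return models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab, 64),
        cell(64),                                 # the only varying layer
        layers.Dense(1, activation="sigmoid"),    # positive vs. negative
    ])

gru_net = sentiment_model(layers.GRU)
lstm_net = sentiment_model(layers.LSTM)
```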
The new 4-[(7-chloro-2,1,3-benzoxadiazole)azo]-4,5-diphenylimidazole (L) has been synthesized and characterized by microelemental and thermal analyses as well as 1H-NMR, FT-IR, and UV-Vis spectroscopic techniques. (L) acts as a ligand coordinating with the metal ions V(IV), Fe(III), Co(II), Ni(II), Cu(II), and Zn(II). The structures of the new compounds were characterized by elemental and thermal analyses as well as FT-IR and UV-Vis spectra. The magnetic properties and electrical conductivities of the metal complexes were also determined. The nature of the complexes formed in ethanol was studied following the mole-ratio method. The work also includes a theoretical treatment of the formed complexes in the gas phase, done using the HyperCh…
Missing data is one of the problems that may occur in regression models. It is usually handled by the deletion mechanism available in statistical software, but deletion reduces the sample size and thereby weakens statistical inference. In this paper, the Expectation-Maximization (EM) algorithm, the Multicycle Expectation-Conditional Maximization (MC-ECM) algorithm, Expectation-Conditional Maximization Either (ECME), and recurrent neural networks (RNN) are used to estimate multiple regression models when explanatory variables have missing values. Experimental datasets were generated using the Visual Basic programming language, with values of the explanatory variables missing according to a missing-at-random general pattern and s…
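As a rough stand-in for the EM-style estimation described above, the sketch below uses scikit-learn's IterativeImputer, a MICE-style iterative method related to, but not identical to, EM/ECM, to complete the explanatory variables before fitting the regression; the data-generating process is an illustrative assumption.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ [1.5, -2.0, 0.5] + rng.normal(scale=0.3, size=200)

# Knock out ~15% of the explanatory values completely at random
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.15] = np.nan

# Iteratively impute the missing values, then fit on the completed data
X_filled = IterativeImputer(max_iter=25, random_state=0).fit_transform(X_miss)
print(LinearRegression().fit(X_filled, y).coef_)   # close to (1.5, -2.0, 0.5)
```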
Substantial research has been performed on Building Information Modeling (BIM) over the past several years on various topics, for instance the use and benefit of BIM in design, construction, sustainable building, and facility assets. Despite these studies, awareness of BIM within facilities management is still relatively poor. Researchers' interest in BIM is based heavily on the perception that it can facilitate the exchange and reuse of information during various project phases. This property and others can be used in the Iraqi construction industry to motivate the government to eliminate resistance to change and use innovative …
Most companies use social media data for business purposes. Sentiment analysis automatically gathers, analyzes, and summarizes this type of data. Managing unstructured social media data is difficult, and noisy data is a challenge for sentiment analysis. Since over 50% of the sentiment analysis process is data pre-processing, processing big social media data is challenging as well. If pre-processing is carried out correctly, data accuracy may improve. Moreover, the sentiment analysis workflow depends heavily on its pre-processing stage. Because no pre-processing technique works well in all situations or with all data sources, choosing the most important techniques is crucial, and prioritization is an excellent way to do so. As one of many multi-criteria decision-making (MCDM) …
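To make the prioritization idea concrete, a minimal weighted-sum MCDM sketch is shown below; the techniques, criteria, scores, and weights are illustrative assumptions, and the abstract's specific MCDM method is cut off.

```python
import numpy as np

# Hypothetical decision matrix: rows = pre-processing techniques,
# columns = criteria (e.g., accuracy gain, robustness, coverage).
techniques = ["stopword removal", "stemming", "spell correction", "emoji handling"]
scores = np.array([
    [0.8, 0.9, 0.7],
    [0.6, 0.8, 0.9],
    [0.7, 0.4, 0.6],
    [0.5, 0.7, 0.5],
])
weights = np.array([0.5, 0.2, 0.3])   # criterion importance, sums to 1

# Simple weighted-sum model, one of many MCDM scoring rules
ranking = scores @ weights
for t, r in sorted(zip(techniques, ranking), key=lambda p: -p[1]):
    print(f"{t}: {r:.2f}")            # techniques in priority order
```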