Multifocus Image Fusion Based on Homogeneity and Edge Measures

Image fusion is one of the most important techniques in digital image processing. It involves software that integrates multiple sets of data for the same location, and it is one of the newer approaches adopted to solve digital-image problems by producing high-quality images that carry more information for interpretation, classification, segmentation, compression and other purposes. In this research, the problems posed by multi-focus images are addressed through a simulation process in which camera images are fused using previously adopted fusion techniques: arithmetic techniques (BT, CNT and MLT), statistical techniques (LMM, RVS and WT) and spatial techniques (HPFA, HFA and HFM). These techniques were implemented as programs written in MATLAB (2010b). In addition, a homogeneity criterion is proposed for evaluating the quality of the fused digital image, especially its fine details. The criterion uses correlation to estimate homogeneity in different regions of the image by taking blocks of different sizes from different regions and shifting each block pixel by pixel. Traditional statistical criteria (mean, standard deviation, signal-to-noise ratio, mutual information and spatial frequency) were also computed and compared with the proposed criterion. The results showed that the evaluation process was effective because it takes the quality of homogeneous regions into account.
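As a rough illustration of the block-based homogeneity criterion described above (a minimal sketch in Python, not the authors' MATLAB implementation), the function below correlates each block with a copy of itself shifted by one pixel; the block size, the sampling step and the use of Pearson correlation on a grayscale image are assumptions made for the example.

import numpy as np

def block_homogeneity(image, block_size=16, step=16):
    """Estimate homogeneity of a fused grayscale image by correlating each
    block with the same block shifted by one pixel (illustrative sketch)."""
    img = image.astype(np.float64)
    h, w = img.shape
    scores = []
    for r in range(0, h - block_size - 1, step):
        for c in range(0, w - block_size - 1, step):
            block = img[r:r + block_size, c:c + block_size].ravel()
            shifted = img[r + 1:r + 1 + block_size, c + 1:c + 1 + block_size].ravel()
            denom = block.std() * shifted.std()
            if denom > 0:
                # Pearson correlation between the block and its shifted copy:
                # homogeneous regions stay highly correlated under a one-pixel shift.
                scores.append(np.mean((block - block.mean()) * (shifted - shifted.mean())) / denom)
    return float(np.mean(scores)) if scores else 0.0

A fused image that preserves fine detail would be expected to keep this score close to that of the corresponding in-focus source regions.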

Publication Date
Mon May 01 2023
Journal Name
Journal of Economics and Administrative Sciences (JEAS)
Using Statistical Methods to Increase the Contrast Level in Digital Images

This research uses a number of statistical methods, such as the kernel method, watershed, histogram and cubic spline, to improve the contrast of digital images. The results obtained according to the RMSE and NCC measures show that the spline method gives the most accurate results compared with the other statistical methods.
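For reference, the two quality measures mentioned above (RMSE and NCC) can be computed as in the short sketch below; this is a generic illustration in Python, not the code used in the paper, and it assumes two same-sized grayscale images.

import numpy as np

def rmse(reference, enhanced):
    """Root-mean-square error between a reference image and an enhanced image."""
    diff = reference.astype(np.float64) - enhanced.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def ncc(reference, enhanced):
    """Normalized cross-correlation between a reference image and an enhanced image."""
    a = reference.astype(np.float64).ravel()
    b = enhanced.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))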

Publication Date
Tue Jun 01 2021
Journal Name
Iraqi Journal of Physics
Studying Audio Capacity as Carrier of Secret Images in Steganographic System

Steganography is the art of hiding information so that an unsuspicious cover signal carries the secret data. A good steganography technique must satisfy the important criteria of robustness, security, imperceptibility and capacity; improving any one of these criteria affects the others, because they overlap. In this work, a safe, high-capacity audio steganography method is proposed, based on random LSB replacement: encrypted message bits are embedded at random positions in an encrypted cover. The research also studies the capacity of an audio file, speech or music, to safely carry secret images, so that it is difficult for unauthorized persons to suspect the presence of hidden data.
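A minimal sketch of LSB embedding at pseudo-random sample positions is shown below; the seeded NumPy permutation used as the source of "random positions" and the 16-bit PCM samples are assumptions made for the example, and the encryption of cover and message mentioned in the abstract is omitted.

import numpy as np

def embed_lsb(samples, message_bits, seed=1234):
    """Embed message bits into the least significant bits of 16-bit audio
    samples chosen at pseudo-random positions (illustrative sketch)."""
    stego = samples.astype(np.int16).copy()
    positions = np.random.default_rng(seed).permutation(len(stego))[:len(message_bits)]
    for pos, bit in zip(positions, message_bits):
        stego[pos] = (stego[pos] & ~1) | int(bit)   # overwrite the sample's LSB
    return stego

def extract_lsb(stego, n_bits, seed=1234):
    """Recover the embedded bits using the same seed (the shared secret)."""
    positions = np.random.default_rng(seed).permutation(len(stego))[:n_bits]
    return [int(stego[pos]) & 1 for pos in positions]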

Publication Date
Tue Oct 25 2022
Journal Name
Minar Congress 6
Handwritten Digits Classification Based on Discrete Wavelet Transform and Spike Neural Network

In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages. The first stage preprocesses the data, and the second stage performs feature extraction based on the Discrete Wavelet Transform (DWT). The third stage performs classification and is based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved high classification accuracy: 99.1% on the MADBase database and 99.9% on the MNIST database.
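As a hedged illustration of the feature-extraction stage only (the spiking-network classifier is not reproduced here), a single-level 2-D DWT of a digit image can be computed with PyWavelets as sketched below; the choice of the Haar wavelet and the flattening of all four sub-bands into one feature vector are assumptions made for the example.

import numpy as np
import pywt

def dwt_features(digit_image):
    """Single-level 2-D DWT of a grayscale digit image; the approximation and
    detail sub-bands are flattened into one feature vector (sketch)."""
    cA, (cH, cV, cD) = pywt.dwt2(digit_image.astype(np.float64), 'haar')
    return np.concatenate([cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()])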

Publication Date
Sun Sep 01 2013
Journal Name
Baghdad Science Journal
Synthesis and Characterization of New Phenolic Schiff Base Derivatives Based on Terephthalaldehyde

A variety of new phenolic Schiff base derivatives have been synthesized starting from the terephthalaldehyde compound. All proposed structures were supported by FTIR, 1H-NMR, 13C-NMR and elemental analysis, and some derivatives were evaluated by thermal analysis (TGA).

Publication Date
Sun Dec 01 2013
Journal Name
Diyala Journal of Engineering Sciences
Design and Simulation of a Parallel CDMA System Based on 3D-Hadamard Transform

Future wireless systems aim to provide higher transmission data rates, improved spectral efficiency and greater capacity. In this paper a spectrally efficient two-dimensional (2-D) parallel code division multiple access (CDMA) system is proposed for generating and transmitting 2-D CDMA symbols through a 2-D inter-symbol interference (ISI) channel to increase the transmission speed. The 3D-Hadamard matrix is used to generate the 2-D spreading codes required to spread each user's two-dimensional data row-wise and column-wise. Quadrature amplitude modulation (QAM) is used as the data-mapping technique because of the increased spectral efficiency it offers. The new structure was simulated using MATLAB, and a comparison of performance for ser…
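The row-wise and column-wise spreading idea can be sketched as follows; the use of SciPy's Hadamard matrix, the code length and the outer-product construction are assumptions made for the example, not the paper's exact 3D-Hadamard scheme.

import numpy as np
from scipy.linalg import hadamard

def spread_2d(data_block, row_code_idx, col_code_idx, n=8):
    """Spread a 2-D block of data symbols row-wise and column-wise with
    Hadamard codes (illustrative sketch of 2-D spreading)."""
    H = hadamard(n)                        # n x n Hadamard matrix with +/-1 entries
    row_code = H[row_code_idx]             # code applied along the rows
    col_code = H[col_code_idx]             # code applied along the columns
    # Each data symbol d[i, j] becomes an n x n chip block d[i, j] * outer(col_code, row_code).
    return np.kron(data_block, np.outer(col_code, row_code))

De-spreading correlates each n-by-n chip block with the same outer-product code and divides by n squared.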

Publication Date
Mon Dec 31 2018
Journal Name
Journal of Theoretical and Applied Information Technology
Fingerprint Identification and Verification Based on Local Density Distribution with Rotation Compensation

Fingerprints are the most widely used biometric feature for person identification and verification. A fingerprint is easier to work with than other biometric types such as voice or face, and it can produce a very high recognition rate. In this paper a geometric rotation transform is applied to the fingerprint image to obtain a new level of features that represent the finger's characteristics and are used for personal identification; local features are used for their ability to reflect the statistical behavior of fingerprint variation across the fingerprint image. The proposed fingerprint system contains three main stages: (i) preprocessing, (ii) feature extraction, and (iii) matching. The preprocessi…
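A rough sketch of what block-wise local density features could look like is given below: the image is divided into blocks and the ridge-pixel density of each block forms the feature vector, with rotation compensated by rotating the image before extraction. The block size, the mean-based binarization and the use of scipy.ndimage.rotate are assumptions made for the example, not the paper's method.

import numpy as np
from scipy.ndimage import rotate

def local_density_features(fingerprint, block=16, angle=0.0):
    """Block-wise ridge-density feature vector of a grayscale fingerprint,
    optionally rotated first to compensate orientation (illustrative sketch)."""
    img = rotate(fingerprint.astype(np.float64), angle, reshape=False, order=1)
    ridges = img < img.mean()              # crude binarization: ridges are darker than the mean
    h, w = ridges.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            feats.append(ridges[r:r + block, c:c + block].mean())
    return np.asarray(feats)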

Publication Date
Sun Jun 11 2017
Journal Name
Ibn Al-Haitham Journal for Pure and Applied Sciences
Digital Watermarking in Color Images Based on Joint DCT and DWT

The massive distribution and development of digital images, together with friendly editing software, leads to unauthorized use; digital watermarking for image authentication has therefore been developed to address these issues. In this paper we present a method consisting of an embedding stage and an extraction stage. Our development combines the Discrete Wavelet Transform (DWT) with the Discrete Cosine Transform (DCT), relying on the fact that combining the two transforms reduces the drawbacks each has on the quality of the recovered watermark or of the watermarked image and results in an effective rounding method. This is achieved by changing the wavelet coefficients of selected DWT sub-bands (HL or HH), followed by…
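A minimal sketch of the DWT-then-DCT embedding idea is shown below: a one-level Haar DWT, a DCT of one detail sub-band, and additive embedding of the watermark. The wavelet, the choice of sub-band, the embedding strength and the single-channel (grayscale) input are assumptions made for the example, not the paper's exact parameters.

import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_watermark(cover, watermark, alpha=0.05):
    """Embed a small watermark matrix into the DCT of one detail sub-band of a
    one-level DWT of the cover image (illustrative sketch)."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(np.float64), 'haar')
    coeffs = dctn(cH, norm='ortho')              # DCT of the horizontal-detail sub-band
    h, w = watermark.shape                       # watermark must fit inside the sub-band
    coeffs[:h, :w] += alpha * watermark          # additive embedding in low-frequency DCT terms
    cH_marked = idctn(coeffs, norm='ortho')
    return pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')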

Publication Date
Mon Apr 15 2024
Journal Name
Journal of Engineering Science and Technology
Text Steganography Based on Arabic Characters Linguistic Features and Word Shifting Method

In the field of data security, the critical challenge of preserving sensitive information during its transmission through public channels takes centre stage. Steganography, a method employed to conceal data within various carrier objects such as text, can be proposed to address these security challenges. Text, owing to its extensive usage and constrained bandwidth, stands out as an optimal medium for this purpose. Despite the richness of the Arabic language in its linguistic features, only a small number of studies have explored Arabic text steganography. Arabic text, characterized by its distinctive script and linguistic features, has gained notable attention as a promising domain for steganographic ventures. Arabic text steganography harn…
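To make the general family of techniques concrete, the sketch below hides one bit per word by inserting the Arabic kashida (tatweel) character; this is only an illustration of character-feature embedding in Arabic text, not the specific character-feature and word-shifting scheme proposed in the paper.

KASHIDA = "\u0640"   # Arabic tatweel (elongation) character

def embed_bits(words, bits):
    """Hide one bit per word: insert a kashida after the first letter for a
    '1' bit, leave the word unchanged for a '0' bit (illustrative sketch)."""
    out, bit_iter = [], iter(bits)
    for word in words:
        bit = next(bit_iter, None)
        out.append(word[0] + KASHIDA + word[1:] if bit == 1 else word)
    return " ".join(out)

def extract_bits(stego_text):
    """Recover the hidden bits by checking each word for an inserted kashida."""
    return [1 if KASHIDA in word else 0 for word in stego_text.split()]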

Publication Date
Mon Jan 01 2024
Journal Name
Baghdad Science Journal
Proposed Hybrid Cryptosystems Based on Modifications of Playfair Cipher and RSA Cryptosystem

Cipher security is becoming an important step when transmitting sensitive information through networks, and cryptographic algorithms play a major role in providing security and avoiding hacker attacks. In this work two hybrid cryptosystems are proposed that combine a modification of the symmetric Playfair cipher, called the modified Playfair cipher, with two modifications of the asymmetric RSA cryptosystem, called the square-of-RSA technique and the square RSA with Chinese remainder theorem technique. The proposed hybrid cryptosystems have two layers of encryption and decryption. In the first layer the plaintext is encrypted using the modified Playfair cipher to obtain the ciphertext, and this ciphertext is then encrypted using squared…
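As a reminder of how the symmetric layer operates, the sketch below implements the classic 5x5 Playfair cipher (encryption only); the paper's modified Playfair and the RSA-based second layer are not reproduced, and the I/J merging and 'X' padding are the textbook conventions, not necessarily those of the proposed modification.

def playfair_encrypt(plaintext, key):
    """Classic 5x5 Playfair encryption with I/J merged (illustrative sketch)."""
    alphabet = "ABCDEFGHIKLMNOPQRSTUVWXYZ"          # J is folded into I
    seen, square = set(), []
    for ch in (key + alphabet).upper().replace("J", "I"):
        if ch in alphabet and ch not in seen:
            seen.add(ch)
            square.append(ch)
    pos = {ch: divmod(idx, 5) for idx, ch in enumerate(square)}

    # Split the message into digraphs, separating repeated letters with 'X'
    # and padding an odd-length message with a trailing 'X'.
    text = [c for c in plaintext.upper().replace("J", "I") if c in alphabet]
    pairs, i = [], 0
    while i < len(text):
        a = text[i]
        if i + 1 < len(text) and text[i + 1] != a:
            b, i = text[i + 1], i + 2
        else:
            b, i = "X", i + 1
        pairs.append((a, b))

    out = []
    for a, b in pairs:
        ra, ca = pos[a]
        rb, cb = pos[b]
        if ra == rb:                               # same row: take the letters to the right
            out += [square[ra * 5 + (ca + 1) % 5], square[rb * 5 + (cb + 1) % 5]]
        elif ca == cb:                             # same column: take the letters below
            out += [square[((ra + 1) % 5) * 5 + ca], square[((rb + 1) % 5) * 5 + cb]]
        else:                                      # rectangle: swap the column indices
            out += [square[ra * 5 + cb], square[rb * 5 + ca]]
    return "".join(out)

For example, playfair_encrypt("HIDE THE GOLD", "MONARCHY") produces the symmetric-layer ciphertext that a second, RSA-based layer would then encrypt in a two-layer design like the one described.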

Publication Date
Sun Dec 31 2023
Journal Name
Iraqi Journal of Information and Communication Technology
EEG Signal Classification Based on Orthogonal Polynomials, Sparse Filter and SVM Classifier

This work implements an electroencephalogram (EEG) signal classifier. The method uses Orthogonal Polynomials (OP) to convert the EEG signal samples into moments, a Sparse Filter (SF) reduces the number of moments to increase the classification accuracy, and a Support Vector Machine (SVM) classifies the reduced moments into two classes. The proposed method's performance is tested and compared with two other methods on two datasets. The datasets are divided into 80% for training and 20% for testing, with 5-fold cross-validation. The results show that the method exceeds the accuracy of the other methods, with best accuracies of 95.6% and 99.5% on the two datasets, respectively. Finally, from the results, it…
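A rough sketch of a moments-then-SVM pipeline is given below; Chebyshev polynomials stand in for whichever orthogonal-polynomial basis the paper actually uses, simple truncation of the moment vector stands in for the sparse filter, and the sizes and kernel are assumptions made for the example.

import numpy as np
from numpy.polynomial import chebyshev
from sklearn.svm import SVC

def polynomial_moments(segment, order=20):
    """Project one EEG segment onto the first `order` Chebyshev polynomials
    (a stand-in for the paper's orthogonal-polynomial moments)."""
    x = np.linspace(-1.0, 1.0, len(segment))
    basis = chebyshev.chebvander(x, order - 1)        # shape: (len(segment), order)
    return basis.T @ segment / len(segment)

def train_classifier(segments, labels, keep=10):
    """Moments -> truncation (in place of the sparse filter) -> SVM."""
    features = np.array([polynomial_moments(s)[:keep] for s in segments])
    clf = SVC(kernel="rbf")
    clf.fit(features, labels)
    return clf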
