Multi-focus image fusion combines two or more images captured with different focus settings into a single image that describes the scene more accurately. The purpose of image fusion is to generate one image by combining information from several source images of the same scene. In this paper, a multi-focus image fusion method is proposed that works at a hybrid pixel level in both the spatial and transform domains. The proposed method operates on multi-focus source images in the YCbCr color space. As a first step, a two-level stationary wavelet transform is applied to the Y channel of the two source images. The fused Y channel is obtained by applying several fusion rules. The Cb and Cr channels of the source images are fused using principal component analysis (PCA).
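A minimal sketch of the two fusion steps described above follows, using PyWavelets and NumPy. The max-absolute rule for detail coefficients, the averaging of approximation coefficients, and the eigenvector-weighted PCA fusion of the chroma channels are assumptions, since the excerpt does not specify the exact fusion rules.

```python
# Sketch of Y-channel SWT fusion and PCA-based chroma fusion (assumed rules).
import numpy as np
import pywt


def fuse_y_channel(y1, y2, wavelet="db1", level=2):
    """Fuse two Y channels with a two-level stationary wavelet transform.

    Assumes the image dimensions are divisible by 2**level, as required by swt2.
    """
    c1 = pywt.swt2(y1.astype(float), wavelet, level=level)
    c2 = pywt.swt2(y2.astype(float), wavelet, level=level)
    fused = []
    for (a1, d1), (a2, d2) in zip(c1, c2):
        a_f = (a1 + a2) / 2.0                                   # average approximation band
        d_f = tuple(np.where(np.abs(x) >= np.abs(y), x, y)      # keep larger detail magnitude
                    for x, y in zip(d1, d2))
        fused.append((a_f, d_f))
    return pywt.iswt2(fused, wavelet)


def fuse_chroma_pca(c1, c2):
    """PCA-weighted fusion of a chroma channel (Cb or Cr), as a rough approximation."""
    data = np.stack([c1.ravel(), c2.ravel()], axis=1).astype(float)
    cov = np.cov(data, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, -1])                                     # principal eigenvector
    w = w / w.sum()
    return w[0] * c1 + w[1] * c2
```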
Denoising a natural image corrupted by Gaussian noise is a classical problem in signal and image processing. Much work has been done in the field of wavelet thresholding, but most of it has focused on statistical modeling of wavelet coefficients and the optimal choice of thresholds. This paper describes a new method for suppressing noise in images by fusing stationary wavelet denoising with an adaptive Wiener filter. The Wiener filter is applied to the image reconstructed from the approximation coefficients only, while the thresholding technique is applied to the detail coefficients of the transform; the final denoised image is obtained by combining the two results. The proposed method was applied by using
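A minimal sketch of this hybrid scheme follows, assuming a single-level stationary wavelet transform, a universal (VisuShrink-style) soft threshold for the detail bands, and SciPy's adaptive Wiener filter; the paper's actual wavelet, decomposition depth, and threshold rule are not given in the excerpt.

```python
# Sketch: Wiener filter on the approximation-only reconstruction,
# soft thresholding on the detail-only reconstruction, then combine.
import numpy as np
import pywt
from scipy.signal import wiener


def hybrid_denoise(noisy, wavelet="db2"):
    # Single-level SWT (assumes even image dimensions).
    (cA, (cH, cV, cD)), = pywt.swt2(noisy.astype(float), wavelet, level=1)
    zeros = np.zeros_like(cA)

    # Reconstruct an image from the approximation band only, then Wiener-filter it.
    approx_img = pywt.iswt2([(cA, (zeros, zeros, zeros))], wavelet)
    approx_img = wiener(approx_img, mysize=(5, 5))

    # Soft-threshold the detail bands and reconstruct the detail image.
    sigma = np.median(np.abs(cD)) / 0.6745            # noise estimate from diagonal details
    thr = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold (assumed rule)
    details = tuple(pywt.threshold(d, thr, mode="soft") for d in (cH, cV, cD))
    detail_img = pywt.iswt2([(zeros, details)], wavelet)

    # The inverse transform is linear, so the two partial reconstructions sum
    # to the final denoised image.
    return approx_img + detail_img
```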
Audio Compression Using Transform Coding with LZW and Double Shift Coding (Zainab J. Ahmed & Loay E. George, CCIS vol. 1511): The need for audio compression is still a vital issue because of its significance in reducing the data size of one of the most common digital media exchanged between distant parties. In this paper, the efficiencies of two audio compression modules were investigated; the first module is based on the discrete cosine transform and the second module is based on the discrete wavelet transform
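A minimal sketch of the transform-coding idea follows, pairing a block DCT with uniform quantization and plain LZW coding of the quantized coefficients. The block size and quantization step are illustrative assumptions, and the double shift coding stage named in the title is not reproduced here.

```python
# Sketch: block DCT + uniform quantization, followed by LZW over the symbol stream.
import numpy as np
from scipy.fft import dct


def dct_quantize(signal, block=1024, q_step=0.02):
    """Block DCT followed by uniform quantization to integer symbols."""
    pad = (-len(signal)) % block
    x = np.pad(signal.astype(float), (0, pad))
    coeffs = dct(x.reshape(-1, block), type=2, norm="ortho", axis=1)
    return np.round(coeffs / q_step).astype(int)


def lzw_encode(symbols):
    """Plain LZW over a stream of integer symbols; returns codes and the dictionary."""
    table = {(s,): i for i, s in enumerate(sorted(set(symbols)))}
    next_code = len(table)
    out, cur = [], ()
    for s in symbols:
        if cur + (s,) in table:
            cur = cur + (s,)
        else:
            out.append(table[cur])
            table[cur + (s,)] = next_code
            next_code += 1
            cur = (s,)
    if cur:
        out.append(table[cur])
    return out, table


# Example usage on a mono signal `audio` (hypothetical variable):
# codes, table = lzw_encode(dct_quantize(audio).ravel().tolist())
```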
Image steganography is undoubtedly significant in the field of secure multimedia communication. Undetectability and high payload capacity are two of the important characteristics of any form of steganography. In this paper, the level of image security is improved by combining steganography and cryptography techniques to produce a secured image. The proposed method depends on using LSBs as an indicator for hiding encrypted bits in the dual-tree complex wavelet transform (DT-CWT) coefficients. The cover image is divided into non-overlapping blocks of size 3×3. After that, a key is produced by extracting the center pixel (pc) from each block and is used to encrypt each character in the secret text. The cover image is converted using DT-CWT, then the p
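A minimal sketch of the key-generation and text-encryption steps follows. The XOR cipher keyed by the block-center pixels is an assumption (the excerpt does not name the cipher), and the DT-CWT coefficient-embedding stage itself is not shown.

```python
# Sketch: key stream from 3x3 block centres, then a simple XOR encryption of the secret text.
import numpy as np


def block_center_key(cover, block=3):
    """Collect the centre pixel (pc) of each non-overlapping 3x3 block as the key stream."""
    h = cover.shape[0] // block * block
    w = cover.shape[1] // block * block
    blocks = cover[:h, :w].reshape(h // block, block, w // block, block)
    return blocks[:, block // 2, :, block // 2].ravel().astype(np.uint8)


def encrypt_text(secret, key_stream):
    """XOR each byte of the secret text with one key byte (key repeated as needed)."""
    data = np.frombuffer(secret.encode("utf-8"), dtype=np.uint8)
    key = np.resize(key_stream, data.shape)
    return np.bitwise_xor(data, key)
```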
The aim of this study is to compare the effects of three methods, problem-based learning (PBL), PBL with the lecture method, and conventional teaching, on self-directed learning skills among physics undergraduates. The sample comprises 122 students selected randomly from the Physics Department, College of Education in Iraq. In this study, pre- and post-tests were conducted and the instruments were administered to the students for data collection. The data were analyzed, and the statistical results rejected the null hypothesis of this study. The study revealed no significant differences between PBL and PBL with the lecture method; thus, PBL with or without the lecture method enhances self-directed learning skills better
In recent years, English language teaching and second language acquisition have placed significant emphasis on developing critical thinking skills as part of language proficiency development. Advocating a commitment to teaching critical thinking skills within English language courses, this paper reports on an investigation into theoretical definitions of critical thinking, trends concerning the centrality of critical thinking to language teaching, and the connections between critical thinking and language learning. For educators, critical thinking plays a central role in effective language pedagogy, in line with Ennis' (2011) critical thinking categories. The skill of thinking critically
Background/Objectives: The purpose of this study was to classify Alzheimer's disease (AD) patients and Normal Control (NC) subjects using Magnetic Resonance Imaging (MRI). Methods/Statistical analysis: The performance evaluation was carried out on 346 MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. A Deep Belief Network (DBN) was used as the classifier. The network was trained on a sample training set, and the resulting weights were then used to check the system's recognition capability. Findings: This paper presented a novel automated classification system for AD determination. The suggested method offers good performance; the experiments carried out show that the
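A minimal stand-in sketch follows, stacking scikit-learn BernoulliRBM layers with a logistic-regression output to approximate DBN-style unsupervised pre-training plus a supervised output stage. The layer sizes are assumptions, the MRI inputs are assumed to be flattened feature vectors scaled to [0, 1], and this is not the paper's exact DBN architecture.

```python
# Sketch: RBM-stack + logistic regression as a rough stand-in for a DBN classifier.
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline


def build_dbn_like_classifier():
    rbm1 = BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)
    rbm2 = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
    clf = LogisticRegression(max_iter=1000)
    return Pipeline([("rbm1", rbm1), ("rbm2", rbm2), ("logreg", clf)])


# Usage (hypothetical arrays; X: n_samples x n_features in [0, 1], y: 0 = NC, 1 = AD):
# model = build_dbn_like_classifier()
# model.fit(X_train, y_train)
# accuracy = model.score(X_test, y_test)
```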
Genome sequencing has significantly improved the understanding of HIV and AIDS through accurate data on viral transmission, evolution, and anti-therapeutic processes. Deep learning models, such as the Fine-Tuned Gradient Descent Fused Multi-Kernel Convolutional Neural Network (FGD-MCNN), can predict strain behaviour and evaluate complex patterns. Using genotypic-phenotypic data obtained from the Stanford University HIV Drug Resistance Database, three files covering various antiretroviral medication classes (PIs, NRTIs, and NNRTIs) were created for the FGD-MCNN's HIV prediction and drug-resistance analysis. The FGD-MCNN classifies genetic sequences as vulnerable or resistant to antiretroviral drugs by analyzing chromosomal information and id
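A generic multi-kernel 1-D CNN sketch for sequence-based resistance classification follows, assuming one-hot-encoded genetic sequences. The kernel sizes and channel counts are illustrative assumptions and do not reproduce the exact FGD-MCNN architecture.

```python
# Sketch: parallel Conv1d branches with different kernel sizes ("multi-kernel"),
# global max-pooling, and a linear head for susceptible/resistant classification.
import torch
import torch.nn as nn


class MultiKernelCNN(nn.Module):
    def __init__(self, n_symbols=21, n_classes=2, kernel_sizes=(3, 5, 7), channels=64):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(n_symbols, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.head = nn.Linear(channels * len(kernel_sizes), n_classes)

    def forward(self, x):  # x: (batch, n_symbols, sequence_length), one-hot encoded
        feats = [torch.relu(b(x)).amax(dim=-1) for b in self.branches]  # global max-pool
        return self.head(torch.cat(feats, dim=-1))


# Training would use a standard cross-entropy loss with a gradient-descent optimiser
# (e.g. torch.optim.SGD or Adam) on labelled susceptible/resistant sequences.
```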
Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have too little data to train DL frameworks. Manual labeling is usually needed to provide labeled data, which typically involves human annotators with extensive background knowledge. This annotation process is costly, time-consuming, and error-prone. Every DL framework is typically fed a significant amount of labeled data from which it automatically learns representations. Ultimately, more data generally yields a better DL model, although performance is also application dependent. This issue is the main barrier for