Multi-Focus Image Fusion Based on Stationary Wavelet Transform and PCA on YCbCr Color Space

Multi-focus image fusion combines two or more partially focused images of the same scene into a single image that describes the scene more completely. In this paper, a multi-focus image fusion method is proposed that operates at a hybrid pixel level in both the spatial and transform domains. The method is applied to multi-focus source images in the YCbCr color space. First, a two-level stationary wavelet transform (SWT) is applied to the Y channel of the two source images, and the fused Y channel is obtained by applying several fusion rules to the resulting coefficients. The Cb and Cr channels of the source images are fused using principal component analysis (PCA). Performance is evaluated in terms of PSNR, RMSE and SSIM. The results show that the fusion quality of the proposed algorithm exceeds that of several other fusion methods, including SWT alone, PCA on RGB source images and PCA on YCbCr source images.
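
A minimal sketch of this pipeline is shown below, assuming the two source images have already been converted to YCbCr and split into Y, Cb and Cr arrays. It uses PyWavelets and NumPy; the average/max-absolute rule for the SWT coefficients is just one common fusion rule standing in for the paper's full set, and the function names are illustrative.

```python
import numpy as np
import pywt

def fuse_luma_swt(y1, y2, wavelet="haar", level=2):
    """Fuse two Y channels with a 2-level stationary wavelet transform.

    Approximation bands are averaged; detail bands take the coefficient with
    the larger absolute value. Image sides must be divisible by 2**level.
    """
    c1 = pywt.swt2(y1.astype(float), wavelet, level=level)
    c2 = pywt.swt2(y2.astype(float), wavelet, level=level)
    fused = []
    for (a1, d1), (a2, d2) in zip(c1, c2):
        a = (a1 + a2) / 2.0
        d = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2))
        fused.append((a, d))
    return pywt.iswt2(fused, wavelet)

def fuse_chroma_pca(c1, c2):
    """Fuse two chroma channels by weighting them with the principal
    eigenvector of their joint covariance matrix (classic PCA fusion)."""
    data = np.stack([c1.ravel(), c2.ravel()]).astype(float)
    vals, vecs = np.linalg.eigh(np.cov(data))
    v = np.abs(vecs[:, np.argmax(vals)])
    w = v / v.sum()
    return w[0] * c1 + w[1] * c2
```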

Publication Date: Sat Jan 01 2022
Journal Name: Food Science and Technology
Study on herbicide residues in soybean processing based on UPLC-MS/MS detection

Publication Date: Mon Apr 03 2023
Journal Name: Journal of Al-Qadisiyah for Computer Science and Mathematics
A General Overview on the Categories of Image Features Extraction Techniques: A Survey

In image processing and computer vision it is important to represent an image by its information content. That information comes from the image's features, which are obtained through feature detection/extraction techniques and feature description. In computer vision, features denote informative data. The human eye extracts information from a raw image effortlessly, but a computer cannot interpret image content directly, which is why many feature extraction techniques have been proposed and have progressed rapidly. This paper presents a general overview of the categories of image feature extraction techniques.
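
As a toy illustration of two of the categories such a survey typically distinguishes, local keypoint detection/description versus global statistical features, the snippet below extracts ORB keypoints and a grayscale histogram with OpenCV and NumPy. It is not taken from the paper; the file name and parameters are placeholders.

```python
import cv2
import numpy as np

# Load a grayscale image (path is a placeholder).
gray = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

# Local keypoint category: ORB detects keypoints and computes binary descriptors.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(gray, None)
print(len(keypoints), "keypoints, descriptor matrix shape:", descriptors.shape)

# Global statistical category: an intensity histogram of the whole image.
hist = np.histogram(gray, bins=256, range=(0, 256))[0]
print("histogram feature vector of length", hist.size)
```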

Publication Date: Wed May 31 2017
Journal Name: Ibn Al-Haitham Journal for Pure and Applied Sciences
Retrieving Image from Noisy Version depending on Multiwavelet Soft-Thresholding with Smoothing Filter

This paper describes a new method for image denoising. It analyzes the properties of the multiwavelet coefficients of natural images and suggests a method for computing the multiwavelet transform using a first-order approximation. The paper presents a simple and effective noise-removal model that retrieves the image by estimating it from its noisy version. The proposed algorithm mixes soft-thresholding with a mean filter and applies them concurrently to the noisy image, which is divided into blocks of equal size so that the blocks can be processed in parallel, improving the enhancement performance and reducing the time needed to apply the proposed algorithm…
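
A simplified sketch of the block-wise idea is given below, using an ordinary discrete wavelet transform from PyWavelets as a stand-in for the multiwavelet transform. Each block is denoised by soft-thresholding its wavelet coefficients, smoothed with a 3x3 mean filter, and the two results are blended; the blocks are processed independently, which is what would allow the concurrent execution the abstract mentions. Function names and the blending weight are illustrative.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def denoise_block(block, wavelet="db2", level=2, sigma=None, mix=0.5):
    """Denoise one block by blending wavelet soft-thresholding with a mean filter."""
    coeffs = pywt.wavedec2(block.astype(float), wavelet, level=level)
    if sigma is None:
        # Robust noise estimate from the finest diagonal detail band.
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(block.size))  # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    wav = pywt.waverec2(new_coeffs, wavelet)[: block.shape[0], : block.shape[1]]
    smooth = uniform_filter(block.astype(float), size=3)  # 3x3 mean filter
    return mix * wav + (1 - mix) * smooth

def denoise_image(img, block=64, **kw):
    """Split the image into equal-size blocks and denoise each independently
    (independent blocks are what makes concurrent processing possible)."""
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            tile = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = denoise_block(tile, **kw)
    return out
```
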
Publication Date: Sun Jan 20 2019
Journal Name: Ibn Al-Haitham Journal for Pure and Applied Sciences
Text Classification Based on Weighted Extreme Learning Machine

The huge number of documents on the internet has created a pressing need for text classification (TC), which is used to organize these documents. In this paper, a new model based on the Extreme Learning Machine (ELM) is proposed. The model consists of several phases: preprocessing, feature extraction, Multiple Linear Regression (MLR) and ELM. Its basic idea is to compute feature weights using MLR; these weights, together with the extracted features, are fed to the ELM, producing a Weighted Extreme Learning Machine (WELM). The results show that the proposed WELM performs considerably better than the standard ELM.
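
The WELM idea as described here, MLR-derived feature weights feeding a standard ELM, can be sketched roughly as follows with NumPy. It assumes the preprocessing and feature extraction phases have already produced a numeric feature matrix X and integer class labels y; the class name and hyperparameters are illustrative, not the paper's.

```python
import numpy as np

class WeightedELM:
    """Minimal ELM whose input features are scaled by regression-derived weights."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Feature weights from multiple linear regression (least squares).
        Y = np.eye(y.max() + 1)[y]                      # one-hot targets
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        self.feat_w = np.abs(coef).sum(axis=1)          # one weight per feature
        Xw = X * self.feat_w                            # weighted features
        # Random hidden layer, closed-form output weights (standard ELM).
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(Xw @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ Y
        return self

    def predict(self, X):
        H = np.tanh((X * self.feat_w) @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)
```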

Publication Date: Thu Aug 01 2019
Journal Name: 2019 2nd International Conference on Engineering Technology and Its Applications (IICETA)
Human Gait Identification System Based on Average Silhouette

Publication Date: Mon Apr 15 2019
Journal Name: Proceedings of the International Conference on Information and Communication Technology
Hybrid LDPC-STBC communications system based on chaos

Publication Date: Sun Sep 24 2023
Journal Name: Journal of Al-Qadisiyah for Computer Science and Mathematics
Iris Data Compression Based on Hexa-Data Coding

Iris research focuses on developing techniques for identifying and locating relevant biometric features with accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods rely on complex modelling of traits and characteristics, which in turn reduces their effectiveness in real-time systems. This paper introduces a novel parameterized technique for iris segmentation. The method proceeds in several steps: converting the grayscale eye image to a bit-plane representation, selecting the most significant bit planes, and then parameterizing the iris location, resulting in an accurate segmentation of the iris from the original…
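
The bit-plane part of this pipeline is straightforward to illustrate; the sketch below (NumPy, illustrative function names) decomposes an 8-bit eye image into bit planes and keeps the most significant ones, the coarse representation from which the iris location would then be parameterized. The parameterization step itself is not shown, since the abstract is truncated.

```python
import numpy as np

def bit_planes(gray):
    """Decompose an 8-bit grayscale image into its 8 bit planes (LSB first)."""
    return [(gray >> k) & 1 for k in range(8)]

def msb_image(gray, n_planes=3):
    """Keep only the n most significant bit planes, giving a coarse image in
    which the dark pupil/iris region stands out for later parameterization."""
    planes = bit_planes(gray.astype(np.uint8))
    weights = [2 ** k for k in range(8 - n_planes, 8)]
    return sum(w * p for w, p in zip(weights, planes[8 - n_planes:]))
```
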
Publication Date: Sun Apr 23 2017
Journal Name: International Conference of Reliable Information and Communication Technology
Classification of Arabic Writer Based on Clustering Techniques

Arabic text categorization for pattern recognition is challenging. We propose, for the first time, a novel holistic clustering-based method for classifying Arabic writers. The categorization proceeds in stages. First, the document images are sectioned into lines, words and characters. Second, structural and statistical features are obtained from the sectioned portions. Third, the F-measure is used to evaluate the extracted features and their combinations under different linkage methods, distance measures and numbers of groups. Finally, experiments are conducted on the standard KHATT dataset of Arabic handwritten text, which comprises varying samples from 1000 writers. The results in the generation…
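
A rough sketch of the clustering and evaluation stages, grouping writer feature vectors under a chosen linkage method and distance measure and scoring the grouping with a pairwise F-measure, is given below using SciPy and NumPy. The feature extraction itself and the exact F-measure variant used in the paper are not specified here, so the helper functions are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def pairwise_f_measure(labels_true, labels_pred):
    """F-measure over sample pairs: a pair is positive when both samples share
    a writer (ground truth) or share a cluster (prediction)."""
    labels_true, labels_pred = np.asarray(labels_true), np.asarray(labels_pred)
    same_true = labels_true[:, None] == labels_true[None, :]
    same_pred = labels_pred[:, None] == labels_pred[None, :]
    mask = ~np.eye(len(labels_true), dtype=bool)          # ignore self-pairs
    tp = np.sum(same_true & same_pred & mask)
    fp = np.sum(~same_true & same_pred & mask)
    fn = np.sum(same_true & ~same_pred & mask)
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def cluster_writers(features, n_groups, method="average", metric="euclidean"):
    """Agglomerative clustering of writer feature vectors with a chosen
    linkage method and distance measure."""
    Z = linkage(pdist(features, metric=metric), method=method)
    return fcluster(Z, t=n_groups, criterion="maxclust")
```
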
Publication Date: Mon Mar 01 2021
Journal Name: IOP Conference Series: Materials Science and Engineering
Speech Enhancement Algorithm Based on a Hybrid Estimator
Speech is the essential way for humans to interact with each other and with machines, yet it is always contaminated by various kinds of environmental noise. Speech enhancement algorithms (SEA) have therefore become an important approach in speech processing for suppressing background noise and recovering the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed in the minimum mean square error sense. The clean signal is estimated by exploiting Laplacian modeling of the speech and noise coefficients of an orthogonal transform, the Discrete Krawtchouk-Tchebichef transform. The Discrete Krawtchouk-Tchebichef…
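
A heavily simplified, single-stage sketch of transform-domain MMSE-style enhancement is shown below with SciPy and NumPy. It substitutes the DCT for the Discrete Krawtchouk-Tchebichef transform and a Wiener-type gain for the paper's Laplacian-based two-stage estimator, so it illustrates only the general shape of the approach; all parameters are placeholders.

```python
import numpy as np
from scipy.fft import dct, idct

def enhance(noisy, frame=256, noise_frames=10):
    """Estimate per-bin noise power from the first few (assumed speech-free)
    frames, then apply a Wiener-type MMSE gain to each frame's DCT coefficients."""
    n = len(noisy) // frame
    frames = noisy[: n * frame].reshape(n, frame)
    coeffs = dct(frames, norm="ortho", axis=1)
    noise_var = np.mean(coeffs[:noise_frames] ** 2, axis=0)        # noise power per bin
    clean_var = np.maximum(np.mean(coeffs ** 2, axis=0) - noise_var, 1e-10)
    gain = clean_var / (clean_var + noise_var)                     # Wiener/MMSE gain
    return idct(coeffs * gain, norm="ortho", axis=1).ravel()
```
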
Publication Date: Sun Jun 01 2014
Journal Name: Baghdad Science Journal
Classification of fetal abnormalities based on CTG signal

Fetal heart rate (FHR) signal processing based on Artificial Neural Networks (ANN), Fuzzy Logic (FL) and the frequency-domain Discrete Wavelet Transform (DWT) was analyzed in order to perform automatic analysis on personal computers. Cardiotocography (CTG) is a primary biophysical method of fetal monitoring; assessment of printed CTG traces has traditionally relied on visual analysis of patterns describing the variability of the fetal heart rate signal. Fetal heart rate data of pregnant women between 38 and 40 weeks of gestation were studied. In the first stage the CTG tracing was converted into a digital series so that it could be analyzed, while in the second stage the FHR time series was t…
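
A toy version of the second stage, extracting DWT sub-band features from a digitized FHR series and feeding them to a small neural network, might look like the sketch below (PyWavelets and scikit-learn). Random placeholder data stand in for real CTG recordings, and the fuzzy-logic component of the paper is not shown.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(fhr, wavelet="db4", level=4):
    """Summarize a digitized FHR time series by the energy and standard
    deviation of each DWT sub-band (a compact description of variability)."""
    coeffs = pywt.wavedec(np.asarray(fhr, dtype=float), wavelet, level=level)
    return np.array([v for c in coeffs for v in (np.sum(c ** 2), np.std(c))])

# Placeholder data: 40 synthetic "recordings" with random normal/abnormal labels.
rng = np.random.default_rng(0)
signals = rng.standard_normal((40, 512))
X = np.array([dwt_features(s) for s in signals])
y = rng.integers(0, 2, size=40)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```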