Multifocus Images Fusion Based On Homogenity and Edges Measures

Image fusion is one of the most important techniques in digital image processing. It involves developing software to integrate multiple sets of data for the same location, and it is one of the newer approaches adopted to solve digital image problems, producing high-quality images that contain more information for interpretation, classification, segmentation, compression, and similar purposes. In this research, the problems faced by digital images such as multi-focus images are addressed through a simulation process in which a camera is used to acquire images that are then fused using previously adopted fusion techniques: arithmetic techniques (BT, CNT and MLT), statistical techniques (LMM, RVS and WT) and spatial techniques (HPFA, HFA and HFM). These techniques were implemented as programs in MATLAB (2010b). In addition, a homogeneity criterion is suggested for evaluating the quality of the fused images, especially their fine details. The criterion uses correlation to estimate homogeneity in different regions of the image by taking blocks of different sizes from different regions and shifting each block pixel by pixel. Traditional statistical criteria (mean, standard deviation, signal-to-noise ratio, mutual information and spatial frequency) were also computed and compared with the suggested criterion. The results showed that the evaluation process was effective because it took the quality of the homogeneous regions into account.
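The authors' MATLAB code is not reproduced here; as a rough illustration of the two kinds of measures the abstract describes, the Python/NumPy sketch below computes the standard spatial-frequency metric and a block-shift correlation as one plausible reading of the suggested homogeneity criterion (the block size, shift direction, and function names are assumptions, not the paper's implementation).

```python
import numpy as np

def spatial_frequency(img):
    """Standard spatial-frequency metric: RMS of horizontal and vertical first differences."""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def block_homogeneity(img, top, left, size=16):
    """Correlate a block with the same block shifted one pixel to the right;
    values near 1 suggest a homogeneous (smooth) region."""
    img = np.asarray(img, dtype=float)
    block = img[top:top + size, left:left + size]
    shifted = img[top:top + size, left + 1:left + 1 + size]
    return np.corrcoef(block.ravel(), shifted.ravel())[0, 1]
```

A fused result can then be judged by how well regions with high block correlation (smooth areas) keep that smoothness after fusion.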

Publication Date: Wed May 01 2019
Journal Name: Iraqi Journal Of Science
Optical Images Fusion Based on Linear Interpolation Methods

Merging images is one of the most important technologies in remote sensing applications and geographic information systems. In this study, a simulation process using a camera was carried out to fuse images, resizing them with several interpolation methods (nearest neighbour, bilinear and bicubic). Statistical techniques were used as an efficient merging approach in the image integration process, employing different models, namely Local Mean Matching (LMM) and Regression Variable Substitution (RVS); a spatial-frequency technique, the high pass filter additive method (HPFA), was also applied. Statistical measures were then used to check the quality of the merged images. This was carried out by calculating the correlation …
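For readers who want to experiment with the ideas named in this abstract, the following Python/SciPy sketch shows interpolation-based resizing and a generic high pass filter additive (HPFA) fusion step; the box-filter kernel size and function names are illustrative choices, not the study's exact parameters.

```python
import numpy as np
from scipy import ndimage

def resize(img, factor, method="bilinear"):
    """Resize using the three interpolation orders named in the abstract."""
    order = {"nearest": 0, "bilinear": 1, "bicubic": 3}[method]
    return ndimage.zoom(np.asarray(img, dtype=float), factor, order=order)

def hpfa_fuse(low_res, high_res, kernel_size=5):
    """High pass filter additive (HPFA) sketch: add the high-frequency detail
    of the sharper image to the (already resized) blurrier one."""
    high_res = np.asarray(high_res, dtype=float)
    detail = high_res - ndimage.uniform_filter(high_res, size=kernel_size)
    return np.asarray(low_res, dtype=float) + detail
```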

Publication Date: Mon Jan 27 2020
Journal Name: Iraqi Journal Of Science
A Proposed Approach to Determine the Edges in SAR images

Radar is one of the most prominent devices for long-range detection; its mechanism uses electromagnetic waves to acquire Synthetic Aperture Radar (SAR) images over long distances. Edge detection is one of the important processes used in many fields, including radar imagery, where it helps reveal objects such as moving vehicles, ships, aircraft, and meteorological and terrain features. In order to identify these objects accurately, their edges must be detected. Many traditional methods are used to isolate edges, but they do not give good results in the detection process. Conventional methods apply an operator such as the Sobel operator to perform edge detection, where the edge …
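As a reference point for the conventional approach discussed here, a minimal Python/SciPy implementation of Sobel edge detection looks like the following; the threshold is left as a user parameter, and real SAR imagery would usually be speckle-filtered first.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(img, threshold=None):
    """Sobel edge detection: gradient magnitude, optionally thresholded to a binary map."""
    img = np.asarray(img, dtype=float)
    gx = ndimage.sobel(img, axis=1)  # horizontal gradient
    gy = ndimage.sobel(img, axis=0)  # vertical gradient
    magnitude = np.hypot(gx, gy)
    return magnitude if threshold is None else magnitude > threshold
```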

Publication Date: Thu Jun 08 2017
Journal Name: Ibn Al-haitham Journal For Pure And Applied Sciences
Satellite Images Fusion Using Modified PCA Substitution Method

In this paper, a new tunable approach for fusing satellite images that fall in different electromagnetic wavelength ranges is presented. It gives the ability to make the features of one image slightly more prominent than the other without reducing the overall quality of the fused image; the approach is based on the principal component analysis (PCA) fusion method. A comparison is made between the results of the proposed approach and two existing fusion methods (the PCA fusion method and the projection of eigenvectors on the bands fusion method), and the results show the validity of the new method.
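The paper's exact tuning mechanism is not spelled out in this listing; the sketch below shows standard PCA fusion of two co-registered images in Python/NumPy, with a `bias` parameter added purely as a guess at how one image's contribution could be nudged above the other's.

```python
import numpy as np

def pca_fuse(img_a, img_b, bias=0.0):
    """Baseline PCA fusion: weight each image by the first principal
    component of their joint covariance.  `bias` is a hypothetical
    tuning term, not the paper's exact formulation."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))  # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    pc = np.abs(vecs[:, np.argmax(vals)])            # dominant eigenvector
    w = pc / pc.sum()
    w_a, w_b = w[0] + bias, w[1] - bias              # hypothetical preference shift
    return w_a * a + w_b * b
```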

Publication Date: Sun Feb 25 2024
Journal Name: Baghdad Science Journal
Research on Emotion Classification Based on Multi-modal Fusion

Nowadays, people's expression on the Internet is no longer limited to text; with the rise of the short-video boom in particular, large amounts of multi-modal data such as text, pictures, audio, and video have emerged. Compared with single-modal data, multi-modal data always contains far more information, and mining it can help computers better understand human emotional characteristics. However, because multi-modal data exhibit obvious dynamic time-series features, the dynamic correlation problem within a single mode and between different modes in the same application scene must be solved during the fusion process. To solve this problem, this paper proposes a feature extraction framework of …
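The framework itself is cut off in this preview; purely to illustrate the fusion step it refers to, here is a minimal, hypothetical sketch of feature-level fusion of three modalities (the function and feature names are invented).

```python
import numpy as np

def fuse_modalities(text_feat, audio_feat, video_feat):
    """Naive feature-level fusion: z-score each modality's feature vector,
    then concatenate them for a downstream emotion classifier."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        return (v - v.mean()) / (v.std() + 1e-8)
    return np.concatenate([norm(text_feat), norm(audio_feat), norm(video_feat)])
```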

Publication Date: Mon Feb 20 2017
Journal Name: Ibn Al-haitham Journal For Pure And Applied Sciences
Satellite Images Fusion Using Mapped Wavelet Transform Through PCA

In this paper a new fusion method is proposed to fuse multiple satellite images acquired through different electromagnetic spectrum ranges into a single grey-scale image. The proposed method is based on the discrete wavelet transform using pyramid and packet bases. The fusion is performed with two different fusion rules: the low-frequency part is remapped through PCA analysis based on the covariance matrix and the correlation matrix, while the high-frequency part is fused using different rules (adding, selecting the higher coefficient, replacement); the restored image is then obtained by applying the inverse discrete wavelet transform. The experimental results show the validity of the proposed fusion method to fuse such …
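A compact way to reproduce the general scheme, though not the paper's PCA remapping of the low-frequency part (replaced here by a simple average), is single-level discrete wavelet fusion with the "select the higher" rule on the detail sub-bands, sketched below with PyWavelets; the wavelet choice is arbitrary.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2"):
    """Single-level DWT fusion sketch: average the approximation band and keep
    the larger-magnitude coefficient in each detail sub-band."""
    ca, (cha, cva, cda) = pywt.dwt2(np.asarray(img_a, dtype=float), wavelet)
    cb, (chb, cvb, cdb) = pywt.dwt2(np.asarray(img_b, dtype=float), wavelet)
    approx = (ca + cb) / 2.0                       # paper uses PCA remapping here
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = (pick(cha, chb), pick(cva, cvb), pick(cda, cdb))
    return pywt.idwt2((approx, details), wavelet)  # inverse transform restores the image
```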

Publication Date: Tue Dec 07 2021
Journal Name: 2021 14th International Conference on Developments in eSystems Engineering (DeSE)
Content Based Image Retrieval Based on Feature Fusion and Support Vector Machine

Publication Date: Thu Dec 02 2021
Journal Name: Iraqi Journal Of Science
Quantitative Analysis based on Supervised Classification of Medical Image Fusion Techniques

Fusion can be described as the process of integrating information collected from two or more images from different sources to form a single integrated image. The resulting image is more productive, informative and descriptive than the original input images taken individually. Fusion technology in medical imaging is useful to physicians for diagnosing disease and for robotic surgery. This paper describes different techniques for the fusion of medical images and studies their quality through quantitative statistical analysis, by examining the statistical characteristics of the image targets in the edge regions, the differences between the classes in the image, and the calculation …
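As an illustration of the kind of quantitative, class-based statistics the paper refers to, the following Python sketch computes per-class mean, standard deviation, and average edge strength from a fused image and a supervised classification map; the specific statistics and names are assumptions, not the paper's full methodology.

```python
import numpy as np
from scipy import ndimage

def class_statistics(fused, labels):
    """For each class in a supervised classification of the fused image,
    report its mean, standard deviation, and mean gradient magnitude
    (a simple proxy for edge strength within that class)."""
    fused = np.asarray(fused, dtype=float)
    grad = np.hypot(ndimage.sobel(fused, axis=1), ndimage.sobel(fused, axis=0))
    stats = {}
    for cls in np.unique(labels):
        mask = labels == cls
        stats[int(cls)] = {
            "mean": fused[mask].mean(),
            "std": fused[mask].std(),
            "edge_strength": grad[mask].mean(),
        }
    return stats
```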

Publication Date: Tue Jun 30 2015
Journal Name: Al-khwarizmi Engineering Journal
Multi-Focus Image Fusion Based on Pixel Significance Using Counterlet Transform


The objective of image fusion is to merge multiple source images in such a way that the final representation contains a higher amount of useful information than any single input. In this paper, a weighted average fusion method is proposed. It relies on weights extracted from the source images using the contourlet transform: the approximation coefficients of the transform are set to zero, and the inverse contourlet transform is then applied to obtain the detail content of the images to be fused. The performance of the proposed algorithm has been verified on several grey-scale and color test images and compared with some existing methods.
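A contourlet implementation is not part of the standard Python scientific stack, so the sketch below substitutes a wavelet transform to illustrate the same weight-extraction idea: zero the approximation coefficients, invert the transform to isolate the details, and use their magnitudes as fusion weights. It is an analogy, not the paper's algorithm.

```python
import numpy as np
import pywt

def detail_weights(img, wavelet="db2"):
    """Zero the approximation coefficients and invert the transform,
    leaving only the detail (edge/texture) content of the image."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=2)
    coeffs[0] = np.zeros_like(coeffs[0])        # drop the approximation band
    return np.abs(pywt.waverec2(coeffs, wavelet))

def weighted_fuse(img_a, img_b, eps=1e-8):
    """Weighted average fusion: pixels with stronger detail in one source
    contribute more to the fused result."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    # waverec2 may pad odd-sized images by a pixel; crop weights to fit.
    wa = detail_weights(a)[:a.shape[0], :a.shape[1]]
    wb = detail_weights(b)[:b.shape[0], :b.shape[1]]
    return (wa * a + wb * b) / (wa + wb + eps)
```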

Publication Date: Tue Feb 01 2022
Journal Name: Int. J. Nonlinear Anal. Appl.
Finger Vein Recognition Based on PCA and Fusion Convolutional Neural Network

Finger vein recognition for user identification is a relatively recent biometric recognition technology with a broad variety of applications, and biometric authentication is extensively employed in the information age. As one of the most essential authentication technologies available today, finger vein recognition captures our attention owing to its high level of security, dependability, and track record of performance. Embedded convolutional neural networks are based on early or intermediate fusion of the input; in early fusion, pictures are categorized according to their location in the input space. In this study, we employ a highly optimized network and late fusion rather than early fusion to create a fusion convolutional neural network …
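As a toy illustration of late fusion, not the authors' optimized architecture, a PyTorch module with two independent convolutional branches whose embeddings are concatenated only before the classifier could look like this; the layer sizes and class count are placeholders.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Two independent convolutional branches (e.g. two finger-vein inputs);
    their embeddings are concatenated only at the end ('late fusion')."""
    def __init__(self, num_classes=100):          # class count is a placeholder
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.branch_a = branch()
        self.branch_b = branch()
        self.classifier = nn.Linear(32 * 2, num_classes)

    def forward(self, x_a, x_b):
        fused = torch.cat([self.branch_a(x_a), self.branch_b(x_b)], dim=1)
        return self.classifier(fused)
```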

Publication Date: Fri Apr 30 2021
Journal Name: Iraqi Journal Of Science
Iris Identification Based on the Fusion of Multiple Methods

Iris recognition occupies an important rank among biometric approaches as a result of its accuracy and efficiency. The aim of this paper is to propose a system for iris identification based on the fusion of the scale invariant feature transform (SIFT) with local binary patterns for feature extraction. Several steps were applied. Firstly, the input image was converted to grayscale. Secondly, the iris was localized using the circular Hough transform. Thirdly, the iris region was normalized using Daugman's rubber sheet model, followed by histogram equalization to enhance it. Finally, the features were extracted by utilizing the scale invariant feature …
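A rough Python/OpenCV sketch of the pipeline's first steps (grayscale conversion, circular Hough localization, rubber-sheet normalization, histogram equalization, SIFT extraction) is given below; the Hough parameters and the pupil-radius guess are placeholders, and the LBP branch of the fusion is omitted.

```python
import cv2
import numpy as np

def normalize_iris(gray, cx, cy, r_pupil, r_iris, height=64, width=360):
    """Rubber-sheet style normalization: sample the annulus between the pupil
    and iris boundaries into a fixed-size rectangular strip."""
    theta = np.linspace(0, 2 * np.pi, width)
    radii = np.linspace(r_pupil, r_iris, height)
    xs = cx + np.outer(radii, np.cos(theta))
    ys = cy + np.outer(radii, np.sin(theta))
    return cv2.remap(gray, xs.astype(np.float32), ys.astype(np.float32),
                     cv2.INTER_LINEAR)

def iris_features(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Localize the circular iris boundary (parameters are placeholders;
    # assumes at least one circle is found).
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30, minRadius=20, maxRadius=120)
    cx, cy, r_iris = circles[0, 0].astype(int)
    # Pupil radius is guessed as a fraction of the iris radius for this sketch.
    strip = normalize_iris(gray, cx, cy, r_pupil=r_iris // 3, r_iris=r_iris)
    strip = cv2.equalizeHist(strip)              # enhance the normalized iris region
    keypoints, descriptors = cv2.SIFT_create().detectAndCompute(strip, None)
    return keypoints, descriptors
```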
