Multi-focus image fusion combines two or more partially focused images of the same scene into a single image that describes the scene more accurately. The purpose of image fusion is to generate one image by combining information from several source images of the same scene. In this paper, a hybrid pixel-level multi-focus image fusion method operating in both the spatial and transform domains is proposed. The proposed method is applied to multi-focus source images in the YCbCr color space. First, a two-level stationary wavelet transform (SWT) is applied to the Y channel of the two source images, and the fused Y channel is obtained by applying several fusion rules. The Cb and Cr channels of the source images are fused using principal component analysis (PCA). The performance of the proposed method is evaluated in terms of PSNR, RMSE, and SSIM. The results show that the fusion quality of the proposed algorithm is better than that obtained by several other fusion methods, including SWT alone, PCA with RGB source images, and PCA with YCbCr source images.
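As a minimal sketch of the chroma-fusion step, the following shows a generic PCA-based fusion of two single-channel images (such as the Cb or Cr planes of the two sources). Function and variable names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: PCA-based fusion of two equally sized channels
# (e.g. the Cb or Cr planes of two YCbCr multi-focus sources).
import numpy as np

def pca_fuse(chan_a: np.ndarray, chan_b: np.ndarray) -> np.ndarray:
    """Weight the two channels by the leading principal component
    of their joint covariance and return the fused channel."""
    data = np.stack([chan_a.ravel(), chan_b.ravel()]).astype(np.float64)
    cov = np.cov(data)                      # 2x2 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    pc = np.abs(eigvecs[:, -1])             # leading eigenvector
    w = pc / pc.sum()                       # normalized fusion weights
    return w[0] * chan_a + w[1] * chan_b    # weighted pixel-wise sum
```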
The field of autonomous robotic systems has advanced tremendously in recent years, allowing robots to perform complicated tasks in various contexts. One of the most important and useful applications of guide robots is support for the blind. Successful implementation of this study requires an accurate and robust self-localization system for guide robots in indoor environments. This paper proposes such a self-localization system. To implement the study, images were collected from the perspective of a robot inside a room, and a deep learning model, a convolutional neural network (CNN), was used. An image-based self-localization image-classification system for guide robots delivers a more accurate…
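A minimal sketch of a CNN room-location classifier of the kind the abstract describes is given below. The architecture, input size, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: a small CNN classifier mapping indoor images to location labels.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_localization_cnn(num_locations: int, input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_locations, activation="softmax"),  # one class per room/position
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```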
Face detection systems are based on the assumption that each individual has a unique facial structure and that computerized face matching is possible using facial symmetry. Face recognition technology has been employed for security purposes in many organizations and businesses throughout the world. This research examines machine learning classification approaches that use feature extraction for a facial image detection system. Due to its high accuracy and speed, the Viola-Jones method is utilized for face detection on the MUCT database. LDA feature extraction is applied, and its output is passed to three machine learning classifiers: J48, OneR, and JRip. The experiment's…
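As a sketch of the detection stage only, the following uses OpenCV's Haar-cascade (Viola-Jones) detector. The cascade file and parameters are common defaults, assumed here rather than taken from the paper.

```python
# Hedged sketch: Viola-Jones face detection via OpenCV's bundled Haar cascade.
import cv2

def detect_faces(image_path: str):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Returns an array of (x, y, w, h) face bounding boxes.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```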
Mathematical integration techniques rely on arithmetic relationships such as addition, subtraction, multiplication, and division to merge images of different resolutions and achieve the best fusion effect. In this study, a simulation is adopted to correct the geometric and radiometric distortion of satellite images based on mathematical integration techniques, including the Brovey Transform (BT), Color Normalization Transform (CNT), and Multiplicative Model (MM). Interpolation methods, namely nearest neighbor, bilinear, and bicubic, were also applied to the images captured by an optical camera. The evaluation of images resulting from the integration process was performed using several types of measures; the first type depends…
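For illustration, a minimal version of the standard Brovey Transform, one of the integration techniques named above, is sketched below; the band layout and scaling are assumptions, and the paper's exact formulation may differ.

```python
# Hedged sketch: Brovey Transform fusion of a multispectral image with a
# higher-resolution panchromatic band.
import numpy as np

def brovey_fusion(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6):
    """ms: multispectral image (H, W, 3); pan: panchromatic band (H, W).
    Both are assumed resampled to the same size and given as floats."""
    intensity = ms.sum(axis=2) + eps          # per-pixel sum of the bands
    ratio = pan / intensity                   # Brovey scaling factor
    return ms * ratio[..., np.newaxis]        # rescale every band
```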
Image processing is an important source for image analysis, used to obtain variable parameters such as intensity. In the present work a relation has been found between the intensity and the number of pixels in the image, and from this relation we have obtained in this paper the intensity…
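The relation between an intensity value and the number of pixels taking that value is, in effect, the image histogram; a minimal computation for an 8-bit grayscale image is sketched below as an assumption about what the abstract refers to.

```python
# Hedged sketch: count how many pixels carry each intensity value (0..255).
import numpy as np

def intensity_histogram(gray: np.ndarray) -> np.ndarray:
    """Return counts[i] = number of pixels whose intensity equals i."""
    counts, _ = np.histogram(gray, bins=256, range=(0, 256))
    return counts
```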
In this paper, three techniques for image compression are implemented: a three-dimensional (3-D) two-level discrete wavelet transform (DWT), a 3-D two-level discrete multi-wavelet transform (DMWT), and a 3-D two-level hybrid (wavelet-multiwavelet) transform. Daubechies and Haar filters are used in the discrete wavelet transform, and critically sampled preprocessing is used in the discrete multi-wavelet transform. The aim is to increase the compression ratio (CR) as the transformation level increases in the 3-D case, so the compression ratio is measured at each level. To obtain good compression, image data properties were measured, such as image entropy (He) and percent root-…
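A minimal sketch of the two-level 3-D DWT stage, using PyWavelets, is shown below; the Haar wavelet and the rest of the pipeline are assumptions, not the paper's exact implementation.

```python
# Hedged sketch: two-level 3-D discrete wavelet decomposition with PyWavelets.
import numpy as np
import pywt

def dwt3d_two_level(volume: np.ndarray, wavelet: str = "haar"):
    """volume: a 3-D array (e.g. a stack of frames or an image cube)."""
    coeffs = pywt.wavedecn(volume, wavelet=wavelet, level=2)
    # coeffs[0] is the level-2 approximation; coeffs[1:] hold the detail subbands.
    return coeffs
```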
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content. Extracting edge information is the essential process in most pictorial pattern recognition problems. A new edge detection technique for detecting boundaries has been introduced in this research.
The selection of typical lossy techniques for encoding edge video images is also discussed in this research. The discussion concentrates on the Block Truncation Coding technique and the Discrete Cosine Transform (DCT) coding technique, in order to reduce the volume of pictorial data that one may need to store or transmit…
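As a sketch of the first of these techniques, the following shows classic Block Truncation Coding of a single block; the 4x4 block size and rounding choices are assumptions rather than the paper's settings.

```python
# Hedged sketch: Block Truncation Coding of one grayscale block.
import numpy as np

def btc_encode_block(block: np.ndarray):
    """Encode a small grayscale block as (bitmap, low, high)."""
    n = block.size
    mean, std = block.mean(), block.std()
    bitmap = block >= mean                        # 1 bit per pixel
    q = int(bitmap.sum())
    if q in (0, n):                               # flat block: one level suffices
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (n - q))       # reconstruction level for 0-bits
    high = mean + std * np.sqrt((n - q) / q)      # reconstruction level for 1-bits
    return bitmap, low, high

def btc_decode_block(bitmap, low, high):
    return np.where(bitmap, high, low)
```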
In this paper, two main stages for image classification are presented. The training stage consists of collecting images of interest and applying the bag of visual words (BOVW) model to them (feature extraction and description using SIFT, and vocabulary generation), while the testing stage classifies a new unlabeled image by applying nearest neighbor classification to its feature descriptor. The supervised bag of visual words gives good results, shown clearly in the experimental part, where unlabeled images are classified correctly even though only a small number of images is used in the training process.
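A minimal bag-of-visual-words pipeline along these lines is sketched below (SIFT features, k-means vocabulary, histogram encoding, 1-nearest-neighbor matching). The vocabulary size and other parameters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: BOVW training and nearest-neighbor classification.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(gray: np.ndarray) -> np.ndarray:
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_vocabulary(train_images, k: int = 100) -> KMeans:
    all_desc = np.vstack([sift_descriptors(img) for img in train_images])
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)   # visual vocabulary

def bovw_histogram(gray: np.ndarray, vocab: KMeans) -> np.ndarray:
    words = vocab.predict(sift_descriptors(gray))
    hist, _ = np.histogram(words, bins=vocab.n_clusters,
                           range=(0, vocab.n_clusters))
    return hist / max(hist.sum(), 1)                        # normalized word histogram

def classify_nearest(query_hist, train_hists, train_labels):
    dists = np.linalg.norm(train_hists - query_hist, axis=1)
    return train_labels[int(np.argmin(dists))]              # 1-NN label
```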