Pan sharpening (image fusion) is the procedure of merging complementary information from two or more images into a single image. Image fusion techniques allow different information sources to be combined to improve image quality and increase its utility for a particular application. In this research, six pan-sharpening methods have been applied to panchromatic and multispectral images: Ehlers, color normalized, Gram-Schmidt, local mean and variance matching, and the Daubechies (rank two, db2) and Symlets (rank four, sym4) wavelet transforms. Two images captured by different sensors, Landsat-8 and WorldView-2, were adopted for the fusion experiments. Fidelity metrics including MSE, RMSE, PSNR, CC, ERGAS, and RASE were used to compare the fusion methods. The results show that the Daubechies (db2) wavelet transform was a good method for pan sharpening, producing good statistical values when applied to the two images, which were captured by different sensors with different spatial resolutions.
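A minimal sketch of the substitutive wavelet fusion idea with a db2 wavelet, using NumPy and PyWavelets, together with two of the fidelity metrics mentioned; the array names pan and ms_band, the single-level decomposition, and the fusion rule (keep the multispectral approximation, inject the panchromatic details) are assumptions, and the paper's exact scheme may differ:

```python
# Sketch of substitutive wavelet pan-sharpening (db2), assuming
# `pan` and `ms_band` are co-registered 2-D float arrays of the
# same size (the MS band already upsampled to the PAN grid).
import numpy as np
import pywt

def wavelet_pansharpen(pan, ms_band, wavelet="db2"):
    # Decompose both images into approximation + detail subbands.
    ms_approx, _ms_details = pywt.dwt2(ms_band, wavelet)
    _pan_approx, pan_details = pywt.dwt2(pan, wavelet)
    # Keep the MS approximation (spectral content) and inject the
    # PAN detail coefficients (spatial content), then invert.
    fused = pywt.idwt2((ms_approx, pan_details), wavelet)
    return fused[:pan.shape[0], :pan.shape[1]]  # trim possible padding

def rmse(ref, test):
    return np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    return 20.0 * np.log10(peak / rmse(ref, test))
```

Replacing only the detail subbands keeps the multispectral approximation, which is what preserves spectral fidelity while injecting panchromatic spatial detail.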
Due to the widespread use of digital images, the fast evolution of computer science, and especially the sharing of images on social networks, attention has turned to securing these images and protecting them against attackers, and many techniques have been proposed to achieve this goal. In this paper we propose a new chaotic method that enhances AES (Advanced Encryption Standard) by eliminating the MixColumns transformation to reduce time consumption, and that uses a palmprint biometric together with the Lorenz chaotic system to strengthen authentication and image security; the chaotic system adds more sensitivity to both the encryption and the authentication of the system.
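As a rough illustration of the chaotic component only (not the paper's modified AES rounds or the palmprint stage), the sketch below integrates the Lorenz system with a simple Euler step and XORs the resulting keystream with image bytes; the byte-mapping rule, step size, burn-in length, and key-derived initial condition x0 are all assumptions:

```python
# Sketch: derive a keystream from the Lorenz chaotic system and
# XOR it with image bytes. Illustrates only the chaotic component;
# the paper's modified AES structure is not reproduced here.
import numpy as np

def lorenz_keystream(n_bytes, x0=0.1, y0=0.0, z0=0.0,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                     dt=0.01, burn_in=1000):
    x, y, z = x0, y0, z0
    out = np.empty(n_bytes, dtype=np.uint8)
    # Negative indices are burn-in steps that discard the transient.
    for i in range(-burn_in, n_bytes):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        if i >= 0:
            out[i] = int(abs(x) * 1e6) % 256  # map trajectory to a byte
    return out

def xor_image(img_bytes, key_x0):
    ks = lorenz_keystream(len(img_bytes), x0=key_x0)
    return np.frombuffer(img_bytes, dtype=np.uint8) ^ ks
```

Because XOR is symmetric, the same function decrypts; the sensitivity comes from the chaotic dynamics, where a tiny change in the initial condition yields a completely different trajectory and hence keystream.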
Image retrieval is used to search for images in an image database. In this paper, content-based image retrieval (CBIR) using four feature extraction techniques has been achieved. The four techniques are the color histogram features technique, the properties features technique, the gray-level co-occurrence matrix (GLCM) statistical features technique, and a hybrid technique. The features are extracted from the database images and the query (test) images in order to compute the similarity measure. Similarity-based matching is very important in CBIR, so three types of similarity measure are used: normalized Mahalanobis distance, Euclidean distance, and Manhattan distance. A comparison between them has been implemented. From the results, it is concluded …
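A minimal sketch of the GLCM feature stage and two of the three distance measures, assuming scikit-image (its graycomatrix/graycoprops API) and an 8-bit grayscale input; the particular property set and the single distance/angle pair are illustrative choices, not necessarily the paper's:

```python
# Sketch: GLCM statistical features plus Euclidean and Manhattan
# distances for ranking database images against a query.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    # gray_img: 2-D uint8 array; one distance/angle kept for brevity.
    glcm = graycomatrix(gray_img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def manhattan(a, b):
    return np.sum(np.abs(a - b))

# Retrieval: rank database images by feature distance to the query,
# e.g. best = min(db_feats, key=lambda f: euclidean(query_feat, f))
```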
Groupwise non-rigid image alignment is a difficult non-linear optimization problem involving many parameters and often large datasets. Previous methods have explored various metrics and optimization strategies. Good results have previously been achieved with simple metrics, but these require complex optimization, often with many unintuitive parameters that need careful tuning for each dataset. In this chapter, the problem is restructured to use a simpler, iterative optimization algorithm with very few free parameters. The warps are refined using an iterative Levenberg-Marquardt minimization to the mean, based on updating the locations of a small number of points and incorporating a stiffness constraint. This optimization approach is eff…
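A heavily simplified sketch of the iterative "minimize to the mean" pattern, using SciPy's Levenberg-Marquardt solver on 1-D signals with pure translations as the warp; the chapter's actual method updates 2-D point locations under a stiffness constraint, which is not reproduced here:

```python
# Simplified sketch: each signal is warped (here, only shifted) to
# match the current mean, then the mean is recomputed; alternating
# these two steps is the groupwise "minimize to the mean" pattern.
import numpy as np
from scipy.optimize import least_squares

def shift(signal, t):
    x = np.arange(len(signal))
    return np.interp(x - t, x, signal)

def groupwise_align(signals, n_iters=5):
    shifts = np.zeros(len(signals))
    for _ in range(n_iters):
        mean = np.mean([shift(s, t) for s, t in zip(signals, shifts)],
                       axis=0)
        for i, s in enumerate(signals):
            # Levenberg-Marquardt fit of one warp parameter per signal.
            res = least_squares(lambda t: shift(s, t[0]) - mean,
                                x0=[shifts[i]], method="lm")
            shifts[i] = res.x[0]
    return shifts
```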
In the reverse engineering approach, a massive amount of point data is gathered during data acquisition, and this leads to larger file sizes and longer data-handling times. In addition, fitting surfaces to these data points is time-consuming and demands particular skills. In the present work, a method for obtaining the control points of any profile is presented. Several image-modification steps are explained using the SolidWorks program, and a parametric equation of the proposed profile is derived using the Bezier technique with the adopted control points. Finally, the proposed profile was machined using a 3-axis CNC milling machine, and a comparison of dimensions was carried out betwe…
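For reference, evaluating a Bezier profile from its control points follows directly from the Bernstein form; the cubic control points in the usage comment are hypothetical placeholders, not the paper's measured values:

```python
# Sketch: evaluate a Bezier curve of any degree from control points.
import numpy as np
from math import comb

def bezier_curve(control_pts, n_samples=100):
    pts = np.asarray(control_pts, dtype=float)   # shape (k+1, 2)
    k = len(pts) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Bernstein basis: B_i(t) = C(k, i) t^i (1 - t)^(k - i)
    curve = sum(comb(k, i) * t**i * (1 - t)**(k - i) * pts[i]
                for i in range(k + 1))
    return curve  # (n_samples, 2) x/y points, e.g. for CNC toolpaths

# Example with a hypothetical cubic profile:
# profile = bezier_curve([(0, 0), (1, 2), (3, 2), (4, 0)])
```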
The computer vision branch of the artificial intelligence field is concerned with developing algorithms for analyzing video image content. Extracting edge information is the essential process in most pictorial pattern-recognition problems. A new edge-detection technique for detecting boundaries has been introduced in this research.
The selection of typical lossy techniques for encoding edge video images is also discussed in this research. The concentration is devoted to the block truncation coding technique and the discrete cosine transform (DCT) coding technique. In order to reduce the volume of pictorial data which one may need to store or transmit, …
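Since the abstract names both edge detection and block truncation coding, here is a brief sketch of each: a generic Sobel gradient detector (the research's own detection method is not detailed in this excerpt) and classic moment-preserving BTC for a single block; the threshold value and per-block handling are assumptions:

```python
# Generic Sobel edge detector (baseline illustration only).
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(gray, threshold=100.0):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve(gray.astype(float), kx)
    gy = convolve(gray.astype(float), kx.T)
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)

# Classic (Delp-Mitchell) block truncation coding for one block:
# store the mean, standard deviation, and a 1-bit plane, then
# reconstruct with two moment-preserving gray levels.
def btc_encode(block):
    mean, std = block.mean(), block.std()
    return mean, std, block >= mean

def btc_decode(mean, std, bitplane):
    m, q = bitplane.size, int(bitplane.sum())
    if q in (0, m):                     # flat block: one level
        return np.full(bitplane.shape, mean)
    low = mean - std * np.sqrt(q / (m - q))
    high = mean + std * np.sqrt((m - q) / q)
    return np.where(bitplane, high, low)
```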
Background: The aims of the study were to evaluate the unclean/clean root canal surface areas with a histopathological cross-sectional view of the root canal and the isthmus, and to evaluate the efficiency of instrumentation of the isthmus using different rotary instrumentation techniques. Materials and Methods: The mesial roots of thirty human mandibular molars were divided into six groups, each composed of five roots (10 root canals), which were prepared and irrigated as follows: Group one A: ProTaper system to size F2 and hypodermic syringe; Group one B: ProTaper system to size F2 and EndoActivator system; Group two A: WaveOne small then primary file and hypodermic syringe; Group two B: WaveOne small then primary file and EndoActivator system; Gr…
Water quality assessment is still being done at specific locations of major concern. The use of a Geographical Information System (GIS) based water quality information system and spatial analysis with Inverse Distance Weighted interpolation enabled the mapping of water quality indicators along the Tigris river in Salah Al-Din governorate, Iraq. Water quality indicators were monitored by taking 13 river samples from different locations along the river during the winter season of 2020, producing maps of 10 water quality indicators. This meant that the specific water quality indicators and diffuse pollution characteristics in the basin were better illustrated, with the variations displayed along the course of the river, than with conventional line graphs. Creation of …
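A minimal sketch of the inverse distance weighting behind such maps; the power parameter p = 2 and the coordinate arrays are illustrative assumptions (GIS packages provide this interpolation built in):

```python
# Sketch of inverse distance weighted (IDW) interpolation: the value
# at a target point is a distance-weighted average of the samples.
import numpy as np

def idw(xy_samples, values, xy_target, p=2.0, eps=1e-12):
    d = np.linalg.norm(xy_samples - xy_target, axis=1)
    if np.any(d < eps):               # target coincides with a sample
        return values[np.argmin(d)]
    w = 1.0 / d**p                    # closer samples weigh more
    return np.sum(w * values) / np.sum(w)

# e.g. interpolate one indicator at a grid cell from the 13 samples:
# z = idw(sample_xy, sample_values, np.array([x, y]))
```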
A database is characterized as an organized collection of data, arranged and distributed in a way that allows the user to access the stored data simply and conveniently. However, in the era of big data, traditional methods of data analytics may not be able to manage and process such large amounts of data. In order to develop an efficient way of handling big data, this work studies the use of the MapReduce technique to handle big data distributed on the cloud. The approach was evaluated using a Hadoop server and applied to EEG big data as a case study. The proposed approach showed a clear enhancement in managing and processing the EEG big data, with an average 50% reduction in response time. The obtained results provide EEG r…
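To illustrate the MapReduce pattern itself (the paper's actual Hadoop jobs and EEG features are not specified in this excerpt), here is a minimal pure-Python map/reduce that computes per-channel means over EEG segments; the segment structure, with each segment an array of shape (samples, channels), is an assumption:

```python
# Minimal map-reduce pattern: mappers emit combinable partial
# results (sum, count); the reducer folds them into one total.
from functools import reduce
from multiprocessing import Pool
import numpy as np

def map_segment(segment):
    # Emit (channel sums, sample count) so partials combine freely.
    return segment.sum(axis=0), segment.shape[0]

def reduce_pair(a, b):
    return a[0] + b[0], a[1] + b[1]

def mean_per_channel(segments):
    with Pool() as pool:                      # parallel "map" phase
        partials = pool.map(map_segment, segments)
    total, count = reduce(reduce_pair, partials)
    return total / count   # per-channel mean across all segments
```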