Registration techniques are still considered challenging tasks for remote sensing users, especially after the enormous increase in the volume of remotely sensed data being acquired by an ever-growing number of earth observation sensors. This surge mandates the development of accurate and robust registration procedures that can handle data with varying geometric and radiometric properties. This paper aims to develop the traditional registration scenario to reduce discrepancies between registered datasets in two-dimensional (2D) space for remote sensing images. This is achieved by designing a computer program written in the Visual Basic language following two main stages. The first stage is a traditional registration process: defining a set of control point pairs by manual selection, then computing the parameters of a global affine transformation model to match them and resample the images. The second stage is a refinement of the matching process: the shift in control point (CP) locations is determined from a radiometric similarity measure, and a shift-map technique is then applied to adjust the process using a 2nd-order polynomial transformation function. This function was chosen after statistical analyses comparing the common transformation functions (similarity, affine, projective, and 2nd-order polynomial). The results showed that the developed approach reduced the root mean square error (RMSE) of the registration process, decreasing the discrepancies between registered datasets by 60%, 57%, and 48% for the three tested datasets, respectively.
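As an illustration of the first stage, the sketch below fits a global affine model to control point pairs by least squares and reports the control-point RMSE. The paper's program is written in Visual Basic and its data are not given, so this Python sketch with toy points is only a hedged stand-in.

```python
# A minimal sketch of the first stage: estimating a global affine
# transformation from manually selected control point pairs by least
# squares, then mapping points with it. The toy control points and all
# names are illustrative, not taken from the paper.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Solve for A in dst ~ [x, y, 1] @ A by least squares.

    src_pts, dst_pts: (n, 2) arrays of matching control points, n >= 3.
    Returns a (3, 2) matrix holding the six affine parameters.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    design = np.hstack([src, np.ones((src.shape[0], 1))])  # (n, 3)
    params, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return params

def apply_affine(params, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ params

# Toy control points: a slight scale plus a shift of (+5, -3).
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100]])
dst = src * 1.02 + np.array([5, -3])
A = fit_affine(src, dst)
residuals = apply_affine(A, src) - dst
rmse = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
print(f"registration RMSE on control points: {rmse:.4f}")
```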
Researchers dream of developing autonomous humanoid robots that behave and walk like a human being. Biped robots, although complex, have the greatest potential for use in human-centred environments such as the home or office. Studying biped robots is also important for understanding human locomotion and improving control strategies for prosthetic and orthotic limbs. Control systems of humans walking in cluttered environments are complex, however, and may involve multiple local controllers and commands from the cerebellum. Although biped robots have been of interest over the last four decades, no unified stability/balance criterion has been adopted for stabilization of miscellaneous walking/running modes of biped
Fluoroscopic images are a class of medical images in which correct diagnosis depends on image quality; the main difficulty is de-noising and how to keep the balance between suppressing the noise in the image, on one side, and preserving edges and fine details, on the other, especially when fluoroscopic images contain black-and-white (salt-and-pepper) type noise at high density. Previous filters could usually handle low/medium densities of black-and-white type noise, at the expense of edge and fine-detail preservation, and fail at the high noise densities that corrupt the images. Therefore, this paper proposes a new Multi-Line algorithm that deals with images highly corrupted by dense black-and-white type noise. The experiments achieved i
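For context, a minimal sketch of the classical baseline discussed above, a median filter for salt-and-pepper noise, is given below. It is not the proposed Multi-Line algorithm, whose details the abstract does not give; the image and parameters are illustrative.

```python
# A baseline sketch only: classic median filtering for salt-and-pepper
# ("black and white") noise. This is NOT the paper's Multi-Line
# algorithm; it illustrates the kind of filter that works at low/medium
# noise densities but blurs detail and fails at high densities.
import numpy as np

def add_salt_pepper(img, density, seed=None):
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0        # pepper (black)
    noisy[mask > 1 - density / 2] = 255  # salt (white)
    return noisy

def median_filter(img, k=3):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
noisy = add_salt_pepper(img, density=0.8, seed=0)  # high density case
restored = median_filter(noisy, k=3)
print("pixels still differing from the original:", np.sum(restored != img))
```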
A roundabout is a highway engineering concept meant to calm traffic, increase safety, reduce stop-and-go travel, reduce accidents and congestion, and decrease traffic delays. It is circular and facilitates one-way traffic flow around a central point. The first part of this study evaluated the principles of the methods used to estimate and compare roundabout capacity under different traffic conditions and geometric configurations. These methods include gap-acceptance, empirical, and simulation-software methods. Previous studies mentioned in this research used these methods as well as new models developed by several researchers. However, this paper's main aim is to compare different roundabout capacity models for acceptabl
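As a concrete instance of the gap-acceptance family of models compared here, the sketch below evaluates an exponential entry-capacity curve. The coefficients are assumptions taken from the widely cited HCM 2010 single-lane model, not values from this paper.

```python
# A minimal sketch of the exponential gap-acceptance form of roundabout
# entry capacity. The coefficients (A = 1130, B = 1.0e-3) are assumed
# from the HCM 2010 single-lane model, not from this paper.
import math

def entry_capacity_pcu(circulating_flow_pcu, A=1130.0, B=1.0e-3):
    """Entry capacity (pcu/h) as a function of circulating flow (pcu/h)."""
    return A * math.exp(-B * circulating_flow_pcu)

for v_c in (0, 300, 600, 900, 1200):
    c = entry_capacity_pcu(v_c)
    print(f"circulating {v_c:4d} pcu/h -> entry capacity {c:6.0f} pcu/h")
```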
Background: Maxillary canines are important aesthetically and functionally, but impacted canines are more difficult and time-consuming to treat. The aim of this study is to investigate, with multi-detector computed tomography, the correlation between bone density and upper canine impaction. Material and method: A sample of unilaterally impacted maxillary canines from 24 patients (19 female, 5 male), who were referred to Al-Karkh General Hospital to accurately localize the impacted canines, was evaluated with volumetric 3D images from multi-detector computed tomography to accurately measure the bone density of the maxillary cortical palate on the impacted canine side and compare it with the other side with the normally erupt
In this work, a polyvinylpyrrolidone (PVP)/multi-walled carbon nanotube (MWCNT) nanocomposite was prepared and hybridized with graphene (Gr) by the casting method. The morphological and optical properties were investigated. Fourier transform infrared (FT-IR) spectroscopy indicates the presence of the primary distinctive peaks belonging to the vibration groups that describe the prepared samples. Scanning electron microscopy (SEM) images showed a uniform dispersion of graphene within the PVP-MWCNT nanocomposite. The results of the optical study show a decrease in the energy gap with increasing MWCNT and graphene concentration. The absorption coefficient spectra indicate the presence of two absorption peaks at 282 and 287 nm attributed to the π-π* electronic tr
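The abstract does not state how the energy gap was extracted, but a Tauc plot is the standard approach for absorption spectra; the sketch below illustrates that generic procedure on synthetic data for a direct allowed transition, and none of its numbers come from this study.

```python
# A generic illustration (assumed method, synthetic data) of estimating
# an optical energy gap via a Tauc plot: plot (alpha * h*nu)^2 against
# photon energy for a direct allowed transition and extrapolate the
# linear region to zero.
import numpy as np

h_nu = np.linspace(2.0, 5.0, 200)     # photon energy (eV)
Eg_true = 3.1                          # synthetic gap (eV)
alpha = np.sqrt(np.clip(h_nu - Eg_true, 0.0, None)) / h_nu

y = (alpha * h_nu) ** 2                # Tauc quantity
# Fit the linear region just above the gap and extrapolate to y = 0.
mask = (h_nu > Eg_true + 0.1) & (h_nu < Eg_true + 0.8)
slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
Eg_est = -intercept / slope
print(f"estimated energy gap: {Eg_est:.2f} eV")
```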
The solution casting method was used to prepare a polyvinylpyrrolidone (PVP)/multi-walled carbon nanotube (MWCNT) nanocomposite with graphene (Gr). Field-emission scanning electron microscopy (FESEM) and Fourier transform infrared (FTIR) spectroscopy were used to characterize the surface morphology and optical properties of the samples. FESEM images revealed a uniform distribution of graphene within the PVP-MWCNT nanocomposite. The FTIR spectra confirmed that the nanocomposite formation was successful, showing the presence of the primary distinct peaks belonging to the vibration groups that describe the prepared samples. Furthermore, it was found that the DC electrical conductivity of the prepared nanocomposites increases with increasing MWCNT concentratio
Regression analysis is a foundation stone of statistics, and it mostly depends on the ordinary least squares (OLS) method. As is well known, this method requires several conditions to hold if it is to operate accurately, and its results can be unreliable otherwise; moreover, the failure of certain conditions makes it impossible to complete the analysis with this method at all. Among those conditions is the absence of the multicollinearity problem, and we detect that problem between the independent variables using the Farrar–Glauber test. In addition to the requirement that the data be linear, the failure of this last condition has led to resorting to the
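A minimal sketch of the Farrar–Glauber chi-square test on synthetic data follows; the statistic and degrees of freedom use the standard formulation of the test, and all data and names are illustrative.

```python
# A minimal sketch of the Farrar-Glauber chi-square test for detecting
# multicollinearity among regressors. The statistic is
#   chi2 = -[n - 1 - (2k + 5)/6] * ln(det R),
# where R is the correlation matrix of the k regressors and n is the
# sample size; the data below are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 100, 3
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

R = np.corrcoef(X, rowvar=False)
chi2 = -(n - 1 - (2 * k + 5) / 6) * np.log(np.linalg.det(R))
df = k * (k - 1) / 2
p_value = stats.chi2.sf(chi2, df)
print(f"chi2 = {chi2:.2f}, df = {df:.0f}, p = {p_value:.4g}")
# A small p-value rejects orthogonality, signalling multicollinearity.
```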
In this paper, one of the machine scheduling problems is studied: scheduling a number of products (n jobs) on one (single) machine with a multi-criteria objective function combining the completion time, the tardiness, the earliness, and the late work. The branch and bound (BAB) method is used as the main method for solving the problem, where four upper bounds and one lower bound are proposed, and a number of dominance rules are considered to reduce the number of branches in the search tree. The genetic algorithm (GA) and particle swarm optimization (PSO) are used to obtain two of the upper bounds. The computational results are calculated by coding (progr
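For clarity, the sketch below evaluates the four criteria for a given job sequence on a single machine using their standard definitions (late work V_j = min(p_j, T_j)); the instance is illustrative, and how the paper weights or combines the criteria is not specified in the abstract.

```python
# A minimal sketch of the four scheduling criteria: for each job j in a
# sequence on a single machine, completion time C_j, tardiness
# T_j = max(0, C_j - d_j), earliness E_j = max(0, d_j - C_j), and late
# work V_j = min(p_j, T_j). The instance below is illustrative.
def evaluate_sequence(proc_times, due_dates, sequence):
    t, total = 0, {"C": 0, "T": 0, "E": 0, "V": 0}
    for j in sequence:
        t += proc_times[j]                          # completion time C_j
        tardiness = max(0, t - due_dates[j])
        total["C"] += t
        total["T"] += tardiness
        total["E"] += max(0, due_dates[j] - t)
        total["V"] += min(proc_times[j], tardiness)  # late work V_j
    return total

p = [3, 1, 4, 2]   # processing times
d = [4, 2, 9, 6]   # due dates
print(evaluate_sequence(p, d, sequence=[1, 0, 3, 2]))
```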
Extractive multi-document text summarization – summarization that aims to remove redundant information from a document collection while preserving its salient sentences – has recently attracted considerable interest in the form of proposed automatic models. This paper proposes an extractive multi-document text summarization model based on a genetic algorithm (GA). First, the problem is modeled as a discrete optimization problem and a specific fitness function is designed to effectively cope with the proposed model. Then, a binary-encoded representation, together with a heuristic mutation and a local repair operator, is proposed to characterize the adopted GA. Experiments are applied to ten topics from the Document Understanding Conference DUC2002 datas
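The sketch below illustrates the binary-encoded GA idea on a toy collection: each bit selects one sentence, and a generic coverage-minus-redundancy fitness stands in for the paper's specific fitness function, which the abstract does not give.

```python
# A toy sketch of the binary-encoded GA idea: bit i of a chromosome
# selects sentence i. The fitness here (term coverage minus pairwise
# overlap) and the operators are generic illustrations, not the
# specific functions designed in the paper.
import random

SENTENCES = [
    {"genetic", "algorithm", "optimization"},
    {"genetic", "algorithm", "selection"},   # redundant with the first
    {"text", "summarization", "extractive"},
    {"documents", "redundancy", "salient"},
]
ALL_TERMS = set().union(*SENTENCES)
MAX_SENTS = 2                                # summary length cap

def fitness(chrom):
    chosen = [s for bit, s in zip(chrom, SENTENCES) if bit]
    if not chosen or len(chosen) > MAX_SENTS:
        return 0.0
    coverage = len(set().union(*chosen)) / len(ALL_TERMS)
    overlap = sum(len(a & b) for i, a in enumerate(chosen)
                  for b in chosen[i + 1:])
    return coverage - 0.1 * overlap          # coverage minus redundancy

def mutate(chrom, rate=0.2):
    return [b ^ (random.random() < rate) for b in chrom]

random.seed(1)
pop = [[random.randint(0, 1) for _ in SENTENCES] for _ in range(20)]
for _ in range(50):                          # simple (mu + lambda) loop
    pop += [mutate(c) for c in pop]
    pop = sorted(pop, key=fitness, reverse=True)[:20]
print("best chromosome:", pop[0], "fitness:", round(fitness(pop[0]), 3))
```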
The objective of image fusion is to merge multiple source images in such a way that the final representation contains a higher amount of useful information than any single input. In this paper, a weighted-average fusion method is proposed. It depends on weights extracted from the source images using the contourlet transform. The extraction is done by setting the approximation coefficients of the transform to zero, then taking the inverse contourlet transform to obtain the details of the images to be fused. The performance of the proposed algorithm has been verified on several greyscale and colour test images, and compared with some existing methods.
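A hedged sketch of this fusion scheme follows, using a wavelet decomposition (PyWavelets) as a stand-in for the contourlet transform, which lacks a standard Python implementation: the approximation band is zeroed and the inverse transform yields the detail maps whose magnitudes act as per-pixel weights.

```python
# A sketch of the weighted-average fusion idea, with a wavelet transform
# (PyWavelets) substituted for the contourlet transform. As in the
# paper's scheme, the approximation coefficients are zeroed and the
# inverse transform gives the detail content used as fusion weights.
import numpy as np
import pywt

def detail_map(img):
    coeffs = pywt.wavedec2(img, "db2", level=2)
    coeffs[0] = np.zeros_like(coeffs[0])      # drop the approximation band
    detail = pywt.waverec2(coeffs, "db2")
    return np.abs(detail[: img.shape[0], : img.shape[1]])

def fuse(img_a, img_b, eps=1e-8):
    wa, wb = detail_map(img_a), detail_map(img_b)
    return (wa * img_a + wb * img_b) / (wa + wb + eps)  # weighted average

a = np.outer(np.linspace(0, 255, 64), np.ones(64))    # smooth image
b = np.tile(np.array([0.0, 255.0] * 32), (64, 1))     # detailed image
fused = fuse(a, b)
print("fused range:", fused.min(), fused.max())
```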