Is structure needed for omnidirectional visual homing?
Emerging fast vision techniques for finding image correspondences enable reliable real-time visual homing, i.e. the guidance of a mobile robot from an arbitrary start pose towards a goal pose defined by an image taken there. Two approaches have emerged in the field, distinguished by whether or not the structure of the scene is estimated. In this paper, we compare these two approaches, both for the general case and specifically for our application: automatic wheelchair navigation.