Feature-based omnidirectional sparse visual path following
Vision sensors are attractive for autonomous robots because they provide a rich source of environment information. The main challenge in using images on mobile robots is managing this wealth of information. A relatively recent development is fast wide-baseline local features, which we developed and apply here in a novel approach to sparse visual path following. These local features have the great advantage that they can be recognized even when the viewpoint differs significantly. This opens the door to a memory-efficient description of a path as a sparse set of image descriptors. We propose a method for re-executing such paths as a series of visual homing operations, yielding a navigation method with unique properties: it is accurate, robust, fast, and free of odometry error build-up.
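The core idea of the abstract, following a sparse path as a chain of visual homing operations toward stored snapshots, can be illustrated with a toy simulation. The sketch below is not the paper's method: it substitutes point landmarks for image features and uses the classical average-landmark-vector (ALV) homing rule as a simple stand-in for feature-based homing. Landmark positions, gains, and tolerances are all assumptions for illustration.

```python
import numpy as np

def alv(pos, landmarks):
    """Average landmark vector: mean of unit bearing vectors from pos
    to all landmarks (a compact 'snapshot' of what is seen there)."""
    vecs = landmarks - pos
    return (vecs / np.linalg.norm(vecs, axis=1, keepdims=True)).mean(axis=0)

def home_to(pos, target_snapshot, landmarks, gain=2.0, tol=1e-3, max_steps=1000):
    """One visual homing operation: drive pos toward the keyframe whose
    ALV snapshot is target_snapshot. The ALV difference approximately
    points toward the goal when landmarks surround the workspace."""
    for _ in range(max_steps):
        h = alv(pos, landmarks) - target_snapshot  # home vector estimate
        if np.linalg.norm(h) < tol:               # close enough to keyframe
            break
        pos = pos + gain * h
    return pos

# Toy world: 12 landmarks on a circle of radius 10 around the path.
ang = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
landmarks = 10.0 * np.c_[np.cos(ang), np.sin(ang)]

# Sparse path description: only the snapshot at each keyframe is stored,
# not a dense trajectory -- no odometry is accumulated between keyframes.
keyframes = [np.array([0.0, 0.0]), np.array([2.0, 1.0]), np.array([4.0, 0.0])]
snapshots = [alv(k, landmarks) for k in keyframes]

# Path re-execution: a series of homing operations, one per keyframe.
pos = np.array([-1.0, -1.0])
for snap in snapshots:
    pos = home_to(pos, snap, landmarks)
```

Because each homing step corrects toward the next stored snapshot, position error is reset at every keyframe rather than accumulating along the path, which is the property the abstract highlights.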