Driving me Around the Bend: Learning to Drive from Visual Gist
Nicolas Pugeault and Richard Bowden
In: 1st IEEE Workshop on Challenges and Opportunities in Robotic Perception, jointly with ICCV'2011, 12 Nov 2011, Barcelona, Spain.
This article proposes an approach to learning steering and road-following behaviour from a human driver using holistic visual features. We use a random forest (RF) to regress a mapping between these features and the driver's actions, and propose an alternative to classical random forest regression based on the medoid (RF-Medoid), which reduces the underestimation of extreme control values. We compare prediction performance using different holistic visual descriptors: GIST, Channel-GIST (C-GIST) and Pyramidal-HOG (P-HOG). The proposed methods are evaluated on two datasets: predicting human behaviour on countryside roads, and autonomous control of a robot on an indoor track. We show that 1) C-GIST leads to the best predictions on both sequences, and 2) RF-Medoid leads to a better estimation of extreme values, where a classical RF tends to under-steer.
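The intuition behind the medoid variant can be sketched as follows (a minimal illustration, not the paper's implementation): a standard regression forest averages the per-tree predictions, which pulls extreme steering values toward the middle of the range; a medoid aggregation instead returns the tree prediction that minimises the summed distance to all other tree predictions, so the output is always an actual tree estimate. The array of per-tree predictions below is toy data.

```python
import numpy as np

def medoid_predict(per_tree):
    """Aggregate per-tree predictions of shape (n_trees, n_samples)
    by taking, for each sample, the prediction closest to all others."""
    # dists[i, j, s] = |prediction of tree i - prediction of tree j| for sample s
    dists = np.abs(per_tree[:, None, :] - per_tree[None, :, :])
    # index of the medoid tree for each sample
    idx = dists.sum(axis=1).argmin(axis=0)
    return per_tree[idx, np.arange(per_tree.shape[1])]

# Toy example: five trees, one sample; four trees agree on a sharp
# steering value near 0.9, one outlier tree predicts 0.1.
per_tree = np.array([[0.9], [0.8], [0.85], [0.1], [0.88]])

mean_pred = per_tree.mean(axis=0)       # ~0.706: pulled toward the outlier (under-steer)
medoid_pred = medoid_predict(per_tree)  # 0.85: an actual tree output, extreme value preserved
```

The mean here under-steers because a single disagreeing tree drags the average down, whereas the medoid discards it; this matches the abstract's observation that a classical RF tends to underestimate extreme control values.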
We use around 10% of the data for training and show excellent generalization over a dataset of thousands of images. Importantly, we do not engineer the solution but instead use machine learning to automatically identify the relationship between visual features and behaviour, providing an efficient, generic solution to autonomous control.