Practical algorithms for motor primitive learning in robotics
Jens Kober and Jan Peters
IEEE Robotics and Automation Magazine
To date, most robots are still programmed by a smart operator who uses human understanding of the desired task to create a program for accomplishing the required behavior. While such specialized programming
is highly efficient, it is also expensive and limited to the situations the human operator had considered. For example, human programming has become the main bottleneck for the manufacturing of low-cost products in small
numbers. This problem could be alleviated by robots that can learn new skills and improve their existing abilities autonomously. However, off-the-shelf machine learning techniques do not scale to high-dimensional,
anthropomorphic robots. Instead, robot learning requires methods that employ both representations and algorithms appropriate for this domain. When humans learn new motor skills, e.g., paddling a ball with a table-tennis racket, throwing darts, or hitting a tennis ball, it is highly likely that they rely on a small set of motor primitives (MPs) and use imitation as well as reinforcement learning (RL). Inspired by this example, we
will discuss the technical counterparts in this article and show how both single-stroke and rhythmic tasks can be learned efficiently by mimicking the human demonstrator with subsequent reward-driven self-improvement.
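To make the two-stage idea concrete, the following is a deliberately minimal sketch: a one-dimensional "motor primitive" represented as a weighted sum of Gaussian basis functions is first fitted to a demonstrated trajectory (imitation), and its weights are then refined by a greedy random-search step on a task reward. The basis representation, the target reward, and the hill-climbing update are all illustrative assumptions for this sketch, not the dynamical-systems primitives or the policy-search algorithms treated later in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "motor primitive": a trajectory generated as a weighted sum of
# Gaussian basis functions over normalized time (illustrative only).
T = np.linspace(0.0, 1.0, 50)
centers = np.linspace(0.0, 1.0, 10)
Phi = np.exp(-((T[:, None] - centers[None, :]) ** 2) / (2 * 0.05**2))

def rollout(weights):
    # Deterministic trajectory produced by the primitive's parameters.
    return Phi @ weights

def reward(traj):
    # Hypothetical task reward: negative tracking error against a target
    # that differs slightly (a phase shift) from the demonstration.
    target = np.sin(2 * np.pi * T + 0.3)
    return -float(np.mean((traj - target) ** 2))

# Step 1 -- imitation: fit the primitive's weights to a demonstrated
# trajectory by least squares (mimicking the human demonstrator).
demo = np.sin(2 * np.pi * T)          # stand-in for a recorded demonstration
w_imit, *_ = np.linalg.lstsq(Phi, demo, rcond=None)

# Step 2 -- self-improvement: greedy hill climbing on the task reward,
# a toy stand-in for reward-driven policy search.
w = w_imit.copy()
for _ in range(100):
    eps = rng.normal(0.0, 0.05, size=(8, w.size))   # exploration noise
    rewards = [reward(rollout(w + e)) for e in eps]
    best = int(np.argmax(rewards))
    if rewards[best] > reward(rollout(w)):          # keep only improvements
        w = w + eps[best]

print(f"imitation reward:  {reward(rollout(w_imit)):.4f}")
print(f"after refinement:  {reward(rollout(w)):.4f}")
```

Because the demonstration only approximates the actual task, imitation alone leaves a residual tracking error that the subsequent reward-driven refinement reduces; this mirrors, in miniature, the mimic-then-improve structure described above.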