PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

A reward-modulated Hebbian learning rule can explain experimentally observed network reorganization in a brain control task
Robert Legenstein, Steven M. Chase, Andrew B. Schwartz and Wolfgang Maass
Journal of Neuroscience, 2009.

Abstract

It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the 3D tuning curves of neurons whose decoding parameters were re-assigned changed more than those of neurons whose decoding parameters had not been re-assigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.
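The abstract describes the rule only at a high level. Below is a minimal, hypothetical Python sketch of the general class of reward-modulated Hebbian rules it refers to: a noisy postsynaptic rate, a global scalar reward, and a weight change proportional to (reward minus a reward baseline) times the noise times presynaptic activity. The toy task, dimensions, learning rate, and noise level are illustrative assumptions, not the model or parameters used in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (an assumption for this sketch, not the paper's model):
# a single linear readout neuron whose firing rate should track a target
# linear mapping of its inputs.
n_inputs = 20
w = rng.normal(scale=0.1, size=n_inputs)   # synaptic weights to be learned
w_target = rng.normal(size=n_inputs)       # defines the desired input-output mapping
eta = 0.05                                 # learning rate (assumed)
noise_sd = 0.5                             # std. dev. of intrinsic rate noise (assumed)

reward_avg = 0.0                           # slow running baseline for the reward

print("initial squared weight error:", float(np.sum((w - w_target) ** 2)))

for trial in range(5000):
    x = rng.normal(size=n_inputs)          # presynaptic activity on this trial
    noise = rng.normal(scale=noise_sd)     # intrinsic noise in the postsynaptic rate
    rate = w @ x + noise                   # noisy postsynaptic firing rate
    target = w_target @ x                  # desired output for this input

    reward = -(rate - target) ** 2         # global scalar reward (higher is better)

    # Reward-modulated Hebbian update: correlate the noise-driven deviation of
    # the postsynaptic rate with presynaptic activity, gated by how much the
    # reward exceeded its recent average.
    w += eta * (reward - reward_avg) * noise * x

    reward_avg += 0.1 * (reward - reward_avg)  # update the reward baseline

print("final squared weight error:", float(np.sum((w - w_target) ** 2)))

Consistent with the abstract's claim that no extrinsic information is needed to separate noise from signal, the only quantities entering the update are locally available presynaptic activity, the neuron's own noisy deviation, and a single global reward signal.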

EPrint Type: Article
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics
          Brain Computer Interfaces
          Theory & Algorithms
ID Code: 6081
Deposited By: Michael Pfeiffer
Deposited On: 08 March 2010