PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Hebbian learning of Bayes optimal decisions
Bernhard Nessler, Michael Pfeiffer and Wolfgang Maass
In: NIPS 2008, 8 Dec - 11 Dec 2008, Vancouver, Canada.

Abstract

When we perceive our environment, make a decision, or take an action, our brain has to deal with multiple sources of uncertainty. The Bayesian framework of statistical estimation provides computational methods for dealing optimally with uncertainty. Bayesian inference, however, is algorithmically quite complex, and learning of Bayesian inference involves the storage and updating of probability tables or other data structures that are hard to implement in neural networks. Hence it is unclear how our nervous system could acquire the capability to approximate optimal Bayesian inference and action selection. This article shows that the simplest and experimentally best-supported type of synaptic plasticity, Hebbian learning, can in principle achieve this. Even inference in complex Bayesian networks can be approximated by Hebbian learning in combination with population coding and lateral inhibition ("Winner-Take-All") in cortical microcircuits that produce a sparse encoding of complex sensory stimuli. We also show that a corresponding reward-modulated Hebbian plasticity rule provides a principled framework for understanding how Bayesian inference could support fast reinforcement learning in the brain. In particular, we show that recent experimental results by Yang and Shadlen on reinforcement learning of probabilistic inference in primates can be modeled in this way.
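As a rough illustration of the core idea described in the abstract (Hebbian plasticity combined with lateral inhibition, i.e. winner-take-all competition, approximating Bayesian inference), the following Python sketch learns a naive Bayes mixture of binary inputs using purely local, winner-only updates. This is not the plasticity rule analyzed in the paper; the data-generating setup, variable names, and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy world: binary sensory vectors generated by one of K hidden causes
    # (a naive Bayes mixture). All sizes and probabilities are made up.
    K, D, T = 3, 20, 20000
    true_priors = np.array([0.5, 0.3, 0.2])
    true_cond = rng.uniform(0.05, 0.95, size=(K, D))   # P(y_i = 1 | cause k)

    def sample():
        k = rng.choice(K, p=true_priors)
        return (rng.random(D) < true_cond[k]).astype(float), k

    # Each output neuron k keeps an estimate q[k, i] of P(y_i = 1 | cause k)
    # and a prior estimate pi[k]; both are learned with winner-only updates.
    q = rng.uniform(0.3, 0.7, size=(K, D))
    pi = np.full(K, 1.0 / K)
    eta = 0.02

    for _ in range(T):
        y, _ = sample()
        # Membrane potential = log-posterior of each cause under current estimates.
        u = np.log(pi) + y @ np.log(q).T + (1.0 - y) @ np.log(1.0 - q).T
        # Lateral inhibition ("winner-take-all"): a single winner, sampled from
        # the softmax over the potentials.
        p = np.exp(u - u.max()); p /= p.sum()
        k = rng.choice(K, p=p)
        # Hebbian update: only the winning neuron adapts, moving its estimates
        # toward the input pattern it just won on.
        q[k] += eta * (y - q[k])
        pi += eta * (np.eye(K)[k] - pi)

    # After learning, picking the neuron with the highest potential approximates
    # the Bayes-optimal decision about which hidden cause produced a new input.
    y, true_k = sample()
    u = np.log(pi) + y @ np.log(q).T + (1.0 - y) @ np.log(1.0 - q).T
    print("true cause:", true_k, "  decision:", int(np.argmax(u)))

The property this sketch tries to mirror is that every quantity a synapse needs for its update (its presynaptic input and whether its postsynaptic neuron won the competition) is locally available, which is what makes such a rule Hebbian rather than requiring explicit probability tables.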

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics
          Learning/Statistics & Optimisation
          Theory & Algorithms
ID Code: 5378
Deposited By: Michael Pfeiffer
Deposited On: 31 March 2009