
What can a Neuron Learn with Spike-Timing-Dependent Plasticity?
Robert Legenstein, Christian Näger and Wolfgang Maass
Neural Computation Volume 17, Number 11, pp. 2337-2382, 2005.

Abstract

Spiking neurons are very flexible computational modules: with different values of their adjustable synaptic parameters, they can implement an enormous variety of transformations F from input spike trains to output spike trains. We examine in this article the question of to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm in which, during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known Perceptron Convergence Theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that, in contrast to the Perceptron Convergence Theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average-case versions of the Perceptron Convergence Theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogous to the well-known condition for learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models of neurons and dynamic synapses, and for more general input distributions. In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, in which STDP changes the weights of synapses, but also for a more realistic interpretation, suggested by experimental data, in which STDP modulates the initial release probability of dynamic synapses.
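To make the learning setup concrete, the following is a minimal sketch of STDP with teacher forcing, assuming a linear Poisson neuron, additive STDP implemented with exponential spike traces, and hard weight bounds. All parameter values and names here (w_target, rate_in, and so on) are illustrative assumptions for this sketch, not the models or constants used in the paper.

```python
# Sketch: STDP under teacher forcing with uncorrelated Poisson inputs.
# Assumes a linear Poisson neuron and additive STDP; parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

n_syn   = 20      # number of input synapses
dt      = 1e-3    # simulation time step (s)
T       = 200.0   # total simulated time (s)
rate_in = 10.0    # Poisson input rate per synapse (Hz)
A_plus  = 0.005   # potentiation amplitude
A_minus = 0.0053  # depression amplitude (slightly larger, as is common)
tau     = 20e-3   # STDP time constant (s)
w_max   = 1.0     # hard upper weight bound

# The target transformation F is defined by a fixed teacher weight vector.
w_target = rng.uniform(0.0, w_max, n_syn)
w        = rng.uniform(0.0, w_max, n_syn)   # plastic weights, random start

x_pre  = np.zeros(n_syn)   # presynaptic spike traces, one per synapse
x_post = 0.0               # postsynaptic spike trace

for _ in range(int(T / dt)):
    pre = rng.random(n_syn) < rate_in * dt        # Poisson input spikes

    # Teacher forcing: the postsynaptic spike train is generated from the
    # teacher weights, i.e. the output is clamped to the target signal.
    post = rng.random() < min(1.0, w_target @ pre)

    x_pre  *= np.exp(-dt / tau)                   # decay both traces
    x_post *= np.exp(-dt / tau)
    x_pre[pre] += 1.0
    if post:
        x_post += 1.0
        w += A_plus * x_pre                       # pre-before-post: potentiate
    w[pre] -= A_minus * x_post                    # post-before-pre: depress
    np.clip(w, 0.0, w_max, out=w)

# After training, the learned weights should roughly align with the teacher.
cos = w @ w_target / (np.linalg.norm(w) * np.linalg.norm(w_target))
print(f"cosine similarity to target weights: {cos:.3f}")
```

The learnability criterion can likewise be illustrated with a toy computation: run the classical perceptron algorithm on the columns of an input correlation matrix, labelled by the desired strong/weak dichotomy of the synapses. The matrix C and the labels below are invented for illustration; the paper's condition refers to the actual correlation matrix of the Poisson input spike trains.

```python
# Toy check: are the columns of a correlation matrix linearly separable
# under a given strong/weak labelling of the synapses? C and the labels
# are invented here purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
C = M @ M.T                                         # stand-in correlation matrix (PSD)
labels = np.where(rng.random(n) < 0.5, 1.0, -1.0)   # +1 = strong, -1 = weak

v, b = np.zeros(n), 0.0                             # perceptron weights and bias
for _ in range(10_000):
    errors = 0
    for j in range(n):
        if labels[j] * (v @ C[:, j] + b) <= 0:
            v += labels[j] * C[:, j]                # classical perceptron update
            b += labels[j]
            errors += 1
    if errors == 0:
        print("columns are linearly separable under this dichotomy")
        break
else:
    print("no separating hyperplane found within the iteration budget")
```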

EPrint Type: Article
Subjects: Learning/Statistics & Optimisation
ID Code: 1263
Deposited By: Wolfgang Maass
Deposited On: 28 November 2005