Biological and Functional Models
of Learning in Networks of
PhD thesis, Graz University of Technology.
Neural circuits generally process information in a massively parallel way, and communication between their constituent units is based on spikes, i.e. binary events; they therefore differ fundamentally from many artificial information processing and learning systems. In such neural circuits, synaptic plasticity is widely considered to be the main biophysical correlate of learning. This thesis investigates synaptic plasticity and learning in neural networks with the help of data-driven, i.e. “bottom-up”, and theory-driven, i.e. “top-down”, models, and focuses in particular on the implications of their distributed architecture and spike-based communication for learning.
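To make the notion of spike-based plasticity concrete, the following is a minimal sketch of a standard pair-based spike timing-dependent plasticity (STDP) window — an illustrative textbook rule with hypothetical parameter values, not the unified model developed in this thesis:

```python
import numpy as np

# Illustrative pair-based STDP window (hypothetical parameters).
# The weight change depends on dt = t_post - t_pre, the time difference
# between a postsynaptic and a presynaptic spike.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants in ms

def stdp_dw(dt_ms):
    """Weight change for a single pre/post spike pair."""
    if dt_ms > 0:   # pre fires before post -> potentiation (LTP)
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    else:           # post fires before pre -> depression (LTD)
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

print(stdp_dw(10.0))    # positive change: potentiation
print(stdp_dw(-10.0))   # negative change: depression
```

Rules of this type depend only on locally available spike times, which is what makes them candidates for learning in distributed, spike-based architectures.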
In Chapter 2, a novel model of experimental data on synaptic plasticity is presented, unifying multiple previous models. The proposed model reproduces the experimentally observed effects of spike timing-dependent plasticity as well as plasticity effects parametrized by postsynaptic firing rate or depolarization. Chapters 3 and 4 propose learning rules that enable spiking neurons to cluster input data while taking side information into account. These rules, which implement the Information Bottleneck method in spiking networks, are designed to operate in distributed architectures exclusively using biologically plausible communication
mechanisms. In Chapters 5 and 6, the capabilities of recurrent neural networks as multipurpose preprocessors for diverse learning problems are studied. In this context, essential differences between spiking and non-spiking network models are revealed, especially with respect to the influence of network connectivity statistics on preprocessing capabilities.
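For reference, the Information Bottleneck method mentioned above can be stated in its standard form (following the conventional notation of Tishby and colleagues; the thesis's own notation may differ). Given an input variable $X$ and a side-information ("relevance") variable $Y$, one seeks a compressed representation $T$ of $X$ by minimizing

```latex
\mathcal{L} = I(X;T) \;-\; \beta\, I(T;Y)
```

over the stochastic mapping $p(t \mid x)$, where $I(\cdot;\cdot)$ denotes mutual information and the trade-off parameter $\beta > 0$ balances compression of $X$ against preservation of information about $Y$.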