Computation and Learning in Biological Networks of Neurons: Theoretical Analysis, Computer Simulations, and Analysis of Experimental Data
Computations in the brain differ fundamentally from those in traditional digital computers. Most notably, the brain is organized in a massively parallel manner and has the ability to learn. The Liquid State Machine has emerged as a powerful model that provides a framework for explaining computation and learning in biological networks of neurons: recurrent networks of spiking neurons can serve as generic preprocessing units, allowing simple, typically linear readout neurons of these networks to be adapted for complex computational tasks. This thesis contributes to this framework in two ways: it investigates a number of unsupervised learning algorithms that are potential candidates for such readout mechanisms, and it provides novel experimental evidence for this computing model using data from the primary auditory cortex of awake ferrets.

First, it is shown how two unsupervised learning mechanisms, information bottleneck optimization and independent component analysis, can in principle be implemented with biologically realistic neuron models, by deriving suitable learning rules from these abstract information-theoretic principles. The resulting learning rules are analyzed theoretically and tested in a number of computer simulations.

Second, slow feature analysis is investigated as another unsupervised learning principle. A theoretical analysis shows that, under certain conditions on the statistics of the input time series, it achieves the classification capability of a well-known supervised learning method, Fisher's linear discriminant. Furthermore, readouts of a computer model of a cortical microcircuit trained with this method learn to detect repeating firing patterns within a stream of spike trains with the same firing statistics, and to discriminate between the network responses to different stimuli, in a completely unsupervised manner.
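The slow feature analysis principle can be illustrated with a minimal sketch of its linear variant in NumPy (the function name and toy data below are my own for illustration, not taken from the thesis): center and whiten the input time series, then pick the whitened directions whose temporal derivatives have the smallest variance.

```python
import numpy as np

def linear_sfa(x):
    """Linear slow feature analysis on a (T, n) time series.

    Returns the T x n array of slow features, ordered from
    slowest to fastest (each with zero mean and unit variance).
    """
    x = x - x.mean(axis=0)                 # center the signal
    cov = np.cov(x, rowvar=False)
    d, E = np.linalg.eigh(cov)
    W = E / np.sqrt(d)                     # whitening matrix: E diag(d)^(-1/2)
    z = x @ W                              # whitened signal, identity covariance
    dz = np.diff(z, axis=0)                # temporal derivative
    dd, R = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ R                           # smallest derivative variance first

# toy example: a slow and a fast sinusoid, linearly mixed
t = np.linspace(0.0, 2.0 * np.pi, 500)
sources = np.stack([np.sin(t), np.sin(30.0 * t)], axis=1)
mixed = sources @ np.array([[1.0, 2.0], [2.0, 1.0]])
features = linear_sfa(mixed)
```

On this toy mixture, the first output recovers the slow sinusoid up to sign and scale. The link to Fisher's discriminant asserted in the thesis arises because, for suitable input statistics, the slowest directions are those that separate class clusters.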
Finally, biological data from neurons in the primary auditory cortex of ferrets are analyzed using state-of-the-art methods from machine learning and information theory. It is shown that sequentially arriving stimulus information is integrated over time and superimposed non-linearly in the neural responses at a single point in time. This provides experimental evidence for the liquid computing model, for the first time using data from awake animals.
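The decoding logic behind such an analysis can be mimicked in a toy setting: a generic recurrent network integrates two input bits arriving at different times, and a linear readout trained on the network state at a single later time point recovers the earlier bit. This is a rate-based surrogate, not a spiking model or the thesis's actual methods; the network size, time course, and noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                              # network size (illustrative)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))     # fix spectral radius at 0.95
w_in = rng.normal(0.0, 1.0, N)                       # input weights

def liquid_state(bit_early, bit_late, T=12):
    """Final state of a rate-based surrogate 'liquid' driven by two
    input bits (+-1) arriving at t=2 and t=7, with small input noise.
    The state at time T mixes both inputs non-linearly."""
    x = np.zeros(N)
    for t in range(T):
        u = bit_early if t == 2 else (bit_late if t == 7 else 0.0)
        u += 0.02 * rng.normal()                     # input noise
        x = np.tanh(W @ x + w_in * u)
    return x

# collect trials and train a linear readout on the state at one
# time point to recover the EARLIER bit (temporal integration)
bits = rng.choice([-1.0, 1.0], size=(300, 2))
X = np.array([liquid_state(b0, b1) for b0, b1 in bits])
Xb = np.hstack([X, np.ones((300, 1))])               # add bias column
w, *_ = np.linalg.lstsq(Xb[:200], bits[:200, 0], rcond=None)
acc = np.mean(np.sign(Xb[200:] @ w) == bits[200:, 0])
```

The point of the sketch is that information from an input arriving early is still linearly decodable from the instantaneous network state well after later inputs have been superimposed, which is the signature of the liquid computing model that the ferret data are tested for.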