Analysis of Probabilistic Inference in Recurrent Networks of Spiking Neurons
Neural network models are investigated with two major goals in mind: to help infer the processes going on in real biological brains, and to solve problems in artificial intelligence. Over the last decades, the evolution of these network architectures has led to progress toward both goals, improving our understanding of the causes and effects of brain processes as well as the performance on problems from various fields. The networks studied in this thesis, spiking neural networks, belong to the so-called third generation of neural network models, characterized by the addition of time in the form of spikes. Recently, a novel method for training spiking neural networks, called SEM, has been developed. Furthermore, an implicit generative model (i.e., a probability model) has been found that allows probabilistic inference to be performed on the basis of well-defined statistical mechanisms, and in this way to draw samples from networks of spiking neurons.

The first goal of this work is to implement this newly discovered mechanism in a highly flexible and modular fashion. An efficient object-oriented C++ framework is developed that provides interfaces at various levels of abstraction to higher-level programming languages. In a second step, the mathematically exact neural network model is slightly modified by introducing more biologically realistic neuronal dynamics, such as synaptic delays. These modifications are systematically analyzed for their (approximate) correctness: it is explored up to which degree of “realism” the model still provides reliable results. This is done by comparing the systematic error introduced by modifying the parameters with the stochastic error that is present in any case due to the finite sampling time. Thus, in limited-time situations the cost of non-exact sampling may remain relatively low and might therefore not be relevant for a behaving organism.
Additionally, the neural network model is applied to a simplified model of vision, more precisely to pattern (image) recognition and completion. In this context, the neural dynamics sampler is applied to a multilayered network of spiking neurons that represents a generative model. This network is trained on certain image patterns using maximum likelihood learning and is then tested on similar but incomplete images, which it is able to complete using feedback information, after having learned and adapted the appropriate synapses during the autonomous learning period. The outcomes of this simulation are compared to findings of biological experiments in the visual cortical system, showing that neural sampling can successfully reproduce some of the effects observed in such experiments.