Novel methods for probabilistic inference and learning in spiking neural networks
Spiking neural networks constitute the third generation of artificial neural networks. They differ from traditional neural network models by modeling biological neural networks in greater detail and reproducing their behavior more accurately. One of the main research goals in the field of spiking neural networks is to elucidate connections between structure and function. Recent research in cognitive psychology and neuroscience points to an instrumental role of probabilistic reasoning and learning in human and animal behavior. The objective of this thesis was to investigate the ability of certain spiking neural network architectures to perform exact or approximate probabilistic inference and learning. To this end, a recently proposed architecture that established a link between the Expectation-Maximization (EM) algorithm and neural mechanisms was extended and generalized. A rigorous theory was developed that relates activity and plasticity in a special class of spiking neural networks to an online approximation of EM. This was achieved by mapping a probabilistic mixture model with component distributions from exponential families onto a neural architecture. Inference and learning were shown to correspond to neural integration with lateral inhibition and to Hebbian plasticity, more precisely Spike-Timing-Dependent Plasticity (STDP). The method proposed in this thesis is consistent with biological data in important respects and complements recent discoveries in the area by providing a theoretical explanation for the emergence of a particular function in certain classes of spiking neural networks.
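The correspondence described above can be illustrated with a minimal, non-spiking sketch: online EM for a mixture of Bernoulli components, where a softmax over log-likelihoods stands in for neural integration with lateral inhibition (soft winner-take-all), and a Hebbian-style running-average update stands in for the STDP rule. All concrete choices here (two Bernoulli prototypes, the constant learning rate `eta`, the dimensions) are illustrative assumptions, not the thesis' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two hypothetical Bernoulli prototypes generate binary inputs.
D, K, T = 8, 2, 2000
protos = np.array([[0.9] * 4 + [0.1] * 4,
                   [0.1] * 4 + [0.9] * 4])

# Weights play the role of synaptic efficacies; near-uniform initialization.
W = rng.uniform(0.4, 0.6, size=(K, D))
eta = 0.05  # constant learning rate (assumed; an annealed rate also works)

for _ in range(T):
    z = rng.integers(K)                              # latent cause
    x = (rng.random(D) < protos[z]).astype(float)    # binary input pattern

    # E-step: posterior over hidden units via log-likelihood plus softmax,
    # a stand-in for membrane integration with lateral inhibition (soft WTA).
    logp = x @ np.log(W).T + (1 - x) @ np.log(1 - W).T
    logp -= logp.max()
    r = np.exp(logp)
    r /= r.sum()

    # Online M-step: Hebbian-like update moving each unit's weights toward
    # the inputs it takes responsibility for, analogous to the STDP rule.
    W += eta * r[:, None] * (x[None, :] - W)

# After training, each unit's weight vector should approximate one prototype
# (up to a permutation of the hidden units).
err = min(
    np.abs(W[0] - protos[0]).mean() + np.abs(W[1] - protos[1]).mean(),
    np.abs(W[0] - protos[1]).mean() + np.abs(W[1] - protos[0]).mean(),
)
print(round(err, 3))
```

The update `W += eta * r * (x - W)` is a stochastic approximation of the batch M-step for Bernoulli mixtures; with responsibilities replaced by a hard winner and weights stored as log-odds, the same scheme takes the form of the WTA-plus-STDP circuits discussed in the text.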