Learning temporal relationships between hidden causes in networks of spiking neurons
Spike-timing-dependent plasticity (STDP) is considered to be the major mechanism for learning and adaptive behaviour in the brain. In a recent work it was shown that a purely local synaptic learning rule enables a network of spiking neurons, connected in winner-take-all circuits, to discover hidden causes. These findings suggest that neural networks in the neocortex and the hippocampus are able to perform Bayesian inference, a computation that is well studied in machine learning. However, temporal correlations in the input were neglected by assuming consecutive hidden causes to be independent. Natural signals, like speech, show strong correlations between nearby time windows, and it is likely that evolution has found some way to exploit these correlations in biological neural networks. In this thesis we extend the basic results on winner-take-all circuits to enable them to detect temporal relationships between hidden causes in an input spike stream. We will show that this goal can be achieved with relatively simple extensions of the basic architecture. We will investigate the learning dynamics of such networks and compare the results to recent data from neuroscience, and to standard machine learning paradigms such as the hidden Markov model. We study different network architectures and analyse them in terms of their computational power and biological plausibility.
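To make the setting concrete, the following is a minimal sketch, not the rule from the thesis: a rate-based winner-take-all circuit in which lateral inhibition selects a single stochastic winner per time window and a purely local, Hebbian-style update moves only the winner's weights toward the current input. All names, the softmax competition, and the update `w += eta * (x - w)` are illustrative simplifications of the STDP-based mechanism described above.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 8, 3
# Synaptic weights of the WTA circuit (illustrative random initialization)
W = rng.random((n_neurons, n_inputs))

def wta_step(W, x, eta=0.1):
    """One winner-take-all step with a local, Hebbian-style update.

    x is a binary input spike vector for the current time window.
    A winner is drawn from a softmax over the membrane potentials
    (modelling lateral inhibition), and only the winner's weights
    move toward the current input -- a purely local rule in the
    spirit of the STDP-based rule referred to above.
    """
    u = W @ x                         # membrane potentials
    p = np.exp(u - u.max())
    p /= p.sum()                      # soft competition via inhibition
    k = rng.choice(len(p), p=p)       # stochastic winner
    W[k] += eta * (x - W[k])          # local update, winner only
    return k

# Present one fixed input pattern repeatedly: some neuron's weights
# should converge toward that pattern, i.e. it "discovers" the cause.
x = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=float)
for _ in range(500):
    wta_step(W, x)
```

Because each update is a convex step toward the input, the neuron that wins most often ends up representing the repeated pattern; with several distinct input patterns, different neurons specialize on different hidden causes. Temporal structure between consecutive causes, the subject of this thesis, is not captured by this basic sketch.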