PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Concepts and methods from machine learning as tools for the analysis of computations in nervous systems
Michael Pfeiffer
(2010) PhD thesis, Graz University of Technology.

Abstract

This thesis investigates how innovative machine learning methods, which autonomously extract information from data, can be used to gain insights into neural information processing. The brain provides a framework that has been optimized through evolution to support fast and robust adaptation to the environment, thereby increasing the chances of survival. Building upon the mathematical framework of machine learning allows us to study the role of experimentally observed synaptic learning phenomena, or to use analogies from neuroscience to improve machine learning algorithms for difficult real-world tasks. My dissertation is structured into several parts, which highlight the many ways in which machine learning and computational neuroscience can fruitfully interact.

The first part studies auditory information processing by insects in real-world scenarios. Analyzing recordings performed in the insects' natural habitats in the tropical rainforest, we find that neural coding with characteristic burst firing patterns provides a reliable way of transmitting information in situations where signals are heavily distorted by environmental noise.

The second part uses neural network techniques to construct models of high-level human behavior. The relevance of the text that humans were reading was classified from the movements of their eyes. Our approach was so successful that it finished first in an international competition.

In the third part, a new algorithm for reward-based learning in continuous state and action spaces is presented, which draws inspiration from neuroscientific concepts of motor control. In combination with sample-based models and innovative exploration policies, this leads to an improvement over existing algorithms and is applicable to robotics tasks.

The final parts of my thesis link biologically plausible Hebbian learning mechanisms to mathematical concepts of learning and decision making. Neural network models are presented in which simple synaptic plasticity rules with strong convergence guarantees lead to approximately optimal decisions in a Bayesian sense. This shows how nervous systems can learn strategies for a rich variety of tasks with apparently very simple and limited basic units of computation, i.e. neurons and synapses. The presented approach makes concrete predictions for sparse, redundant neural codes for input signals, with which Hebbian learning can quickly and robustly lead to sensible decisions. We first present mechanisms for supervised learning, and then extend these rules to reward-modulated learning in winner-take-all networks, where action selection policies are learned from rewards and punishments. Finally, we explore functional roles for spike-timing-dependent plasticity (STDP) in soft winner-take-all circuits of spiking neurons. It is shown that spiking neurons can learn implicit internal models of high-dimensional input signals without supervision, thereby identifying hidden causes of their inputs. In particular, it is shown that STDP is able to approximate a stochastic online Expectation-Maximization algorithm for modeling the input data.
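The last of these claims, that a winner-take-all circuit with local Hebbian-style plasticity can approximate stochastic online Expectation-Maximization, can be illustrated with a simplified, non-spiking analogue. The sketch below is my own illustration under stated assumptions, not code from the thesis; all function and variable names are hypothetical. It uses a soft winner-take-all posterior over hidden units as the E-step, and a Hebbian-like update (postsynaptic responsibility times presynaptic activity) as the stochastic M-step of online EM for a mixture-of-Bernoullis model of binary inputs.

```python
# Minimal sketch (illustration only, not the thesis's algorithm): a soft
# winner-take-all layer over binary inputs, where a Hebbian-style update
# performs stochastic online EM for a mixture-of-Bernoullis model.
import numpy as np

def soft_wta_online_em(X, n_units=3, lr=0.02, n_epochs=30, seed=0):
    """X: (n_samples, n_dims) array of 0/1 inputs. Returns (mu, pi)."""
    rng = np.random.default_rng(seed)
    n_dims = X.shape[1]
    mu = rng.uniform(0.25, 0.75, size=(n_units, n_dims))  # per-unit Bernoulli means
    pi = np.full(n_units, 1.0 / n_units)                   # mixing weights (priors)
    eps = 1e-6
    for _ in range(n_epochs):
        for x in rng.permutation(X):
            # E-step: soft WTA competition = posterior over hidden causes
            log_post = (x @ np.log(mu + eps).T
                        + (1 - x) @ np.log(1 - mu + eps).T
                        + np.log(pi + eps))
            r = np.exp(log_post - log_post.max())
            r /= r.sum()
            # M-step (stochastic): Hebbian-like update, gated by the
            # postsynaptic responsibility r_k of each hidden unit
            mu += lr * r[:, None] * (x[None, :] - mu)
            pi += lr * (r - pi)
    return mu, pi

if __name__ == "__main__":
    # Toy data: two hidden "causes", each activating one half of the input,
    # corrupted by 5% random bit flips
    rng = np.random.default_rng(1)
    causes = np.array([[1] * 5 + [0] * 5, [0] * 5 + [1] * 5], dtype=float)
    X = causes[rng.integers(0, 2, 500)]
    X = (rng.random(X.shape) < 0.9 * X + 0.05).astype(float)
    mu, pi = soft_wta_online_em(X)
    print(np.round(mu, 2), np.round(pi, 2))
```

In this toy analogue, each hidden unit's weight vector converges to the Bernoulli parameters of one hidden cause; the thesis develops the corresponding result for spiking neurons, where STDP plays the role of the responsibility-gated update.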

EPrint Type: Thesis (PhD)
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics
          User Modelling for Computer Human Interaction
          Learning/Statistics & Optimisation
          Brain Computer Interfaces
          Theory & Algorithms
ID Code: 6083
Deposited By: Michael Pfeiffer
Deposited On: 08 March 2010