## Abstract

In this thesis, top-down and bottom-up methods are applied to two central questions about the function of neural circuits in the brain: what type of computations they implement, and how learning at the synaptic level yields useful computational functions at the circuit and behavioral levels. Motivated by the need to support the neural modelling and simulation requirements of this and related research, a novel software framework for neural simulations, named PCSIM, was developed as an additional contribution of this thesis.

Probabilistic inference in graphical models has often been proposed as a suitable framework for explaining the computations that the brain carries out, but the neural basis of these computations remains unclear. Chapter 2 approaches this problem and presents several possible implementations of probabilistic inference in graphical models with networks of spiking neurons. The developed neural implementations perform probabilistic inference through Markov chain Monte Carlo sampling, and use specific network structures or dendritic computations in biologically realistic neurons as basic building blocks to realize the required nonlinear computational operations. They thereby suggest that the computational function of local network motifs, as well as of dendritic computations in single neurons, is to support probabilistic inference operations at a larger network level.

Chapter 3 analyses, theoretically and through computer simulations, what computations can be learned with reward-modulated spike-timing-dependent plasticity (STDP), a synaptic plasticity rule based on experimental findings about long-term changes in synaptic efficacy. In particular, it is shown that this plasticity rule enables spiking neurons to learn the classification of temporal spike patterns.
It is also shown that with this rule neurons can learn a specific mapping from input spike patterns to output spike patterns. Moreover, it is analysed under which conditions and parameter values of the learning rule and the neuron model the learning in these tasks is successful. Finally, it is demonstrated that reward-modulated STDP can explain experimental results on biofeedback learning in monkeys.

Chapter 4 gives an overview of the Parallel neural Circuit SIMulator (PCSIM), with a focus on its integration with the Python programming language. PCSIM is a neural simulation environment intended for the simulation of spiking and analog neural networks, with support for distributed simulation of large-scale neural networks on multiple machines. The chapter outlines the key features of PCSIM's modular and extensible object-oriented framework and user interface, and describes how these features enable the user to develop and construct neural models more easily and quickly, to speed up simulations of the models, and to add custom extensions to the PCSIM framework. Further, the benefits of integrating PCSIM with Python are elucidated.