PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

A learning rule for very simple universal approximators consisting of a single layer of perceptrons
Peter Auer, Harald Burgsteiner and Wolfgang Maass
(2005) Working Paper.

There is a more recent version of this eprint available.

Abstract

One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. We refer to such circuits as parallel perceptrons. In spite of their simplicity, these circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. For a long time it was thought that no competitive learning algorithm exists for these extremely simple neural networks, which also became known as committee machines. It is commonly assumed that one has to replace the hard threshold gates by sigmoidal gates (or RBF gates), and that one has to tune the weights on at least two successive layers, in order to achieve satisfactory learning results for any class of neural networks that yields universal approximators. We show that this assumption (which apparently motivated the widespread use of backprop learning algorithms) is not true, by exhibiting a simple learning algorithm for parallel perceptrons -- the parallel delta rule (p-delta rule). In contrast to backprop for multi-layer perceptrons, the p-delta rule only has to tune a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes our new learning rule from other learning rules for parallel perceptrons such as MADALINE. These features make the p-delta rule attractive as a biologically more realistic alternative to backprop in neural circuits, but also for implementations in special-purpose hardware. We show that the p-delta rule also implements gradient descent -- with regard to a suitable error measure -- although it does not require the computation of derivatives. Furthermore, experiments on common real-world benchmark datasets show that its performance is competitive with that of other learning approaches from neural networks and machine learning. It has recently been shown [Anthony:03, Anthony:04] that one can also prove quite satisfactory bounds for the generalization error of this new learning rule.
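To make the architecture and the single-layer update concrete, the following is a minimal sketch in Python/numpy. The class name ParallelPerceptron, the hyperparameters (eta, eps, gamma, mu), and the exact form of the margin term are illustrative assumptions; this is a simplified p-delta-style update, not the rule exactly as specified in the paper.

```python
# Minimal sketch (not the authors' reference implementation) of a parallel
# perceptron with an analog output and a simplified p-delta-style update.
# Hyperparameters eta, eps, gamma and mu are illustrative choices only.
import numpy as np

class ParallelPerceptron:
    def __init__(self, n_inputs, n_perceptrons, rng=None):
        rng = rng or np.random.default_rng(0)
        # A single layer of weights; each row is one perceptron's weight vector.
        self.W = rng.normal(size=(n_perceptrons, n_inputs))
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)

    def output(self, x):
        # Analog output: the fraction of perceptrons that output 1,
        # i.e. a value in [0, 1] as described in the abstract.
        return np.mean(self.W @ x > 0)

    def update(self, x, target, eta=0.01, eps=0.05, gamma=0.1, mu=1.0):
        # Simplified p-delta-style update: adjust only those perceptrons whose
        # change would move the population output toward the target, plus a
        # margin term that pushes activations away from the threshold.
        y = self.output(x)
        for i in range(self.W.shape[0]):
            a = self.W[i] @ x
            if y < target - eps and a <= 0:
                self.W[i] += eta * x            # recruit this perceptron
            elif y > target + eps and a > 0:
                self.W[i] -= eta * x            # silence this perceptron
            elif 0 < a < gamma:
                self.W[i] += eta * mu * x       # widen positive margin
            elif -gamma < a <= 0:
                self.W[i] -= eta * mu * x       # widen negative margin
            self.W[i] /= np.linalg.norm(self.W[i])  # keep weights normalized
```

Training then amounts to repeatedly calling update(x, target) over a dataset. Note that only the single layer of weights W is ever modified, and each perceptron only needs to know whether the population output is too high or too low, which reflects the reduced-communication property emphasized in the abstract.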

EPrint Type: Monograph (Working Paper)
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics; Theory & Algorithms
ID Code: 1875
Deposited By: Peter Auer
Deposited On: 29 December 2005
