PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery
Kun Zhang, Bernhard Schölkopf and Dominik Janzing
In: 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010), 8-11 July 2010, Catalina Island, California, USA.

Abstract

In nonlinear latent variable models or dynamic models, if we consider the latent variables as confounders (common causes), the noise dependencies imply further relations between the observed variables. Such models are then closely related to causal discovery in the presence of nonlinear confounders, which is a challenging problem. However, in such models the observation noise is generally assumed to be independent across data dimensions, and consequently the noise dependencies are ignored. In this paper we focus on the Gaussian process latent variable model (GPLVM), from which we develop an extended model called the invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. With the Gaussian process prior put on a particular transformation of the latent nonlinear functions, instead of the original ones, the algorithm for IGPLVM involves almost the same computational load as that for the original GPLVM. Besides its potential application in causal discovery, IGPLVM has the advantage that its estimated latent nonlinear manifold is invariant to any nonsingular linear transformation of the data. Experimental results on both synthetic and real-world data show its encouraging performance in nonlinear manifold learning and causal discovery.
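The abstract does not spell out the particular transformation on which the GP prior is placed, but one natural reading is that whitening the data by a square root of the noise covariance across dimensions reduces the correlated-noise model to a standard GPLVM on the transformed data, which would explain why the computational load stays essentially the same. The sketch below illustrates only that reading; the RBF kernel, the parameter names, and the function igplvm_style_loglik are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch (not the paper's released code): a GPLVM-style
    # log marginal likelihood in which the data are whitened by an assumed
    # noise covariance Sigma across dimensions, so the independent-GP
    # machinery of the standard GPLVM applies to the transformed data.
    import numpy as np
    from scipy.linalg import cholesky, cho_solve

    def rbf_kernel(X, lengthscale=1.0, variance=1.0):
        """Squared-exponential kernel on latent points X (N x q)."""
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    def igplvm_style_loglik(Y, X, Sigma, noise_var=1e-2, lengthscale=1.0):
        """Log-likelihood of a whitened GPLVM (illustrative only).

        Y: data (N x D); X: latent points (N x q);
        Sigma: assumed D x D noise covariance across data dimensions.
        """
        N, D = Y.shape
        Ls = cholesky(Sigma, lower=True)          # Sigma = Ls Ls^T
        Yw = np.linalg.solve(Ls, Y.T).T           # whitened data, Yw = Y Ls^{-T}
        K = rbf_kernel(X, lengthscale) + noise_var * np.eye(N)
        Lk = cholesky(K, lower=True)
        alpha = cho_solve((Lk, True), Yw)         # K^{-1} Yw
        logdet_K = 2.0 * np.sum(np.log(np.diag(Lk)))
        logdet_S = 2.0 * np.sum(np.log(np.diag(Ls)))   # Jacobian of the whitening
        return (-0.5 * np.sum(Yw * alpha)
                - 0.5 * D * logdet_K
                - 0.5 * N * logdet_S
                - 0.5 * N * D * np.log(2.0 * np.pi))

Under this reading, replacing Y by Y A^T and Sigma by A Sigma A^T for any nonsingular A changes the objective only by a constant in X, which is consistent with the invariance of the estimated latent manifold stated in the abstract.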

EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: UNSPECIFIED
Subjects: Computational, Information-Theoretic Learning with Statistics
Learning/Statistics & Optimisation
Brain Computer Interfaces
Theory & Algorithms
ID Code: 7792
Deposited By: Bernhard Schölkopf
Deposited On: 17 March 2011