Grammatical Inference as a Principal Component Analysis Problem
Raphael Bailly, François Denis and Liva Ralaivola
Proceedings of the 26th International Conference on Machine Learning, 2009.

## Abstract

One of the main problems in probabilistic grammatical inference consists in inferring a stochastic language, i.e. a probability distribution, in some class of probabilistic models, from a sample of strings independently drawn according to a fixed unknown target distribution p. Here, we consider the class of rational stochastic languages, composed of the stochastic languages that can be computed by multiplicity automata, which can be viewed as a generalization of probabilistic automata. A rational stochastic language p has a useful algebraic characterization: all the mappings $v \mapsto p(uv)$ lie in a finite-dimensional vector subspace $V_p^*$ of the vector space of all real-valued functions defined over $\Sigma^*$. Hence, a first step in the grammatical inference process can consist in identifying the subspace $V_p^*$. In this paper, we study the possibility of using Principal Component Analysis to achieve this task. We provide an inference algorithm which computes an estimate of this space and then builds a multiplicity automaton which computes an estimate of the target distribution. We prove some theoretical properties of this algorithm and provide results from numerical simulations that confirm the relevance of our approach.
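The core idea can be illustrated with a small sketch: estimate the probabilities $p(uv)$ over finite sets of prefixes and suffixes from the sample, arrange them in an empirical Hankel-style matrix, and inspect its singular values to estimate the dimension of $V_p^*$. This is not the paper's exact algorithm, only a minimal illustration of the PCA/SVD step; the prefix and suffix sets, the tolerance threshold, and the toy target distribution below are assumptions chosen for the example.

```python
import numpy as np
from collections import Counter

def empirical_hankel(sample, prefixes, suffixes):
    """Estimate H[i, j] ~ p(prefixes[i] + suffixes[j]) from string frequencies."""
    counts = Counter(sample)
    n = len(sample)
    H = np.zeros((len(prefixes), len(suffixes)))
    for i, u in enumerate(prefixes):
        for j, v in enumerate(suffixes):
            H[i, j] = counts[u + v] / n
    return H

def estimated_dimension(H, tol=0.05):
    """PCA/SVD step: singular values above tol estimate the dimension of V_p*."""
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > tol))

# Toy sample approximating p(a^n) = 2^-(n+1), a rank-1 target.
sample = [""] * 500 + ["a"] * 250 + ["aa"] * 125 + ["aaa"] * 125
H = empirical_hankel(sample, prefixes=["", "a"], suffixes=["", "a"])
print(estimated_dimension(H))  # a single dominant singular value is expected
```

In practice the sample frequencies are noisy, so the threshold `tol` (or a statistical test on the singular values) governs which principal components are kept as the estimate of $V_p^*$.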

EPrint Type: Article. Deposited by François Denis, 08 March 2010.