Learning Languages from Bounded Resources: the Case of the DFA and the Balls of Strings
Comparing language learning paradigms has always been a complex question. Several established paradigms exist for studying the learnability of classes of languages: identification in the limit, query learning, and probably approximately correct (PAC) learning. Moreover, when computational constraints are added to the question of converging to a target, the picture becomes even less clear: how much do queries or negative examples help? Can we find good algorithms that change their minds very little or make very few errors? To approach these problems, we concentrate here on two classes of languages, that of deterministic finite automata (DFA) and that of topological balls of strings (for the edit distance), and (re-)visit the different learning paradigms to support our claims.
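To make the second class concrete, a ball of strings for the edit distance is the set of all strings within Levenshtein distance r of a center string o. The following sketch (function names are illustrative, not from the paper) shows a standard dynamic-programming membership test for such a ball:

```python
# Illustrative sketch: a ball of strings B_r(o) for the edit distance
# is the set {w : d(o, w) <= r}, where d is the Levenshtein distance.

def edit_distance(u: str, v: str) -> int:
    """Levenshtein distance via standard dynamic programming."""
    m, n = len(u), len(v)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if u[i - 1] == v[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + cost)   # substitution
        prev = curr
    return prev[n]

def in_ball(center: str, radius: int, w: str) -> bool:
    """Membership test for the ball B_radius(center)."""
    return edit_distance(center, w) <= radius
```

For example, `in_ball("abc", 1, "ab")` holds (one deletion), while `in_ball("abc", 1, "xyz")` does not, since all three symbols differ.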