Symbolic Dynamic Programming for First-order POMDPs
Scott Sanner and Kristian Kersting
In: Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI-10), 2010.
Partially observable Markov decision processes (POMDPs) provide a powerful model for sequential decision-making problems with partially observed state and are known to have (approximately) optimal dynamic programming solutions. Much work in recent years has focused on improving the efficiency of these dynamic programming algorithms by exploiting symmetries and factored or relational representations. In this work, we show that it is also possible to exploit the full expressive power of first-order quantification to achieve state, action, and observation abstraction in a dynamic programming solution to relationally specified POMDPs. Among the advantages of this approach are the ability to maintain compact value function representations, abstract over the space of potentially optimal actions, and automatically derive compact conditional policy trees that minimally partition relational observation spaces according to distinctions that have an impact on policy values. This is the first lifted relational POMDP solution that can optimally accommodate actions with a potentially infinite relational space of observation outcomes.