The Online Loop-free Stochastic Shortest-Path Problem
Gergely Neu, András György and Csaba Szepesvári
In: Proceedings of the 23rd Annual Conference on Learning Theory (2010), OmniPress, pp. 231-243.

## Abstract

We consider a stochastic extension of the loop-free shortest path problem with adversarial rewards. In this episodic Markov decision problem an agent traverses an acyclic graph with random transitions: at each step of an episode the agent chooses an action, receives some reward, and arrives at a random next state, where the reward and the distribution of the next state depend on the current state and the chosen action. We consider the bandit setting, in which only the reward of the just-visited state-action pair is revealed to the agent. For this problem we develop algorithms that perform asymptotically as well as the best stationary policy in hindsight. Assuming that all states are reachable with probability $\alpha > 0$ under all policies, we give an algorithm and prove that its regret is $O(L^2\sqrt{T|A|}/\alpha)$, where $T$ is the number of episodes, $A$ denotes the (finite) set of actions, and $L$ is the length of the longest path in the graph. Variants of the algorithm are given that improve the dependence on the transition probabilities under specific conditions. The results are also extended to variations of the problem, including the case when the agent competes with time-varying policies.
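To make the problem setup concrete, the following is a minimal sketch (not the authors' algorithm) of one episode in a loop-free stochastic shortest-path MDP with bandit feedback. States are arranged in layers so that transitions only move forward, guaranteeing that every path has length at most $L$; the agent observes only the rewards of the state-action pairs it actually visits. All names, the layered state space, and the random reward function are illustrative assumptions.

```python
import random

random.seed(0)

L = 3  # number of layers = length of the longest path in the graph
STATES = {layer: [f"s{layer}{i}" for i in range(2)] for layer in range(L + 1)}
ACTIONS = ["a0", "a1"]

def transition(state, action, layer):
    """Random next state in the following layer (loop-free by construction)."""
    return random.choice(STATES[layer + 1])

def reward(state, action, t):
    """Adversarial reward: may change arbitrarily with the episode index t.
    Here it is just random noise for illustration."""
    return random.random()

def run_episode(policy, t):
    """Run one episode; the agent sees ONLY the rewards of the
    state-action pairs it visits (bandit feedback)."""
    state = STATES[0][0]
    total, observed = 0.0, []
    for layer in range(L):
        action = policy(state)
        r = reward(state, action, t)
        observed.append((state, action, r))  # the bandit information
        total += r
        state = transition(state, action, layer)
    return total, observed

# A stationary policy against which regret would be measured: it maps
# each state to a fixed action distribution (here: uniform).
uniform = lambda s: random.choice(ACTIONS)
total, obs = run_episode(uniform, t=0)
print(len(obs))  # exactly L observed state-action-reward triples
```

An online algorithm for this setting would replay `run_episode` for $T$ episodes, updating its policy between episodes using only the `observed` triples, and its regret is the gap between its cumulative reward and that of the best fixed policy in hindsight.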
