Online Regret Bounds for Markov Decision Processes with Deterministic Transitions
We consider an upper confidence bound algorithm for Markov decision processes (MDPs) with deterministic transitions. For this algorithm we derive upper bounds on the online regret (with respect to an (ε-)optimal policy) that are logarithmic in the number of steps taken. We also present a matching lower bound. As an application, we consider multi-armed bandits with switching cost.
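To illustrate the upper confidence bound principle the abstract refers to, here is a sketch of the classical UCB1 algorithm for the multi-armed bandit setting. This is a generic illustration, not the paper's algorithm for MDPs with deterministic transitions, and the arm means and horizon below are hypothetical.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms with the given success probabilities.

    Generic sketch of the upper-confidence-bound principle; not the
    paper's algorithm for deterministic-transition MDPs.
    """
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n    # number of pulls per arm
    sums = [0.0] * n    # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # pull each arm once to initialize estimates
        else:
            # pick the arm maximizing empirical mean plus confidence radius
            arm = max(range(n), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.2, 0.8], horizon=2000)
```

Because suboptimal arms are pulled only O(log T) times under UCB1, the arm with the higher mean (index 1 here) accounts for the vast majority of the 2000 pulls.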