Conference papers

Thompson Sampling for one-dimensional exponential family bandits

Nathaniel Korda 1 Emilie Kaufmann 2, 3 Rémi Munos 1 
1 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
2 S2A - Signal, Statistique et Apprentissage
LTCI - Laboratoire Traitement et Communication de l'Information
Abstract : Thompson Sampling has been demonstrated in many complex bandit models; however, the theoretical guarantees available for the parametric multi-armed bandit are still limited to the Bernoulli case. Here we extend them by proving asymptotic optimality of the algorithm using the Jeffreys prior for one-dimensional exponential family bandits. Our proof builds on previous work, but also makes extensive use of closed forms for the Kullback-Leibler divergence and the Fisher information (through the Jeffreys prior) available in an exponential family. This allows us to give a finite-time exponential concentration inequality for posterior distributions on exponential families that may be of interest in its own right. Moreover, our analysis covers some distributions for which no optimistic algorithm has yet been proposed, including heavy-tailed exponential families.
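The algorithm analyzed in the abstract can be illustrated in its simplest instance, the Bernoulli bandit, where the Jeffreys prior is the Beta(1/2, 1/2) distribution and the posterior stays in the Beta family. The sketch below is an illustrative implementation under that assumption, not the authors' code; the function and variable names are hypothetical.

```python
import random

def thompson_sampling_bernoulli(reward_fn, n_arms, horizon):
    """Thompson Sampling for Bernoulli bandits under the Jeffreys prior.

    The Jeffreys prior for a Bernoulli likelihood is Beta(1/2, 1/2), so the
    posterior of arm a after s successes and f failures is
    Beta(s + 1/2, f + 1/2). Each round: sample a mean from every posterior,
    pull the arm with the largest sample, observe a 0/1 reward, update.
    `reward_fn(arm)` is an assumed environment callback returning 0 or 1.
    """
    successes = [0] * n_arms
    failures = [0] * n_arms
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample per arm from its Beta posterior.
        samples = [random.betavariate(s + 0.5, f + 0.5)
                   for s, f in zip(successes, failures)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        r = reward_fn(arm)
        total_reward += r
        if r:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward, successes, failures
```

For one-dimensional exponential families beyond the Bernoulli case (the setting of the paper), the same sample-then-pull loop applies, but the posterior under the Jeffreys prior is generally not conjugate in closed form and must be sampled numerically.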
Contributor : TelecomParis HAL
Submitted on : Saturday, September 14, 2019 - 6:46:55 PM
Last modification on : Thursday, January 20, 2022 - 4:16:37 PM


  • HAL Id : hal-02288407, version 1


Nathaniel Korda, Emilie Kaufmann, Rémi Munos. Thompson Sampling for one-dimensional exponential family bandits. NIPS 2013 - Neural Information Processing Systems Conference, Dec 2013, Lake Tahoe, United States. ⟨hal-02288407⟩


