Conference paper, 2020

Improved Optimistic Algorithms for Logistic Bandits

Abstract

The generalized linear bandit framework has attracted a lot of attention in recent years by extending the well-understood linear setting and making it possible to model richer reward structures. It notably covers the logistic model, widely used when rewards are binary. For logistic bandits, the frequentist regret guarantees of existing algorithms are Õ(κ√T), where κ is a problem-dependent constant. Unfortunately, κ can be arbitrarily large as it scales exponentially with the size of the decision set. This can lead to significantly loose regret bounds and poor empirical performance. In this work, we study the logistic bandit with a focus on the prohibitive dependencies introduced by κ. We propose a new optimistic algorithm based on a finer examination of the non-linearities of the reward function. We show that it enjoys an Õ(√T) regret with no dependency on κ, except in a second-order term. Our analysis is based on a new tail inequality for self-normalized martingales, which is of independent interest.
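
To make the constant κ mentioned in the abstract concrete, the sketch below states the logistic bandit reward model and the definition of κ that is standard in the generalized linear bandit literature; the notation (X, Θ, θ⋆, μ) is used here for illustration and is assumed rather than quoted from the paper.

% Illustrative sketch (assumed notation): logistic reward model and the constant kappa.
\[
  \mathbb{P}\left(r_t = 1 \mid x_t\right) \;=\; \mu\!\left(x_t^\top \theta_\star\right),
  \qquad \mu(z) = \frac{1}{1 + e^{-z}},
\]
\[
  \kappa \;=\; \sup_{x \in \mathcal{X},\; \theta \in \Theta} \frac{1}{\dot{\mu}\!\left(x^\top \theta\right)}.
\]
% Since \dot{\mu}(z) = \mu(z)\bigl(1 - \mu(z)\bigr) decays exponentially as |z| grows,
% \kappa can grow exponentially with the diameter of the decision set, which is the
% dependence the paper aims to remove from the leading term of the regret bound.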
Main file: supp.pdf (531.81 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02932836, version 1 (07-09-2020)

Identifiers

  • HAL Id: hal-02932836, version 1

Cite

Louis Faury, Marc Abeille, Clément Calauzènes, Olivier Fercoq. Improved Optimistic Algorithms for Logistic Bandits. International Conference on Machine Learning, Jul 2020, Vienna, Austria. ⟨hal-02932836⟩