Comparator-Adaptive Convex Bandits

NeurIPS 2020


Dec 06, 2020
We study bandit convex optimization methods that adapt to the norm of the comparator, a topic previously studied only in the full-information setting. Specifically, we develop convex bandit algorithms whose regret bounds are small whenever the norm of the comparator is small. We first use techniques from the full-information setting to develop comparator-adaptive algorithms for linear bandits. We then extend these ideas to convex bandits with Lipschitz or smooth loss functions, using a new single-point gradient estimator and carefully designed surrogate losses.

Speakers: Dirk van der Hoeven, Ashok Cutkosky, Haipeng Luo
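For context, the classic single-point (one-point) gradient estimator from bandit convex optimization, due to Flaxman, Kalai, and McMahan (2005), queries the loss once at a randomly perturbed point and uses that single value to form an unbiased gradient estimate of a smoothed loss. The sketch below illustrates that classic estimator only; the paper's own "new single-point gradient estimator" differs in details not reproduced here, and the function and parameter names are illustrative.

```python
import numpy as np

def one_point_gradient(f, x, delta, rng):
    """Classic Flaxman-Kalai-McMahan one-point gradient estimator.

    Queries f once at x + delta * u, with u uniform on the unit
    sphere, and returns (d / delta) * f(x + delta * u) * u, an
    unbiased estimate of the gradient of a delta-smoothed f.
    (Illustrative sketch; not the estimator proposed in the paper.)
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform random direction on the sphere
    return (d / delta) * f(x + delta * u) * u

# Usage: for the quadratic f(x) = ||x||^2 the smoothed gradient
# equals the true gradient 2x, so averaging many one-point
# estimates should approximate 2x.
rng = np.random.default_rng(0)
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0, 0.5])
est = np.mean(
    [one_point_gradient(f, x, 0.5, rng) for _ in range(100_000)],
    axis=0,
)
```

The key point for the bandit setting is that each round uses only a single function evaluation, at the cost of high variance, which is why the smoothing radius `delta` and the surrogate losses must be chosen carefully.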
