Sublinear Optimal Policy Value Estimation in Contextual Bandits

Jan 18, 2021
Abstract: We study the problem of estimating the expected reward of the optimal policy in the stochastic disjoint linear bandit setting. We prove that for certain settings it is possible to obtain an accurate estimate of the optimal policy value even with a number of samples that is sublinear in the number that would be required to *find* a policy that realizes a value close to this optimum. We establish nearly matching information-theoretic lower bounds, showing that our algorithm achieves near-optimal estimation error. Finally, we demonstrate the effectiveness of our algorithm on joke recommendation and cancer inhibition dosage selection problems using real datasets.

Authors: Weihao Kong, Gregory Valiant, Emma Brunskill (University of Washington, Stanford University)
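To make the quantity being estimated concrete: in the disjoint linear bandit model, each arm a has its own parameter vector θ_a, and the expected reward of pulling arm a in context x is θ_a · x. The optimal policy value is V* = E[max_a θ_a · x]. The sketch below (an illustration of the setting, not the paper's estimator) Monte Carlo estimates V* for a hypothetical one-dimensional, two-arm instance with Gaussian contexts; all parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 1, 200_000
# Hypothetical per-arm parameters for a 2-arm disjoint linear bandit.
theta = np.array([[1.0], [-1.0]])      # shape (K, d)
X = rng.standard_normal((n, d))        # contexts x ~ N(0, I_d)

# Expected reward of arm a in context x is theta_a . x (disjoint linear model).
means = X @ theta.T                    # shape (n, K)

# Monte Carlo estimate of V* = E[max_a theta_a . x].
v_star_hat = means.max(axis=1).mean()
print(round(v_star_hat, 3))
```

For this instance, max(x, −x) = |x|, so the true value is E|x| = sqrt(2/π) ≈ 0.798, which the Monte Carlo average approaches as n grows. The paper's point is that an accurate estimate of this scalar can be obtained from far fewer interaction samples than are needed to recover a near-optimal policy itself.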
