The task of item recommendation requires ranking a large catalogue of items given a context. Item recommendation algorithms are evaluated using ranking metrics that depend on the positions of relevant items. To speed up the computation of metrics, recent work often uses sampled metrics, where only a smaller set of random items and the relevant items are ranked. This paper investigates sampled metrics in more detail and shows that they are inconsistent with their exact versions, in the sense that they do not preserve relative statements, e.g., "recommender A is better than B", not even in expectation. Moreover, the smaller the sampling size, the less difference there is between metrics, and for very small sampling sizes, all metrics collapse to the AUC metric. We show that it is possible to improve the quality of the sampled metrics by applying a correction, obtained by minimizing different criteria such as bias or mean squared error. We conclude with an empirical evaluation of the naive sampled metrics and their corrected variants. To summarize, our work suggests that sampling should be avoided for metric calculation; however, if an experimental study needs to sample, the proposed corrections can improve the quality of the estimate.
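To make the sampling scheme concrete, here is a minimal Python sketch (not the authors' code; the function names, the choice of Hit Rate, and the toy scores are illustrative assumptions) contrasting an item's exact rank over the full catalogue with its sampled rank against only m random negatives:

```python
import random


def exact_rank(scores, relevant):
    """1-based rank of the relevant item among the full catalogue."""
    return 1 + sum(s > scores[relevant]
                   for i, s in enumerate(scores) if i != relevant)


def sampled_rank(scores, relevant, m, rng=random):
    """1-based rank of the relevant item against m sampled negatives only."""
    negatives = [i for i in range(len(scores)) if i != relevant]
    sample = rng.sample(negatives, m)
    return 1 + sum(scores[i] > scores[relevant] for i in sample)


def hit_at_k(rank, k):
    """Hit Rate @ k: 1 if the relevant item ranks within the top k."""
    return int(rank <= k)
```

For example, with scores `[0.9, 0.1, 0.5, 0.3]` and relevant item 1 (the lowest-scoring item), the exact rank is 4, a miss at k=3; but ranked against a single sampled negative, the rank is 2, a hit at k=3. This illustrates the paper's point that small sampling sizes can flip relative metric outcomes.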
Authors: Walid Krichene & Steffen Rendle @ Google Research
Paper Url: https://dl.acm.org/doi/pdf/10.1145/3394486.3403226