Disentangled Self-Supervision in Sequential Recommenders
Aug 13, 2020
Jianxin Ma
To learn a sequential recommender, the existing methods typically adopt the sequence-to-item (seq2item) training strategy, which supervises a sequence model with a user's next behavior as the label and the user's past behaviors as the input. The seq2item strategy, however, is myopic and usually produces non-diverse recommendation lists. In this paper, we study the problem of mining extra signals for supervision by looking at the longer-term future. There exist two challenges: i) reconstructing a future sequence containing many behaviors is exponentially harder than reconstructing a single next behavior, which can lead to difficulty in convergence, and ii) the sequence of all future behaviors can involve many intentions, not all of which may be predictable from the sequence of earlier behaviors. To address these challenges, we propose a sequence-to-sequence (seq2seq) training strategy based on latent self-supervision and disentanglement. Specifically, we perform self-supervision in the latent space, i.e., reconstructing the representation of the future sequence as a whole, instead of reconstructing the items in the future sequence individually. We also disentangle the intentions behind any given sequence of behaviors and construct seq2seq training samples using only pairs of sub-sequences that involve a shared intention. Results on real-world benchmarks and synthetic data demonstrate the improvement brought by seq2seq training.
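The sketch below contrasts the two objectives in PyTorch. It is a minimal illustration, not the authors' implementation: the MeanEncoder, the InfoNCE-style contrastive formulation of the latent reconstruction, and all names and hyperparameters are assumptions, and the paper's intention-disentanglement step (pairing only sub-sequences that share an intention) is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanEncoder(nn.Module):
    # Toy sequence encoder that mean-pools item embeddings; a stand-in for
    # the Transformer/RNN encoder a real sequential recommender would use.
    def forward(self, emb_seq):                  # emb_seq: (batch, seq_len, dim)
        return emb_seq.mean(dim=1)               # (batch, dim)

def seq2item_loss(encoder, item_emb, past_items, next_item):
    # Conventional seq2item objective: score every item and supervise with
    # the user's single next behavior as the label.
    h = encoder(item_emb(past_items))            # (batch, dim)
    logits = h @ item_emb.weight.T               # (batch, n_items)
    return F.cross_entropy(logits, next_item)

def seq2seq_latent_loss(encoder, item_emb, past_items, future_items):
    # Latent seq2seq objective: reconstruct the representation of the future
    # sequence as a whole, rather than each future item individually. The
    # contrastive loss over in-batch negatives and the temperature value
    # are illustrative assumptions.
    h_past = F.normalize(encoder(item_emb(past_items)), dim=-1)
    with torch.no_grad():                        # future representation is the target
        h_future = F.normalize(encoder(item_emb(future_items)), dim=-1)
    logits = h_past @ h_future.T / 0.07          # (batch, batch) similarity scores
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Usage on random data:
n_items, dim = 1000, 64
item_emb = nn.Embedding(n_items, dim)
encoder = MeanEncoder()
past = torch.randint(n_items, (8, 5))            # 8 users, 5 past behaviors each
nxt = torch.randint(n_items, (8,))               # next behavior per user
future = torch.randint(n_items, (8, 5))          # 5 future behaviors per user
loss = seq2item_loss(encoder, item_emb, past, nxt) \
     + seq2seq_latent_loss(encoder, item_emb, past, future)
loss.backward()

Because the target is the encoded future sequence rather than its individual items, the model is spared the exponentially harder task of reconstructing every future behavior, which is the convergence difficulty the abstract identifies.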
SIGKDD 2020