Abstract: Tracking progress and comparing Continual Learning methods across the literature is difficult. We propose to standardize and organize research problems (Settings) as well as the solutions to such problems (Methods), by first reformulating Settings according to their differences in assumptions, and then organizing them into a structured hierarchy in the form of a tree. This allows Methods to be defined in terms of a target Setting, so they can be directly reused, for instance, across both the Continual Supervised Learning (CSL) and Continual Reinforcement Learning (CRL) domains. We propose a first instantiation of this idea in the form of a publicly available framework, Sequoia, which can be used to perform a wide variety of training and evaluation protocols from both CRL and CSL, and which provides a suite of modular, reusable baselines that are easy to extend and customize. Our hope is that this idea, as well as its implementation, can contribute to the unification and acceleration of research in this field and beyond.
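The core idea — Settings forming a tree and Methods declaring the most general Setting they target — can be illustrated with a minimal sketch. This is a hypothetical illustration in plain Python, not Sequoia's actual API: all class names (`Setting`, `Method`, `SLMethod`, etc.) and the `is_applicable_to` check are assumptions made for the example.

```python
# Hypothetical sketch: Settings form a tree via class inheritance, and a
# Method declares the most general Setting it targets. A Method is then
# applicable to that Setting and to every descendant of it in the tree.

class Setting:
    """Root of the settings tree: the most general research problem."""

class ContinualRLSetting(Setting):
    """Continual Reinforcement Learning (CRL) branch."""

class ContinualSLSetting(Setting):
    """Continual Supervised Learning (CSL) branch."""

class TaskIncrementalSLSetting(ContinualSLSetting):
    """A descendant with stronger assumptions: task labels are available."""

class Method:
    # The most general Setting this method can handle.
    target_setting = Setting

    @classmethod
    def is_applicable_to(cls, setting_type: type) -> bool:
        # Applicable to the target Setting and all of its descendants.
        return issubclass(setting_type, cls.target_setting)

class SLMethod(Method):
    """A method that only works under supervised continual learning."""
    target_setting = ContinualSLSetting
```

Under this scheme, a method targeting the root `Setting` is reusable across both the CRL and CSL branches, while `SLMethod` applies only to `ContinualSLSetting` and its descendants.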
Authors: Fabrice Normandin and Massimo Caccia (MILA)