Continual Learning and Domain Generalization

The field of continual learning is moving toward the joint challenges of domain incremental learning and domain generalization. Researchers are exploring approaches that let models learn from sequential tasks while generalizing to diverse, unseen scenarios. One key direction is the development of methods that disentangle semantic- and domain-relevant information in representation learning, enabling more robust generalization (a minimal sketch of this idea follows the paper list below). Another important thread is the use of metalearning and transformers to adapt to new tasks from only a small number of in-context examples. Noteworthy papers in this area include:

Continual Knowledge Consolidation LORA for Domain Incremental Learning proposes a novel approach to continual knowledge consolidation and achieves state-of-the-art results on several benchmarks; the LoRA building block it rests on is sketched below.

Domain Generalizable Continual Learning introduces a new setting for domain generalizable continual learning and proposes an approach that adaptively transforms task representations across domains.

Transformers are almost optimal metalearners for linear classification provides a theoretical analysis of transformers' ability to act as metalearners in a linear classification setting; a toy version of this in-context setup is also sketched below.

Separating the what and how of compositional computation to enable reuse and continual learning proposes a two-system approach to continual learning and the compositional reuse of learned computations.
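As a concrete illustration of the disentanglement direction, here is a minimal sketch: a shared backbone feeds two heads, and an orthogonality penalty discourages the semantic subspace from encoding domain factors. This is a generic construction under assumed design choices, not the method of any paper above; the names `DisentangledEncoder` and `orthogonality_loss` are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Split a backbone feature into semantic and domain parts."""
    def __init__(self, backbone: nn.Module, feat_dim: int, z_dim: int):
        super().__init__()
        self.backbone = backbone
        self.sem_head = nn.Linear(feat_dim, z_dim)  # class-relevant factors
        self.dom_head = nn.Linear(feat_dim, z_dim)  # domain-relevant factors

    def forward(self, x):
        h = self.backbone(x)
        return self.sem_head(h), self.dom_head(h)

def orthogonality_loss(z_sem: torch.Tensor, z_dom: torch.Tensor) -> torch.Tensor:
    # Penalize alignment between the two subspaces so that domain
    # factors cannot leak into the semantic representation.
    z_sem = F.normalize(z_sem, dim=-1)
    z_dom = F.normalize(z_dom, dim=-1)
    return (z_sem * z_dom).sum(dim=-1).pow(2).mean()
```

In training, this penalty would be added to the usual classification loss on the semantic head, so only domain-invariant features remain useful for prediction.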
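The consolidation mechanism of the LoRA paper is not reproduced here, but the LoRA building block it rests on is standard: freeze the base weights and train only a low-rank update, typically one adapter per domain in domain incremental learning. A minimal PyTorch-style sketch, assuming a frozen `nn.Linear` base; `rank` and `alpha` are illustrative hyperparameters, not the paper's API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay fixed
        # Effective weight: W + (alpha / rank) * B @ A
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing layer; only A and B receive gradients.
layer = LoRALinear(nn.Linear(512, 512), rank=8)
```

Because B is initialized to zero, the adapted layer starts out identical to the frozen base, which keeps earlier-domain behavior intact at the start of each new domain.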
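Finally, a toy version of the in-context protocol that the transformer metalearning analysis concerns: k labeled examples and one unlabeled query are fed as a token sequence, and the model predicts the query's label. The architecture below is a generic encoder sketch, not the construction analyzed in the paper; all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class InContextLinearClassifier(nn.Module):
    """Toy metalearner: a transformer encoder reads k labeled examples
    plus one query token and predicts the query's binary label."""
    def __init__(self, dim: int, width: int = 64, layers: int = 2):
        super().__init__()
        # Each token is a feature vector concatenated with its label;
        # the query token carries a zero placeholder label.
        self.embed = nn.Linear(dim + 1, width)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=width, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(width, 1)

    def forward(self, xs, ys, query):
        # xs: (batch, k, dim), ys: (batch, k), query: (batch, dim)
        ctx = torch.cat([xs, ys.unsqueeze(-1)], dim=-1)            # (b, k, dim+1)
        qry = torch.cat([query, torch.zeros_like(query[..., :1])], dim=-1)
        tokens = torch.cat([ctx, qry.unsqueeze(1)], dim=1)         # (b, k+1, dim+1)
        h = self.encoder(self.embed(tokens))
        return self.head(h[:, -1])  # logit for the query's label
```

Each new task supplies a fresh context of labeled pairs, so adaptation happens entirely in the forward pass with no weight updates.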

Sources

Continual Knowledge Consolidation LORA for Domain Incremental Learning

Domain Generalizable Continual Learning

Transformers are almost optimal metalearners for linear classification

Separating the what and how of compositional computation to enable reuse and continual learning
