The field of continual learning is moving toward solutions to the stability-plasticity dilemma, with a focus on frameworks that let neural networks learn and adapt incrementally. Recent work emphasizes architectural perspectives, progressive neural collapse, and dual-adapter architectures as routes to this goal. There is also growing interest in rethinking the role of pre-trained and foundation models in continual learning, for example by adapting pre-trained models before the core continual learning process begins and by leveraging neural network reprogrammability. Noteworthy papers include Rethinking Continual Learning with Progressive Neural Collapse, which introduces a framework that removes the need for a fixed global equiangular tight frame (ETF) in continual learning, and CL-LoRA, which proposes a dual-adapter architecture combining task-shared adapters that learn cross-task knowledge with task-specific adapters that capture the unique features of each new task.
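For context on the first paper, the "fixed global ETF" refers to a classifier whose class prototypes are pinned to a simplex equiangular tight frame: unit-norm vectors with identical pairwise cosine similarity of -1/(K-1). The sketch below constructs such a frame; the function name and the QR-based construction are illustrative and not taken from the paper.

```python
import torch

def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Build a feat_dim x num_classes simplex equiangular tight frame (ETF).

    Columns are unit-norm class prototypes with pairwise cosine
    similarity -1/(K-1); fixing the classifier to such a frame is the
    "fixed global ETF" that neural-collapse-inspired methods rely on.
    """
    assert feat_dim >= num_classes
    # Partial orthogonal matrix U (feat_dim x K) with U^T U = I_K.
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    # Center the identity to obtain the simplex structure, then rescale
    # so every column has unit norm.
    m = torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes
    etf = u @ m * (num_classes / (num_classes - 1)) ** 0.5
    return etf  # used as a frozen classifier head: logits = features @ etf
```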
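The dual-adapter idea can be illustrated with a short PyTorch-style sketch: a frozen pre-trained linear layer wrapped with one task-shared low-rank branch and one additional low-rank branch per task. The class name, ranks, and initialization below are assumptions for illustration, not CL-LoRA's published implementation.

```python
import torch
import torch.nn as nn

class DualAdapterLinear(nn.Module):
    """Illustrative dual-adapter layer: frozen pre-trained weights, a
    task-shared low-rank adapter for cross-task knowledge, and a
    task-specific low-rank adapter appended for each new task."""

    def __init__(self, in_dim: int, out_dim: int, rank: int = 8):
        super().__init__()
        self.in_dim, self.out_dim, self.rank = in_dim, out_dim, rank
        self.base = nn.Linear(in_dim, out_dim)
        for p in self.base.parameters():        # freeze the pre-trained weights
            p.requires_grad_(False)
        # Task-shared adapter: trained on every task.
        self.shared_down = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.shared_up = nn.Parameter(torch.zeros(out_dim, rank))
        # Task-specific adapters: one low-rank pair added per task.
        self.task_down = nn.ParameterList()
        self.task_up = nn.ParameterList()

    def add_task(self):
        self.task_down.append(nn.Parameter(torch.randn(self.rank, self.in_dim) * 0.01))
        self.task_up.append(nn.Parameter(torch.zeros(self.out_dim, self.rank)))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        out = self.base(x)                                          # frozen backbone path
        out = out + x @ self.shared_down.t() @ self.shared_up.t()   # cross-task knowledge
        out = out + x @ self.task_down[task_id].t() @ self.task_up[task_id].t()  # task-specific features
        return out
```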