2 November 2023
Computer Vision and Pattern Recognition
Proposes selective plasticity approach to identify and retain transferable network parts when transitioning between visual tasks
Inspired by event models in the brain that update at boundaries
Leverages redundancy in contrastive embeddings to regularize only parts that transfer well
Achieves SOTA on CIFAR10, TinyImageNet and Rotated MNIST across task-, class- and domain-incremental scenarios
Selective plasticity for continual visual learning
This paper proposes a new approach to continual learning of visual tasks that identifies and retains the most transferable parts of a neural network's representations when transitioning between tasks. It is inspired by the brain's event models, which update mainly at event boundaries. The method leverages redundancy in contrastively learned embeddings to regularize only the parts that perform well on the first batch of new-task data. This identifies the parameters important for transfer, freeing the rest of the network to learn new features. Evaluated on CIFAR10, TinyImageNet and Rotated MNIST, the approach achieves state-of-the-art performance in task-, class- and domain-incremental scenarios.
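The core idea, as summarized above, is to score which parts of the learned representation transfer to a new task using its first batch of data, and then regularize only those parts. A minimal sketch of that selective-regularization pattern is below; the scoring rule (a between-class variance ratio) and all function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def transfer_scores(embeddings, labels):
    """Score each embedding dimension by how well it separates classes
    on the first batch of new-task data (between-class variance ratio).
    Higher score = dimension appears to transfer well."""
    overall_var = embeddings.var(axis=0) + 1e-8
    class_means = np.stack(
        [embeddings[labels == c].mean(axis=0) for c in np.unique(labels)]
    )
    between_var = class_means.var(axis=0)
    return between_var / overall_var

def selective_penalty(params, anchor, scores, top_frac=0.5, lam=1.0):
    """L2 penalty applied only to the top-scoring (transferable) dimensions,
    anchoring them to their values from the previous task; the remaining
    dimensions are left free (plastic) to learn new features."""
    k = max(1, int(len(scores) * top_frac))
    mask = np.zeros_like(scores)
    mask[np.argsort(scores)[-k:]] = 1.0
    return lam * np.sum(mask * (params - anchor) ** 2)
```

In use, `transfer_scores` would be computed once on the first new-task batch, and `selective_penalty` added to the training loss so that only the identified transferable dimensions are pulled toward their old values.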