
Selective plasticity for continual visual learning

Published on: 2 November 2023

Primary Category: Computer Vision and Pattern Recognition

Paper Authors: Rouzbeh Meshkinnejad, Jie Mei, Daniel Lizotte, Yalda Mohsenzadeh


Key Details

Proposes a selective plasticity approach that identifies and retains the transferable parts of a network when transitioning between visual tasks

Inspired by event models in the brain, which update mainly at event boundaries

Leverages redundancy in contrastively learned embeddings to regularize only the parts that transfer well (see the sketch after these bullets)

Achieves state-of-the-art results on CIFAR10, TinyImagenet, and Rotated MNIST in task-, class-, and domain-incremental scenarios
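
One way to read the regularization idea above, as a rough sketch and not necessarily the paper's exact objective, is a quadratic penalty (in the spirit of EWC-style methods) applied only to the parameters selected as transferable; the binary mask m_i, the anchor weights \theta_i^{*}, and the strength \lambda below are illustrative symbols:

\mathcal{L}(\theta) = \mathcal{L}_{\text{task}}(\theta) + \lambda \sum_i m_i \, (\theta_i - \theta_i^{*})^2

where \theta^{*} are the parameters at the end of the previous task and m_i = 1 only for parameters judged important for transfer, leaving the remaining (m_i = 0) parameters free to adapt to the new task.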

AI-generated summary


This paper proposes a new approach to continual learning of visual tasks that identifies and retains the most transferable parts of a neural network's representations when transitioning between tasks. It is inspired by the brain's event models, which update mainly at event boundaries. The method leverages redundancy in contrastively learned embeddings to regularize only the parts that perform well on the first batch of data from a new task. This identifies the parameters that matter for transfer, freeing the rest of the network to learn new features. Evaluated on CIFAR10, TinyImagenet, and Rotated MNIST, the approach achieves state-of-the-art performance in task-, class-, and domain-incremental scenarios.
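
As a concrete illustration of the selection step described in the summary, here is a minimal PyTorch sketch. It assumes a simple encoder with a projection head; the names (TinyEncoder, select_transferable_units, keep_ratio) and the activity-based scoring rule are hypothetical stand-ins for the paper's actual procedure, not the authors' implementation.

import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Toy contrastive encoder: a small backbone plus a projection head."""
    def __init__(self, in_dim: int = 784, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.head = nn.Linear(256, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))


def select_transferable_units(encoder: nn.Module, first_batch: torch.Tensor,
                              keep_ratio: float = 0.5) -> torch.Tensor:
    """Score embedding units on the first batch of new-task data and return
    the indices of the units that appear to transfer well (toy criterion)."""
    with torch.no_grad():
        z = encoder(first_batch)              # (batch, embed_dim) embeddings
        scores = z.abs().mean(dim=0)          # crude per-unit activity proxy
    k = max(1, int(keep_ratio * scores.numel()))
    return scores.topk(k).indices             # units to keep stable


def selective_penalty(encoder: TinyEncoder, anchor: dict,
                      keep_units: torch.Tensor,
                      reg_strength: float = 100.0) -> torch.Tensor:
    """Quadratic drift penalty on the projection rows feeding the selected
    units; every other parameter stays fully plastic."""
    w_new = encoder.head.weight               # rows correspond to embedding units
    w_old = anchor["head.weight"]             # snapshot from the previous task
    drift = (w_new[keep_units] - w_old[keep_units]).pow(2).sum()
    return reg_strength * drift


# Usage at a detected task boundary (first batch of the new task arrives):
encoder = TinyEncoder()
first_batch = torch.randn(32, 784)            # stand-in for new-task images
anchor = {n: p.detach().clone() for n, p in encoder.named_parameters()}
keep = select_transferable_units(encoder, first_batch)
penalty = selective_penalty(encoder, anchor, keep)   # add to the new-task loss

In this sketch, the penalty would be added to the new task's contrastive loss, so only the selected projection rows are anchored to their previous values while the rest of the network remains free to change.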
