Continual learning through generative models

Published on: 21 December 2023

Primary Category: Machine Learning

Paper Authors: Kamil Deja, Bartosz Cywiński, Jan Rybarczyk, Tomasz Trzciński


Key Details

Proposes Adapt & Align method to align latent spaces of generative models for continual learning without forgetting

Splits training into two phases: a local model that learns the new data, and a global model that consolidates knowledge from all data

Achieves forward and backward transfer between tasks

Applies approach to VAEs, GANs, and classification tasks

Outperforms prior state-of-the-art techniques

AI generated summary

This paper introduces a new method called Adapt & Align for training neural networks to continually learn from new data distributions without forgetting previous knowledge. It splits the training process into two phases: first, a local generative model (such as a VAE or GAN) learns representations of only the new data; second, these representations are aligned with and consolidated into a global model that encodes all past data. Compared to prior techniques, this better enables forward transfer (improved performance on new tasks thanks to past knowledge) and backward transfer (improved reconstruction of old data when exposed to new, similar data). The authors demonstrate the approach on image datasets with VAEs and GANs, and show how the aligned generative representations can be leveraged for downstream tasks such as classification.
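The two-phase loop described above can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: instead of a VAE or GAN it uses linear PCA "autoencoders" as stand-in generative models, and it consolidates the global model by refitting on a mix of generative replay from the old global model and samples from the new local model. All function names and the toy tasks are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_model(X, k):
    """Fit a toy linear 'generative model': data mean + top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]  # decoder parameters

def encode(X, model):
    mu, W = model
    return (X - mu) @ W.T

def decode(Z, model):
    mu, W = model
    return Z @ W + mu

def sample(model, n):
    """Generate replay data by decoding latents drawn from a standard Gaussian."""
    mu, W = model
    z = rng.normal(size=(n, W.shape[0]))
    return decode(z, model)

def recon_error(X, model):
    return float(np.mean((X - decode(encode(X, model), model)) ** 2))

# Two synthetic "tasks", each living in a different low-dimensional subspace.
d, k = 20, 5
task1 = rng.normal(size=(500, k)) @ rng.normal(size=(k, d))
task2 = rng.normal(size=(500, k)) @ rng.normal(size=(k, d))

# Task 1: the global model is simply the first model trained.
global_model = fit_model(task1, k)

# Task 2, phase 1 (adapt): a local model learns only the new data.
local_model = fit_model(task2, k)

# Task 2, phase 2 (align/consolidate): refit the global model on generative
# replay from the old global model mixed with samples from the local model,
# so old knowledge is retained without storing the original task-1 data.
replay = sample(global_model, 500)
new_data = sample(local_model, 500)
global_model = fit_model(np.vstack([replay, new_data]), 2 * k)

# The consolidated global model covers both tasks: reconstruction error on
# the old task stays low even though its raw data was never revisited.
print("task 1 error:", recon_error(task1, global_model))
print("task 2 error:", recon_error(task2, global_model))
```

The key design point carried over from the paper is the separation of concerns: the local model is free to fit the new distribution aggressively, while the consolidation step is solely responsible for preserving previously acquired knowledge in the global model.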
