Implicit biases in neural network optimization

Published on:

1 November 2023

Primary Category:

Machine Learning

Paper Authors:

Benoit Dherin

Key Details

Backward error analysis reveals implicit biases in neural network training

Additional flatness regularization terms appear, beneficial for generalization

A conflict term emerges in multitask learning, related to misaligned gradients

In continual learning, a conflict term related to catastrophic forgetting arises

This term involves the Lie bracket of task gradients, a tool from differential geometry
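To make the last bullet concrete: for two gradient vector fields V_i = ∇L_i, the Lie bracket is [V1, V2] = H2·V1 − H1·V2, where H_i is the Hessian of L_i. The sketch below is an illustrative computation for quadratic tasks (the specific losses and matrices are assumptions, not taken from the paper); for quadratics the bracket reduces to a commutator of the task curvatures, so it vanishes exactly when the two tasks' curvatures commute.

```python
import numpy as np

# Hedged sketch: Lie bracket of two gradient vector fields V_i = grad L_i.
# In general [V1, V2](theta) = H2(theta) @ V1(theta) - H1(theta) @ V2(theta),
# where H_i is the Hessian of L_i. For the illustrative quadratic tasks
# L_i(theta) = 0.5 * theta^T A_i theta, the gradient is A_i @ theta and the
# Hessian is A_i, so the bracket equals (A2 @ A1 - A1 @ A2) @ theta: it is
# zero precisely when the task curvatures commute ("aligned" tasks).

def lie_bracket(A1, A2, theta):
    v1, v2 = A1 @ theta, A2 @ theta  # task gradients for quadratic losses
    return A2 @ v1 - A1 @ v2         # H2 V1 - H1 V2

A1 = np.array([[2.0, 0.0], [0.0, 1.0]])  # anisotropic curvature
A2 = np.array([[1.0, 1.0], [1.0, 1.0]])  # does not commute with A1
theta = np.array([1.0, -1.0])
print(lie_bracket(A1, A2, theta))
```

When A2 is replaced by the identity (curvatures that trivially commute), the bracket vanishes for every theta, matching the intuition that aligned tasks produce no conflict term.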

AI-generated summary

This paper analyzes the implicit biases present during neural network training with stochastic gradient descent. Using backward error analysis, the authors derive modified loss functions that reveal beneficial flatness regularization terms as well as detrimental conflict terms between task gradients, shedding light on optimization challenges like catastrophic forgetting.
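A minimal sketch of the flatness regularization the summary refers to, in the style of backward-error-analysis results for gradient descent with step size h, which show the iterates follow (to higher order) the gradient flow of a modified loss L(θ) + (h/4)‖∇L(θ)‖². The quadratic loss below is an illustrative assumption, not the paper's experimental setup.

```python
import numpy as np

# Hedged sketch: modified loss from backward error analysis of gradient
# descent with step size h (implicit gradient regularization):
#     L_mod(theta) = L(theta) + (h / 4) * ||grad L(theta)||^2
# The extra term penalizes large gradients, biasing training toward
# flatter regions. L below is an illustrative quadratic: L = 0.5 ||theta||^2.

def loss(theta):
    return 0.5 * float(theta @ theta)

def grad(theta):
    return theta  # gradient of the quadratic loss above

def modified_loss(theta, h):
    g = grad(theta)
    return loss(theta) + (h / 4.0) * float(g @ g)

theta = np.array([1.0, 2.0])
print(modified_loss(theta, h=0.1))  # loss 2.5 plus penalty 0.125
```

The penalty scales with the step size h, so larger learning rates induce a stronger implicit bias toward flat minima, which is one way such analyses connect optimizer hyperparameters to generalization.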
