How linguistic knowledge emerges in language models

Published on: 25 October 2023

Primary Category: Computation and Language

Paper Authors: Max Müller-Eberstein, Rob van der Goot, Barbara Plank, Ivan Titov

Key Details

Syntax is acquired rapidly, within the first 0.5% of training

Later gains stem from open-domain knowledge and contextualization

Semantics and reasoning require more data and longer training

Related tasks share information, especially during the critical early learning phase

AI-generated summary

This paper analyzes how different types of linguistic knowledge emerge and interact during language model pre-training. Using information-theoretic probes, the authors track the development of syntactic, semantic, and reasoning capabilities over 2 million training steps. They find distinct learning phases: knowledge is acquired rapidly in a critical early phase, then refined through contextualization and open-domain learning. Syntax emerges early, while semantics and reasoning require more data. Throughout training, related tasks share information, a pattern that could inform multi-task learning.
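
The summary does not spell out what an information-theoretic probe computes. One common instantiation is minimum description length (MDL) probing, which scores a representation by the online (prequential) codelength a simple probe needs to encode the task labels: the shorter the code, the more extractable the linguistic property. Below is a minimal sketch of that idea in Python with scikit-learn; the function name `online_codelength`, the block fractions, and the toy data are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def online_codelength(X, y, fractions=(0.05, 0.1, 0.2, 0.4, 0.7, 1.0)):
    """Prequential (online) codelength, in bits, of labels y given features X.

    A linear probe is trained on a growing prefix of the data; each new
    block is first encoded under the current probe (its cross-entropy is
    added to the total), then absorbed into the training prefix. Lower
    codelength means the labels are more easily extractable from X.
    """
    n = len(y)
    n_classes = len(np.unique(y))
    sizes = [int(f * n) for f in fractions]
    # The first block is sent with a uniform code: |block| * log2(K) bits.
    # (Assumes every class occurs in that first block.)
    total_bits = sizes[0] * np.log2(n_classes)
    for start, end in zip(sizes[:-1], sizes[1:]):
        probe = LogisticRegression(max_iter=1000).fit(X[:start], y[:start])
        probs = probe.predict_proba(X[start:end])
        # log_loss returns the mean loss in nats; scale to total bits.
        nats = log_loss(y[start:end], probs, labels=probe.classes_) * (end - start)
        total_bits += nats / np.log(2)
    return total_bits

# Toy usage with random data (a real run would use checkpoint activations
# as X and e.g. POS tags as y, computed once per saved checkpoint).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))
y = rng.integers(0, 5, size=2000)
bits = online_codelength(X, y)
print(f"codelength: {bits:.0f} bits "
      f"(uniform baseline: {2000 * np.log2(5):.0f} bits)")
```

Computing this codelength per task at each saved checkpoint would yield learning curves of the kind the summary describes: a sharp early drop for syntactic labels, and a slower, later decline for semantic and reasoning tasks.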
