Impact of prompt syntax on knowledge retrieval from language models

Paper Authors:

Stephan Linzbach, Dimitar Dimitrov, Laura Kallmeyer, Kilian Evang, Hajira Jabeen, Stefan Dietze

Key Details

Clausal-syntax prompts retrieve knowledge more consistently than appositive-syntax prompts (see the prompt sketch after this list)

Range information boosts knowledge retrieval more than domain information

Models decrease response uncertainty more with clausal syntax prompts

RoBERTa combines supplementary information most reliably with clausal syntax

All models perform better overall with clausal syntax prompts
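
For illustration, here is a minimal LAMA-style cloze-probing sketch using the Hugging Face transformers fill-mask pipeline. The templates below are hypothetical paraphrases chosen to show the clausal/appositive contrast and added range information; they are not the actual CONPARE-LAMA templates.

```python
# Minimal cloze-probing sketch with hypothetical templates; the actual
# CONPARE-LAMA templates and relations differ.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

prompts = {
    # Clausal syntax: the relation sits in a relative clause.
    "clausal": "Paris, which is the capital of <mask>.",
    # Appositive syntax: the relation is expressed as an apposition.
    "appositive": "Paris, the capital of <mask>.",
    # Range information: the expected answer type ("country") is stated.
    "clausal+range": "Paris, which is the capital of the country <mask>.",
}

for name, prompt in prompts.items():
    top = fill(prompt, top_k=3)
    print(name, [(p["token_str"].strip(), round(p["score"], 3)) for p in top])
```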

AI-generated summary

This paper introduces a new benchmark, CONPARE-LAMA, to analyze how subtle differences in prompt syntax and semantics affect how accurately language models retrieve relational knowledge. Controlled experiments reveal that prompts with clausal syntax retrieve knowledge more consistently, combine supplementary information more efficiently, and reduce response uncertainty more than prompts with appositive syntax.
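
To make the uncertainty claim concrete: one plausible way to quantify response uncertainty is the entropy of the model's output distribution at the mask position (a reasonable operationalization, though not necessarily the paper's exact metric). A sketch along those lines, again with hypothetical prompts:

```python
# Sketch: response uncertainty as the entropy of the distribution at the
# mask slot (a plausible operationalization, not confirmed by the paper).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def mask_entropy(prompt: str) -> float:
    """Shannon entropy (nats) over the vocabulary at the <mask> position."""
    enc = tok(prompt, return_tensors="pt")
    pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**enc).logits[0, pos]
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * probs.log()).sum())

# Lower entropy means the model is less uncertain about the answer.
print(mask_entropy("Paris, which is the capital of <mask>."))  # clausal
print(mask_entropy("Paris, the capital of <mask>."))           # appositive
```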
