Published on:
1 July 2023
Primary Category:
Computation and Language
Paper Authors:
Anirudh Ajith,
Chris Pan,
Mengzhou Xia,
Ameet Deshpande,
Karthik Narasimhan
Evaluates various instruction selection methods for in-context learning
Covers 13 models and 9 tasks spanning classification, question answering, and generation
Omitting instructions often works best in few-shot settings
Simple generic instructions are very competitive
Automatically generated instructions don't generalize well
Instructing Large Language Models
This paper evaluates different techniques for providing instructions to large language models when prompting them to perform tasks in context. It finds that omitting instructions entirely, or using a simple generic instruction, often works better than more complex automatically generated instructions.
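The comparison described above can be sketched as three prompting conditions for few-shot in-context learning: no instruction, a generic instruction, and a task-specific instruction. A minimal sketch follows; the sentiment task, templates, and instruction strings are illustrative assumptions, not the paper's actual tasks or prompts.

```python
# Sketch of three instruction conditions for few-shot in-context learning.
# The task, demonstrations, and instruction wording are hypothetical.

FEW_SHOT_EXAMPLES = [
    ("The movie was fantastic.", "positive"),
    ("I hated every minute.", "negative"),
]

def build_prompt(query, instruction=None):
    """Assemble a few-shot prompt, optionally prefixed with an instruction."""
    parts = []
    if instruction:
        parts.append(instruction)
    for text, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Input: {text}\nLabel: {label}")
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

query = "A thoroughly enjoyable film."

# Condition 1: no instruction (often the strongest baseline per the paper)
no_instruction = build_prompt(query)

# Condition 2: a simple generic instruction
generic = build_prompt(query, "Complete the following task.")

# Condition 3: a task-specific instruction
specific = build_prompt(
    query, "Classify the sentiment of each input as positive or negative."
)
```

Each prompt string would then be sent to the model under evaluation, and task accuracy compared across the three conditions.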
Training language models with instructions
Using language models for personalized recommendation
Evaluating instruction overrides in AI models
Exploring the effects of instruction format consistency in language model tuning
Evaluating language models on following non-standard instructions
Enhancing language model robustness with code instructions