Published on: 2 May 2024
Primary Category: Computation and Language
Paper Authors: Dhananjay Ashok, Barnabas Poczos
Instruction tuning offers a new approach to controllable text generation
An algorithm is introduced to create constraint datasets without human curation
A new benchmark, ConGenBench, is compiled from 17 datasets spanning 18 constraints
Prompt-based methods outperform specialized controllable generation methods
Performance is competitive with humans on stylistic tasks; gaps remain on structural constraints
Instruction tuning enables controllable text generation
This paper explores instruction tuning of large language models as an approach to controllable text generation. The authors introduce an algorithm that automatically generates constraint datasets from only a task dataset and a natural-language description of the constraint. They benchmark instruction-tuned models on a new testbed, ConGenBench, and find that prompting outperforms other controllable generation methods, though challenges remain with structural constraints.
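To make the prompt-based setup concrete, here is a minimal sketch of how a task instruction and a natural-language constraint might be combined into a single prompt for an instruction-tuned model, with a toy lexical verifier for checking the output. The template, function names, and keyword check are illustrative assumptions, not the paper's exact method.

```python
# Sketch of prompt-based controllable generation: the task and the constraint
# are stated in plain language inside one prompt. Template and verifier below
# are assumptions for illustration, not the paper's implementation.

def build_constrained_prompt(task: str, constraint: str) -> str:
    """Compose a prompt stating the task and the constraint in natural language."""
    return (
        f"{task}\n"
        f"Constraint: {constraint}\n"
        "Response:"
    )

def satisfies_keyword_constraint(text: str, keywords: list[str]) -> bool:
    """Toy verifier for a simple lexical constraint: all keywords must appear."""
    lowered = text.lower()
    return all(k.lower() in lowered for k in keywords)

prompt = build_constrained_prompt(
    "Write a one-sentence product description for a bicycle.",
    "Mention the words 'lightweight' and 'aluminum'.",
)
print(prompt)

# A model's output would then be screened with the verifier, e.g.:
candidate = "A lightweight aluminum frame built for city commuting."
print(satisfies_keyword_constraint(candidate, ["lightweight", "aluminum"]))
```

In this framing, controllability comes entirely from the prompt: no specialized decoding procedure or auxiliary classifier is needed, which is what makes the approach attractive compared to dedicated controllable-generation methods.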