Navigating the Forest of Thought: How Language Models Can Deliberately Reason and Problem-Solve

Published on: 17 May 2023

Primary Category: Computation and Language

Paper Authors: Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan

Key Details

Tree of Thoughts allows language models to search over diverse reasoning paths in a tree structure

Thoughts are coherent language sequences that serve as intermediate problem-solving steps

Models can look ahead, backtrack, and self-evaluate to guide systematic search

The approach improves performance in mathematical, deductive, and creative tasks

The approach is more flexible than rule-based search and more sample-efficient than end-to-end training

AI-generated summary

This paper proposes a framework called 'Tree of Thoughts' that allows language models to systematically explore different reasoning paths for complex problem solving. By searching over coherent thoughts as intermediate steps, evaluating progress, and looking ahead or backtracking when needed, language models can move beyond the limitations of left-to-right, token-level decoding and achieve superior performance on deductive, mathematical, and creative tasks.
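The search loop described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: `propose` and `evaluate` are hypothetical stand-ins for the language-model calls that generate candidate thoughts and self-evaluate partial progress, and the task (building a digit string whose digits sum to a target) is invented purely to make the sketch runnable.

```python
# Toy sketch of a Tree-of-Thoughts breadth-first search.
# `propose` and `evaluate` are placeholders for LM calls; the task
# (reach a target digit sum) is a made-up stand-in for a real problem.

TARGET = 9
MAX_DEPTH = 3
BEAM_WIDTH = 2

def propose(thought):
    """Generate candidate next thoughts (an LM would do this in practice)."""
    return [thought + d for d in "123"]

def evaluate(thought):
    """Self-evaluate partial progress (an LM judgment in practice).
    Here: negative distance of the digit sum from TARGET, higher is better."""
    return -abs(TARGET - sum(int(c) for c in thought))

def tree_of_thoughts_bfs(root=""):
    frontier = [root]
    for _ in range(MAX_DEPTH):
        # Expand every thought on the frontier, then keep only the
        # best few -- pruning weak branches is what makes the search
        # deliberate rather than a single left-to-right rollout.
        candidates = [t for thought in frontier for t in propose(thought)]
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:BEAM_WIDTH]
    return max(frontier, key=evaluate)

print(tree_of_thoughts_bfs())  # best digit string found after 3 steps
```

Swapping the breadth-first loop for depth-first search with backtracking, as the paper also explores, only changes how the frontier is maintained; the propose/evaluate interface stays the same.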
