
Training versatile AI agents

Published on: 8 February 2024

Primary Category: Artificial Intelligence

Paper Authors: Zane Durante, Bidipta Sarkar, Ran Gong, Rohan Taori, Yusuke Noda, Paul Tang, Ehsan Adeli, Shrinidhi Kowshika Lakshmikanth, Kevin Schulman, Arnold Milstein, Demetri Terzopoulos, Ade Famoti, Noboru Kuno, Ashley Llorens, Hoi Vo, Katsu Ikeuchi, Li Fei-Fei, Jianfeng Gao, Naoki Wake, Qiuyuan Huang


Key Details

Proposes multi-task framework to train versatile AI agents

Uses robotics, gaming, video and text data for pre-training

Shows agent capabilities in robotics, gaming, and healthcare

Model understands text, images, and video, and can take actions

Approach enables developing generalist interactive systems

AI-generated summary


This paper proposes a framework for training artificial intelligence agents that can understand and act in a variety of contexts. The framework uses a multi-task training approach so that agents can process visual, language, and action data from diverse domains such as robotics, gaming, and healthcare. After training, the model generates relevant outputs and takes sensible actions when evaluated in interactive settings.
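
As a rough illustration of the multi-task, multimodal setup described above, the sketch below shows a toy agent model that fuses image and text inputs and predicts both language tokens and discrete actions, trained with a summed per-task loss. Everything here (the MultimodalAgent name, module choices, dimensions, and the combined loss) is an illustrative assumption for exposition, not the paper's actual architecture or training code.

```python
import torch
import torch.nn as nn


class MultimodalAgent(nn.Module):
    """Toy agent that fuses image and text inputs and predicts tokens and actions."""

    def __init__(self, vocab_size=32000, num_actions=64, d_model=512):
        super().__init__()
        # Stand-in visual encoder: patchify the image into d_model-dim tokens.
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),
            nn.Flatten(2),  # -> (batch, d_model, num_patches)
        )
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Shared backbone that jointly attends over visual and text tokens.
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Two output heads: next-token prediction and discrete action prediction.
        self.text_head = nn.Linear(d_model, vocab_size)
        self.action_head = nn.Linear(d_model, num_actions)

    def forward(self, images, token_ids):
        vis = self.vision_encoder(images).transpose(1, 2)   # (B, P, D)
        txt = self.text_embed(token_ids)                     # (B, T, D)
        fused = self.fusion(torch.cat([vis, txt], dim=1))    # (B, P+T, D)
        return self.text_head(fused), self.action_head(fused.mean(dim=1))


# One illustrative multi-task training step on a synthetic batch: the text loss
# and the action loss are summed so the shared backbone learns from both tasks.
model = MultimodalAgent()
images = torch.randn(2, 3, 64, 64)
tokens = torch.randint(0, 32000, (2, 16))
text_targets = torch.randint(0, 32000, (2, 16))
actions = torch.randint(0, 64, (2,))

text_logits, action_logits = model(images, tokens)
text_loss = nn.functional.cross_entropy(
    text_logits[:, -16:].reshape(-1, 32000), text_targets.reshape(-1)
)
action_loss = nn.functional.cross_entropy(action_logits, actions)
(text_loss + action_loss).backward()
```

In an actual multi-task pre-training run, batches from different domains (e.g. robotics trajectories, gameplay recordings, video-text pairs) would be interleaved so the same backbone is optimized on all tasks jointly.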
