Consistent view synthesis without 3D

Published on: 7 December 2023

Primary Category: Computer Vision and Pattern Recognition

Paper Authors: Chuanxia Zheng, Andrea Vedaldi

Key Details

Encodes target views via distributed per-pixel ray conditioning (see the sketch after this list)

Improves multi-view consistency with attention & noise sharing

Avoids expensive 3D representations used in other works

Demonstrates excellent generalization to unseen categories

Outperforms recent state-of-the-art models on pose accuracy
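
The per-pixel ray conditioning named above can be pictured as a small geometric preprocessing step: every target pixel is tagged with the camera ray it looks along, and this ray map is handed to the generator as conditioning. Below is a minimal sketch under common assumptions (pinhole intrinsics, a Plücker direction-plus-moment parameterization); the function name `pixel_ray_map` and its arguments are illustrative, not the paper's code.

```python
import numpy as np

def pixel_ray_map(K, cam_to_world, height, width):
    """Hypothetical per-pixel ray conditioning map.

    K            : (3, 3) pinhole intrinsics of the target camera
    cam_to_world : (4, 4) target camera pose
    Returns a (height, width, 6) array holding, for every pixel, the
    Pluecker coordinates (direction, moment) of the ray through it.
    """
    # Pixel centres in image coordinates.
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)           # (H, W, 3)

    # Back-project to camera-space directions, then rotate to world space.
    dirs = pix @ np.linalg.inv(K).T                            # (H, W, 3)
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    dirs = dirs @ R.T
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Pluecker moment: camera centre crossed with the ray direction.
    moment = np.cross(np.broadcast_to(t, dirs.shape), dirs)
    return np.concatenate([dirs, moment], axis=-1)             # (H, W, 6)
```

Such a map expresses the target pose pixel by pixel, which is why it can stand in for a single global pose embedding without building any explicit 3D representation.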

AI-generated summary

This paper introduces Free3D, a novel approach for generating consistent 360-degree novel views from a single input image, without needing slow and memory-intensive explicit 3D representations. It achieves this through a new per-pixel ray conditioning technique to precisely encode target camera poses, alongside multi-view attention and noise sharing for improved consistency.
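
The noise sharing mentioned in the summary admits a simple reading: when sampling several target views of the same object with a diffusion model, start them from correlated initial noise so the denoised results agree more closely. The sketch below is one plausible implementation under that assumption; the function name, the `shared_weight` blend, and the renormalization step are illustrative choices, not the paper's exact scheme.

```python
import torch

def shared_initial_noise(num_views, channels, height, width,
                         shared_weight=0.8, generator=None):
    """Initial diffusion noise that is partly shared across target views."""
    common = torch.randn(1, channels, height, width, generator=generator)
    per_view = torch.randn(num_views, channels, height, width, generator=generator)

    # Blend a single common noise map with independent per-view noise,
    # then rescale so each latent keeps unit variance for the sampler.
    noise = shared_weight * common + (1.0 - shared_weight) * per_view
    return noise / (shared_weight ** 2 + (1.0 - shared_weight) ** 2) ** 0.5
```

Starting every target view from a correlated latent is a lightweight way to couple their samples, complementing the multi-view attention that lets the views exchange information during denoising.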
