Reconstructing Scenes without Rain from Images

Published on:

17 April 2024

Primary Category:

Computer Vision and Pattern Recognition

Paper Authors:

Xianqiang Lyu,

Hui Liu,

Junhui Hou


Key Details

Proposes RainyScape method to reconstruct rain-free scenes from multi-view rainy images

Employs neural rendering module to obtain low-frequency scene representation

Introduces rain prediction module and rain embedding to model rain characteristics

Defines adaptive gradient-based loss to distinguish rain streaks from scene details

Achieves state-of-the-art performance in removing rain and rendering clean images

AI generated summary

This paper proposes RainyScape, an unsupervised method for reconstructing clear, rain-free scenes from collections of rainy images captured from multiple viewpoints. It consists of two main components: a neural rendering module that obtains a low-frequency scene representation, and a rain prediction module that models rain characteristics and distinguishes rain streaks from scene details. By jointly optimizing these modules under a specialized adaptive gradient-based loss, the method effectively eliminates rain streaks and renders high-quality images. Experiments with both neural radiance fields and 3D Gaussian splatting backbones demonstrate state-of-the-art performance in reconstructing rain-free scenes. The framework is flexible, and the constructed rainy-scene dataset enables further research.
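To make the loss idea concrete, here is a minimal sketch of one way a gradient-based loss could separate rain streaks from scene details: rain streaks tend to produce strong gradients oriented along a dominant direction, so pixels whose gradient orientation aligns with an assumed streak angle are softly attributed to the rain branch, while the remaining pixels constrain the rain-free rendering. This is an illustrative assumption, not the paper's actual formulation; all function names, weights, and the additive rain model below are hypothetical.

```python
import numpy as np

def adaptive_gradient_loss(rendered, rain_residual, observed,
                           rain_angle_rad, sharpness=4.0, leak_weight=0.1):
    """Hypothetical sketch of an adaptive gradient-based loss.

    Pixels whose local gradient orientation aligns with the assumed
    rain-streak direction are softly assigned to the rain branch; the
    rest constrain the low-frequency scene rendering. All names and
    weights here are illustrative, not RainyScape's actual loss.
    """
    # Local image gradients of the rainy observation (row, column order).
    gy, gx = np.gradient(observed.astype(np.float64))
    orient = np.arctan2(gy, gx)          # gradient orientation per pixel
    mag = np.hypot(gx, gy)               # gradient magnitude per pixel

    # Alignment in [0, 1]: high where the gradient matches the rain angle.
    align = np.cos(orient - rain_angle_rad) ** 2

    # Soft mask: strong, rain-aligned gradients get attributed to rain.
    w_rain = 1.0 / (1.0 + np.exp(-sharpness * (align * mag - mag.mean())))

    # Assume an additive rain model: rainy image = scene + rain residual.
    recon = rendered + rain_residual
    data_term = np.mean((recon - observed) ** 2)

    # Penalize rain energy leaking into regions the mask calls "scene".
    leak_term = np.mean((1.0 - w_rain) * rain_residual ** 2)
    return data_term + leak_weight * leak_term
```

In this sketch the soft mask plays the role the paper assigns to the adaptive weighting: scene edges at arbitrary orientations are left to the rendering branch, while oriented high-frequency streaks are absorbed by the rain residual.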
