Paper Title:
A Survey on Occupancy Perception for Autonomous Driving: The Information Fusion Perspective
Published on:
8 May 2024
Primary Category:
Computer Vision and Pattern Recognition
Paper Authors:
Huaiyuan Xu, Junliang Chen, Shiyu Meng, Yi Wang, Lap-Pui Chau
3D occupancy perception is an emerging trend in autonomous vehicle perception systems.
It fuses multi-source input data to capture the 3D structure of the environment.
Key methodologies include 2D-to-3D transformation, multi-view fusion, and multi-frame (temporal) fusion.
It shows promise for the precise scene understanding needed to support autonomous driving tasks.
3D perception of vehicle surroundings
This paper surveys recent research on 3D occupancy perception, which seeks to capture the detailed 3D structure around a vehicle so that autonomous driving systems can precisely understand complex environments. It highlights that occupancy perception combines inputs from multiple sensors and fuses information across data sources. Key challenges include converting 2D images into 3D representations, integrating multi-camera and multi-frame observations, and training networks efficiently. The paper analyzes performance on standard benchmark datasets and discusses future opportunities such as robust perception, generalized understanding via language models, and applications in planning.
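To make the 2D-to-3D conversion step concrete, the sketch below shows one simple way (not the survey's specific method) to lift a per-pixel depth map into a binary 3D occupancy grid: back-project pixels through the camera intrinsics, then quantize the resulting points into voxels. All names, the grid layout, and the centering convention are illustrative assumptions.

```python
import numpy as np

def backproject_to_voxels(depth, K, voxel_size=0.5, grid_shape=(20, 20, 10)):
    """Lift a depth map into a binary occupancy grid (illustrative sketch).

    depth: (H, W) array of metric depths along the camera z-axis.
    K: (3, 3) camera intrinsic matrix.
    Returns a boolean grid where True marks voxels hit by a back-projected point.
    """
    H, W = depth.shape
    # Pixel coordinate grid (u, v) in image space.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project: X = depth * K^{-1} [u, v, 1]^T.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # (H*W, 3) ray directions
    pts = rays * depth.reshape(-1, 1)        # scale each ray by its depth
    # Voxelize: quantize points into grid cells and mark them occupied.
    idx = np.floor(pts / voxel_size).astype(int)
    # Center the grid on the camera in x/y; z starts at the camera plane.
    idx[:, 0] += grid_shape[0] // 2
    idx[:, 1] += grid_shape[1] // 2
    keep = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid
```

In a multi-view setting, the same lifting would be applied per camera (after transforming points into a shared ego frame) and the per-view grids fused, e.g. by logical OR or learned aggregation; temporal fusion would additionally warp past grids by the ego motion.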
Learning 3D world models for autonomous driving
Collaborative 3D Semantic Scene Completion
Ego-centric 3D scene understanding
Predicting open-vocabulary 3D scene occupancy from images
Demystifying Autonomous Vehicle Perception: A Plain-Language Guide to the Science Behind Self-Dri...
3D reasoning for autonomous driving