Efficient vision transformers for semantic segmentation

Published on: 11 October 2023

Primary Category: Computer Vision and Pattern Recognition

Paper Authors: Xu Zheng, Yunhao Luo, Pengyuan Zhou, Lin Wang

Key Details

Proposes visual-linguistic feature distillation to transfer visual and linguistic knowledge

Introduces pixel-wise decoupled distillation to separate target and non-target classes (see the sketch after this list)

Achieves state-of-the-art performance, surpassing prior knowledge-distillation methods by more than 200% of their gain

Enables efficient vision transformers with knowledge transferred from CNNs

Presents C2VKD, the first framework designed for CNN-to-ViT knowledge distillation
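
The pixel-wise decoupled term can be pictured as follows. This is a minimal sketch, assuming a standard dense-prediction setup in which each pixel's logits are split into the ground-truth (target) class and the remaining (non-target) classes, with each part distilled by its own KL term. The function name, temperature tau, and weights alpha/beta are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F


def pixelwise_decoupled_kd(student_logits, teacher_logits, labels,
                           tau=1.0, alpha=1.0, beta=1.0, ignore_index=255):
    """student_logits, teacher_logits: (B, C, H, W); labels: (B, H, W)."""
    b, c, h, w = student_logits.shape
    s = student_logits.permute(0, 2, 3, 1).reshape(-1, c)   # (B*H*W, C)
    t = teacher_logits.permute(0, 2, 3, 1).reshape(-1, c)
    y = labels.reshape(-1)
    valid = y != ignore_index                                # drop unlabeled pixels
    s, t, y = s[valid], t[valid], y[valid]

    p_s = F.softmax(s / tau, dim=1)
    p_t = F.softmax(t / tau, dim=1)

    # Target vs. non-target probability mass per pixel (a 2-bin distribution).
    idx = y.unsqueeze(1)
    pt_s = p_s.gather(1, idx)
    pt_t = p_t.gather(1, idx)
    binary_s = torch.cat([pt_s, 1.0 - pt_s], dim=1).clamp_min(1e-8)
    binary_t = torch.cat([pt_t, 1.0 - pt_t], dim=1).clamp_min(1e-8)
    target_loss = F.kl_div(binary_s.log(), binary_t, reduction='batchmean')

    # Distribution over the non-target classes only (target logit masked out).
    mask = torch.ones_like(p_s).scatter_(1, idx, 0.0)
    nt_s = F.log_softmax(s / tau - 1000.0 * (1 - mask), dim=1)
    nt_t = F.softmax(t / tau - 1000.0 * (1 - mask), dim=1)
    nontarget_loss = F.kl_div(nt_s, nt_t, reduction='batchmean')

    return alpha * target_loss + beta * nontarget_loss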

AI generated summary

This paper proposes a method to transfer knowledge from large convolutional neural networks to compact vision transformer models for semantic segmentation. It introduces techniques to align heterogeneous representations and reduce the impact of teacher errors.
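
As an illustration of aligning heterogeneous representations, the sketch below projects a ViT student's patch tokens into the channel width and spatial resolution of a CNN teacher's feature map and matches the two with an MSE loss. The FeatureAligner module, the 1x1 projection, and the MSE objective are assumptions for illustration, not the paper's exact C2VKD components.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAligner(nn.Module):
    def __init__(self, vit_dim, cnn_channels):
        super().__init__()
        # 1x1 conv bridges the channel gap between token features and CNN features.
        self.proj = nn.Conv2d(vit_dim, cnn_channels, kernel_size=1)

    def forward(self, vit_tokens, cnn_feat):
        # vit_tokens: (B, N, D) patch tokens; cnn_feat: (B, C, H, W) teacher map.
        b, n, d = vit_tokens.shape
        side = int(n ** 0.5)                     # assume a square patch grid
        student_map = vit_tokens.transpose(1, 2).reshape(b, d, side, side)
        student_map = self.proj(student_map)
        student_map = F.interpolate(student_map, size=cnn_feat.shape[-2:],
                                    mode='bilinear', align_corners=False)
        return F.mse_loss(student_map, cnn_feat)


# Usage: tokens from a ViT student, feature map from a CNN teacher.
aligner = FeatureAligner(vit_dim=384, cnn_channels=512)
loss = aligner(torch.randn(2, 196, 384), torch.randn(2, 512, 64, 64))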
