Published on: 6 December 2023
Primary Category: Computer Vision and Pattern Recognition
Paper Authors: Xiaobo Yang, Xiaojin Gong
Proposes a method that leverages CLIP and SAM for weakly supervised semantic segmentation
Designs a coarse-to-fine framework with learned prompts
Applies a SAM-based seeding module to generate segmentation seeds
Achieves state-of-the-art performance on PASCAL VOC 2012
Foundation models for weakly supervised segmentation
This paper proposes a method to leverage foundation models like CLIP and SAM to generate high-quality segmentation masks using only image-level labels. A coarse-to-fine framework with learned prompts is designed to produce segmentation seeds. Experiments show state-of-the-art performance on PASCAL VOC 2012.
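The coarse-to-fine idea described above — a CLIP-style image–text similarity map yielding coarse class seeds, which a SAM-like promptable segmenter then refines into masks — can be illustrated with a toy sketch. Everything here is an assumption for illustration: `coarse_seeds`, `refine_with_points`, the thresholds, and the flood-fill stand-in for SAM are not the paper's actual implementation.

```python
import numpy as np

def coarse_seeds(sim_map, thresh=0.6):
    """Coarse stage: threshold a CLIP-style image-text similarity map
    (H x W, values in [0, 1]) into binary seed pixels (illustrative only)."""
    return sim_map >= thresh

def refine_with_points(mask_prior, seeds):
    """Fine stage stand-in for a SAM-like promptable segmenter: grow each
    seed into the connected high-prior region around it (toy flood fill)."""
    H, W = seeds.shape
    out = np.zeros_like(seeds)
    stack = [tuple(p) for p in np.argwhere(seeds)]
    while stack:
        y, x = stack.pop()
        if out[y, x] or not mask_prior[y, x]:
            continue
        out[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                stack.append((ny, nx))
    return out

# Toy example: a 5x5 similarity map with one confident object region.
sim = np.array([
    [0.1, 0.1, 0.2, 0.1, 0.0],
    [0.1, 0.7, 0.8, 0.2, 0.0],
    [0.1, 0.9, 0.8, 0.3, 0.0],
    [0.0, 0.2, 0.3, 0.1, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0],
])
prior = sim >= 0.3           # stand-in for SAM's object prior
seeds = coarse_seeds(sim)    # 4 confident seed pixels
mask = refine_with_points(prior, seeds)
print(int(seeds.sum()), int(mask.sum()))  # → 4 6
```

The point of the sketch is the division of labor: the coarse stage only has to be right about a few high-confidence pixels per class, and the refinement stage extends those pixels to full object extent — which is why image-level labels alone can suffice.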
Weakly-supervised semantic segmentation with image labels
Fine-tuning models for medical image segmentation
Evaluating foundation models for dense recognition
Nuclei Segmentation with Weak Supervision
Weak Supervision for Semantic Segmentation in Driving Scenes
Panoptic open-vocabulary segmentation with SAM and CLIP