Learning to Fuse Things and Stuff
Abstract
We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict in a single feed-forward pass both things and stuff segmentations. We explicitly constrain these two output distributions through a global things and stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmarks for panoptic segmentation as well as on the individual semantic and instance segmentation tasks.
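The abstract's key idea is constraining the instance (things) and semantic (stuff) heads through a shared things-vs-stuff binary mask. A minimal sketch of such a consistency term, assuming softmax semantic probabilities and soft instance masks (the function name, shapes, and the squared-error penalty are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def tasc_consistency_loss(sem_probs, inst_masks, thing_class_ids):
    """Hypothetical sketch of a TASC-style cross-task consistency term.

    sem_probs:       (C, H, W) softmax output of the semantic (stuff) head.
    inst_masks:      (N, H, W) soft masks from the instance (things) head.
    thing_class_ids: indices of 'thing' classes in the semantic output.
    """
    # "Things" foreground as seen by the semantic head: probability mass
    # assigned to any thing class at each pixel.
    sem_things = sem_probs[thing_class_ids].sum(axis=0)

    # "Things" foreground as seen by the instance head: a pixel is
    # foreground if any instance covers it (soft union via per-pixel max).
    if len(inst_masks):
        inst_things = inst_masks.max(axis=0)
    else:
        inst_things = np.zeros_like(sem_things)

    # Penalize disagreement between the two foreground estimates.
    return float(np.mean((sem_things - inst_things) ** 2))
```

When the two heads agree on which pixels belong to things, the term vanishes; gradients from it push both output distributions toward a common foreground mask.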
Forward citations
Cited by 1 Pith paper
- MambaPanoptic: A Vision Mamba-based Structured State Space Framework for Panoptic Segmentation — MambaPanoptic replaces CNN and transformer components with Mamba blocks in a feature pyramid and kernel generator, achieving higher panoptic quality than PanopticDeepLab and PanopticFCN on Cityscapes and COCO while us...