From left to right, we show the input video from the VSPW dataset and the segmentation masks generated by EmerDiff (SD), Ours (SVD), and Ours (SD), respectively.
We introduce the first zero-shot approach for Video Semantic Segmentation (VSS) based on pre-trained diffusion models.
A growing line of research employs diffusion models for downstream vision tasks by exploiting their deep understanding of image semantics. Yet most of these approaches target image-level tasks such as semantic correspondence and segmentation, with less attention paid to video tasks such as VSS. In principle, diffusion-based image semantic segmentation approaches can be applied to videos in a frame-by-frame manner. However, we find their performance on videos to be subpar because they do not model the temporal information inherent in video data. To address this, we introduce a framework tailored for VSS based on pre-trained image and video diffusion models. We propose building a scene context model from the diffusion features, where the model is autoregressively updated to adapt to scene changes. This context model predicts per-frame coarse segmentation maps that are temporally consistent. To refine these maps further, we propose a correspondence-based refinement strategy that aggregates predictions temporally, yielding more confident predictions. Finally, we introduce a masked modulation approach to upsample the coarse maps to full resolution at high quality. Experiments show that our approach significantly outperforms existing zero-shot image semantic segmentation approaches on various VSS benchmarks without any training or fine-tuning. Moreover, it closely rivals supervised Video Semantic Segmentation approaches on the VSPW dataset despite not being explicitly trained for VSS.
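To make the coarse-segmentation stage concrete, below is a minimal sketch, assuming a frozen diffusion backbone exposed as a feature extractor. The names `extract_features` and `zero_shot_vss`, the k-means initialization, and the `momentum` value are illustrative placeholders, not the paper's actual implementation; the correspondence-based refinement and masked-modulation upsampling stages are only indicated in comments.

```python
import torch

def kmeans(x, k, iters=10):
    """Minimal k-means used only to seed the class prototypes. x: (N, C)."""
    centers = x[torch.randperm(x.shape[0])[:k]].clone()
    for _ in range(iters):
        assign = torch.cdist(x, centers).argmin(dim=1)  # nearest-center labels
        for c in range(k):
            pts = x[assign == c]
            if len(pts) > 0:
                centers[c] = pts.mean(dim=0)
    return centers  # (k, C) prototypes

def zero_shot_vss(frames, extract_features, n_classes, momentum=0.1):
    """Coarse per-frame segmentation with an autoregressively updated
    scene context model. `extract_features` stands in for a frozen
    pre-trained diffusion backbone (SD or SVD) returning (C, h, w) features."""
    context, coarse_maps = None, []
    for frame in frames:
        feats = extract_features(frame)  # (C, h, w) diffusion features
        flat = feats.flatten(1).T        # (h*w, C) per-location features
        if context is None:
            # initialize class prototypes by clustering the first frame
            context = kmeans(flat, n_classes)
        # coarse map: label each location with its nearest prototype
        labels = torch.cdist(flat, context).argmin(dim=1)
        coarse_maps.append(labels.reshape(feats.shape[1:]))
        # autoregressive update: blend current per-class feature means into
        # the context so it adapts to gradual scene changes
        for c in range(n_classes):
            mask = labels == c
            if mask.any():
                context[c] = (1 - momentum) * context[c] \
                             + momentum * flat[mask].mean(dim=0)
    # the correspondence-based temporal refinement and masked-modulation
    # upsampling described above would follow; omitted in this sketch
    return coarse_maps
```

The key design point the sketch illustrates is that the context model, not the per-frame clustering, carries the segmentation across frames, which is what keeps the coarse maps temporally consistent.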
From left to right, we show the input video from the Cityscapes dataset and the segmentation masks generated by EmerDiff (SD), Ours (SVD), and Ours (SD), respectively.
From left to right, we show the input video from the CamVid dataset and the segmentation masks generated by EmerDiff (SD), Ours (SVD), and Ours (SD), respectively.