Zero-Shot Video Semantic Segmentation based on Pre-Trained Diffusion Models

Qian Wang1, Abdelrahman Eldesokey1, Mohit Mendiratta2, Fangneng Zhan2, Adam Kortylewski2, Christian Theobalt2, Peter Wonka1
1KAUST, 2MPI for Informatics

From left to right, we show the input video from the VSPW dataset and the segmentation masks generated by EmerDiff (SD), Ours (SVD), and Ours (SD), respectively.

Abstract

We introduce the first zero-shot approach for Video Semantic Segmentation (VSS) based on pre-trained diffusion models.

A growing line of research employs diffusion models for downstream vision tasks by exploiting their deep understanding of image semantics. However, most of these approaches focus on image-level tasks such as semantic correspondence and segmentation, with less emphasis on video tasks such as VSS. In principle, diffusion-based image semantic segmentation approaches can be applied to videos frame by frame, but we find their performance on videos to be subpar because they do not model the temporal information inherent in video data. To address this, we introduce a framework tailored for VSS that builds on pre-trained image and video diffusion models. We propose constructing a scene context model from the diffusion features, which is autoregressively updated to adapt to scene changes. This context model predicts temporally consistent, per-frame coarse segmentation maps. To refine these maps further, we propose a correspondence-based refinement strategy that aggregates predictions temporally, resulting in more confident predictions. Finally, we introduce a masked modulation approach to upsample the coarse maps to full resolution at high quality. Experiments show that our approach significantly outperforms existing zero-shot image semantic segmentation approaches on various VSS benchmarks without any training or fine-tuning. Moreover, it closely rivals supervised VSS approaches on the VSPW dataset despite not being explicitly trained for VSS.
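As a rough illustration of the correspondence-based refinement described above, the sketch below is a minimal example rather than the paper's implementation: the function name refine_logits, the tensor layouts, the cosine-similarity nearest-neighbour matching, and the window parameter are all our assumptions. It aggregates per-frame coarse class logits across a small temporal window by matching pixels through their diffusion features.

    # Minimal sketch (assumptions, not the paper's exact method): warp the
    # coarse logits of neighbouring frames to the current frame via
    # nearest-neighbour matching of diffusion features, then average them
    # to obtain more confident, temporally consistent predictions.
    import torch
    import torch.nn.functional as F

    def refine_logits(feats, logits, window=2):
        # feats:  (T, C, H, W) per-frame diffusion features
        # logits: (T, K, H, W) per-frame coarse class logits
        T, C, H, W = feats.shape
        flat_feats = F.normalize(feats.flatten(2), dim=1)   # (T, C, HW), unit-norm per pixel
        flat_logits = logits.flatten(2)                      # (T, K, HW)
        refined = []
        for t in range(T):
            acc = [flat_logits[t]]
            for s in range(max(0, t - window), min(T, t + window + 1)):
                if s == t:
                    continue
                # cosine similarity between every pixel of frame t and frame s
                # (HW x HW; fine for a sketch, chunk it for large frames)
                sim = flat_feats[t].transpose(0, 1) @ flat_feats[s]
                match = sim.argmax(dim=1)                    # best match in frame s
                acc.append(flat_logits[s][:, match])         # warped logits of frame s
            refined.append(torch.stack(acc).mean(dim=0))     # temporal aggregation
        return torch.stack(refined).view(T, -1, H, W)

Hard nearest-neighbour matching is only one possible choice here; softmax-weighted aggregation over the similarity matrix would be a natural alternative.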

Method

Main workflow
Main workflow of our approach. In Stage 1, we initialize a Scene Context Model with the aggregated diffusion features. In Stage 2, we use the context model to predict coarse masks for the remaining frames. In Stage 3, we use the refined coarse masks to modulate the attention layers of the diffusion process and upsample them to full-resolution segmentation maps.
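To make the three stages concrete, here is a high-level sketch under stated assumptions: the helpers extract_diffusion_features, cluster_features, and modulate_attention_and_decode are hypothetical stand-ins for the actual feature aggregation, context-model initialization, and masked-modulation components, and the prototype representation with a running-mean update is an illustrative choice, not the paper's exact formulation.

    # Illustrative three-stage pipeline (assumed helpers, not the released code).
    import torch
    import torch.nn.functional as F

    def segment_video(frames, num_classes, init_frames=1):
        # Stage 1: build a scene context model (here: class prototypes) from the
        # aggregated diffusion features of the initial frame(s).
        feats = [extract_diffusion_features(f) for f in frames]        # each (C, H, W)
        prototypes = cluster_features(torch.stack(feats[:init_frames]),
                                      num_classes)                      # (K, C), assumed helper

        coarse_masks = []
        for feat in feats:
            # Stage 2: predict a coarse mask by assigning every pixel to the
            # nearest prototype, then autoregressively update the prototypes
            # so the context model adapts to scene changes.
            C, H, W = feat.shape
            pix = F.normalize(feat.flatten(1).t(), dim=1)               # (HW, C)
            sim = pix @ F.normalize(prototypes, dim=1).t()              # (HW, K)
            mask = sim.argmax(dim=1).view(H, W)                         # coarse label map
            coarse_masks.append(mask)
            for k in range(num_classes):                                # running-mean update
                if (mask == k).any():
                    prototypes[k] = 0.9 * prototypes[k] + 0.1 * feat[:, mask == k].mean(dim=1)

        # Stage 3: use the (refined) coarse masks to modulate the attention layers
        # of the diffusion model and decode full-resolution segmentation maps.
        return [modulate_attention_and_decode(f, m) for f, m in zip(frames, coarse_masks)]

In practice, the coarse masks produced in Stage 2 would first pass through the correspondence-based refinement described in the abstract before being used for modulation in Stage 3.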

More results

From left to right, we show the input video from the Cityscapes dataset and the segmentation masks generated by EmerDiff (SD), Ours (SVD), and Ours (SD), respectively.



From left to right, we show the input video from the CamVid dataset and the segmentation masks generated by EmerDiff (SD), Ours (SVD), and Ours (SD), respectively.