Domain Adaptive Video Segmentation via Temporal Consistency Regularization
Updates
- 08/2021: Check out our domain adaptation for semantic segmentation paper RDA: Robust Domain Adaptation via Fourier Adversarial Attacking (accepted to ICCV 2021). This paper presents RDA, a robust domain adaptation technique that introduces adversarial attacking to mitigate overfitting in UDA. Code available.
- 06/2021: Check out our domain adaptation for panoptic segmentation paper Cross-View Regularization for Domain Adaptive Panoptic Segmentation (accepted to CVPR 2021). We design a domain adaptive panoptic segmentation network that exploits inter-style consistency and inter-task regularization for optimal domain adaptation in panoptic segmentation. Code available.
- 06/2021: Check out our domain generalization paper FSDR: Frequency Space Domain Randomization for Domain Generalization (accepted to CVPR 2021). Inspired by JPEG encoding, which converts spatial images into multiple frequency components (FCs), we propose Frequency Space Domain Randomization (FSDR), which randomizes images in frequency space by keeping domain-invariant FCs and randomizing only domain-variant FCs. Code available.
- 06/2021: Check out our domain adaptation for semantic segmentation paper Scale Variance Minimization for Unsupervised Domain Adaptation in Image Segmentation (accepted to Pattern Recognition 2021). We design a scale variance minimization (SVMin) method that enforces intra-image semantic structure consistency in the target domain. Code available.
- 06/2021: Check out our domain adaptation for object detection paper Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection (accepted to IEEE TMM 2021). We design an uncertainty-aware domain adaptation network (UaDAN) that introduces conditional adversarial learning to align well-aligned and poorly-aligned samples separately, in different manners. Code available.
Paper
Domain Adaptive Video Segmentation via Temporal Consistency Regularization
Dayan Guan, Jiaxing Huang, Aoran Xiao, Shijian Lu
School of Computer Science and Engineering, Nanyang Technological University, Singapore
International Conference on Computer Vision, 2021.
If you find this code useful for your research, please cite our paper:
@inproceedings{guan2021domain,
title={Domain adaptive video segmentation via temporal consistency regularization},
author={Guan, Dayan and Huang, Jiaxing and Xiao, Aoran and Lu, Shijian},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={8053--8064},
year={2021}
}
Abstract
Video semantic segmentation is an essential task for the analysis and understanding of videos. Recent efforts largely focus on supervised video segmentation by learning from fully annotated data, but the learnt models often experience a clear performance drop when applied to videos of a different domain. This paper presents DA-VSN, a domain adaptive video segmentation network that addresses domain gaps in videos via temporal consistency regularization (TCR) over consecutive frames of target-domain videos. DA-VSN consists of two novel and complementary designs. The first is cross-domain TCR, which guides the predictions of target frames to have similar temporal consistency to that of source frames (learnt from annotated source data) via adversarial learning. The second is intra-domain TCR, which guides unconfident predictions of target frames to have similar temporal consistency to confident predictions of target frames. Extensive experiments demonstrate the superiority of the proposed network, which consistently outperforms multiple baselines by large margins.
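Both TCR terms operate on the temporal consistency between the current frame's prediction and the previous frame's prediction warped to the current frame by optical flow. Below is a minimal PyTorch sketch of that consistency computation, for illustration only; it is not the repository's implementation, and the tensor shapes, bilinear warping, and dot-product agreement measure are all assumptions:
import torch
import torch.nn.functional as F

def warp(prev_pred, flow):
    # Warp the previous frame's softmax prediction (B, C, H, W) to the current
    # frame using per-pixel (dx, dy) optical flow of shape (B, 2, H, W).
    b, _, h, w = prev_pred.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w))
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(prev_pred.device)
    x = 2.0 * (grid[:, 0] + flow[:, 0]) / max(w - 1, 1) - 1.0
    y = 2.0 * (grid[:, 1] + flow[:, 1]) / max(h - 1, 1) - 1.0
    coords = torch.stack((x, y), dim=-1)  # (B, H, W, 2), normalized to [-1, 1]
    return F.grid_sample(prev_pred, coords)  # torch 1.2 has no align_corners arg

def temporal_consistency(cur_pred, prev_pred, flow):
    # Per-pixel agreement between the current prediction and the warped
    # previous one; high where consecutive frames predict the same class.
    warped = warp(prev_pred, flow)
    return (cur_pred * warped).sum(dim=1, keepdim=True)  # (B, 1, H, W)

# Cross-domain TCR feeds such consistency signals to a discriminator so target
# frames mimic the source's temporal consistency; intra-domain TCR pushes
# unconfident target predictions toward the consistency of confident ones.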
Installation
- Conda environment:
conda create -n DA-VSN python=3.6
conda activate DA-VSN
conda install -c menpo opencv
pip install torch==1.2.0 torchvision==0.4.0
- Clone the ADVENT:
git clone https://github.com/valeoai/ADVENT.git
pip install -e ./ADVENT
- Clone the repo:
git clone https://github.com/Dayan-Guan/DA-VSN.git
pip install -e ./DA-VSN
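After installation, a quick sanity check can confirm the pinned versions (a throwaway snippet, not part of the repo):
import torch, torchvision
print(torch.__version__)        # expected: 1.2.0
print(torchvision.__version__)  # expected: 0.4.0
print("CUDA available:", torch.cuda.is_available())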
Preparation
- Dataset:
DA-VSN/data/Cityscapes/ % Cityscapes dataset root
DA-VSN/data/Cityscapes/leftImg8bit_sequence % leftImg8bit_sequence_trainvaltest
DA-VSN/data/Cityscapes/gtFine % gtFine_trainvaltest
DA-VSN/data/Viper/ % VIPER dataset root
DA-VSN/data/Viper/train/img % Modality: Images; Frames: *[0-9]; Sequences: 00-77; Format: jpg
DA-VSN/data/Viper/train/cls % Modality: Semantic class labels; Frames: *0; Sequences: 00-77; Format: png
DA-VSN/data/SynthiaSeq/ % SYNTHIA-Seq dataset root
DA-VSN/data/SynthiaSeq/SEQS-04-DAWN % SYNTHIA-SEQS-04-DAWN
- Pre-trained models: Download the pre-trained models and put them in
DA-VSN/pretrained_models
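Before moving on, it may help to verify the layout above with a quick script (a hypothetical helper, not part of the repo; run it from the DA-VSN root):
import os

# Expected directories from the layout above, relative to the DA-VSN root.
expected = [
    "data/Cityscapes/leftImg8bit_sequence",
    "data/Cityscapes/gtFine",
    "data/Viper/train/img",
    "data/Viper/train/cls",
    "data/SynthiaSeq/SEQS-04-DAWN",
    "pretrained_models",
]
for path in expected:
    print(("OK     " if os.path.isdir(path) else "MISSING") + " " + path)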
Optical Flow Estimation
- For quick preparation: download the optical flow estimated on the Cityscapes-Seq validation set here and unzip it in
DA-VSN/data
DA-VSN/data/Cityscapes_val_optical_flow_scale512/ % unzip Cityscapes_val_optical_flow_scale512.zip
- For full preparation: clone flownet2-pytorch:
git clone https://github.com/NVIDIA/flownet2-pytorch.git
- Download the pre-trained FlowNet2 model and put it in
flownet2-pytorch/pretrained_models
- Use flownet2-pytorch to estimate the optical flow
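FlowNet2 writes its estimates as Middlebury .flo files. A minimal reader sketch (a hypothetical helper, not part of this repo; it assumes the standard .flo layout of a float32 magic number 202021.25, int32 width and height, then interleaved (u, v) float32 pairs):
import numpy as np

def read_flo(path):
    # Read one Middlebury .flo file into an (H, W, 2) array of (dx, dy) offsets.
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        assert magic == 202021.25, "not a valid .flo file"
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * w * h)
    return data.reshape(h, w, 2)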
Evaluation on Pretrained Models
- VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python test.py --cfg configs/davsn_viper2city_pretrained.yml
- SYNTHIA-Seq → Cityscapes-Seq:
python test.py --cfg configs/davsn_syn2city_pretrained.yml
Training and Testing
- VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python train.py --cfg configs/davsn_viper2city.yml
python test.py --cfg configs/davsn_viper2city.yml
- SYNTHIA-Seq → Cityscapes-Seq:
python train.py --cfg configs/davsn_syn2city.yml
python test.py --cfg configs/davsn_syn2city.yml
Acknowledgements
This codebase borrows heavily from ADVENT and flownet2-pytorch.
Contact
If you have any questions, please contact: [email protected]