Semi-supervised Semantic Segmentation with High- and Low-level Consistency

Overview

This PyTorch repository contains the code for our work Semi-supervised Semantic Segmentation with High- and Low-level Consistency. The approach uses two network branches that link semi-supervised classification with semi-supervised segmentation, including self-training. It attains significant improvements over existing methods, especially when trained with very few labeled samples. On several standard benchmarks - PASCAL VOC 2012, PASCAL-Context, and Cityscapes - the approach achieves new state-of-the-art results in semi-supervised learning.

We propose a two-branch approach to the task of semi-supervised semantic segmentation. The lower branch predicts pixel-wise class labels and is referred to as the Semi-Supervised Semantic Segmentation GAN (s4GAN). The upper branch performs image-level classification and is denoted as the Multi-Label Mean Teacher (MLMT).

This repository contains the source code for the s4GAN branch. The MLMT branch is adapted from the Mean Teacher work for semi-supervised classification. Instructions for setting up the MLMT branch are given below.
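
As background for the s4GAN training below, the branch couples GAN-based training with self-training: segmentation predictions on unlabeled images whose discriminator score exceeds a threshold (the --threshold-st argument used later) are reused as pseudo ground truth. The following is only an illustrative sketch of that selection step, not the code from train_s4GAN.py:

import torch
import torch.nn.functional as F

def self_training_loss(seg_logits, disc_scores, threshold=0.6, ignore_label=255):
    # seg_logits:  (N, C, H, W) segmentation-network outputs for unlabeled images
    # disc_scores: (N,) discriminator confidence that each prediction resembles a real label map
    pseudo_labels = seg_logits.argmax(dim=1)        # hard pseudo ground truth, (N, H, W)
    keep = disc_scores > threshold                  # keep only confident predictions
    if keep.sum() == 0:
        return seg_logits.new_zeros(())             # no confident prediction in this batch
    return F.cross_entropy(seg_logits[keep], pseudo_labels[keep], ignore_index=ignore_label)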

Package pre-requisites

The code runs on Python 3 and PyTorch 0.4. The following packages are also required:

pip install scipy tqdm matplotlib numpy opencv-python

Dataset preparation

Download the ImageNet-pretrained ResNet-101 (Link) and place it in ./pretrained_models/

PASCAL VOC

Download the dataset (Link) and extract it to ./data/voc_dataset/

PASCAL Context

Download the annotations (Link) and extract them to ./data/pcontext_dataset/

Cityscapes

Download the dataset from the Cityscapes dataset server (Link). Download the files named 'gtFine_trainvaltest.zip' and 'leftImg8bit_trainvaltest.zip' and extract them to ./data/city_dataset/
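
After extraction, the data layout is expected to look roughly like this (a sketch based on the default paths above; the exact sub-folders inside the VOC and PASCAL-Context directories depend on the downloaded archives):

./pretrained_models/        # ImageNet-pretrained ResNet-101 weights
./data/voc_dataset/
./data/pcontext_dataset/
./data/city_dataset/gtFine/
./data/city_dataset/leftImg8bit/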

Training and Validation on PASCAL-VOC Dataset

Results in the paper are averaged over 3 random splits. The same splits are used for reporting the baseline performance to ensure a fair comparison.
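
The split files passed via --split-id below are pickled lists of image IDs. Purely as an illustration (the helper name, output path, and dictionary keys are assumptions, not the repository's exact format), a labeled/unlabeled split for a given --labeled-ratio could be generated like this:

import pickle
import random

def make_split(all_image_ids, labeled_ratio=0.125, out_file='split_0.pkl', seed=0):
    # Hypothetical helper: sample a labeled subset and pickle the resulting split.
    rng = random.Random(seed)
    ids = list(all_image_ids)
    rng.shuffle(ids)
    num_labeled = int(labeled_ratio * len(ids))
    split = {'labeled': ids[:num_labeled], 'unlabeled': ids[num_labeled:]}
    with open(out_file, 'wb') as f:
        pickle.dump(split, f)
    return split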

Training fully-supervised Baseline (FSL)

python train_full.py    --dataset pascal_voc  \
                        --checkpoint-dir ./checkpoints/voc_full \
                        --ignore-label 255 \
                        --num-classes 21 

Training semi-supervised s4GAN (SSL)

python train_s4GAN.py   --dataset pascal_voc  \
                        --checkpoint-dir ./checkpoints/voc_semi_0_125 \
                        --labeled-ratio 0.125 \
                        --ignore-label 255 \
                        --num-classes 21

Validation

python evaluate.py --dataset pascal_voc  \
                   --num-classes 21 \
                   --restore-from ./checkpoints/voc_semi_0_125/VOC_30000.pth 
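
Evaluation reports mean IoU over the 21 VOC classes while skipping pixels marked with the ignore label (255). A minimal sketch of that metric (illustrative only, not the code in evaluate.py):

import numpy as np

def mean_iou(preds, gts, num_classes=21, ignore_label=255):
    # Accumulate a confusion matrix over (prediction, ground-truth) pairs and average per-class IoU.
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        mask = gt != ignore_label                   # drop ignored pixels
        idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
        conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    return (inter / np.maximum(union, 1)).mean()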

Training MLMT Branch

python train_mlmt.py \
        --batch-size-lab 16 \
        --batch-size-unlab 80 \
        --labeled-ratio 0.125 \
        --exp-name voc_semi_0_125_MLMT \
        --pkl-file ./checkpoints/voc_semi_0_125/train_voc_split.pkl
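
The MLMT branch follows the Mean Teacher scheme: a teacher classifier is maintained as an exponential moving average (EMA) of the student's weights, and a consistency loss aligns their multi-label predictions on differently augmented views (hence the output_ema file used below). A minimal sketch of the EMA update, assuming standard Mean Teacher behaviour rather than quoting train_mlmt.py:

import torch

@torch.no_grad()
def update_ema_teacher(student, teacher, alpha=0.999):
    # Teacher weights become an exponential moving average of the student weights.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)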

Final Evaluation s4GAN + MLMT

python evaluate.py --dataset pascal_voc  \
                   --num-classes 21 \
                   --restore-from ./checkpoints/voc_semi_0_125/VOC_30000.pth \
                   --with-mlmt \
                   --mlmt-file ./mlmt_output/voc_semi_0_125_MLMT/output_ema_raw_100.txt
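
With --with-mlmt, the image-level class predictions from the MLMT branch are used to filter the s4GAN segmentation output: classes the classifier considers absent from an image are suppressed in the pixel-wise prediction. A rough sketch of such a fusion step (the threshold and exact mechanics are assumptions, not the code in evaluate.py):

import numpy as np

def fuse_mlmt(seg_probs, mlmt_scores, class_threshold=0.5):
    # seg_probs:   (C, H, W) per-pixel class probabilities from the s4GAN branch
    # mlmt_scores: (C,) image-level class scores from the MLMT branch
    present = mlmt_scores >= class_threshold
    present[0] = True                               # always keep the background class
    filtered = seg_probs * present[:, None, None]   # zero out channels of absent classes
    return filtered.argmax(axis=0)                  # final per-pixel label map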
    

Training and Validation on PASCAL-Context Dataset

python train_full.py    --dataset pascal_context  \
                        --checkpoint-dir ./checkpoints/pc_full \
                        --ignore-label -1 \
                        --num-classes 60

python train_s4GAN.py  --dataset pascal_context  \
                       --checkpoint-dir ./checkpoints/pc_semi_0_125 \
                       --labeled-ratio 0.125 \
                       --ignore-label -1 \
                       --num-classes 60 \
                       --split-id ./splits/pc/split_0.pkl \
                       --num-steps 60000

python evaluate.py     --dataset pascal_context  \
                       --num-classes 60 \
                       --restore-from ./checkpoints/pc_semi_0_125/VOC_40000.pth

Training and Validation on Cityscapes Dataset

python train_full.py    --dataset cityscapes \
                        --checkpoint-dir ./checkpoints/city_full_0_125 \
                        --ignore-label 250 \
                        --num-classes 19 \
                        --input-size '256,512'  

python train_s4GAN.py   --dataset cityscapes \
                        --checkpoint-dir ./checkpoints/city_semi_0_125 \
                        --labeled-ratio 0.125 \
                        --ignore-label 250 \
                        --num-classes 19 \
                        --split-id ./splits/city/split_0.pkl \
                        --input-size '256,512' \
                        --threshold-st 0.7 \
                        --learning-rate-D 1e-5 

python evaluate.py      --dataset cityscapes \
                        --num-classes 19 \
                        --restore-from ./checkpoints/city_semi_0_125/VOC_30000.pth 

Acknowledgement

Parts of the code have been adapted from: DeepLab-Resnet-Pytorch, AdvSemiSeg, PyTorch-Encoding

Citation

@ARTICLE{8935407,
  author={S. {Mittal} and M. {Tatarchenko} and T. {Brox}},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={Semi-Supervised Semantic Segmentation With High- and Low-Level Consistency}, 
  year={2021},
  volume={43},
  number={4},
  pages={1369-1379},
  doi={10.1109/TPAMI.2019.2960224}}
Comments
  • about Multi-Label Mean-Teacher branch

    I don't exactly understand how to adapt the Mean Teacher code to the segmentation code. Can you explain it in detail, or release the segmentation code with the Mean Teacher model? Thanks!

    opened by Lufei-github 5
  • The results are not consistent

    I ran the command with --dataset pascal_voc --checkpoint-dir ./checkpoints/voc_semi_0_125 --labeled-ratio 0.125 --ignore-label 255 --num-classes 21 and achieved 64.11%, while the paper reports 65.4%. With --dataset pascal_voc --checkpoint-dir ./checkpoints/voc_semi_0_02 --labeled-ratio 0.02 --ignore-label 255 --num-classes 21 I achieved 53.59%, while the paper reports 58.1%. What could be the problem?

    opened by chouqin3 3
  • Labels for VOC

    Hello!

    Good job! Thanks for making it available.

    I was wondering, what is the purpose of using SegmentationClassAug as ground-truth labels instead of SegmentationClass?

    Akmaral

    opened by akmaralAW 2
  • MSCOCO pretrained weights and training script?

    Hey, thank you for sharing your code. In your paper, you report numbers with pre-training on MS-COCO. Can you share the pre-trained weights for MS-COCO and also the training script, if the hyperparameters are different?

    opened by euwern 2
  • Training Settings

    @sud0301 Hi! Thanks for releasing your code. It seems your training setting is a little different from the baseline: they use 20k iterations while you use 40k.

    opened by lxtGH 2
  • A question about the training process

    Hello, thanks for your code. I have a small question: how do I continue training from a saved model? I have trained for 10000 iterations and saved 10000.pth in the checkpoint folder, and now I want to resume from iteration 10000 to the end. Thanks!

    opened by xysaber 1
  • Why use ignore label 250 for the Cityscapes dataset, why not 255?

    Kindly tell how to decide whether to use 250 or 255, since there is no train id 250 defined for the Cityscapes dataset according to https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py

    opened by haiderasad 0
  • About cosine_loss in MLMT

    Hi authors, thank you very much for sharing the code! In the paper, the consistency loss is used for all available samples, but why do you only compute it on unlabeled data in the code? Is there anything wrong with train_mlmt.py?

    opened by HHuiwen 0
  • IoU results problem with train_s4GAN.py

    Hello, thank you for your outstanding research work! I encountered the following problem: when I ran train_s4GAN.py (labeled-ratio 0.125 / PASCAL-VOC dataset / num-steps 35000), the best IoU I got was about 0.65, which is quite different from the 0.698 in the paper. Can you give me some suggestions? Thank you!

    opened by whistlefancy 2
  • save_output_images for cityscapes?

    if args.save_output_images:
        if args.dataset == 'pascal_voc':
            filename = os.path.join(args.save_dir, '{}.png'.format(name[0]))
            color_file = Image.fromarray(colorize(output).transpose(1, 2, 0), 'RGB')
            color_file.save(filename)
        elif args.dataset == 'pascal_context':
            filename = os.path.join(args.save_dir, filename[0])
            scipy.misc.imsave(filename, gt)

    Why are the results of the Cityscapes evaluation not saved?

    opened by haiderasad 2