CSD: Consistency-based Semi-supervised Learning for Object Detection

Overview
Comments
  • How do I train this model on a custom dataset?

    Hi @soo89,

    Wonderful work.

    Could you please help me understand how I can train this semi-supervised model on my own dataset? I only have images and XML files. I have not worked with the VOC/COCO datasets before. Could you please give some suggestions about where I should start? Do I need to split my dataset into labeled and unlabeled sets to perform semi-supervised learning?

    I appreciate your response.
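    One common starting point is to keep the VOC layout and simply split the image IDs into a labeled and an unlabeled set. A hypothetical helper sketching that split (not part of this repo; the function name, directory layout, and 25% labeled fraction are all my assumptions):

    ```python
    import random
    from pathlib import Path

    def split_voc_ids(ann_dir, labeled_fraction=0.25, seed=0):
        """Split VOC-style annotation IDs into a labeled and an unlabeled set.
        The unlabeled set keeps its images; its XML files are simply ignored
        during semi-supervised training."""
        ids = sorted(p.stem for p in Path(ann_dir).glob("*.xml"))
        random.Random(seed).shuffle(ids)
        n_labeled = int(len(ids) * labeled_fraction)
        return ids[:n_labeled], ids[n_labeled:]
    ```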

    opened by qqaadir 8
  • consistency_conf_loss

    In the paper, the consistency_conf_loss is JSD, but in the code it is the sum of two KL terms. How should I understand consistency_conf_loss_a and consistency_conf_loss_b? JSD should be JSD(P‖Q) = ½ KL(P‖M) + ½ KL(Q‖M) with M = (P+Q)/2.

    thanks
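    For reference, JSD is exactly the average of two KL terms taken against the mixture M = (P+Q)/2, which is presumably what the two KL terms in the code implement. A minimal sketch of that identity (my own illustration, not the repo's code):

    ```python
    import torch
    import torch.nn.functional as F

    def jsd(p, q, eps=1e-7):
        """JSD(P||Q) = 0.5*KL(P||M) + 0.5*KL(Q||M), with M = (P+Q)/2.
        p and q are probability distributions over the last dimension."""
        m = 0.5 * (p + q)
        # F.kl_div(input, target) computes KL(target || input),
        # with input given in log space.
        kl_pm = F.kl_div(torch.log(m + eps), p, reduction="batchmean")
        kl_qm = F.kl_div(torch.log(m + eps), q, reduction="batchmean")
        return 0.5 * (kl_pm + kl_qm)
    ```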

    opened by GUN-xing 2
  • The setting of --dataset

    Hi Jisoo, thanks for your nice work. Could you tell me what --dataset means? Does it select different configuration settings for different input sizes? For example, does VOC300 refer to the settings for an SSD300 model while VOC512 refers to SSD512?

    opened by anlimo1510 2
  • pytorch version

    I'm interested in this paper, so I'd like to train this model on custom datasets, but I failed to train it. Could you tell me which versions of PyTorch and CUDA you use? Thank you!

    opened by namihagi 2
  • Localization consistency loss in csd.py

    Hi author, Thanks for your wonderful work and your released code. I am confused about the code for the Localization consistency loss in https://github.com/soo89/CSD-SSD/blob/30b184c86a87b0fc2d301cd8157b11d8cfe7da1e/train_csd.py#L297.

    consistency_loc_loss_x = torch.mean(torch.pow(loc_sampled[:, 0] + loc_sampled_flip[:, 0], exponent=2))
    consistency_loc_loss_y = torch.mean(torch.pow(loc_sampled[:, 1] - loc_sampled_flip[:, 1], exponent=2))
    consistency_loc_loss_w = torch.mean(torch.pow(loc_sampled[:, 2] - loc_sampled_flip[:, 2], exponent=2))
    consistency_loc_loss_h = torch.mean(torch.pow(loc_sampled[:, 3] - loc_sampled_flip[:, 3], exponent=2))
    

    For y, w, and h you use '-' when calculating the mean squared loss. I am confused about why you use '+' for x when calculating the mean squared loss.
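    The sign difference follows from the horizontal flip: mirroring the image negates the x-direction regression offset between a box and its matched (mirrored) default box, while y, w, and h are unchanged, so consistency means loc_x ≈ -loc_x_flip. A toy check of that sign, using SSD-style center-offset encoding (my own illustration; the numbers are arbitrary):

    ```python
    # A horizontal flip on an image of width W negates the x offset (tx)
    # and leaves ty unchanged, which is why the consistency loss uses
    # '+' for x and '-' for y, w, h.
    W = 300.0
    cx, cy = 120.0, 80.0                       # ground-truth box center
    dx, dy, dw, dh = 150.0, 80.0, 50.0, 30.0   # matched default box (center form)

    tx = (cx - dx) / dw          # x offset on the original image
    ty = (cy - dy) / dh          # y offset on the original image

    cx_f, dx_f = W - cx, W - dx  # centers after a horizontal flip
    tx_f = (cx_f - dx_f) / dw    # x offset on the flipped image
    ty_f = ty                    # y is unaffected by a horizontal flip
    ```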

    opened by lianqing11 2
  • why flip twice?

    Thanks for your great work! Reading the code, I found some parts I can't understand, as below: why does it need to flip append_loc and append_conf?


    By the way, we compare loc and the flipped loc in

    consistency_loc_loss_x = torch.mean(torch.pow(loc_sampled[:, 0] + loc_sampled_flip[:, 0], exponent=2))
    ...
    

    but how do we guarantee that each item in loc_sampled matches the corresponding item in loc_sampled_flip correctly?

    Hoping for your kind response, many thanks.

    opened by lxy5513 1
  • Single batch: Supervised or Unsupervised

    Hi, in the experiment section you mentioned that labeled and unlabeled data are randomly shuffled and selected. I wanted to know why it's necessary for each mini-batch to contain both labeled and unlabeled data.

    What happens if a mini-batch contains only unlabeled data (since the majority of the data is unlabeled)? Can that case be handled? What would the value of the loss be?

    I'm new to semi-supervised. thanks.
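    One way to see the issue: if a mini-batch could be entirely unlabeled, the supervised loss would be a mean over an empty set (NaN), so either the batch composition keeps a fixed labeled/unlabeled split or the supervised term must be guarded. A hedged sketch of such a guard (my own illustration, not the repo's code):

    ```python
    import torch

    def batch_loss(sup_loss_per_img, cons_loss_per_img, labeled_mask):
        """Supervised loss over the labeled slice only; falls back to 0
        when the mini-batch happens to contain no labeled images."""
        sup = sup_loss_per_img[labeled_mask]
        supervised = sup.mean() if sup.numel() > 0 else sup_loss_per_img.new_zeros(())
        return supervised + cons_loss_per_img.mean()
    ```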

    opened by AKASH2907 1
  • Training on GPUs

    Hello,

    Thanks for your nice work. I just have few questions about training on GPUs.

    1. Do you train CSD on a single GPU or multiple GPUs? If on a single GPU, would you mind sharing which GPU model you used? I am asking because I am not able to train CSD with a batch size of 32 on a single 2080 Ti, and I'm not sure whether the problem is my server or the nature of the code.

    2. What's your estimated training time on VOC (07 labeled and 12 unlabeled) with your configuration? It looks like training a baseline SSD on a single GPU would take me several days, so it would be nice if you could share the stats.

    Thanks for your time and enjoy your day!

    opened by ZhuoranYu 1
  • The failure of download code

    Hello, Thanks for your nice work!

    When I tried to download your code, I failed many times. I think it may be because of your "weight" folder, which contains the 70 MB pretrained model.

    Could you please remove the weight folder and offer the pretrained model via a separate link?

    Thank you very much!

    opened by zwy1996 1
  • ramp-down function for loss weight

    Hi, thanks for providing such nice work! I have a minor question about the loss weight schedule. For the consistency loss you designed, are there any experimental results showing how the ramp-down weight schedule affects performance? Ramp-up seems plausible for stable training, but I'm not sure why the ramp-down is needed.
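    For context, the commonly used exponential ramp shape (from mean-teacher-style training) looks like the following; the exact fractions and constants here are my assumptions, not necessarily the paper's values:

    ```python
    import math

    def consistency_weight(step, total_steps, ramp_up=0.1, ramp_down=0.2):
        """Sigmoid-shaped ramp-up at the start and ramp-down at the end of
        training; returns a multiplier in [0, 1] for the consistency loss."""
        up_end = ramp_up * total_steps
        down_start = (1.0 - ramp_down) * total_steps
        if step < up_end:
            t = 1.0 - step / up_end
            return math.exp(-5.0 * t * t)        # ramp up
        if step > down_start:
            t = (step - down_start) / (total_steps - down_start)
            return math.exp(-12.5 * t * t)       # ramp down
        return 1.0
    ```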

    opened by jihwanp 0
  • Understanding of Unlabeled Losses

    Hi,

    Thanks for sharing your code. I would like to ask more about the JSD loss for classification consistency, specifically this line (link).

    Why is .detach() specifically used here when computing the losses?

    Also, I don't understand the effect of detaching one of the two outputs in each line: if I print requires_grad on those loss tensors, it still shows True.
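    A small experiment illustrating the usual reason for .detach() in such losses: the detached side is treated as a constant target, so gradients flow only through the other branch, yet the resulting loss still reports requires_grad=True because one operand carries the autograd graph. (My own sketch, not the repo's code:)

    ```python
    import torch
    import torch.nn.functional as F

    logits_a = torch.randn(4, 21, requires_grad=True)
    logits_b = torch.randn(4, 21, requires_grad=True)
    p = torch.softmax(logits_a, dim=1)
    q = torch.softmax(logits_b, dim=1)

    # KL(q.detach() || p): q is treated as a constant target, so only the
    # branch that produced p receives gradients during backward().
    loss = F.kl_div(torch.log(p + 1e-7), q.detach(), reduction="batchmean")
    loss.backward()
    ```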

    opened by AKASH2907 0