FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding (CVPR 2021)

This repo contains the implementation of our state-of-the-art few-shot object detector, described in our CVPR 2021 paper, FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding. FSCE is built upon the codebase FsDet v0.1, which was released alongside the ICML 2020 paper Frustratingly Simple Few-Shot Object Detection.

FSCE Figure

Bibtex

@inproceedings{FSCEv1,
    author    = {Sun, Bo and Li, Banghuai and Cai, Shengcai and Yuan, Ye and Zhang, Chi},
    title     = {FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding},
    booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    pages     = {TBD},
    month     = {June},
    year      = {2021}
}

arXiv: https://arxiv.org/abs/2103.05950

Contact

If you have any questions, please contact Bo Sun (bos [at] usc.edu) or Banghuai Li (libanghuai [at] megvii.com).

Installation

FsDet is built on Detectron2, but you don't need to build Detectron2 separately, as this codebase is self-contained. You can follow the instructions below to install the dependencies and build FsDet. FSCE functionalities are implemented as classes and .py scripts within FsDet, so no extra build effort is required.

Dependencies

  • Linux with Python >= 3.6
  • PyTorch >= 1.3
  • torchvision that matches the PyTorch installation
  • Dependencies: pip install -r requirements.txt
  • pycocotools: pip install cython; pip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
  • fvcore: pip install 'git+https://github.com/facebookresearch/fvcore'
  • OpenCV (optional, needed for demo and visualization): pip install opencv-python
  • GCC >= 4.9

Build

python setup.py build develop  # you might need sudo

Note: you may need to rebuild FsDet after reinstalling a different build of PyTorch.

Data preparation

We adopt the same benchmarks as in FsDet, including three datasets: PASCAL VOC, COCO and LVIS.

  • PASCAL VOC: We use the train/val sets of PASCAL VOC 2007+2012 for training and the test set of PASCAL VOC 2007 for evaluation. We randomly split the 20 object classes into 15 base classes and 5 novel classes, and we consider 3 random splits. The splits can be found in fsdet/data/datasets/builtin_meta.py.
  • COCO: We use COCO 2014 without COCO minival for training and the 5,000 images in COCO minival for testing. We use the 20 object classes that are the same as PASCAL VOC as novel classes and use the rest as base classes.
  • LVIS: We treat the frequent and common classes as the base classes and the rare categories as the novel classes.

The datasets and data splits are built in; simply make sure the directory structure agrees with datasets/README.md before launching the program.
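
For reference, a typical datasets/ layout looks roughly like the sketch below; the exact folder names are an assumption on our part, so defer to datasets/README.md for the authoritative structure:

datasets/
  VOC2007/          # PASCAL VOC 2007 trainval + test
  VOC2012/          # PASCAL VOC 2012 trainval
  vocsplit/         # few-shot split files, e.g. box_10shot_aeroplane_train.txt
  coco/             # COCO 2014 images and annotations
  cocosplit/        # few-shot split files for COCO
  lvis/             # LVIS annotations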

Code Structure

The code structure follows Detectron2 v0.1.* and fsdet.

  • configs: Configuration files (YAML) for train/test jobs.
  • datasets: Dataset files (see Data Preparation for more details)
  • fsdet
    • checkpoint: Checkpoint code.
    • config: Configuration code and default configurations.
    • data: Dataset code.
    • engine: Contains training and evaluation loops and hooks.
    • evaluation: Evaluation code for different datasets.
    • layers: Implementations of different layers used in models.
    • modeling: Code for models, including backbones, proposal networks, and prediction heads.
      • The majority of FSCE functionality is implemented in modeling/roi_heads/*, modeling/contrastive_loss.py, and modeling/utils.py (a minimal sketch of the contrastive objective follows this list).
      • We therefore recommend first making sure FsDet v0.1 runs smoothly, and then referring to the FSCE implementations and configurations.
    • solver: Scheduler and optimizer code.
    • structures: Data types, such as bounding boxes and image lists.
    • utils: Utility functions.
  • tools
    • train_net.py: Training script.
    • test_net.py: Testing script.
    • ckpt_surgery.py: Surgery on checkpoints.
    • run_experiments.py: Running experiments across many seeds.
    • aggregate_seeds.py: Aggregating results from many seeds.
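
To make the contrastive proposal encoding concrete, here is a minimal, illustrative sketch of a supervised contrastive loss over RoI-head embeddings, in the spirit of modeling/contrastive_loss.py. The function name, temperature, and IoU threshold below are assumptions for illustration, not the repo's exact implementation:

import torch
import torch.nn.functional as F

def contrastive_proposal_loss(feats, labels, ious, tau=0.2, iou_thresh=0.7):
    # Illustrative sketch, not the repo's exact loss.
    # feats:  (N, D) RoI-head embeddings for N proposals
    # labels: (N,)   class labels assigned to the proposals
    # ious:   (N,)   IoU of each proposal with its matched ground-truth box
    feats = F.normalize(feats, dim=1)              # cosine-similarity space
    sim = feats @ feats.t() / tau                  # pairwise similarities
    n = feats.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=feats.device)
    pos_mask = (labels[:, None] == labels[None, :]) & not_self

    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # stability
    exp_sim = torch.exp(sim) * not_self                       # drop self-pairs
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    # mean log-likelihood of same-class pairs for each anchor proposal
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / n_pos

    # proposal consistency control: only confident proposals contribute
    keep = (ious >= iou_thresh).float()
    return (loss * keep).sum() / keep.sum().clamp(min=1)

In the repo, the objective is applied to a dedicated contrastive projection head on top of the RoI features; that head and the IoU re-weighting variants are omitted here for brevity.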

Train & Inference

Training

We follow the exact training procedure of FsDet and use random initialization for the novel-class weights. For a full description of the training procedure, see the FsDet documentation.

1. Stage 1: Train the base detector.

python tools/train_net.py --num-gpus 8 \
        --config-file configs/PASCAL_VOC/base-training/R101_FPN_base_training_split1.yml

2. Randomly initialize weights for the novel classes.

python tools/ckpt_surgery.py \
        --src1 checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_base1/model_final.pth \
        --method randinit \
        --save-dir checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_all1

This step creates model_reset_surgery.pth from model_final.pth.

Don't forget the --coco and --lvis options when working on the COCO and LVIS datasets; see ckpt_surgery.py for all argument details.
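
Conceptually, randinit surgery keeps the base-class predictor weights and inserts randomly initialized rows for the novel classes. The sketch below is a rough illustration of that idea, not the actual ckpt_surgery.py code: the parameter key, background-row layout, and init scale are all assumptions.

import torch

# Rough sketch of 'randinit' weight surgery (illustrative only).
# Assumes a Faster R-CNN-style classifier weight with one row per base
# class plus a final background row; ckpt_surgery.py is authoritative.
ckpt = torch.load('model_final.pth', map_location='cpu')
key = 'roi_heads.box_predictor.cls_score.weight'   # assumed parameter name
w = ckpt['model'][key]                             # (num_base + 1, feat_dim)
num_base, num_novel = w.shape[0] - 1, 5            # e.g. 5 novel classes on VOC

new_w = torch.zeros(num_base + num_novel + 1, w.shape[1])
new_w[:num_base] = w[:num_base]                       # keep base-class rows
torch.nn.init.normal_(new_w[num_base:-1], std=0.01)   # random-init novel rows
new_w[-1] = w[-1]                                     # keep the background row
ckpt['model'][key] = new_w
torch.save(ckpt, 'model_reset_surgery.pth')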

3. Stage 2: Fine-tune on the novel data.

python tools/train_net.py --num-gpus 8 \
        --config-file configs/PASCAL_VOC/split1/10shot_CL_IoU.yml \
        --opts MODEL.WEIGHTS WEIGHTS_PATH

where WEIGHTS_PATH points to the model_reset_surgery.pth generated in the previous step. Alternatively, you can specify it in the configuration YAML.
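
The equivalent YAML entry would look like the following (the path matches the --save-dir used above; adjust it to your own checkpoint directory):

MODEL:
  WEIGHTS: "checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_all1/model_reset_surgery.pth"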

Evaluation

To evaluate the trained models, run

python tools/test_net.py --num-gpus 8 \
        --config-file configs/PASCAL_VOC/split1/10shot_CL_IoU.yml \
        --eval-only

Or you can specify TEST.EVAL_PERIOD in the configuration YAML to evaluate during training.
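
For example, to evaluate periodically during training (the interval below is illustrative; choose one that fits your schedule):

TEST:
  EVAL_PERIOD: 5000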

Multiple Runs

For ease of training and evaluation over multiple runs, fsdet provides several helpful scripts in tools/.

You can use tools/run_experiments.py to do the training and evaluation. For example, to experiment on 30 seeds of the first split of PASCAL VOC over all shots (the two values passed to --seeds specify a range of seeds, here 0-29), run

python tools/run_experiments.py --num-gpus 8 \
        --shots 1 2 3 5 10 --seeds 0 30 --split 1

After training and evaluation, you can use tools/aggregate_seeds.py to aggregate the results over all the seeds to obtain one set of numbers. To aggregate the 3-shot results of the above command, run

python tools/aggregate_seeds.py --shots 3 --seeds 30 --split 1 \
        --print --plot
Comments
  • Training Error

    Dear authors,

    When I run your code, there is an error. The PyTorch version is 1.4.0 with torchvision 0.5.0. Could you give me some advice? Thank you. (screenshot attached)

    opened by AmingWu 17
  • Schedule for 3-shot, 5-shot, and 10-shot on PASCAL VOC

    Sir, I cannot find the baseline configs for PASCAL VOC 3-shot, 5-shot, and 10-shot. I guess the only change is the training schedule; could you tell me their schedules? Thanks in advance!

    opened by Retiina 7
  • Error about annotation path when fine-tuning

    Dear author, thanks for your great work. I successfully trained the pretrained model, but when fine-tuning, it reports this error:

    FileNotFoundError: [Errno 2] No such file or directory: 'datasets/vocsplit/box_10shot_aeroplane_train.txt'

    I have generated the shots by running python datasets/prepare_voc_few_shot.py.

    opened by Retiina 6
  • About the proposal feature cluster visualization

    Hi, I want to know what preprocessing, such as normalization, should be done on the proposal features to visualize them with t-SNE. And can you share the code for visualizing the proposal feature clusters?

    opened by chengyu0910 6
  • Unable to reproduce the results on the MS COCO dataset

    opened by chenf99 5
  • Failed when building

    When running the command "python setup.py build develop", I got an error: command 'gcc' failed with exit status 1. How can I fix it? Does it mean the GCC version is wrong?

    opened by jinweiLiu 5
  • RuntimeError: Not compiled with GPU support

    When I run the code, I met the following problem: "RuntimeError: Not compiled with GPU support". Does anyone meet the same problem? My environment is PyTorch 1.7.1, CUDA 10.1, RTX 2080 Ti.

    opened by john2020-210 5
  • How to calculate the novel-class AP in base training

    Thanks for your code. I'm confused by the results reported in Table 4 of the paper. As suggested there, when doing base training with Baseline-FPN, the base AP50 on 5-shot is 67.9 and the novel AP50 is 49.6; I wonder how this is evaluated. I ran this command: python tools/train_net.py --num-gpus 2 --config-file configs/PASCAL_VOC/base_training/R101_FPN_base_training_split1.yaml

    The base AP50 is 77.458. I changed DATASETS.TEST in the config file to (voc_2007_test_all1) to see the performance on the novel classes; however, it is 0.

    Please explain it, thanks.

    opened by Fly-dream12 4
  • train_net.py: error: unrecognized arguments: --opts

    I am confused why --opts cannot be used:

    python tools/train_net.py --num-gpus 8         \
    --config-file configs/PASCAL_VOC/split1/10shot_CL_IoU.yml \
    --opts MODEL.WEIGHTS  checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_all1/model_reset_surgery.pth
    
    opened by Wei-i 3
  • Variants with MoCo and prototypes

    In your code, there are some variants based on MoCo (e.g., ContrastiveROIHeadsWithStorage) or prototypes (e.g., ContrastiveROIHeadsWithPrototype). What is the final result (nAP) of these two variants? Is it much lower than the optimal result reported in the paper?

    opened by Chen-Song 3
  • Build error

    I met the error NotADirectoryError: [Errno 20] Not a directory: '/home/FSCE/fsdet/model_zoo/configs' when running python setup.py build develop. How can I solve it? Waiting for your reply!

    opened by zzzzjx 2
  • Hello~ Where should this code be added?

    You can use the code below to visualize the features.

    import random

    import matplotlib.pyplot as plt
    import numpy as np
    import torch
    from sklearn.manifold import TSNE

    def plot_embedding(data, label, title, show=None):
        # param data:  (n_proposals, 2) t-SNE embedded features
        # param label: labels of the proposals
        # param title: title of the output figure
        # param show:  (int) if there are too many proposals to draw, draw a random subset
        # return: t-SNE figure
        if show is not None:
            temp = [i for i in range(len(data))]
            random.shuffle(temp)
            data = data[temp][:show]
            label = torch.tensor(label)[temp][:show]

        x_min, x_max = np.min(data, 0), np.max(data, 0)
        data = (data - x_min) / (x_max - x_min)  # normalize data to [0, 1]
        fig = plt.figure()

        # go through all the samples
        data = data.tolist()
        label = np.asarray(label).squeeze().tolist()

        for i in range(len(data)):
            plt.text(data[i][0], data[i][1], ".", fontsize=18,
                     color=plt.cm.tab20(label[i] / 20))
        plt.title(title, fontsize=14)
        return fig

    # weight: (n_proposals, 1024) input of the classifier
    # label:  the labels of the proposals / ground truth
    # we only select foreground proposals to visualize;
    # you can extract the weights of different classes during training or testing
    ts = TSNE(n_components=2, init='pca', random_state=0)
    weight = ts.fit_transform(weight)
    fig = plot_embedding(weight, label, 't-SNE feature child')
    plt.show()
    

    Originally posted by @Chauncy-Cai in https://github.com/megvii-research/FSCE/issues/3#issuecomment-802534849

    opened by MrCrightH 0
  • --opts KeyError: 'Non-existent config key: MODEL.WETGHTS'

    Traceback (most recent call last):
      File "tools/train_net.py", line 124, in <module>
        launch(
      File "/home/sjk/mrch/FSCE/fsdet/engine/launch.py", line 52, in launch
        main_func(*args)
      File "tools/train_net.py", line 100, in main
        cfg = setup(args)
      File "tools/train_net.py", line 91, in setup
        cfg.merge_from_file(args.config_file)
      File "/home/sjk/mrch/FSCE/fsdet/config/config.py", line 46, in merge_from_file
        self.merge_from_other_cfg(loaded_cfg)
      File "/home/sjk/anaconda3/envs/chpy/lib/python3.8/site-packages/fvcore/common/config.py", line 132, in merge_from_other_cfg
        return super().merge_from_other_cfg(cfg_other)
      File "/home/sjk/anaconda3/envs/chpy/lib/python3.8/site-packages/yacs/config.py", line 217, in merge_from_other_cfg
        _merge_a_into_b(cfg_other, self, self, [])
      File "/home/sjk/anaconda3/envs/chpy/lib/python3.8/site-packages/yacs/config.py", line 478, in _merge_a_into_b
        _merge_a_into_b(v, b[k], root, key_list + [k])
      File "/home/sjk/anaconda3/envs/chpy/lib/python3.8/site-packages/yacs/config.py", line 491, in _merge_a_into_b
        raise KeyError("Non-existent config key: {}".format(full_key))
    KeyError: 'Non-existent config key: MODEL.WETGHTS'

    opened by MrCrightH 0
  • Training on my own dataset

    When I train on my own dataset, this error occurs (I have already changed the COCO categories):

    -- Process 1 terminated with the following error:
    Traceback (most recent call last):
      File "/data/ppw/anaconda3/envs/PyTorch_cu111/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/data/ppw/competition/FSCE/fsdet/engine/launch.py", line 84, in _distributed_worker
        main_func(*args)
      File "/data/ppw/competition/FSCE/train_net.py", line 114, in main
        trainer = Trainer(cfg)
      File "/data/ppw/competition/FSCE/fsdet/engine/defaults.py", line 268, in __init__
        data_loader = self.build_train_loader(cfg)
      File "/data/ppw/competition/FSCE/train_net.py", line 82, in build_train_loader
        return build_detection_train_loader(cfg, mapper=mapper)
      File "/data/ppw/competition/FSCE/fsdet/data/build.py", line 217, in build_detection_train_loader
        dataset_dicts = get_detection_dataset_dicts(
      File "/data/ppw/competition/FSCE/fsdet/data/build.py", line 178, in get_detection_dataset_dicts
        print_instances_class_histogram(dataset_dicts, class_names)
      File "/data/ppw/competition/FSCE/fsdet/data/build.py", line 97, in print_instances_class_histogram
        data.extend([None] * (N_COLS - (len(data) % N_COLS)))
    ZeroDivisionError: integer division or modulo by zero

    opened by PANPEIWEN 0
  • I cannot get the same results on split 1 and shot 10; I only have one GPU (RTX 3090)

    I only have one GPU (an RTX 3090), and I did not change the config (e.g., learning rate and batch size). How can I get the same result as the paper? My nAP50 = 60.8 (paper: 63.4). Final model (http://dl.yf.io/fs-det/model/), datasets (http://dl.yf.io/fs-det/datasets/vocsplit/*.txt)

    opened by kike-0304 4