[NeurIPS 2021] ORL: Unsupervised Object-Level Representation Learning from Scene Images

Overview

This repository contains the official PyTorch implementation of the ORL algorithm for self-supervised representation learning.

Unsupervised Object-Level Representation Learning from Scene Images,
Jiahao Xie, Xiaohang Zhan, Ziwei Liu, Yew Soon Ong, Chen Change Loy
In NeurIPS 2021
[Paper][Project Page][Bibtex]

Updates

  • [12/2021] Code and pre-trained models of ORL are released.

Installation

Please refer to INSTALL.md for installation and dataset preparation.

Models

Pre-trained models can be downloaded from Google Drive. Please see our paper for transfer learning results on different benchmarks.

Usage

Stage 1: Image-level pre-training

You need to pre-train an image-level contrastive learning model in this stage. Take BYOL as an example:

bash tools/dist_train.sh configs/selfsup/orl/coco/stage1/r50_bs512_ep800.py 8

This stage can be freely replaced with other image-level contrastive learning models.
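For intuition, the image-level objective optimized in this stage (for BYOL) is a regression between the online predictor output and the target projection. The sketch below is illustrative only, assuming pre-computed feature arrays; the actual training code lives in this repository.

```python
# Minimal sketch of the BYOL-style regression loss: mean squared error
# between L2-normalized online predictions and target projections,
# equivalent to 2 - 2 * cosine similarity per sample.
import numpy as np

def byol_loss(online_pred: np.ndarray, target_proj: np.ndarray) -> float:
    """online_pred, target_proj: (batch, dim) feature arrays."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    # Per-sample loss is 2 - 2 * <p, z>; average over the batch.
    return float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=1)))
```

The loss is 0 when both networks agree in direction and reaches its maximum of 4 when they point in opposite directions.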

Stage 2: Correspondence discovery

  • KNN image retrieval

First, extract all features in the training set using the pre-trained model weights in Stage 1:

bash tools/dist_train.sh configs/selfsup/orl/coco/stage1/r50_bs512_ep800_extract_feature.py 8 --resume_from work_dirs/selfsup/orl/coco/stage1/r50_bs512_ep800/epoch_800.pth

Second, retrieve KNN for each image using tools/coco_knn_image_retrieval.ipynb. The corresponding KNN image ids will be saved as a json file train2017_knn_instance.json under data/coco/meta/.
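Conceptually, the retrieval in the notebook amounts to a nearest-neighbor search over the Stage 1 features. A hedged sketch (function name and `k` are illustrative, not the notebook's actual values) might look like:

```python
# Given one L2-normalizable feature vector per training image, find the
# top-k nearest neighbors of every image by cosine similarity and return
# the neighbor ids, which can then be dumped to JSON with json.dump.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def retrieve_knn(features: np.ndarray, k: int = 10) -> list:
    """Return, for each image, the ids of its k nearest neighbors."""
    # Normalize so Euclidean distance ranks the same as cosine similarity.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    # k + 1 neighbors because each image is its own nearest neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(feats)
    _, indices = nn.kneighbors(feats)
    return [row[1:].tolist() for row in indices]  # drop the self-match
```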

  • RoI generation

Apply selective search to generate region proposals for all images in the training set:

bash tools/dist_selective_search_single_gpu.sh configs/selfsup/orl/coco/stage2/selective_search_train2017.py data/coco/meta/train2017_selective_search_proposal.json

The script and config only support single-image, single-GPU inference, since selective search can generate a different number of region proposals for each image, and such ragged outputs cannot be gathered across multiple GPUs. If you want to skip this step, you can also directly download the pre-computed proposal file here and place it under data/coco/meta/.
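To illustrate the gathering problem (this is not repository code): per-image proposal arrays are ragged, so they cannot be stacked into one tensor for a distributed gather. A generic workaround in such settings is to pad to the batch maximum and keep the true counts:

```python
# Pad a ragged list of per-image (N_i, 4) proposal boxes into a single
# dense (B, N_max, 4) array, remembering the real proposal count per image
# so the padding rows can be ignored downstream. Illustrative only.
import numpy as np

def pad_proposals(proposals: list) -> tuple:
    """proposals: list of (N_i, 4) arrays. Returns (batch, counts)."""
    counts = [len(p) for p in proposals]
    n_max = max(counts)
    batch = np.zeros((len(proposals), n_max, 4), dtype=np.float32)
    for i, boxes in enumerate(proposals):
        batch[i, : len(boxes)] = boxes  # rows beyond counts[i] stay zero
    return batch, counts
```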

  • RoI pair retrieval

Retrieve top-ranked RoI pairs:

bash tools/dist_generate_correspondence_single_gpu.sh configs/selfsup/orl/coco/stage2/r50_bs512_ep800_generate_all_correspondence.py work_dirs/selfsup/orl/coco/stage1/r50_bs512_ep800/epoch_800.pth data/coco/meta/train2017_knn_instance.json data/coco/meta/train2017_knn_instance_correspondence.json

The script and config likewise only support single-image, single-GPU inference, since different image pairs can yield different numbers of inter-RoI pairs, which cannot be gathered across multiple GPUs. A workaround to speed up retrieval is to split the whole dataset into several parts and process each part on a separate GPU in parallel. We provide example configs (10 parts in total) in configs/selfsup/orl/coco/stage2/r50_bs512_ep800_generate_partial_correspondence/. After generating each part, use tools/merge_partial_correspondence_files.py to merge them and save the final correspondence file train2017_knn_instance_correspondence.json under data/coco/meta/.
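Since each part covers a disjoint subset of images, merging the partial files amounts to a key-wise union of per-part JSON dictionaries. The sketch below assumes a dict-of-lists file layout for illustration; use the repository's tools/merge_partial_correspondence_files.py for the real format.

```python
# Merge partial correspondence JSON files into one. Each part is assumed
# to be a JSON object keyed by image id; parts must not overlap.
import json

def merge_correspondence_parts(part_paths: list, out_path: str) -> dict:
    merged = {}
    for path in part_paths:
        with open(path) as f:
            part = json.load(f)
        # Disjoint subsets: duplicate keys would indicate overlapping parts.
        assert not (merged.keys() & part.keys()), "parts overlap"
        merged.update(part)
    with open(out_path, "w") as f:
        json.dump(merged, f)
    return merged
```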

Stage 3: Object-level pre-training

After obtaining the correspondence file in Stage 2, you can then perform object-level pre-training:

bash tools/dist_train.sh configs/selfsup/orl/coco/stage3/r50_bs512_ep800.py 8

Transferring to downstream tasks

Please refer to GETTING_STARTED.md for transferring to various downstream tasks.

Acknowledgement

We would like to thank the authors of OpenSelfSup for their open-source codebase and PyContrast for its detection evaluation configs.

Citation

Please consider citing our paper in your publications if this project helps your research. The BibTeX reference is as follows:

@inproceedings{xie2021unsupervised,
  title={Unsupervised Object-Level Representation Learning from Scene Images},
  author={Xie, Jiahao and Zhan, Xiaohang and Liu, Ziwei and Ong, Yew Soon and Loy, Chen Change},
  booktitle={NeurIPS},
  year={2021}
}
Comments
  • BYOL object has no attribute module

    BYOL object has no attribute module

    Hello , I am getting an error in the first step. Plz guide me to solve this error .

    File "tools/train.py", line 142, in main() File "tools/train.py", line 132, in main train_model( File "/data2/dl/grp20/ORL/openselfsup/apis/train.py", line 97, in train_model _dist_train( File "/data2/dl/grp20/ORL/openselfsup/apis/train.py", line 233, in _dist_train runner.run(data_loaders, cfg.workflow, cfg.total_epochs) File "/data2/dl/grp20/grp20_env/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 122, in run epoch_runner(data_loaders[i], **kwargs) File "/data2/dl/grp20/grp20_env/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in train self.call_hook('before_train_iter') File "/data2/dl/grp20/grp20_env/lib/python3.8/site-packages/mmcv/runner/base_runner.py", line 282, in call_hook getattr(hook, fn_name)(self) File "/data2/dl/grp20/ORL/openselfsup/hooks/byol_hook.py", line 26, in before_train_iter assert hasattr(runner.model.module, 'momentum'),
    File "/data2/dl/grp20/grp20_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 947, in getattr raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'BYOL' object has no attribute 'module'

    opened by dekaartina 0
  • Problem loading checkpoint in downstream object detection

    Problem loading checkpoint in downstream object detection

    Hi,

    First of all I'd like to congratulate the authors for the great work and shared repo.

    I have successfully followed all 3 unsupervised stages of pre-training and executed the convert-pretrain-to-detectron2.py script to perform final conversion of the model.

    However, when using pretrained model for downstream task Object Detection it seems model does not get recognized (WARNING [02/27 20:12:26 d2.checkpoint.c2_model_loading]: No weights in checkpoint matched with model.) and no parameter is loaded as pretrained model. Am I missing anything? wrong configuration? Thank you in advance.

    Here the log:

    [02/27 20:12:26 fvcore.common.checkpoint]: [Checkpointer] Loading from /home/alejandro/satisai/py_workspace/ORL/weights/stage3_epoch_800_converted.pkl ...                                      [510/1839]
    [02/27 20:12:26 fvcore.common.checkpoint]: Reading a file from 'OpenSelfSup'
    WARNING [02/27 20:12:26 d2.checkpoint.c2_model_loading]: No weights in checkpoint matched with model.
    WARNING [02/27 20:12:26 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
    backbone.bottom_up.res2.0.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.0.conv1.weight
    backbone.bottom_up.res2.0.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.0.conv2.weight
    backbone.bottom_up.res2.0.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.0.conv3.weight
    backbone.bottom_up.res2.0.shortcut.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.0.shortcut.weight
    backbone.bottom_up.res2.1.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.1.conv1.weight
    backbone.bottom_up.res2.1.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.1.conv2.weight
    backbone.bottom_up.res2.1.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.1.conv3.weight
    backbone.bottom_up.res2.2.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.2.conv1.weight
    backbone.bottom_up.res2.2.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.2.conv2.weight
    backbone.bottom_up.res2.2.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res2.2.conv3.weight
    backbone.bottom_up.res3.0.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.0.conv1.weight
    backbone.bottom_up.res3.0.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.0.conv2.weight
    backbone.bottom_up.res3.0.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.0.conv3.weight
    backbone.bottom_up.res3.0.shortcut.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.0.shortcut.weight
    backbone.bottom_up.res3.1.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.1.conv1.weight
    backbone.bottom_up.res3.1.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.1.conv2.weight
    backbone.bottom_up.res3.1.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.1.conv3.weight
    backbone.bottom_up.res3.2.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.2.conv1.weight
    backbone.bottom_up.res3.2.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.2.conv2.weight
    backbone.bottom_up.res3.2.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.2.conv3.weight
    backbone.bottom_up.res3.3.conv1.norm.{bias, running_mean, running_var, weight}                                                                                                                  [466/1839]
    backbone.bottom_up.res3.3.conv1.weight
    backbone.bottom_up.res3.3.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.3.conv2.weight
    backbone.bottom_up.res3.3.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res3.3.conv3.weight
    backbone.bottom_up.res4.0.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.0.conv1.weight
    backbone.bottom_up.res4.0.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.0.conv2.weight
    backbone.bottom_up.res4.0.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.0.conv3.weight
    backbone.bottom_up.res4.0.shortcut.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.0.shortcut.weight
    backbone.bottom_up.res4.1.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.1.conv1.weight
    backbone.bottom_up.res4.1.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.1.conv2.weight
    backbone.bottom_up.res4.1.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.1.conv3.weight
    backbone.bottom_up.res4.2.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.2.conv1.weight
    backbone.bottom_up.res4.2.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.2.conv2.weight
    backbone.bottom_up.res4.2.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.2.conv3.weight
    backbone.bottom_up.res4.3.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.3.conv1.weight
    backbone.bottom_up.res4.3.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.3.conv2.weight
    backbone.bottom_up.res4.3.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.3.conv3.weight
    backbone.bottom_up.res4.4.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.4.conv1.weight
    backbone.bottom_up.res4.4.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.4.conv2.weight
    backbone.bottom_up.res4.4.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.4.conv3.weight
    backbone.bottom_up.res4.5.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.5.conv1.weight
    backbone.bottom_up.res4.5.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.5.conv2.weight
    backbone.bottom_up.res4.5.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res4.5.conv3.weight
    backbone.bottom_up.res5.0.conv1.weight                                                                                                                                                          [421/1839]
    backbone.bottom_up.res5.0.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.0.conv2.weight
    backbone.bottom_up.res5.0.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.0.conv3.weight
    backbone.bottom_up.res5.0.shortcut.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.0.shortcut.weight
    backbone.bottom_up.res5.1.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.1.conv1.weight
    backbone.bottom_up.res5.1.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.1.conv2.weight
    backbone.bottom_up.res5.1.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.1.conv3.weight
    backbone.bottom_up.res5.2.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.2.conv1.weight
    backbone.bottom_up.res5.2.conv2.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.2.conv2.weight
    backbone.bottom_up.res5.2.conv3.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.res5.2.conv3.weight
    backbone.bottom_up.stem.conv1.norm.{bias, running_mean, running_var, weight}
    backbone.bottom_up.stem.conv1.weight
    backbone.fpn_lateral2.norm.{bias, running_mean, running_var, weight}
    backbone.fpn_lateral2.weight
    backbone.fpn_lateral3.norm.{bias, running_mean, running_var, weight}
    backbone.fpn_lateral3.weight
    backbone.fpn_lateral4.norm.{bias, running_mean, running_var, weight}
    backbone.fpn_lateral4.weight
    backbone.fpn_lateral5.norm.{bias, running_mean, running_var, weight}
    backbone.fpn_lateral5.weight
    backbone.fpn_output2.norm.{bias, running_mean, running_var, weight}
    backbone.fpn_output2.weight
    backbone.fpn_output3.norm.{bias, running_mean, running_var, weight}
    backbone.fpn_output3.weight
    backbone.fpn_output4.norm.{bias, running_mean, running_var, weight}
    backbone.fpn_output4.weight
    backbone.fpn_output5.norm.{bias, running_mean, running_var, weight}
    backbone.fpn_output5.weight
    proposal_generator.rpn_head.anchor_deltas.{bias, weight}
    proposal_generator.rpn_head.conv.{bias, weight}
    proposal_generator.rpn_head.objectness_logits.{bias, weight}
    roi_heads.box_head.conv1.norm.{bias, running_mean, running_var, weight}
    roi_heads.box_head.conv1.weight
    roi_heads.box_head.conv2.norm.{bias, running_mean, running_var, weight}
    roi_heads.box_head.conv2.weight
    roi_heads.box_head.conv3.norm.{bias, running_mean, running_var, weight}                                                                                                                         [377/1839]
    roi_heads.box_head.conv3.weight
    roi_heads.box_head.conv4.norm.{bias, running_mean, running_var, weight}
    roi_heads.box_head.conv4.weight
    roi_heads.box_head.fc1.{bias, weight}
    roi_heads.box_predictor.bbox_pred.{bias, weight}
    roi_heads.box_predictor.cls_score.{bias, weight}
    WARNING [02/27 20:12:26 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:
      stem.online_net.0.conv1.weight
      stem.online_net.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.0.conv1.weight
      online_net.0.res2.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.0.conv2.weight
      online_net.0.res2.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.0.conv3.weight
      online_net.0.res2.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.0.shortcut.weight
      online_net.0.res2.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.1.conv1.weight
      online_net.0.res2.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.1.conv2.weight
      online_net.0.res2.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.1.conv3.weight
      online_net.0.res2.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.2.conv1.weight
      online_net.0.res2.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.2.conv2.weight
      online_net.0.res2.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res2.2.conv3.weight
      online_net.0.res2.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.0.conv1.weight
      online_net.0.res3.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.0.conv2.weight
      online_net.0.res3.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.0.conv3.weight
      online_net.0.res3.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.0.shortcut.weight
      online_net.0.res3.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.1.conv1.weight
      online_net.0.res3.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.1.conv2.weight
      online_net.0.res3.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.1.conv3.weight
      online_net.0.res3.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
    online_net.0.res3.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}                                                                                                 [332/1839]
      online_net.0.res3.2.conv2.weight
      online_net.0.res3.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.2.conv3.weight
      online_net.0.res3.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.3.conv1.weight
      online_net.0.res3.3.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.3.conv2.weight
      online_net.0.res3.3.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res3.3.conv3.weight
      online_net.0.res3.3.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.0.conv1.weight
      online_net.0.res4.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.0.conv2.weight
      online_net.0.res4.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.0.conv3.weight
      online_net.0.res4.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.0.shortcut.weight
      online_net.0.res4.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.1.conv1.weight
      online_net.0.res4.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.1.conv2.weight
      online_net.0.res4.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.1.conv3.weight
      online_net.0.res4.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.2.conv1.weight
      online_net.0.res4.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.2.conv2.weight
      online_net.0.res4.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.2.conv3.weight
      online_net.0.res4.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.3.conv1.weight
      online_net.0.res4.3.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.3.conv2.weight
      online_net.0.res4.3.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.3.conv3.weight
      online_net.0.res4.3.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.4.conv1.weight
      online_net.0.res4.4.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.4.conv2.weight
      online_net.0.res4.4.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.4.conv3.weight
      online_net.0.res4.4.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.5.conv1.weight
      online_net.0.res4.5.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}                                                                                                 [288/1839]
      online_net.0.res4.5.conv2.weight
      online_net.0.res4.5.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res4.5.conv3.weight
      online_net.0.res4.5.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.0.conv1.weight
      online_net.0.res5.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.0.conv2.weight
      online_net.0.res5.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.0.conv3.weight
      online_net.0.res5.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.0.shortcut.weight
      online_net.0.res5.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.1.conv1.weight
      online_net.0.res5.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.1.conv2.weight
      online_net.0.res5.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.1.conv3.weight
      online_net.0.res5.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.2.conv1.weight
      online_net.0.res5.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.2.conv2.weight
      online_net.0.res5.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      online_net.0.res5.2.conv3.weight
      online_net.0.res5.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.online_net.1.fc0.{bias, weight}
      stem.online_net.1.bn0.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.online_net.1.fc1.weight
      stem.target_net.0.conv1.weight
      stem.target_net.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.0.conv1.weight
      target_net.0.res2.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.0.conv2.weight
      target_net.0.res2.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.0.conv3.weight
      target_net.0.res2.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.0.shortcut.weight
      target_net.0.res2.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.1.conv1.weight
      target_net.0.res2.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.1.conv2.weight
      target_net.0.res2.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.1.conv3.weight
      target_net.0.res2.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.2.conv1.weight                                                                                                                                                              [244/1839]
      target_net.0.res2.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.2.conv2.weight
      target_net.0.res2.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res2.2.conv3.weight
      target_net.0.res2.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.0.conv1.weight
      target_net.0.res3.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.0.conv2.weight
      target_net.0.res3.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.0.conv3.weight
      target_net.0.res3.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.0.shortcut.weight
      target_net.0.res3.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.1.conv1.weight
      target_net.0.res3.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.1.conv2.weight
      target_net.0.res3.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.1.conv3.weight
      target_net.0.res3.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.2.conv1.weight
      target_net.0.res3.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.2.conv2.weight
      target_net.0.res3.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.2.conv3.weight
      target_net.0.res3.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.3.conv1.weight
      target_net.0.res3.3.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.3.conv2.weight
      target_net.0.res3.3.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res3.3.conv3.weight
      target_net.0.res3.3.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.0.conv1.weight
      target_net.0.res4.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.0.conv2.weight
      target_net.0.res4.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.0.conv3.weight
      target_net.0.res4.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.0.shortcut.weight
      target_net.0.res4.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.1.conv1.weight
      target_net.0.res4.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.1.conv2.weight
      target_net.0.res4.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.1.conv3.weight                                                                                                                                                              [200/1839]
      target_net.0.res4.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.2.conv1.weight
      target_net.0.res4.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.2.conv2.weight
      target_net.0.res4.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.2.conv3.weight
      target_net.0.res4.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.3.conv1.weight
      target_net.0.res4.3.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.3.conv2.weight
      target_net.0.res4.3.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.3.conv3.weight
      target_net.0.res4.3.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.4.conv1.weight
      target_net.0.res4.4.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.4.conv2.weight
      target_net.0.res4.4.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.4.conv3.weight
      target_net.0.res4.4.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.5.conv1.weight
      target_net.0.res4.5.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.5.conv2.weight
      target_net.0.res4.5.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res4.5.conv3.weight
      target_net.0.res4.5.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.0.conv1.weight
      target_net.0.res5.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.0.conv2.weight
      target_net.0.res5.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.0.conv3.weight
      target_net.0.res5.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.0.shortcut.weight
      target_net.0.res5.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.1.conv1.weight
      target_net.0.res5.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.1.conv2.weight
      target_net.0.res5.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.1.conv3.weight
      target_net.0.res5.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.2.conv1.weight
      target_net.0.res5.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.2.conv2.weight
      target_net.0.res5.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      target_net.0.res5.2.conv3.weight
      target_net.0.res5.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.target_net.1.fc0.{bias, weight}
      stem.target_net.1.bn0.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.target_net.1.fc1.weight
      stem.backbone.conv1.weight
      stem.backbone.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.neck.fc0.{bias, weight}
      stem.neck.bn0.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.neck.fc1.weight
      stem.global_head.predictor.fc0.{bias, weight}
      stem.global_head.predictor.bn0.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.global_head.predictor.fc1.weight
      stem.local_intra_head.predictor.fc0.{bias, weight}
      stem.local_intra_head.predictor.bn0.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.local_intra_head.predictor.fc1.weight
      stem.local_inter_head.predictor.fc0.{bias, weight}
      stem.local_inter_head.predictor.bn0.{bias, num_batches_tracked, running_mean, running_var, weight}
      stem.local_inter_head.predictor.fc1.weight
      backbone.res2.0.conv1.weight
      backbone.res2.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.0.conv2.weight
      backbone.res2.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.0.conv3.weight
      backbone.res2.0.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.0.shortcut.weight
      backbone.res2.0.shortcut.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.1.conv1.weight
      backbone.res2.1.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.1.conv2.weight
      backbone.res2.1.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.1.conv3.weight
      backbone.res2.1.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.2.conv1.weight
      backbone.res2.2.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.2.conv2.weight
      backbone.res2.2.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res2.2.conv3.weight
      backbone.res2.2.conv3.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res3.0.conv1.weight
      backbone.res3.0.conv1.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res3.0.conv2.weight
      backbone.res3.0.conv2.norm.{bias, num_batches_tracked, running_mean, running_var, weight}
      backbone.res3.0.conv3.weight
      ...
    
    opened by alejandrosatis 2
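The truncated key listing above comes from an ORL checkpoint whose state dict carries the momentum branch (`target_net.*`) and projection/prediction heads (`stem.*`) alongside the online backbone. A minimal sketch, not part of this repository, of keeping only the backbone weights before transferring them to a downstream model (the toy key names below are illustrative, mirroring the dump above):

```python
def extract_backbone(state_dict, prefix="backbone."):
    """Keep only keys under `prefix`, stripping the prefix.

    Everything else (e.g. 'target_net.*' momentum copies and
    'stem.*' head parameters) is dropped, since downstream
    detectors typically load backbone weights only.
    """
    return {
        key[len(prefix):]: value
        for key, value in state_dict.items()
        if key.startswith(prefix)
    }

# Toy state dict with illustrative keys (values would be tensors).
sd = {
    "backbone.res2.0.conv1.weight": 1,
    "target_net.0.res4.0.conv3.weight": 2,
    "stem.neck.fc0.weight": 3,
}
print(sorted(extract_backbone(sd)))  # ['res2.0.conv1.weight']
```

With a real checkpoint one would load it first (e.g. `torch.load(path, map_location="cpu")` and, if present, take its `"state_dict"` entry) and save the filtered dict back out for the downstream framework.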
  • __init__() got an unexpected keyword argument 'prefetch'

    When training the first stage, I got the error "__init__() got an unexpected keyword argument 'prefetch'". Have you changed something about the "open-mmlab" environment?

    opened by jancylee 4
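Errors of this shape usually mean a config key (here `prefetch`) was passed through to a constructor built against a different version of the codebase. A hedged, version-agnostic workaround, assuming a dict-based config as in OpenSelfSup-style builders (the `ToyDataset` class and `safe_build` helper below are hypothetical, for illustration only), is to filter the config down to the kwargs the constructor actually accepts:

```python
import inspect

def safe_build(cls, cfg):
    """Instantiate cls, silently dropping config keys that
    cls.__init__ does not accept (e.g. a stray 'prefetch')."""
    params = inspect.signature(cls.__init__).parameters
    accepted = {k: v for k, v in cfg.items() if k in params}
    return cls(**accepted)

class ToyDataset:
    def __init__(self, root):
        self.root = root

# 'prefetch' is not a ToyDataset.__init__ argument, so it is dropped.
ds = safe_build(ToyDataset, {"root": "data/coco", "prefetch": True})
print(ds.root)  # data/coco
```

The cleaner fix is to pin the dependency versions from INSTALL.md (or delete the `prefetch` entry from the dataset config) so the config and code agree.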