Official code of the paper "ReDet: A Rotation-equivariant Detector for Aerial Object Detection" (CVPR 2021)

Overview

ReDet: A Rotation-equivariant Detector for Aerial Object Detection (CVPR 2021),
Jiaming Han*, Jian Ding*, Nan Xue, Gui-Song Xia,
arXiv preprint (arXiv:2103.07733).

This repo is based on AerialDetection and mmdetection. AerialDetection is a powerful framework for object detection in aerial images, which contains many useful algorithms and tools.

Introduction

Recently, object detection in aerial images has gained much attention in computer vision. Different from objects in natural images, aerial objects are often distributed with arbitrary orientation. Therefore, the detector requires more parameters to encode the orientation information, which are often highly redundant and inefficient. Moreover, as ordinary CNNs do not explicitly model the orientation variation, large amounts of rotation-augmented data are needed to train an accurate object detector. In this paper, we propose a Rotation-equivariant Detector (ReDet) to address these issues, which explicitly encodes rotation equivariance and rotation invariance. More precisely, we incorporate rotation-equivariant networks into the detector to extract rotation-equivariant features, which can accurately predict the orientation and lead to a huge reduction of model size. Based on the rotation-equivariant features, we also present Rotation-invariant RoI Align (RiRoI Align), which adaptively extracts rotation-invariant features from equivariant features according to the orientation of the RoI. Extensive experiments on several challenging aerial image datasets, i.e. DOTA-v1.0, DOTA-v1.5 and HRSC2016, show that our method can achieve state-of-the-art performance on the task of aerial object detection. Compared with previous best results, our ReDet gains 1.2, 3.5 and 2.6 mAP on DOTA-v1.0, DOTA-v1.5 and HRSC2016 respectively, while reducing the number of parameters by 60% (313 Mb vs. 121 Mb).
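
For readers new to rotation equivariance: ReResNet is implemented on top of the e2cnn library (re_resnet.py imports e2cnn.nn), whose layers guarantee that rotating the input transforms the feature maps in a predictable way. Below is a minimal sketch of such a layer, adapted from e2cnn's basic usage; the channel sizes are illustrative and are not ReResNet's.

    import torch
    from e2cnn import gspaces
    from e2cnn import nn as enn

    # C8 group: equivariance to 8 discrete rotations, the same group used by ReR50.
    r2_act = gspaces.Rot2dOnR2(N=8)

    # Input: a plain 3-channel image (trivial representation).
    feat_in = enn.FieldType(r2_act, 3 * [r2_act.trivial_repr])
    # Output: 16 regular fields, i.e. 16 x 8 rotation-ordered channels.
    feat_out = enn.FieldType(r2_act, 16 * [r2_act.regular_repr])

    conv = enn.R2Conv(feat_in, feat_out, kernel_size=3, padding=1)

    x = enn.GeometricTensor(torch.randn(1, 3, 32, 32), feat_in)
    y = conv(x)  # rotating x by k*45 degrees rotates y and cyclically shifts each field
    print(y.tensor.shape)  # torch.Size([1, 128, 32, 32])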

Changelog

  • 2021-03-09. Code released.

Benchmark and model zoo

  • ImageNet pretrain

We pretrain our ReResNet on ImageNet-1K. The related code can be found in the ReDet_mmcls branch. Here we provide our pretrained ReResNet-50 model for convenience. If you want to train and use ReResNet in your own project, please check out ReDet_mmcls for installation and basic usage; a config sketch showing how to plug the checkpoint into a detector follows the tables below.

Model   Group   Top-1 (%)   Top-5 (%)   Download
ReR50   C8      71.20       90.28       model | log
  • Object Detection
Model   Data        Backbone    MS   Rotate   Lr schd   box AP   Download
ReDet   DOTA-v1.0   ReR50-FPN   -    -        1x        76.25    cfg | model | log
ReDet   DOTA-v1.0   ReR50-FPN   ✓    ✓        1x        80.10    cfg | model | log
ReDet   DOTA-v1.5   ReR50-FPN   -    -        1x        66.86    cfg | model | log
ReDet   DOTA-v1.5   ReR50-FPN   ✓    ✓        1x        76.80    cfg | model | log
ReDet   HRSC2016    ReR50-FPN   -    -        3x        90.46    cfg | model | log

If you cannot access Google Drive, a BaiduYun download link is available here, with extraction code ABCD.
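
To plug the pretrained ReResNet-50 into a detector, point the config's pretrained field at the downloaded checkpoint. A sketch of the relevant fragment in the mmdetection-style config format this repo uses (values abridged; check the released cfg files, e.g. configs/ReDet/ReDet_re50_refpn_1x_dota1.py, for the exact settings):

    model = dict(
        type='ReDet',
        # Path to the downloaded ReResNet-50 checkpoint from the table above:
        pretrained='work_dirs/ReResNet_pretrain/re_resnet50_c8_batch256-25b16846.pth',
        backbone=dict(
            type='ReResNet',   # rotation-equivariant ResNet-50 (C8 group)
            depth=50,
            num_stages=4,
            out_indices=(0, 1, 2, 3),
            frozen_stages=1,
            style='pytorch'))
    # The rotation-equivariant FPN, RPN and RoI heads follow in the released configs.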

Installation

Please refer to INSTALL.md for installation and dataset preparation.

Getting Started

Please see GETTING_STARTED.md for the basic usage.
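
For a quick smoke test after installation, here is a minimal inference sketch. It assumes this codebase exposes the mmdetection-v1-style high-level API (init_detector / inference_detector); the image path is a placeholder, and the checkpoint is the DOTA-v1.5 model from the model zoo above:

    from mmdet.apis import init_detector, inference_detector

    config = 'configs/ReDet/ReDet_re50_refpn_1x_dota15.py'
    checkpoint = 'work_dirs/ReDet_re50_refpn_1x_dota15/ReDet_re50_refpn_1x_dota15-7f2d6dda.pth'

    model = init_detector(config, checkpoint, device='cuda:0')
    result = inference_detector(model, 'demo.jpg')  # placeholder image path
    print(len(result))  # one array of (rotated) boxes per class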

Citation

@inproceedings{han2021ReDet,
  author = {Han, Jiaming and Ding, Jian and Xue, Nan and Xia, Gui-Song},
  title = {ReDet: A Rotation-equivariant Detector for Aerial Object Detection},
  booktitle = {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year = {2021}
}
Comments
  • SyntaxError: future feature annotations is not defined

    Hi, csuhan. When I tested the code, an error occurred:

    Traceback (most recent call last):
      File "tools/test.py", line 12, in <module>
        from mmdet.apis import init_dist
      File "/root/code/remote-sensing-detection/ReDet-master/mmdet/apis/__init__.py", line 2, in <module>
        from .train import train_detector
      File "/root/code/remote-sensing-detection/ReDet-master/mmdet/apis/train.py", line 14, in <module>
        from mmdet.models import RPN
      File "/root/code/remote-sensing-detection/ReDet-master/mmdet/models/__init__.py", line 1, in <module>
        from .backbones import *  # noqa: F401,F403
      File "/root/code/remote-sensing-detection/ReDet-master/mmdet/models/backbones/__init__.py", line 2, in <module>
        from .re_resnet import ReResNet
      File "/root/code/remote-sensing-detection/ReDet-master/mmdet/models/backbones/re_resnet.py", line 5, in <module>
        import e2cnn.nn as enn
      File "/usr/local/anaconda3/envs/pytorch1.4+10.1/lib/python3.6/site-packages/e2cnn-0.1.7-py3.6.egg/e2cnn/__init__.py", line 17, in <module>
        from e2cnn import group
      File "/usr/local/anaconda3/envs/pytorch1.4+10.1/lib/python3.6/site-packages/e2cnn-0.1.7-py3.6.egg/e2cnn/group/__init__.py", line 4, in <module>
        from .group import Group
      File "/usr/local/anaconda3/envs/pytorch1.4+10.1/lib/python3.6/site-packages/e2cnn-0.1.7-py3.6.egg/e2cnn/group/group.py", line 2
        from __future__ import annotations
        ^
    SyntaxError: future feature annotations is not defined

    How can I solve this error?
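
    (For context: the site-packages path in the traceback shows Python 3.6, and "from __future__ import annotations" (PEP 563) was only added in Python 3.7, so e2cnn 0.1.7 needs a newer interpreter. A quick sanity check, as a sketch:)

      import sys
      # e2cnn 0.1.7 uses PEP 563 postponed annotations, available from Python 3.7 on.
      assert sys.version_info >= (3, 7), "Python >= 3.7 is required for e2cnn 0.1.7"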

    opened by Ciao-Z 12
  • ImportError: cannot import name 'get_dataset'

    I tried to train the model but got this error:

    from mmdet.datasets import get_dataset

    ImportError: cannot import name 'get_dataset'

    I also ran sh compile.sh and pip3 install setup.py, but the error persists.

    opened by mathshangw 9
  • ModuleNotFoundError: No module named 'ipykernel'

    I tried testing your model but got an error; can you help me with this?

    This is the error I got:

    !python tools/test.py configs/ReDet/ReDet_re50_refpn_1x_dota15.py work_dirs/ReDet_re50_refpn_1x_dota15/ReDet_re50_refpn_1x_dota15-7f2d6dda.pth --out work_dirs/ReDet_re50_refpn_1x_dota15/results.pkl

    Traceback (most recent call last):
      File "tools/test.py", line 12, in <module>
        from mmdet.apis import init_dist
      File "/content/ReDet/mmdet/apis/__init__.py", line 2, in <module>
        from .train import train_detector
      File "/content/ReDet/mmdet/apis/train.py", line 10, in <module>
        from mmdet import datasets
      File "/content/ReDet/mmdet/datasets/__init__.py", line 1, in <module>
        from .DOTA import DOTADataset, DOTADataset_v3
      File "/content/ReDet/mmdet/datasets/DOTA.py", line 1, in <module>
        from .coco import CocoDataset
      File "/content/ReDet/mmdet/datasets/coco.py", line 2, in <module>
        from pycocotools.coco import COCO
      File "/usr/local/lib/python3.7/site-packages/pycocotools-2.0.2-py3.7-linux-x86_64.egg/pycocotools/coco.py", line 49, in <module>
        import matplotlib.pyplot as plt
      File "/usr/local/lib/python3.7/site-packages/matplotlib-3.4.2-py3.7-linux-x86_64.egg/matplotlib/pyplot.py", line 2500, in <module>
        switch_backend(rcParams["backend"])
      File "/usr/local/lib/python3.7/site-packages/matplotlib-3.4.2-py3.7-linux-x86_64.egg/matplotlib/pyplot.py", line 277, in switch_backend
        class backend_mod(matplotlib.backend_bases._Backend):
      File "/usr/local/lib/python3.7/site-packages/matplotlib-3.4.2-py3.7-linux-x86_64.egg/matplotlib/pyplot.py", line 278, in backend_mod
        locals().update(vars(importlib.import_module(backend_name)))
      File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ModuleNotFoundError: No module named 'ipykernel'

    P.S. I am running this in Google Colab.
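
    (A common workaround for this class of failure, not specific to this repo: force a non-interactive matplotlib backend before anything imports pyplot, so matplotlib does not try to load the notebook's ipykernel inline backend:)

      import os
      os.environ['MPLBACKEND'] = 'Agg'  # or: import matplotlib; matplotlib.use('Agg')
      # Must run before importing mmdet / pycocotools, which pull in matplotlib.pyplot.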

    opened by chandlerbing65nm 8
  • Training on a new dataset

    Hi,

    I want to train ReDet on the VisDrone dataset. For VisDrone, the ground-truth object segmentation details are not provided in the annotation files.

    The annotation format for VisDrone is like this:

    "<bbox_left>,<bbox_top>,<bbox_width>,<bbox_height>,,<object_category>,,"
    which is different from HRSC and DOTA datasets that have OBB annotations.

    Can I train ReDet on VisDrone using only the bbox information?
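
    (For context: DOTA-format OBB annotations are just four corner points per object, so a horizontal box can be written as a degenerate, axis-aligned OBB. A minimal conversion sketch under that assumption; the category is left as VisDrone's numeric id and would still need mapping to DOTA-style class names:)

      def visdrone_to_dota(line):
          """Turn one VisDrone annotation line into a DOTA-style 8-point line.

          VisDrone: bbox_left,bbox_top,bbox_width,bbox_height,score,category,truncation,occlusion
          DOTA:     x1 y1 x2 y2 x3 y3 x4 y4 category difficult
          """
          left, top, w, h, _, category = line.split(',')[:6]
          x, y, w, h = float(left), float(top), float(w), float(h)
          corners = [x, y, x + w, y, x + w, y + h, x, y + h]  # clockwise from top-left
          return ' '.join(f'{c:.1f}' for c in corners) + f' {category} 0'

      print(visdrone_to_dota('100,50,40,20,1,4,0,0'))
      # '100.0 50.0 140.0 50.0 140.0 70.0 100.0 70.0 4 0'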

    opened by Sairam13001 6
  • Ablation Studies: RiRoI Align

    Hello, Good day! I hope you are doing well.

    I tried recreating the ablation studies you performed in the paper with DOTA v1.5.

    Specifically, I retrained ReDet using the configs below to verify whether RiRoI Align really helps in detecting oriented objects.

    1. w/ RiRoI: roi_layer=dict(type='RiRoIAlign', out_size=7, sample_num=2),
    2. w/o RiRoI: roi_layer=dict(type='RoIAlignRotated', out_size=7, sample_num=2),

    But I get:

    • w/ RiRoI: 0.6677 mAP
    • w/o RiRoI: 0.6695 mAP

    According to the published paper, I should see a +0.85 mAP increase.

    I used 1 GPU (NVIDIA V100) and a learning rate of 0.0025. Other parameters are the same as in your implementation.
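
    (For anyone reproducing this: the learning rate above is consistent with the linear scaling rule; a quick check, assuming the common mmdetection baseline of lr=0.02 for a total batch size of 16, i.e. 8 GPUs x 2 images each:)

      def scaled_lr(base_lr=0.02, base_batch=16, num_gpus=1, imgs_per_gpu=2):
          # Linear scaling rule: lr is proportional to the total batch size.
          return base_lr * (num_gpus * imgs_per_gpu) / base_batch

      print(scaled_lr())  # 0.0025, matching the value reported above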

    opened by chandlerbing65nm 6
  • RTX3080 RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1573049310284/work/aten/src/THC/THCBlas.cu:331

    CUDA_VISIBLE_DEVICES=1,2 ./tools/dist_train.sh configs/ReDet/ReDet_re50_refpn_1x_dota1.py 2


    Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


    ReResNet Orientation: 8 Fix Params: False
    ReResNet Orientation: 8 Fix Params: False
    2021-05-19 21:40:15,266 - INFO - Distributed training: True
    /opt/conda/conda-bld/pytorch_1573049310284/work/aten/src/ATen/native/IndexingUtils.h:20: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. (repeated 6 times)
    2021-05-19 21:40:42,380 - INFO - load model from: /home/neo/desktop/ReDet/tools/work_dirs/ReResNet_pretrain/re_resnet50_c8_batch256-12933bc2.pth
    loading annotations into memory...
    2021-05-19 21:40:42,542 - WARNING - The model and loaded state dict do not match exactly

    unexpected key in source state_dict: backbone.conv1.weights, backbone.conv1.basisexpansion.block_expansion('irrep_0', 'regular').sampled_basis, backbone.bn1.indices_8, backbone.bn1.batch_norm_[8].weight, backbone.bn1.batch_norm_[8].bias, backbone.bn1.batch_norm_[8].running_mean, backbone.bn1.batch_norm_[8].running_var, backbone.bn1.batch_norm_[8].num_batches_tracked, backbone.layer1.0.conv1.weights, backbone.layer1.0.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.0.bn1.indices_8, backbone.layer1.0.bn1.batch_norm_[8].weight, backbone.layer1.0.bn1.batch_norm_[8].bias, backbone.layer1.0.bn1.batch_norm_[8].running_mean, backbone.layer1.0.bn1.batch_norm_[8].running_var, backbone.layer1.0.bn1.batch_norm_[8].num_batches_tracked, backbone.layer1.0.conv2.weights, backbone.layer1.0.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.0.bn2.indices_8, backbone.layer1.0.bn2.batch_norm_[8].weight, backbone.layer1.0.bn2.batch_norm_[8].bias, backbone.layer1.0.bn2.batch_norm_[8].running_mean, backbone.layer1.0.bn2.batch_norm_[8].running_var, backbone.layer1.0.bn2.batch_norm_[8].num_batches_tracked, backbone.layer1.0.conv3.weights, backbone.layer1.0.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.0.bn3.indices_8, backbone.layer1.0.bn3.batch_norm_[8].weight, backbone.layer1.0.bn3.batch_norm_[8].bias, backbone.layer1.0.bn3.batch_norm_[8].running_mean, backbone.layer1.0.bn3.batch_norm_[8].running_var, backbone.layer1.0.bn3.batch_norm_[8].num_batches_tracked, backbone.layer1.0.downsample.0.weights, backbone.layer1.0.downsample.0.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.0.downsample.1.indices_8, backbone.layer1.0.downsample.1.batch_norm_[8].weight, backbone.layer1.0.downsample.1.batch_norm_[8].bias, backbone.layer1.0.downsample.1.batch_norm_[8].running_mean, backbone.layer1.0.downsample.1.batch_norm_[8].running_var, backbone.layer1.0.downsample.1.batch_norm_[8].num_batches_tracked, backbone.layer1.1.conv1.weights, backbone.layer1.1.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.1.bn1.indices_8, backbone.layer1.1.bn1.batch_norm_[8].weight, backbone.layer1.1.bn1.batch_norm_[8].bias, backbone.layer1.1.bn1.batch_norm_[8].running_mean, backbone.layer1.1.bn1.batch_norm_[8].running_var, backbone.layer1.1.bn1.batch_norm_[8].num_batches_tracked, backbone.layer1.1.conv2.weights, backbone.layer1.1.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.1.bn2.indices_8, backbone.layer1.1.bn2.batch_norm_[8].weight, backbone.layer1.1.bn2.batch_norm_[8].bias, backbone.layer1.1.bn2.batch_norm_[8].running_mean, backbone.layer1.1.bn2.batch_norm_[8].running_var, backbone.layer1.1.bn2.batch_norm_[8].num_batches_tracked, backbone.layer1.1.conv3.weights, backbone.layer1.1.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.1.bn3.indices_8, backbone.layer1.1.bn3.batch_norm_[8].weight, backbone.layer1.1.bn3.batch_norm_[8].bias, backbone.layer1.1.bn3.batch_norm_[8].running_mean, backbone.layer1.1.bn3.batch_norm_[8].running_var, backbone.layer1.1.bn3.batch_norm_[8].num_batches_tracked, backbone.layer1.2.conv1.weights, backbone.layer1.2.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.2.bn1.indices_8, backbone.layer1.2.bn1.batch_norm_[8].weight, backbone.layer1.2.bn1.batch_norm_[8].bias, 
backbone.layer1.2.bn1.batch_norm_[8].running_mean, backbone.layer1.2.bn1.batch_norm_[8].running_var, backbone.layer1.2.bn1.batch_norm_[8].num_batches_tracked, backbone.layer1.2.conv2.weights, backbone.layer1.2.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.2.bn2.indices_8, backbone.layer1.2.bn2.batch_norm_[8].weight, backbone.layer1.2.bn2.batch_norm_[8].bias, backbone.layer1.2.bn2.batch_norm_[8].running_mean, backbone.layer1.2.bn2.batch_norm_[8].running_var, backbone.layer1.2.bn2.batch_norm_[8].num_batches_tracked, backbone.layer1.2.conv3.weights, backbone.layer1.2.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer1.2.bn3.indices_8, backbone.layer1.2.bn3.batch_norm_[8].weight, backbone.layer1.2.bn3.batch_norm_[8].bias, backbone.layer1.2.bn3.batch_norm_[8].running_mean, backbone.layer1.2.bn3.batch_norm_[8].running_var, backbone.layer1.2.bn3.batch_norm_[8].num_batches_tracked, backbone.layer2.0.conv1.weights, backbone.layer2.0.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.0.bn1.indices_8, backbone.layer2.0.bn1.batch_norm_[8].weight, backbone.layer2.0.bn1.batch_norm_[8].bias, backbone.layer2.0.bn1.batch_norm_[8].running_mean, backbone.layer2.0.bn1.batch_norm_[8].running_var, backbone.layer2.0.bn1.batch_norm_[8].num_batches_tracked, backbone.layer2.0.conv2.weights, backbone.layer2.0.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.0.bn2.indices_8, backbone.layer2.0.bn2.batch_norm_[8].weight, backbone.layer2.0.bn2.batch_norm_[8].bias, backbone.layer2.0.bn2.batch_norm_[8].running_mean, backbone.layer2.0.bn2.batch_norm_[8].running_var, backbone.layer2.0.bn2.batch_norm_[8].num_batches_tracked, backbone.layer2.0.conv3.weights, backbone.layer2.0.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.0.bn3.indices_8, backbone.layer2.0.bn3.batch_norm_[8].weight, backbone.layer2.0.bn3.batch_norm_[8].bias, backbone.layer2.0.bn3.batch_norm_[8].running_mean, backbone.layer2.0.bn3.batch_norm_[8].running_var, backbone.layer2.0.bn3.batch_norm_[8].num_batches_tracked, backbone.layer2.0.downsample.0.weights, backbone.layer2.0.downsample.0.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.0.downsample.1.indices_8, backbone.layer2.0.downsample.1.batch_norm_[8].weight, backbone.layer2.0.downsample.1.batch_norm_[8].bias, backbone.layer2.0.downsample.1.batch_norm_[8].running_mean, backbone.layer2.0.downsample.1.batch_norm_[8].running_var, backbone.layer2.0.downsample.1.batch_norm_[8].num_batches_tracked, backbone.layer2.1.conv1.weights, backbone.layer2.1.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.1.bn1.indices_8, backbone.layer2.1.bn1.batch_norm_[8].weight, backbone.layer2.1.bn1.batch_norm_[8].bias, backbone.layer2.1.bn1.batch_norm_[8].running_mean, backbone.layer2.1.bn1.batch_norm_[8].running_var, backbone.layer2.1.bn1.batch_norm_[8].num_batches_tracked, backbone.layer2.1.conv2.weights, backbone.layer2.1.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.1.bn2.indices_8, backbone.layer2.1.bn2.batch_norm_[8].weight, backbone.layer2.1.bn2.batch_norm_[8].bias, backbone.layer2.1.bn2.batch_norm_[8].running_mean, backbone.layer2.1.bn2.batch_norm_[8].running_var, backbone.layer2.1.bn2.batch_norm_[8].num_batches_tracked, backbone.layer2.1.conv3.weights, 
backbone.layer2.1.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.1.bn3.indices_8, backbone.layer2.1.bn3.batch_norm_[8].weight, backbone.layer2.1.bn3.batch_norm_[8].bias, backbone.layer2.1.bn3.batch_norm_[8].running_mean, backbone.layer2.1.bn3.batch_norm_[8].running_var, backbone.layer2.1.bn3.batch_norm_[8].num_batches_tracked, backbone.layer2.2.conv1.weights, backbone.layer2.2.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.2.bn1.indices_8, backbone.layer2.2.bn1.batch_norm_[8].weight, backbone.layer2.2.bn1.batch_norm_[8].bias, backbone.layer2.2.bn1.batch_norm_[8].running_mean, backbone.layer2.2.bn1.batch_norm_[8].running_var, backbone.layer2.2.bn1.batch_norm_[8].num_batches_tracked, backbone.layer2.2.conv2.weights, backbone.layer2.2.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.2.bn2.indices_8, backbone.layer2.2.bn2.batch_norm_[8].weight, backbone.layer2.2.bn2.batch_norm_[8].bias, backbone.layer2.2.bn2.batch_norm_[8].running_mean, backbone.layer2.2.bn2.batch_norm_[8].running_var, backbone.layer2.2.bn2.batch_norm_[8].num_batches_tracked, backbone.layer2.2.conv3.weights, backbone.layer2.2.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.2.bn3.indices_8, backbone.layer2.2.bn3.batch_norm_[8].weight, backbone.layer2.2.bn3.batch_norm_[8].bias, backbone.layer2.2.bn3.batch_norm_[8].running_mean, backbone.layer2.2.bn3.batch_norm_[8].running_var, backbone.layer2.2.bn3.batch_norm_[8].num_batches_tracked, backbone.layer2.3.conv1.weights, backbone.layer2.3.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.3.bn1.indices_8, backbone.layer2.3.bn1.batch_norm_[8].weight, backbone.layer2.3.bn1.batch_norm_[8].bias, backbone.layer2.3.bn1.batch_norm_[8].running_mean, backbone.layer2.3.bn1.batch_norm_[8].running_var, backbone.layer2.3.bn1.batch_norm_[8].num_batches_tracked, backbone.layer2.3.conv2.weights, backbone.layer2.3.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.3.bn2.indices_8, backbone.layer2.3.bn2.batch_norm_[8].weight, backbone.layer2.3.bn2.batch_norm_[8].bias, backbone.layer2.3.bn2.batch_norm_[8].running_mean, backbone.layer2.3.bn2.batch_norm_[8].running_var, backbone.layer2.3.bn2.batch_norm_[8].num_batches_tracked, backbone.layer2.3.conv3.weights, backbone.layer2.3.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer2.3.bn3.indices_8, backbone.layer2.3.bn3.batch_norm_[8].weight, backbone.layer2.3.bn3.batch_norm_[8].bias, backbone.layer2.3.bn3.batch_norm_[8].running_mean, backbone.layer2.3.bn3.batch_norm_[8].running_var, backbone.layer2.3.bn3.batch_norm_[8].num_batches_tracked, backbone.layer3.0.conv1.weights, backbone.layer3.0.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.0.bn1.indices_8, backbone.layer3.0.bn1.batch_norm_[8].weight, backbone.layer3.0.bn1.batch_norm_[8].bias, backbone.layer3.0.bn1.batch_norm_[8].running_mean, backbone.layer3.0.bn1.batch_norm_[8].running_var, backbone.layer3.0.bn1.batch_norm_[8].num_batches_tracked, backbone.layer3.0.conv2.weights, backbone.layer3.0.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.0.bn2.indices_8, backbone.layer3.0.bn2.batch_norm_[8].weight, backbone.layer3.0.bn2.batch_norm_[8].bias, backbone.layer3.0.bn2.batch_norm_[8].running_mean, backbone.layer3.0.bn2.batch_norm_[8].running_var, 
backbone.layer3.0.bn2.batch_norm_[8].num_batches_tracked, backbone.layer3.0.conv3.weights, backbone.layer3.0.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.0.bn3.indices_8, backbone.layer3.0.bn3.batch_norm_[8].weight, backbone.layer3.0.bn3.batch_norm_[8].bias, backbone.layer3.0.bn3.batch_norm_[8].running_mean, backbone.layer3.0.bn3.batch_norm_[8].running_var, backbone.layer3.0.bn3.batch_norm_[8].num_batches_tracked, backbone.layer3.0.downsample.0.weights, backbone.layer3.0.downsample.0.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.0.downsample.1.indices_8, backbone.layer3.0.downsample.1.batch_norm_[8].weight, backbone.layer3.0.downsample.1.batch_norm_[8].bias, backbone.layer3.0.downsample.1.batch_norm_[8].running_mean, backbone.layer3.0.downsample.1.batch_norm_[8].running_var, backbone.layer3.0.downsample.1.batch_norm_[8].num_batches_tracked, backbone.layer3.1.conv1.weights, backbone.layer3.1.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.1.bn1.indices_8, backbone.layer3.1.bn1.batch_norm_[8].weight, backbone.layer3.1.bn1.batch_norm_[8].bias, backbone.layer3.1.bn1.batch_norm_[8].running_mean, backbone.layer3.1.bn1.batch_norm_[8].running_var, backbone.layer3.1.bn1.batch_norm_[8].num_batches_tracked, backbone.layer3.1.conv2.weights, backbone.layer3.1.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.1.bn2.indices_8, backbone.layer3.1.bn2.batch_norm_[8].weight, backbone.layer3.1.bn2.batch_norm_[8].bias, backbone.layer3.1.bn2.batch_norm_[8].running_mean, backbone.layer3.1.bn2.batch_norm_[8].running_var, backbone.layer3.1.bn2.batch_norm_[8].num_batches_tracked, backbone.layer3.1.conv3.weights, backbone.layer3.1.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.1.bn3.indices_8, backbone.layer3.1.bn3.batch_norm_[8].weight, backbone.layer3.1.bn3.batch_norm_[8].bias, backbone.layer3.1.bn3.batch_norm_[8].running_mean, backbone.layer3.1.bn3.batch_norm_[8].running_var, backbone.layer3.1.bn3.batch_norm_[8].num_batches_tracked, backbone.layer3.2.conv1.weights, backbone.layer3.2.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.2.bn1.indices_8, backbone.layer3.2.bn1.batch_norm_[8].weight, backbone.layer3.2.bn1.batch_norm_[8].bias, backbone.layer3.2.bn1.batch_norm_[8].running_mean, backbone.layer3.2.bn1.batch_norm_[8].running_var, backbone.layer3.2.bn1.batch_norm_[8].num_batches_tracked, backbone.layer3.2.conv2.weights, backbone.layer3.2.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.2.bn2.indices_8, backbone.layer3.2.bn2.batch_norm_[8].weight, backbone.layer3.2.bn2.batch_norm_[8].bias, backbone.layer3.2.bn2.batch_norm_[8].running_mean, backbone.layer3.2.bn2.batch_norm_[8].running_var, backbone.layer3.2.bn2.batch_norm_[8].num_batches_tracked, backbone.layer3.2.conv3.weights, backbone.layer3.2.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.2.bn3.indices_8, backbone.layer3.2.bn3.batch_norm_[8].weight, backbone.layer3.2.bn3.batch_norm_[8].bias, backbone.layer3.2.bn3.batch_norm_[8].running_mean, backbone.layer3.2.bn3.batch_norm_[8].running_var, backbone.layer3.2.bn3.batch_norm_[8].num_batches_tracked, backbone.layer3.3.conv1.weights, backbone.layer3.3.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.3.bn1.indices_8, 
backbone.layer3.3.bn1.batch_norm_[8].weight, backbone.layer3.3.bn1.batch_norm_[8].bias, backbone.layer3.3.bn1.batch_norm_[8].running_mean, backbone.layer3.3.bn1.batch_norm_[8].running_var, backbone.layer3.3.bn1.batch_norm_[8].num_batches_tracked, backbone.layer3.3.conv2.weights, backbone.layer3.3.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.3.bn2.indices_8, backbone.layer3.3.bn2.batch_norm_[8].weight, backbone.layer3.3.bn2.batch_norm_[8].bias, backbone.layer3.3.bn2.batch_norm_[8].running_mean, backbone.layer3.3.bn2.batch_norm_[8].running_var, backbone.layer3.3.bn2.batch_norm_[8].num_batches_tracked, backbone.layer3.3.conv3.weights, backbone.layer3.3.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.3.bn3.indices_8, backbone.layer3.3.bn3.batch_norm_[8].weight, backbone.layer3.3.bn3.batch_norm_[8].bias, backbone.layer3.3.bn3.batch_norm_[8].running_mean, backbone.layer3.3.bn3.batch_norm_[8].running_var, backbone.layer3.3.bn3.batch_norm_[8].num_batches_tracked, backbone.layer3.4.conv1.weights, backbone.layer3.4.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.4.bn1.indices_8, backbone.layer3.4.bn1.batch_norm_[8].weight, backbone.layer3.4.bn1.batch_norm_[8].bias, backbone.layer3.4.bn1.batch_norm_[8].running_mean, backbone.layer3.4.bn1.batch_norm_[8].running_var, backbone.layer3.4.bn1.batch_norm_[8].num_batches_tracked, backbone.layer3.4.conv2.weights, backbone.layer3.4.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.4.bn2.indices_8, backbone.layer3.4.bn2.batch_norm_[8].weight, backbone.layer3.4.bn2.batch_norm_[8].bias, backbone.layer3.4.bn2.batch_norm_[8].running_mean, backbone.layer3.4.bn2.batch_norm_[8].running_var, backbone.layer3.4.bn2.batch_norm_[8].num_batches_tracked, backbone.layer3.4.conv3.weights, backbone.layer3.4.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.4.bn3.indices_8, backbone.layer3.4.bn3.batch_norm_[8].weight, backbone.layer3.4.bn3.batch_norm_[8].bias, backbone.layer3.4.bn3.batch_norm_[8].running_mean, backbone.layer3.4.bn3.batch_norm_[8].running_var, backbone.layer3.4.bn3.batch_norm_[8].num_batches_tracked, backbone.layer3.5.conv1.weights, backbone.layer3.5.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.5.bn1.indices_8, backbone.layer3.5.bn1.batch_norm_[8].weight, backbone.layer3.5.bn1.batch_norm_[8].bias, backbone.layer3.5.bn1.batch_norm_[8].running_mean, backbone.layer3.5.bn1.batch_norm_[8].running_var, backbone.layer3.5.bn1.batch_norm_[8].num_batches_tracked, backbone.layer3.5.conv2.weights, backbone.layer3.5.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.5.bn2.indices_8, backbone.layer3.5.bn2.batch_norm_[8].weight, backbone.layer3.5.bn2.batch_norm_[8].bias, backbone.layer3.5.bn2.batch_norm_[8].running_mean, backbone.layer3.5.bn2.batch_norm_[8].running_var, backbone.layer3.5.bn2.batch_norm_[8].num_batches_tracked, backbone.layer3.5.conv3.weights, backbone.layer3.5.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer3.5.bn3.indices_8, backbone.layer3.5.bn3.batch_norm_[8].weight, backbone.layer3.5.bn3.batch_norm_[8].bias, backbone.layer3.5.bn3.batch_norm_[8].running_mean, backbone.layer3.5.bn3.batch_norm_[8].running_var, backbone.layer3.5.bn3.batch_norm_[8].num_batches_tracked, backbone.layer4.0.conv1.weights, 
backbone.layer4.0.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.0.bn1.indices_8, backbone.layer4.0.bn1.batch_norm_[8].weight, backbone.layer4.0.bn1.batch_norm_[8].bias, backbone.layer4.0.bn1.batch_norm_[8].running_mean, backbone.layer4.0.bn1.batch_norm_[8].running_var, backbone.layer4.0.bn1.batch_norm_[8].num_batches_tracked, backbone.layer4.0.conv2.weights, backbone.layer4.0.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.0.bn2.indices_8, backbone.layer4.0.bn2.batch_norm_[8].weight, backbone.layer4.0.bn2.batch_norm_[8].bias, backbone.layer4.0.bn2.batch_norm_[8].running_mean, backbone.layer4.0.bn2.batch_norm_[8].running_var, backbone.layer4.0.bn2.batch_norm_[8].num_batches_tracked, backbone.layer4.0.conv3.weights, backbone.layer4.0.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.0.bn3.indices_8, backbone.layer4.0.bn3.batch_norm_[8].weight, backbone.layer4.0.bn3.batch_norm_[8].bias, backbone.layer4.0.bn3.batch_norm_[8].running_mean, backbone.layer4.0.bn3.batch_norm_[8].running_var, backbone.layer4.0.bn3.batch_norm_[8].num_batches_tracked, backbone.layer4.0.downsample.0.weights, backbone.layer4.0.downsample.0.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.0.downsample.1.indices_8, backbone.layer4.0.downsample.1.batch_norm_[8].weight, backbone.layer4.0.downsample.1.batch_norm_[8].bias, backbone.layer4.0.downsample.1.batch_norm_[8].running_mean, backbone.layer4.0.downsample.1.batch_norm_[8].running_var, backbone.layer4.0.downsample.1.batch_norm_[8].num_batches_tracked, backbone.layer4.1.conv1.weights, backbone.layer4.1.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.1.bn1.indices_8, backbone.layer4.1.bn1.batch_norm_[8].weight, backbone.layer4.1.bn1.batch_norm_[8].bias, backbone.layer4.1.bn1.batch_norm_[8].running_mean, backbone.layer4.1.bn1.batch_norm_[8].running_var, backbone.layer4.1.bn1.batch_norm_[8].num_batches_tracked, backbone.layer4.1.conv2.weights, backbone.layer4.1.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.1.bn2.indices_8, backbone.layer4.1.bn2.batch_norm_[8].weight, backbone.layer4.1.bn2.batch_norm_[8].bias, backbone.layer4.1.bn2.batch_norm_[8].running_mean, backbone.layer4.1.bn2.batch_norm_[8].running_var, backbone.layer4.1.bn2.batch_norm_[8].num_batches_tracked, backbone.layer4.1.conv3.weights, backbone.layer4.1.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.1.bn3.indices_8, backbone.layer4.1.bn3.batch_norm_[8].weight, backbone.layer4.1.bn3.batch_norm_[8].bias, backbone.layer4.1.bn3.batch_norm_[8].running_mean, backbone.layer4.1.bn3.batch_norm_[8].running_var, backbone.layer4.1.bn3.batch_norm_[8].num_batches_tracked, backbone.layer4.2.conv1.weights, backbone.layer4.2.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.2.bn1.indices_8, backbone.layer4.2.bn1.batch_norm_[8].weight, backbone.layer4.2.bn1.batch_norm_[8].bias, backbone.layer4.2.bn1.batch_norm_[8].running_mean, backbone.layer4.2.bn1.batch_norm_[8].running_var, backbone.layer4.2.bn1.batch_norm_[8].num_batches_tracked, backbone.layer4.2.conv2.weights, backbone.layer4.2.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.2.bn2.indices_8, backbone.layer4.2.bn2.batch_norm_[8].weight, backbone.layer4.2.bn2.batch_norm_[8].bias, 
backbone.layer4.2.bn2.batch_norm_[8].running_mean, backbone.layer4.2.bn2.batch_norm_[8].running_var, backbone.layer4.2.bn2.batch_norm_[8].num_batches_tracked, backbone.layer4.2.conv3.weights, backbone.layer4.2.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, backbone.layer4.2.bn3.indices_8, backbone.layer4.2.bn3.batch_norm_[8].weight, backbone.layer4.2.bn3.batch_norm_[8].bias, backbone.layer4.2.bn3.batch_norm_[8].running_mean, backbone.layer4.2.bn3.batch_norm_[8].running_var, backbone.layer4.2.bn3.batch_norm_[8].num_batches_tracked, head.fc.weight, head.fc.bias

    missing keys in source state_dict: layer3.2.conv1.filter, layer1.0.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.2.bn3.batch_norm_[8].running_mean, layer1.1.bn2.indices_8, layer1.0.conv1.filter, layer3.1.conv3.weights, layer1.0.bn1.batch_norm_[8].bias, layer4.0.bn3.batch_norm_[8].running_var, layer4.0.downsample.1.indices_8, layer3.4.bn2.batch_norm_[8].bias, layer1.0.downsample.0.weights, layer4.1.bn2.indices_8, layer3.4.bn3.batch_norm_[8].running_var, layer4.0.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.3.bn2.batch_norm_[8].running_mean, layer3.3.bn1.indices_8, layer2.1.bn2.batch_norm_[8].running_var, layer3.3.conv3.weights, layer3.4.bn3.batch_norm_[8].bias, layer1.1.conv1.weights, conv1.basisexpansion.block_expansion('irrep_0', 'regular').sampled_basis, layer2.3.conv2.weights, layer4.0.downsample.1.batch_norm_[8].running_var, layer1.0.bn3.batch_norm_[8].bias, layer3.2.conv2.filter, layer2.1.bn2.batch_norm_[8].bias, layer4.2.bn2.batch_norm_[8].running_mean, layer3.1.bn1.batch_norm_[8].bias, layer3.4.bn2.indices_8, layer3.4.conv2.weights, layer1.1.conv3.filter, layer2.3.bn1.batch_norm_[8].weight, layer1.0.bn2.batch_norm_[8].weight, layer3.2.bn2.batch_norm_[8].weight, layer3.5.bn1.indices_8, layer4.2.bn1.batch_norm_[8].bias, layer4.2.conv2.filter, layer4.1.bn1.batch_norm_[8].bias, layer3.4.bn1.indices_8, layer2.3.conv3.weights, layer2.1.bn3.batch_norm_[8].bias, layer1.0.bn3.batch_norm_[8].running_var, layer2.0.downsample.1.batch_norm_[8].bias, layer2.3.bn2.indices_8, layer4.1.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.0.downsample.0.weights, layer2.3.bn2.batch_norm_[8].running_var, layer2.2.conv3.weights, layer3.0.downsample.1.batch_norm_[8].running_var, layer2.0.bn2.batch_norm_[8].bias, layer3.0.downsample.1.batch_norm_[8].bias, layer3.3.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.2.bn2.batch_norm_[8].running_var, layer3.3.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer4.1.conv3.weights, layer2.1.conv3.weights, layer2.2.bn2.batch_norm_[8].weight, layer3.0.bn1.indices_8, layer1.0.bn2.indices_8, layer1.1.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer4.1.bn1.batch_norm_[8].running_var, layer3.5.conv2.weights, layer3.0.bn3.batch_norm_[8].weight, layer4.0.bn2.batch_norm_[8].weight, layer2.2.conv2.weights, layer1.1.bn1.indices_8, layer3.4.bn2.batch_norm_[8].weight, layer4.2.conv1.filter, layer2.2.bn3.batch_norm_[8].weight, layer1.2.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.0.bn1.batch_norm_[8].weight, layer4.2.bn1.batch_norm_[8].running_var, layer3.4.bn1.batch_norm_[8].bias, layer2.3.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.3.conv1.weights, layer3.0.conv1.weights, layer2.0.downsample.0.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.1.bn1.batch_norm_[8].weight, layer3.5.conv3.filter, layer4.2.bn3.batch_norm_[8].running_var, bn1.batch_norm_[8].bias, layer2.1.bn2.batch_norm_[8].running_mean, layer3.5.conv2.filter, layer1.0.bn2.batch_norm_[8].running_var, bn1.batch_norm_[8].weight, layer3.0.bn2.batch_norm_[8].bias, layer1.1.conv1.filter, layer3.2.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.1.bn2.batch_norm_[8].bias, bn1.batch_norm_[8].running_mean, layer1.0.bn1.batch_norm_[8].weight, layer1.1.conv2.basisexpansion.block_expansion('regular', 
'regular').sampled_basis, layer3.0.conv3.filter, layer2.0.bn2.indices_8, layer1.1.bn1.batch_norm_[8].running_mean, layer3.1.bn2.batch_norm_[8].weight, layer3.4.bn2.batch_norm_[8].running_mean, layer1.0.downsample.1.indices_8, layer1.2.conv2.weights, layer1.2.conv2.filter, layer3.5.bn2.batch_norm_[8].running_mean, layer3.5.bn3.batch_norm_[8].weight, layer3.2.bn2.batch_norm_[8].bias, layer1.0.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.2.bn2.indices_8, layer3.4.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.2.bn2.batch_norm_[8].running_mean, layer1.2.conv3.weights, layer3.1.conv3.filter, layer4.2.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.0.conv2.filter, layer4.1.bn1.indices_8, layer1.1.bn2.batch_norm_[8].bias, layer2.0.bn1.batch_norm_[8].running_var, layer3.0.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, bn1.batch_norm_[8].running_var, layer1.1.bn3.batch_norm_[8].running_var, layer3.4.conv1.weights, layer3.4.conv1.filter, layer2.3.conv1.filter, layer3.3.bn2.batch_norm_[8].bias, layer3.3.conv2.filter, layer1.0.conv2.weights, layer3.2.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.3.conv3.filter, layer3.5.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.5.bn3.batch_norm_[8].running_var, layer1.2.bn2.indices_8, layer4.0.bn3.batch_norm_[8].weight, layer2.0.bn3.batch_norm_[8].bias, layer2.3.bn2.batch_norm_[8].bias, layer3.0.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.4.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.0.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.2.bn3.batch_norm_[8].running_var, layer3.2.bn1.batch_norm_[8].running_mean, layer2.0.downsample.1.indices_8, layer3.3.bn3.indices_8, layer1.1.bn3.batch_norm_[8].running_mean, layer3.2.bn1.batch_norm_[8].bias, layer1.0.bn2.batch_norm_[8].running_mean, layer3.3.conv3.filter, layer3.0.bn2.indices_8, layer4.0.conv3.weights, layer4.1.bn2.batch_norm_[8].running_mean, layer2.1.bn2.batch_norm_[8].weight, layer4.0.downsample.1.batch_norm_[8].running_mean, layer1.0.downsample.1.batch_norm_[8].running_mean, layer3.4.bn3.batch_norm_[8].weight, layer4.1.conv3.filter, layer3.0.downsample.1.batch_norm_[8].weight, layer2.2.bn3.batch_norm_[8].running_var, layer1.2.conv3.filter, layer3.1.bn3.batch_norm_[8].bias, layer3.3.bn1.batch_norm_[8].running_var, layer1.0.downsample.0.filter, layer2.0.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.0.bn1.batch_norm_[8].bias, layer3.0.downsample.1.indices_8, layer3.2.bn1.indices_8, layer3.3.bn3.batch_norm_[8].running_var, layer2.0.bn2.batch_norm_[8].running_var, layer1.1.bn1.batch_norm_[8].weight, layer1.1.conv3.weights, layer2.1.conv1.weights, layer4.1.bn3.batch_norm_[8].weight, layer1.0.bn1.batch_norm_[8].running_var, layer3.1.bn2.batch_norm_[8].running_mean, layer4.0.conv1.weights, layer4.1.bn1.batch_norm_[8].weight, layer3.4.bn3.batch_norm_[8].running_mean, layer2.2.bn2.batch_norm_[8].bias, layer3.0.conv3.weights, layer2.1.conv2.filter, layer2.1.bn1.batch_norm_[8].running_var, layer2.1.conv3.filter, layer1.2.bn2.batch_norm_[8].running_mean, layer2.0.bn1.indices_8, layer4.0.bn2.batch_norm_[8].running_var, layer2.0.downsample.1.batch_norm_[8].running_mean, layer1.1.bn2.batch_norm_[8].running_mean, layer3.5.bn1.batch_norm_[8].running_var, layer4.0.bn1.batch_norm_[8].bias, 
layer3.2.conv1.weights, layer4.1.conv2.weights, layer2.0.conv1.filter, layer2.1.bn1.batch_norm_[8].running_mean, layer3.0.downsample.1.batch_norm_[8].running_mean, layer3.1.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.0.conv1.weights, layer2.1.bn3.batch_norm_[8].running_mean, layer2.0.bn3.batch_norm_[8].weight, layer3.0.bn1.batch_norm_[8].bias, layer1.0.bn1.batch_norm_[8].running_mean, layer3.2.bn2.batch_norm_[8].running_mean, layer3.4.bn1.batch_norm_[8].running_var, layer3.2.bn3.batch_norm_[8].bias, layer2.3.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.0.bn3.batch_norm_[8].running_var, layer4.1.conv2.filter, layer3.0.downsample.0.weights, layer3.5.bn1.batch_norm_[8].bias, layer3.0.conv1.filter, layer4.1.conv1.weights, layer1.1.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.2.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.3.bn3.batch_norm_[8].running_var, layer4.2.conv3.weights, layer2.3.bn1.batch_norm_[8].running_mean, layer2.1.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.1.bn3.batch_norm_[8].weight, layer4.0.conv2.weights, layer2.1.bn1.indices_8, layer2.2.bn1.batch_norm_[8].bias, layer2.0.bn2.batch_norm_[8].weight, layer1.2.conv1.weights, layer4.1.bn3.batch_norm_[8].bias, layer3.3.bn3.batch_norm_[8].bias, layer4.0.bn2.indices_8, layer4.0.downsample.1.batch_norm_[8].weight, layer2.3.bn1.batch_norm_[8].bias, layer2.3.bn2.batch_norm_[8].weight, layer3.2.bn1.batch_norm_[8].running_var, layer3.0.bn3.indices_8, layer4.0.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.0.bn1.batch_norm_[8].running_var, layer1.0.downsample.0.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.1.bn2.batch_norm_[8].weight, layer4.0.downsample.0.weights, layer1.1.bn2.batch_norm_[8].running_var, layer3.5.bn2.batch_norm_[8].bias, layer1.1.conv2.weights, layer3.4.conv3.filter, layer4.0.bn1.batch_norm_[8].running_mean, layer4.1.conv1.filter, layer3.2.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer4.1.bn3.batch_norm_[8].running_var, layer3.1.conv1.filter, layer3.5.bn2.batch_norm_[8].running_var, layer1.1.bn3.batch_norm_[8].bias, layer4.1.bn2.batch_norm_[8].bias, layer1.2.bn1.batch_norm_[8].bias, layer3.2.bn1.batch_norm_[8].weight, layer3.4.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.2.bn1.batch_norm_[8].running_var, layer3.3.conv1.filter, layer3.5.conv1.weights, layer3.3.bn2.batch_norm_[8].weight, layer3.0.downsample.0.filter, layer3.5.bn3.batch_norm_[8].running_mean, layer4.0.bn3.batch_norm_[8].bias, layer4.0.conv1.filter, layer4.2.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.0.bn3.batch_norm_[8].running_mean, layer3.2.conv3.filter, layer3.4.bn2.batch_norm_[8].running_var, layer1.2.bn3.batch_norm_[8].bias, layer1.1.bn1.batch_norm_[8].bias, layer3.2.bn3.batch_norm_[8].weight, layer3.2.conv2.weights, layer3.1.bn1.indices_8, layer2.0.bn3.batch_norm_[8].running_var, layer1.2.bn2.batch_norm_[8].weight, layer4.0.bn2.batch_norm_[8].bias, layer2.0.bn3.batch_norm_[8].running_mean, layer4.2.bn2.batch_norm_[8].bias, layer3.5.bn3.indices_8, layer4.2.bn1.indices_8, layer2.1.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.4.bn1.batch_norm_[8].weight, layer4.2.conv1.weights, layer2.1.bn1.batch_norm_[8].bias, layer1.1.conv2.filter, layer2.1.bn3.batch_norm_[8].weight, 
layer4.2.bn3.indices_8, layer4.1.bn3.batch_norm_[8].running_mean, layer3.0.conv2.filter, conv1.weights, layer4.2.bn3.batch_norm_[8].bias, layer4.2.conv2.weights, layer4.1.bn2.batch_norm_[8].weight, layer2.2.bn1.batch_norm_[8].running_var, layer4.2.bn1.batch_norm_[8].weight, layer4.1.bn3.indices_8, layer4.0.conv2.filter, layer3.1.bn1.batch_norm_[8].running_var, layer1.2.bn2.batch_norm_[8].running_var, layer3.2.bn3.indices_8, layer3.5.bn1.batch_norm_[8].weight, layer3.1.conv2.filter, layer4.1.bn2.batch_norm_[8].running_var, layer3.2.bn2.batch_norm_[8].running_var, layer3.0.downsample.0.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.3.bn2.batch_norm_[8].running_mean, layer2.2.bn1.indices_8, layer1.2.bn3.batch_norm_[8].running_mean, layer4.2.bn2.indices_8, layer4.0.conv3.filter, layer3.5.bn2.batch_norm_[8].weight, layer3.3.bn1.batch_norm_[8].bias, layer1.2.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.2.bn1.indices_8, layer3.5.conv1.filter, layer3.4.conv3.weights, layer3.0.bn2.batch_norm_[8].running_mean, layer1.2.bn1.batch_norm_[8].running_mean, layer3.0.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.3.conv1.weights, layer2.2.conv3.filter, layer1.2.bn3.batch_norm_[8].weight, layer2.0.conv3.filter, layer3.5.conv3.weights, layer3.0.bn2.batch_norm_[8].running_var, layer4.0.downsample.1.batch_norm_[8].bias, layer4.2.bn2.batch_norm_[8].running_var, layer1.0.downsample.1.batch_norm_[8].bias, layer1.2.bn2.batch_norm_[8].bias, layer2.1.bn2.indices_8, layer1.2.bn3.indices_8, layer2.2.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.0.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer4.2.bn1.batch_norm_[8].running_mean, layer3.1.conv1.weights, layer3.1.bn1.batch_norm_[8].weight, layer3.3.bn1.batch_norm_[8].running_mean, layer2.0.bn2.batch_norm_[8].running_mean, layer1.0.bn3.batch_norm_[8].weight, layer4.1.bn1.batch_norm_[8].running_mean, layer3.1.bn2.batch_norm_[8].running_var, layer2.2.conv1.weights, layer2.0.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.1.bn3.batch_norm_[8].running_mean, layer2.2.bn1.batch_norm_[8].weight, layer3.1.bn3.batch_norm_[8].weight, layer1.0.downsample.1.batch_norm_[8].weight, layer3.3.bn1.batch_norm_[8].weight, layer2.2.conv2.filter, layer3.0.bn3.batch_norm_[8].running_mean, layer4.0.bn2.batch_norm_[8].running_mean, layer3.3.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.0.conv3.filter, layer2.2.bn3.batch_norm_[8].running_mean, layer1.0.conv3.weights, layer3.1.bn1.batch_norm_[8].running_mean, layer4.0.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.1.bn3.batch_norm_[8].running_var, layer4.0.bn3.batch_norm_[8].running_mean, layer2.0.downsample.1.batch_norm_[8].running_var, layer3.1.bn2.indices_8, layer3.1.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer4.2.bn3.batch_norm_[8].running_mean, layer3.0.bn3.batch_norm_[8].bias, layer3.4.bn3.indices_8, layer3.2.conv3.weights, layer1.2.bn1.batch_norm_[8].weight, layer4.0.downsample.0.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer4.0.bn3.indices_8, layer1.0.bn1.indices_8, layer3.4.conv2.filter, layer3.3.bn2.indices_8, layer1.0.downsample.1.batch_norm_[8].running_var, layer3.0.bn2.batch_norm_[8].weight, layer3.5.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, 
layer3.5.bn1.batch_norm_[8].running_mean, layer4.1.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.1.bn3.indices_8, layer4.0.downsample.0.filter, layer1.1.bn1.batch_norm_[8].running_var, layer2.3.bn1.indices_8, layer4.2.conv3.filter, layer2.0.bn1.batch_norm_[8].running_mean, layer4.1.conv2.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.0.downsample.1.batch_norm_[8].weight, layer2.1.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer3.5.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.2.bn3.indices_8, layer1.2.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer2.3.bn3.indices_8, bn1.indices_8, layer2.3.bn3.batch_norm_[8].bias, layer2.3.bn3.batch_norm_[8].running_mean, layer3.0.bn1.batch_norm_[8].running_mean, layer4.2.bn2.batch_norm_[8].weight, layer1.0.conv1.weights, conv1.filter, layer3.1.bn3.batch_norm_[8].running_var, layer3.4.bn1.batch_norm_[8].running_mean, layer2.0.bn3.indices_8, layer2.1.conv1.filter, layer2.3.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer4.0.bn1.batch_norm_[8].weight, layer1.0.bn3.indices_8, layer2.0.downsample.0.filter, layer2.1.conv2.weights, layer3.0.bn1.batch_norm_[8].weight, layer3.3.bn2.batch_norm_[8].running_var, layer3.1.conv2.weights, layer4.2.bn3.batch_norm_[8].weight, layer2.0.conv2.weights, layer2.2.bn1.batch_norm_[8].running_mean, layer3.0.conv2.weights, layer3.2.bn3.batch_norm_[8].running_var, layer2.1.bn3.indices_8, layer2.2.bn3.batch_norm_[8].bias, layer2.3.conv2.filter, layer4.0.bn1.batch_norm_[8].running_var, layer2.3.bn1.batch_norm_[8].running_var, layer2.2.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer1.1.bn3.indices_8, layer3.3.bn3.batch_norm_[8].running_mean, layer2.2.bn2.indices_8, layer1.2.conv1.filter, layer1.0.bn2.batch_norm_[8].bias, layer2.0.conv3.weights, layer3.3.bn3.batch_norm_[8].weight, layer3.3.conv2.weights, layer4.0.bn1.indices_8, layer1.0.conv2.filter, layer2.3.bn3.batch_norm_[8].weight, layer3.5.bn2.indices_8, layer2.2.conv1.filter, layer3.5.bn3.batch_norm_[8].bias, layer3.1.conv3.basisexpansion.block_expansion('regular', 'regular').sampled_basis, layer4.2.conv1.basisexpansion.block_expansion('regular', 'regular').sampled_basis

    Done (t=1.70s) creating index... loading annotations into memory... index created! Done (t=1.77s) creating index... index created!
    2021-05-19 21:48:10,559 - INFO - Start running, host: neo@neo, work_dir: /home/neo/desktop/ReDet/work_dirs/ReDet_re50_refpn_1x_dota1
    2021-05-19 21:48:10,559 - INFO - workflow: [('train', 1)], max: 12 epochs
    Traceback (most recent call last):
      File "./tools/train.py", line 95, in <module>
        main()
      File "./tools/train.py", line 91, in main
        logger=logger)
      File "/home/neo/desktop/ReDet/mmdet/apis/train.py", line 59, in train_detector
        _dist_train(model, dataset, cfg, validate=validate)
      File "/home/neo/desktop/ReDet/mmdet/apis/train.py", line 171, in _dist_train
        runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/mmcv/runner/runner.py", line 358, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/mmcv/runner/runner.py", line 255, in train
        self.model.train()
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1064, in train
        module.train(mode)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1064, in train
        module.train(mode)
      File "/home/neo/desktop/ReDet/mmdet/models/backbones/re_resnet.py", line 727, in train
        self._freeze_stages()
      File "/home/neo/desktop/ReDet/mmdet/models/backbones/re_resnet.py", line 693, in _freeze_stages
        m.eval()
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1080, in eval
        return self.train(False)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1064, in train
        module.train(mode)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1064, in train
        module.train(mode)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/e2cnn-0.1.7-py3.7.egg/e2cnn/nn/modules/r2_conv/r2convolution.py", line 386, in train
        _filter, _bias = self.expand_parameters()
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/e2cnn-0.1.7-py3.7.egg/e2cnn/nn/modules/r2_conv/r2convolution.py", line 303, in expand_parameters
        _filter = self.basisexpansion(self.weights)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/e2cnn-0.1.7-py3.7.egg/e2cnn/nn/modules/r2_conv/basisexpansion_blocks.py", line 334, in forward
        _filter = self._expand_block(weights, io_pair).reshape(out_indices[2], in_indices[2], self.S)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/e2cnn-0.1.7-py3.7.egg/e2cnn/nn/modules/r2_conv/basisexpansion_blocks.py", line 301, in _expand_block
        _filter = block_expansion(coefficients)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/e2cnn-0.1.7-py3.7.egg/e2cnn/nn/modules/r2_conv/basisexpansion_singleblock.py", line 99, in forward
        return torch.einsum('boi...,kb->koi...', self.sampled_basis, weights) #.transpose(1, 2).contiguous()
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/functional.py", line 201, in einsum
        return torch._C._VariableFunctions.einsum(equation, operands)
    RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1573049310284/work/aten/src/THC/THCBlas.cu:331

    [an identical traceback from the second worker is omitted]

    Traceback (most recent call last):
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/distributed/launch.py", line 253, in <module>
        main()
      File "/home/neo/anaconda3/envs/redet/lib/python3.7/site-packages/torch/distributed/launch.py", line 249, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/neo/anaconda3/envs/redet/bin/python', '-u', './tools/train.py', '--local_rank=1', 'configs/ReDet/ReDet_re50_refpn_1x_dota1.py', '--launcher', 'pytorch']' returned non-zero exit status 1.

    Hi, csuhan! I ran this algorithm with 2 x RTX 3080; the environment is as follows:

    _libgcc_mutex 0.1 conda_forge conda-forge
    _openmp_mutex 4.5 1_gnu conda-forge
    addict 2.4.0 pypi_0 pypi
    blas 1.0 mkl main
    bzip2 1.0.8 h7f98852_4 conda-forge
    ca-certificates 2020.12.5 ha878542_0 conda-forge
    certifi 2020.12.5 py37h89c1867_1 conda-forge
    chardet 4.0.0 pypi_0 pypi
    cudatoolkit 11.1.1 h6406543_8 conda-forge
    cycler 0.10.0 pypi_0 pypi
    cython 0.29.23 py37hcd2ae1e_0 conda-forge
    e2cnn 0.1.7 pypi_0 pypi
    ffmpeg 4.3 hf484d3e_0 pytorch
    freetype 2.10.4 h0708190_1 conda-forge
    gmp 6.2.1 h58526e2_0 conda-forge
    gnutls 3.6.13 h85f3911_1 conda-forge
    idna 2.10 pypi_0 pypi
    intel-openmp 2021.2.0 h06a4308_610 main
    jpeg 9b h024ee3a_2 main
    kiwisolver 1.3.1 pypi_0 pypi
    lame 3.100 h7f98852_1001 conda-forge
    lcms2 2.12 h3be6417_0 main
    ld_impl_linux-64 2.35.1 hea4e1c9_2 conda-forge
    libffi 3.3 h58526e2_2 conda-forge
    libgcc-ng 9.3.0 h2828fa1_19 conda-forge
    libgomp 9.3.0 h2828fa1_19 conda-forge
    libiconv 1.16 h516909a_0 conda-forge
    libpng 1.6.37 h21135ba_2 conda-forge
    libstdcxx-ng 9.3.0 h6de172a_19 conda-forge
    libtiff 4.1.0 h2733197_1 main
    libuv 1.41.0 h7f98852_0 conda-forge
    lz4-c 1.9.3 h9c3ff4c_0 conda-forge
    matplotlib 3.4.2 pypi_0 pypi
    mkl 2021.2.0 h06a4308_296 main
    mkl-service 2.3.0 py37h27cfd23_1 main
    mkl_fft 1.3.0 py37h42c9631_2 main
    mkl_random 1.2.1 py37ha9443f7_2 main
    mmcv 0.2.13 pypi_0 pypi
    mmdet 0.6.0+unknown dev_0
    ncurses 6.2 h58526e2_4 conda-forge
    nettle 3.6 he412f7d_0 conda-forge
    ninja 1.10.2 h4bd325d_0 conda-forge
    numpy 1.20.1 py37h93e21f0_0 main
    numpy-base 1.20.1 py37h7d8b39e_0 main
    olefile 0.46 pyh9f0ad1d_1 conda-forge
    opencv-python 4.5.2.52 pypi_0 pypi
    openh264 2.1.1 h780b84a_0 conda-forge
    openssl 1.1.1k h7f98852_0 conda-forge
    pillow 6.2.2 pypi_0 pypi
    pip 21.1.1 pyhd8ed1ab_0 conda-forge
    pycocotools 2.0.2 pypi_0 pypi
    pyparsing 2.4.7 pypi_0 pypi
    python 3.7.10 hffdb5ce_100_cpython conda-forge
    python-dateutil 2.8.1 pypi_0 pypi
    python_abi 3.7 1_cp37m conda-forge
    pytorch 1.8.0 py3.7_cuda11.1_cudnn8.0.5_0
    pyyaml 5.4.1 pypi_0 pypi
    readline 8.1 h46c0cb4_0 conda-forge
    requests 2.25.1 pypi_0 pypi
    scipy 1.6.3 pypi_0 pypi
    setuptools 49.6.0 py37h89c1867_3 conda-forge
    shapely 1.7.1 pypi_0 pypi
    six 1.16.0 pyh6c4a22f_0 conda-forge
    sqlite 3.35.5 h74cdb3f_0 conda-forge
    terminaltables 3.1.0 pypi_0 pypi
    tk 8.6.10 h21135ba_1 conda-forge
    torchvision 0.9.0 py37_cu111
    tqdm 4.60.0 pypi_0 pypi
    typing_extensions 3.7.4.3 py_0 conda-forge
    urllib3 1.26.4 pypi_0 pypi
    wheel 0.36.2 pyhd3deb0d_0 conda-forge
    xz 5.2.5 h516909a_1 conda-forge
    zlib 1.2.11 h516909a_1010 conda-forge
    zstd 1.4.9 ha95c52a_0 conda-forge
    (conda/main channels served via the mirrors.tuna.tsinghua.edu.cn mirror)


    The questions are:

    1. I used the pretrained pth, but got "unexpected key in source state_dict: backbone.conv1.weights, backbone.conv1.basisexpansion.block_expansion('irrep_0', 'regular').sampled_basis, backbone.bn1.indices_8, ". How can I fix this?
    2. When I run "python tools/train.py /home/neo/desktop/ReDet-master/configs/ReDet/ReDet_re50_refpn_1x_dota1.py", everything works correctly. But when I run "CUDA_VISIBLE_DEVICES=1,2 ./tools/dist_train.sh configs/ReDet/ReDet_re50_refpn_1x_dota1.py 2", I get "RuntimeError: cublas runtime error : the GPU program failed".

    Any suggestions would be appreciated.
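
    On question 1, the "unexpected key" warnings suggest the checkpoint keys do not line up with the model being built (detector checkpoints prefix backbone weights with "backbone."). Below is a minimal, hedged sketch of how one might inspect and partially load such a checkpoint with plain PyTorch; the path "checkpoint.pth" and the commented-out "model" are placeholders, not files or objects shipped with the repo.

        import torch

        # Placeholder path: inspect why the checkpoint keys do not match the model.
        ckpt = torch.load('checkpoint.pth', map_location='cpu')
        state_dict = ckpt.get('state_dict', ckpt)  # mmdet checkpoints nest weights under 'state_dict'

        # Detector checkpoints store backbone weights under a 'backbone.' prefix;
        # strip it if the target module expects bare parameter names.
        backbone_sd = {k[len('backbone.'):]: v
                       for k, v in state_dict.items() if k.startswith('backbone.')}

        # strict=False reports, rather than raises on, missing/unexpected keys:
        # missing, unexpected = model.backbone.load_state_dict(backbone_sd, strict=False)
        print(sorted(backbone_sd)[:5])  # eyeball the first few remapped keys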

    opened by volareneo 6
  • the number of iterations of each epoch during training

    the number of iterations of each epoch during training

    I ran it with the command:

        CUDA_VISIBLE_DEVICES=1,2,3,4 ./tools/dist_train.sh configs/ReDet/ReDet_re50_refpn_1x_dota1.py 4

    I used 4 GPUs with a total batch size of 8. I found that the number of iterations in each epoch differs from your log file: mine is 1695, but yours is 1550. Why? I did not make any changes to the DOTA dataset other than running "prepare_dota1.py". Looking forward to your reply.
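
    For what it is worth, in mmdetection-style training the iterations per epoch are ceil(num_images / (num_gpus * imgs_per_gpu)), so differing iteration counts imply differently sized cropped datasets. A back-of-the-envelope check in Python, assuming imgs_per_gpu = 2 (i.e. the stated batch size of 8 across 4 GPUs):

        # Iterations per epoch imply the dataset size: iters ~= num_images / batch_size.
        num_gpus, imgs_per_gpu = 4, 2
        batch_size = num_gpus * imgs_per_gpu   # 8, as stated in the question

        print(1695 * batch_size)  # ~13560 images implied by 1695 iterations
        print(1550 * batch_size)  # ~12400 images implied by 1550 iterations
        # The gap therefore points to a different number of cropped patches
        # (different split/cropping settings), not to a training-code bug.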

    opened by ChujieXu 6
  • AssertionError: annotation file format <class 'list'> not supported

    AssertionError: annotation file format <class 'list'> not supported

    Hi! Could someone please tell me how to solve this error? I am trying to train ReDet on the HRSC2016 dataset. This is the error I am getting:

        ReResNet Orientation: 8 Fix Params: False
        2021-08-04 14:12:24,505 - INFO - Distributed training: False
        2021-08-04 14:13:14,962 - INFO - load model from: work_dirs/ReResNet_pretrain/re_resnet50_c8_batch256-25b16846.pth
        2021-08-04 14:13:14,989 - WARNING - The model and loaded state dict do not match exactly

        unexpected key in source state_dict: head.fc.weight, head.fc.bias

        missing keys in source state_dict: layer3.5.conv3.filter, layer4.2.conv3.filter, layer4.2.conv1.filter, layer2.3.conv2.filter, layer1.1.conv3.filter, layer4.1.conv3.filter, layer3.2.conv1.filter, layer2.1.conv2.filter, layer3.4.conv1.filter, layer3.3.conv2.filter, layer1.0.conv2.filter, layer1.2.conv3.filter, layer2.0.downsample.0.filter, layer3.0.conv1.filter, layer3.0.conv2.filter, layer3.1.conv3.filter, layer2.2.conv2.filter, layer3.4.conv2.filter, layer4.0.conv1.filter, layer2.2.conv3.filter, layer1.0.downsample.0.filter, layer4.0.conv3.filter, layer4.2.conv2.filter, layer2.0.conv3.filter, layer3.2.conv2.filter, layer3.3.conv3.filter, layer4.0.conv2.filter, layer3.4.conv3.filter, layer4.1.conv2.filter, layer3.1.conv1.filter, layer2.1.conv3.filter, layer2.1.conv1.filter, layer1.0.conv1.filter, layer2.0.conv2.filter, layer1.1.conv1.filter, layer1.2.conv2.filter, layer2.3.conv3.filter, layer4.0.downsample.0.filter, layer3.5.conv2.filter, layer3.5.conv1.filter, layer2.3.conv1.filter, layer2.0.conv1.filter, conv1.filter, layer1.0.conv3.filter, layer1.1.conv2.filter, layer3.0.downsample.0.filter, layer3.0.conv3.filter, layer4.1.conv1.filter, layer1.2.conv1.filter, layer3.2.conv3.filter, layer3.3.conv1.filter, layer3.1.conv2.filter, layer2.2.conv1.filter

        loading annotations into memory...
        Traceback (most recent call last):
          File "tools/train.py", line 95, in <module>
            main()
          File "tools/train.py", line 75, in main
            train_dataset = get_dataset(cfg.data.train)
          File "/home/ai20resch13001/ReDet/mmdet/datasets/utils.py", line 109, in get_dataset
            dset = obj_from_dict(data_info, datasets)
          File "/DATA/ai20resch13001/anaconda3/envs/redet2/lib/python3.7/site-packages/mmcv/runner/utils.py", line 78, in obj_from_dict
            return obj_type(**args)
          File "/home/ai20resch13001/ReDet/mmdet/datasets/custom.py", line 68, in __init__
            self.img_infos = self.load_annotations(ann_file)
          File "/home/ai20resch13001/ReDet/mmdet/datasets/coco.py", line 25, in load_annotations
            self.coco = COCO(ann_file)
          File "/DATA/ai20resch13001/anaconda3/envs/redet2/lib/python3.7/site-packages/pycocotools/coco.py", line 86, in __init__
            assert type(dataset)==dict, 'annotation file format {} not supported'.format(type(dataset))
        AssertionError: annotation file format <class 'list'> not supported
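
    For context, pycocotools' COCO() asserts that the loaded JSON is a dict with "images", "annotations" and "categories" keys; a top-level list triggers exactly this error. A minimal check sketch in Python ("ann_file.json" is a placeholder for the annotation file being loaded):

        import json

        # COCO(ann_file) requires a top-level dict; a top-level list means the
        # annotations were not converted to COCO format. Placeholder path below.
        with open('ann_file.json') as f:
            data = json.load(f)

        print(type(data))               # must be <class 'dict'> for pycocotools
        if isinstance(data, dict):
            print(sorted(data.keys()))  # expect images / annotations / categories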

    opened by Sairam13001 5
  • RuntimeError: CUDA error: no kernel image is available for execution on the device

    RuntimeError: CUDA error: no kernel image is available for execution on the device

    Thanks for your work, csuhan. I am trying to use your code on my server with a 3090, which only supports CUDA 11, so I followed the instructions in https://github.com/csuhan/ReDet/issues/1 with PyTorch 1.7.0 and CUDA 11.0. But when I run "bash compile.sh", I hit the following problem:

        /usr/local/cuda-11.0/bin/nvcc -I/home/wangqx/anaconda3/envs/redet_torch17_py38/lib/python3.8/site-packages/torch/include -I/home/wangqx/anaconda3/envs/redet_torch17_py38/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/wangqx/anaconda3/envs/redet_torch17_py38/lib/python3.8/site-packages/torch/include/TH -I/home/wangqx/anaconda3/envs/redet_torch17_py38/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.0/include -I/home/wangqx/anaconda3/envs/redet_torch17_py38/include/python3.8 -c -c /home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/ops/roi_pool/src/roi_pool_kernel.cu -o /home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/ops/roi_pool/build/temp.linux-x86_64-3.8/src/roi_pool_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=roi_pool_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
        nvcc fatal : Unsupported gpu architecture 'compute_86'

    Then I used the command "TORCH_CUDA_ARCH_LIST=7.0 bash compile.sh" to get past this, but the code still does not work and raises this error:

    File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/apis/inference.py", line 66, in inference_detector return _inference_single(model, imgs, img_transform, device) File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/apis/inference.py", line 93, in _inference_single result = model(return_loss=False, rescale=True, **data) File "/home/wangqx/anaconda3/envs/redet_torch17_py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/models/detectors/base_new.py", line 97, in forward return self.forward_test(img, img_meta, **kwargs) File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/models/detectors/base_new.py", line 86, in forward_test return self.simple_test(imgs[0], img_metas[0], **kwargs) File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/models/detectors/ReDet.py", line 239, in simple_test proposal_list = self.simple_test_rpn( File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/models/detectors/test_mixins.py", line 12, in simple_test_rpn proposal_list = self.rpn_head.get_bboxes(*proposal_inputs) File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/models/anchor_heads/anchor_head.py", line 216, in get_bboxes proposals = self.get_bboxes_single(cls_score_list, bbox_pred_list, File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/models/anchor_heads/rpn_head.py", line 92, in get_bboxes_single proposals, _ = nms(proposals, cfg.nms_thr) File "/home/wangqx/airplane_detection_qx/ReDet_torch17_py38/mmdet/ops/nms/nms_wrapper.py", line 49, in nms inds = nms_cuda.nms(dets_th, iou_thr) RuntimeError: CUDA error: no kernel image is available for execution on the device

    opened by qixiong-wang 5
  • Parsing DOTA dataset results

    Parsing DOTA dataset results

    1. When parsing the DOTA results into Task1_results_nms, an error is raised. Do you know how to solve it?
    2. My understanding is that Task1_results simply gathers the detections from the cropped sub-images of each original image, while Task1_results_nms additionally runs NMS over the boxes within each original image. Is that correct? Looking forward to your answer!

        Traceback (most recent call last):
          File "tools/parse_results.py", line 132, in <module>
            parse_results(config_file, pkl_file, output_path, type)
          File "tools/parse_results.py", line 104, in parse_results
            os.path.join(dstpath, 'Task1_results_nms'), nms_type=r'py_cpu_nms_poly_fast', o_thresh=current_thresh)
          File "/media/meng1/disk1/Zhangc/oriented_det/ReDet-master/DOTA_devkit/ResultMerge_multi_process.py", line 235, in mergebypoly_multiprocess
            py_cpu_nms_poly_fast, o_thresh)
          File "/media/meng1/disk1/Zhangc/oriented_det/ReDet-master/DOTA_devkit/ResultMerge_multi_process.py", line 213, in mergebase_parallel
            pool.map(mergesingle_fn, filelist)
          File "/home/meng1/anaconda3/envs/redet3.6/lib/python3.6/multiprocessing/pool.py", line 266, in map
            return self._map_async(func, iterable, mapstar, chunksize).get()
          File "/home/meng1/anaconda3/envs/redet3.6/lib/python3.6/multiprocessing/pool.py", line 644, in get
            raise self._value
        IndexError: list index out of range
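
    On point 2: that reading matches how the devkit's merge step generally works. Per-patch detections are shifted back into original-image coordinates (the crop offset is encoded in the patch filename, e.g. "P0001__1__824___0" for a crop taken at x=824, y=0) and polygon NMS is then run per original image. A minimal sketch of the coordinate shift in Python; the helper name is hypothetical, not the repo's actual code:

        # Hypothetical helper: shift one polygon [x1, y1, ..., x4, y4] from
        # patch coordinates back to original-image coordinates.
        def patch_to_image_coords(poly, x_off, y_off):
            return [v + (x_off if i % 2 == 0 else y_off) for i, v in enumerate(poly)]

        # A box detected at (10, 20) in the patch cropped at (824, 0) lands at (834, 20):
        print(patch_to_image_coords([10, 20, 60, 20, 60, 70, 10, 70], 824, 0))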
    opened by zhangcongzc 4
  • PREPARE DOTA 1.0 DATASET

    PREPARE DOTA 1.0 DATASET

    I downloaded the DOTA 1.0 dataset and there are 2812 training images in it.

    I used DOTA_devkit/prepare_dota1.py to convert the data to COCO format.

    After conversion, there are 21406 images.

    Is this how it works? Or am I doing something wrong?

    Please let me know.

    Thank you.
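
    The growth is expected if prepare_dota1.py crops each large DOTA image into overlapping fixed-size patches, which is what the devkit's split step does. A rough patch-count sketch in Python, assuming the commonly used split settings of 1024x1024 windows with a 200 px overlap (an assumption — check the script you ran):

        import math

        def patches_per_image(w, h, subsize=1024, gap=200):
            """Sliding-window patch count for one image (assumed devkit defaults)."""
            stride = subsize - gap
            nx = 1 if w <= subsize else math.ceil((w - subsize) / stride) + 1
            ny = 1 if h <= subsize else math.ceil((h - subsize) / stride) + 1
            return nx * ny

        print(patches_per_image(4000, 4000))  # 25 patches for one 4000x4000 image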

    opened by Sairam13001 4
  • Calculating the mAP

    Calculating the mAP

    1. The code combines train and val into trainval for training. Which command should be run to obtain the evaluation metric (mAP)? Running "tools/test.py --eval bbox" requires annotations, but the test set has no label files, and evaluating on the trainval set would be meaningless.
    2. How should the DOTA dataset be partitioned to reproduce the results in the paper? Should the mAP be computed on test or on trainval?
    3. To calculate the mAP, should one use "tools/test.py --eval bbox" or "DOTA_devkit/dota_evaluation_task1.py"?
    opened by LeiaJ520 0
  • Test error

    Test error

    When I run test.py with the DOTA dataset, the following error occurs: cannot import name 'results2json' from 'mmdet.core' (/home/public_anaconda/envs/pytorch/lib/python3.7/site-packages/mmdet/core/__init__.py)
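
    This usually means Python is importing a pip-installed mmdet (whose mmdet.core has no results2json) instead of the mmdet bundled with this repo; the site-packages path in the error message points that way. A quick check sketch in Python:

        # Which mmdet is actually being imported? For this repo it should resolve
        # to the source tree (ReDet/mmdet), not to site-packages.
        import mmdet
        print(mmdet.__file__)
        print(getattr(mmdet, '__version__', 'unknown'))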

    opened by LeiaJ520 0
  • ninja: error: build.ninja:3: lexing error

    ninja: error: build.ninja:3: lexing error

        ase\build.ninja...
        Compiling objects...
        Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
        ninja: error: build.ninja:3: lexing error

        Traceback (most recent call last):
          File "E:\anaconda3_python38\envs\mmdet\lib\site-packages\torch\utils\cpp_extension.py", line 1667, in _run_ninja_build
            subprocess.run(
          File "E:\anaconda3_python38\envs\mmdet\lib\subprocess.py", line 516, in run
            raise CalledProcessError(retcode, process.args,
        subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    I want to know how to solve this problem.

    opened by yuan-kai-design 0
  • The training failed even though the accuracy is up to 99.7 and the loss is 0.2

    The training failed even though the accuracy is up to 99.7 and the loss is 0.2

    When I train the ReDet model on some remote-sensing images, the output during training looks like this:

        2022-08-03 02:56:57,351 - INFO - Epoch [48][40/61]  lr: 0.00629, eta: 2:45:00, time: 2.765, data_time: 0.018, memory: 3801, loss_rpn_cls: 0.0318, loss_rpn_bbox: 0.0034, s0.rbbox_loss_cls: 0.0229, s0.rbbox_acc: 99.5605, s0.rbbox_loss_bbox: 0.0314, s1.rbbox_loss_cls: 0.0117, s1.rbbox_acc: 99.7456, s1.rbbox_loss_bbox: 0.0019, loss: 0.1032

    But when I test the trained model, the result is: recall = 0.9583333333333334, precision = 0.025669642857142856, map50 = 0.5672377341984399.

    I can't find the reason for this.

    opened by Geo-Chou 1
  • The slow speed of test

    The slow speed of test

    Thanks for your novel work! I have trained ReDet on my own dataset, but I find testing a little slow. I have converted the ReDet model to PyTorch and set rotate_test_aug to False; it reaches about 1.4 FPS on a 2080 Ti with a 1024x1024 image size. Is there something wrong, or is this normal?
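
    Whether 1.4 FPS is normal depends on the config and post-processing, but a useful first step is to time the forward pass directly with proper CUDA synchronization. A minimal, hedged measurement sketch in Python; "model" and "data" are placeholders for an initialized detector and a preprocessed input batch:

        import time
        import torch

        def measure_fps(model, data, warmup=10, iters=50):
            """Time repeated forward passes; synchronize so GPU work is counted."""
            with torch.no_grad():
                for _ in range(warmup):  # warm up kernels/allocator first
                    model(return_loss=False, rescale=True, **data)
                torch.cuda.synchronize()
                start = time.perf_counter()
                for _ in range(iters):
                    model(return_loss=False, rescale=True, **data)
                torch.cuda.synchronize()
            return iters / (time.perf_counter() - start)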

    opened by chongkuiqi 0