Unofficial PyTorch implementation of "RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving" (ECCV 2020)

Overview

RTM3D-PyTorch


The PyTorch Implementation of the paper: RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving (ECCV 2020)


Demonstration

[Demo GIF]

Features

  • Real-time 3D object detection from a monocular RGB image
  • Support for distributed data parallel (DDP) training
  • TensorBoard logging
  • ResNet-based Keypoint Feature Pyramid Network (KFPN) (use it by setting --arch fpn_resnet_18)
  • Use of images from both the left and right cameras (controlled by the use_left_cam_prob argument)
  • Release of pre-trained models

Some modifications from the paper

  • Formula (3):

    • A negative value can't be an input of the log operator, so the dimensions are not normalized as described in the paper (the normalized dim values may be less than 0). Instead, absolute dimension values in meters are regressed directly.
    • Use L1 loss for depth estimation (applying a sigmoid activation to the raw depth output first). A hedged sketch of these modified losses is given right after this list.
  • Formula (5): The absolute values of the ground truth are not taken; the relative values are used instead. The code is here

  • Formula (7): argmin instead of argmax

  • Heatmaps for object centers and vertexes are generated as in the CenterNet paper. If you want to use the strategy from the RTM3D paper instead, pass the --dynamic-sigma argument to the train.py script.
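
The sketch below illustrates the two Formula (3) changes above (absolute dimension regression and a sigmoid-based L1 depth loss). It is a minimal, hypothetical example rather than the repository's actual loss code; the tensor names, shapes, and the exact depth transform are assumptions.

import torch
import torch.nn.functional as F


def dimension_loss(pred_dim, gt_dim, mask):
    # L1 loss on absolute object dimensions (h, w, l) in meters.
    # Regressing absolute values avoids feeding negative (normalized)
    # dimensions into a log, which is the issue noted for Formula (3).
    mask = mask.unsqueeze(-1).float()
    return F.l1_loss(pred_dim * mask, gt_dim * mask, reduction='sum') / (mask.sum() + 1e-4)


def depth_loss(pred_depth_logits, gt_depth, mask):
    # L1 loss on depth after a sigmoid-based transform of the raw output.
    # The 1/sigmoid(x) - 1 transform is a common CenterNet-style choice and
    # is an assumption here, not necessarily this repo's exact formula.
    depth = 1.0 / (torch.sigmoid(pred_depth_logits) + 1e-6) - 1.0
    mask = mask.float()
    return F.l1_loss(depth * mask, gt_depth * mask, reduction='sum') / (mask.sum() + 1e-4)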

2. Getting Started

2.1. Requirement

pip install -U -r requirements.txt

2.2. Data Preparation

Download the 3D KITTI detection dataset from here.

The downloaded data includes:

  • Training labels of object data set (5 MB)
  • Camera calibration matrices of object data set (16 MB)
  • Left color images of object data set (12 GB)
  • Right color images of object data set (12 GB)

Please make sure that you construct the source code and dataset directory structure as shown in the Folder structure section below.

2.3. RTM3D architecture

[RTM3D architecture diagram]

The model takes only RGB images as input and outputs the main-center heatmap, the vertexes heatmap, and the vertex coordinates, which form the base outputs used to estimate the 3D bounding box.
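
For illustration, a multi-head output on top of a shared feature map might look like the sketch below. This is a hedged example: the head names (hm_mc, hm_ver, ver_coor), channel counts, and layer sizes are assumptions, not the repository's exact definitions.

import torch.nn as nn


class KeypointHeads(nn.Module):
    # Illustrative output heads attached to a backbone feature map.
    def __init__(self, in_channels=64, num_classes=3, num_vertexes=8):
        super().__init__()

        def make_head(out_channels):
            return nn.Sequential(
                nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, out_channels, kernel_size=1),
            )

        self.hm_mc = make_head(num_classes)          # main-center heatmap (assumed name)
        self.hm_ver = make_head(num_vertexes)        # vertexes heatmap (assumed name)
        self.ver_coor = make_head(num_vertexes * 2)  # vertex coordinates (assumed name)

    def forward(self, feat):
        return {
            'hm_mc': self.hm_mc(feat),
            'hm_ver': self.hm_ver(feat),
            'ver_coor': self.ver_coor(feat),
        }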

2.4. How to run

2.4.1. Visualize the dataset

cd src/data_process
  • To visualize camera images with 3D boxes, execute:
python kitti_dataset.py

Then press n to see the next sample, or press Esc to quit.

2.4.2. Inference

Download the trained model from here (will be released), then put it in ${ROOT}/checkpoints/ and execute:

python test.py --gpu_idx 0 --arch resnet_18 --pretrained_path ../checkpoints/rtm3d_resnet_18.pth

2.4.3. Evaluation

python evaluate.py --gpu_idx 0 --arch resnet_18 --pretrained_path <PATH>

2.4.4. Training

2.4.4.1. Single machine, single gpu
python train.py --gpu_idx 0 --arch <ARCH> --batch_size <N> --num_workers <N>...
2.4.4.2. Multi-processing Distributed Data Parallel Training

We should always use the nccl backend for multi-processing distributed training since it currently provides the best distributed training performance.

  • Single machine (node), multiple GPUs
python train.py --dist-url 'tcp://127.0.0.1:29500' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0
  • Two machines (two nodes), multiple GPUs

First machine

python train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 0

Second machine

python train.py --dist-url 'tcp://IP_OF_NODE2:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 1

To reproduce the results, you can run the bash script:

./train.sh

Tensorboard

  • To track the training progress, go to the logs/ folder and run:
cd logs/<saved_fn>/tensorboard/
tensorboard --logdir=./

Contact

If you think this work is useful, please give me a star!
If you find any errors or have any suggestions, please contact me (Email: [email protected]).
Thank you!

Citation

@inproceedings{RTM3D,
  author = {Peixuan Li and Huaici Zhao and Pengfei Liu and Feidao Cao},
  title = {RTM3D: Real-time Monocular 3D Detection from Object Keypoints for Autonomous Driving},
  booktitle = {ECCV},
  year = {2020},
}
@misc{RTM3D-PyTorch,
  author =       {Nguyen Mau Dung},
  title =        {{RTM3D-PyTorch: PyTorch Implementation of the RTM3D paper}},
  howpublished = {\url{https://github.com/maudzung/RTM3D-PyTorch}},
  year =         {2020}
}

References

[1] CenterNet: Objects as Points paper, PyTorch Implementation

Folder structure

${ROOT}
├── checkpoints/
│   ├── rtm3d_resnet_18.pth
│   └── rtm3d_fpn_resnet_18.pth
├── dataset/
│   └── kitti/
│       ├── ImageSets/
│       │   ├── test.txt
│       │   ├── train.txt
│       │   └── val.txt
│       ├── training/
│       │   ├── image_2/ (left color camera)
│       │   ├── image_3/ (right color camera)
│       │   ├── calib/
│       │   └── label_2/
│       ├── testing/
│       │   ├── image_2/ (left color camera)
│       │   ├── image_3/ (right color camera)
│       │   └── calib/
│       └── classes_names.txt
├── src/
│   ├── config/
│   │   ├── train_config.py
│   │   └── kitti_config.py
│   ├── data_process/
│   │   ├── kitti_dataloader.py
│   │   ├── kitti_dataset.py
│   │   └── kitti_data_utils.py
│   ├── models/
│   │   ├── fpn_resnet.py
│   │   ├── resnet.py
│   │   └── model_utils.py
│   ├── utils/
│   │   ├── evaluation_utils.py
│   │   ├── logger.py
│   │   ├── misc.py
│   │   ├── torch_utils.py
│   │   └── train_utils.py
│   ├── evaluate.py
│   ├── test.py
│   ├── train.py
│   └── train.sh
├── README.md
└── requirements.txt

Usage

usage: train.py [-h] [--seed SEED] [--saved_fn FN] [--root-dir PATH]
                [--arch ARCH] [--pretrained_path PATH] [--head_conv HEAD_CONV]
                [--hflip_prob HFLIP_PROB]
                [--use_left_cam_prob USE_LEFT_CAM_PROB] [--dynamic-sigma]
                [--no-val] [--num_samples NUM_SAMPLES]
                [--num_workers NUM_WORKERS] [--batch_size BATCH_SIZE]
                [--print_freq N] [--tensorboard_freq N] [--checkpoint_freq N]
                [--start_epoch N] [--num_epochs N] [--lr_type LR_TYPE]
                [--lr LR] [--minimum_lr MIN_LR] [--momentum M] [-wd WD]
                [--optimizer_type OPTIMIZER] [--steps [STEPS [STEPS ...]]]
                [--world-size N] [--rank N] [--dist-url DIST_URL]
                [--dist-backend DIST_BACKEND] [--gpu_idx GPU_IDX] [--no_cuda]
                [--multiprocessing-distributed] [--evaluate]
                [--resume_path PATH] [--K K]

The Implementation of RTM3D using PyTorch

optional arguments:
  -h, --help            show this help message and exit
  --seed SEED           random seed used to reproduce the results
  --saved_fn FN         The name used for saving logs, models, ...
  --root-dir PATH       The ROOT working directory
  --arch ARCH           The name of the model architecture
  --pretrained_path PATH
                        the path of the pretrained checkpoint
  --head_conv HEAD_CONV
                        conv layer channels for the output head; 0 for no
                        conv layer; -1 for the default setting (64 for
                        resnets and 256 for dla)
  --hflip_prob HFLIP_PROB
                        The probability of horizontal flip
  --use_left_cam_prob USE_LEFT_CAM_PROB
                        The probability of using the left camera
  --dynamic-sigma       If true, compute sigma based on Amax and Amin, then
                        generate the heatmap; if false, compute the radius
                        as CenterNet did
  --no-val              If true, don't evaluate the model on the val set
  --num_samples NUM_SAMPLES
                        Take a subset of the dataset to run and debug
  --num_workers NUM_WORKERS
                        Number of threads for loading data
  --batch_size BATCH_SIZE
                        mini-batch size (default: 16); this is the total
                        batch size across all GPUs on the current node when
                        using Data Parallel or Distributed Data Parallel
  --print_freq N        print frequency (default: 50)
  --tensorboard_freq N  frequency of saving tensorboard (default: 50)
  --checkpoint_freq N   frequency of saving checkpoints (default: 5)
  --start_epoch N       the starting epoch
  --num_epochs N        number of total epochs to run
  --lr_type LR_TYPE     the type of learning rate scheduler (cosin or
                        multi_step)
  --lr LR               initial learning rate
  --minimum_lr MIN_LR   minimum learning rate during training
  --momentum M          momentum
  -wd WD, --weight_decay WD
                        weight decay (default: 1e-6)
  --optimizer_type OPTIMIZER
                        the type of optimizer, it can be sgd or adam
  --steps [STEPS [STEPS ...]]
                        number of burn in step
  --world-size N        number of nodes for distributed training
  --rank N              node rank for distributed training
  --dist-url DIST_URL   url used to set up distributed training
  --dist-backend DIST_BACKEND
                        distributed backend
  --gpu_idx GPU_IDX     GPU index to use.
  --no_cuda             If true, cuda is not used.
  --multiprocessing-distributed
                        Use multi-processing distributed training to launch N
                        processes per node, which has N GPUs. This is the
                        fastest way to use PyTorch for either single node or
                        multi node data parallel training
  --evaluate            only evaluate the model, not training
  --resume_path PATH    the path of the resumed checkpoint
  --K K                 the number of top K
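
For illustration, a typical single-GPU run might combine several of the flags above as follows (the values are arbitrary examples, not recommended settings):

python train.py --gpu_idx 0 --arch fpn_resnet_18 --saved_fn rtm3d_fpn_resnet_18 --batch_size 16 --num_workers 4 --num_epochs 300 --lr 1e-3 --lr_type cosin --use_left_cam_prob 1.0 --hflip_prob 0.5
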
Comments
  • Nan loss occurs while training

    There is a small bug in src/losses/losses.py, lines 52 & 53, which may cause a nan loss while training. I think an epsilon such as 1e-10 should be added to avoid log(0) (see the sketch after this item).

    opened by lyp0413 2
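
    A minimal, hypothetical sketch of the epsilon fix suggested in the issue above, assuming the loss applies torch.log to predicted heatmap probabilities (this is not the repository's actual losses.py code):

    import torch

    EPS = 1e-10  # small constant to avoid log(0), as suggested in the issue


    def focal_loss_terms(pred, gt):
        # Clamped log terms for a CenterNet-style focal loss on heatmaps.
        pos_loss = torch.log(pred + EPS) * torch.pow(1 - pred, 2) * gt.eq(1).float()
        neg_loss = torch.log(1 - pred + EPS) * torch.pow(pred, 2) * torch.pow(1 - gt, 4) * gt.lt(1).float()
        return pos_loss, neg_loss
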
  • cuda runtime error when setting gpu_idx to a value other than 0

    If I change gpu_idx from 0 to another number (for example, gpu_idx 5) in train.sh, the error shown below is raised. If I set gpu_idx to 0, the script runs normally. I am confused by this bug.

    THCudaCheck FAIL file=/pytorch/torch/csrc/cuda/Module.cpp line=59 error=101 : invalid device ordinal
    Traceback (most recent call last):
      torch._C._cuda_setDevice(device)
    RuntimeError: cuda runtime error (101) : invalid device ordinal at /pytorch/torch/csrc/cuda/Module.cpp:59

    opened by hhhharold 2
  • Loss nan

    Thanks for the great reproduction! I got a nan loss about halfway through the total number of epochs. Have you met this before?

    2020-08-17 21:48:47,912: logger.py - info(), at Line 49:INFO: Train - Epoch: [127/300][ 35/464] Time 0.738 ( 0.738) Data 0.000 ( 0.014) Loss 2.1274e+00 (1.5406e+00)
    [... epoch 127 completes with finite loss values around 1.54 ...]
    2020-08-17 21:56:39,368: logger.py - info(), at Line 49:INFO: Train - Epoch: [128/300][221/464] Time 0.728 ( 0.728) Data 0.000 ( 0.003) Loss 1.4616e+00 (1.5912e+00)
    2020-08-17 21:57:15,445: logger.py - info(), at Line 49:INFO: Train - Epoch: [128/300][271/464] Time 0.725 ( 0.726) Data 0.000 ( 0.003) Loss nan (nan)
    [... the loss remains nan (nan) for the rest of epoch 128 and throughout epoch 129 ...]

    opened by lidehuihxjz 2
  • What is the meaning of "Model_rtm3d_epoch_120.pth" and "Utils_rtm3d_epoch_120.pth"?

    Hello, I trained the network successfully, and "Model_rtm3d_epoch_120.pth" and "Utils_rtm3d_epoch_120.pth" appear in "RTM3D/checkpoints/rtm3d". What do they represent?

    opened by hhcNWPU 1
  • Suggest loosening the dependency on albumentations

    Hi, your project RTM3D (commit id: 0bd3868a03f071244b2fed9ca1828298f5a96180) requires "albumentations==0.4.5" as a dependency. After analyzing the source code, we found that the following versions of albumentations are also suitable, i.e., albumentations 0.4.0, 0.4.1, 0.4.2, 0.4.3, and 0.4.4, since all functions that you use directly (3 APIs: albumentations.augmentations.transforms.RandomBrightnessContrast.init, albumentations.augmentations.transforms.GaussNoise.init, albumentations.core.composition.Compose.init) or indirectly (propagating to 12 of albumentations's internal APIs and 0 outside APIs) have not changed across these versions, so they do not affect your usage.

    Therefore, we believe it is quite safe to loosen your dependency on albumentations from "albumentations==0.4.5" to "albumentations>=0.4.0,<=0.4.5" (see the snippet after this item). This would improve the applicability of RTM3D and reduce the possibility of further dependency conflicts with other projects.

    May I open a pull request to loosen the dependency on albumentations?

    By the way, could you please tell us whether such an automatic tool for dependency analysis might be helpful for maintaining dependencies during your development?

    opened by Agnes-U 0
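
    The proposed relaxation would amount to a one-line change in requirements.txt, using exactly the version range quoted in the issue:

    albumentations>=0.4.0,<=0.4.5
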
  • Why not use the center point of the 3D bbox?

    Hi, @maudzung. In the RTM3D paper, the authors use the center point of the 3D bbox. However, this repo seems to compute only the loss on the center offset; why not add a loss on the center point itself? Leaving out this loss may hurt performance. Looking forward to your reply.

    opened by Senwang98 0