PyTorch implementation for "Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion" (NeurIPS 2021)

Overview

Density-aware Chamfer Distance

This repository contains the official PyTorch implementation of our paper:

Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion, NeurIPS 2021

Tong Wu, Liang Pan, Junzhe Zhang, Tai Wang, Ziwei Liu, Dahua Lin


We present a new point cloud similarity measure named Density-aware Chamfer Distance (DCD). It is derived from CD and benefits from several desirable properties: 1) it can detect disparity in density distributions and is thus a more intensive measure of similarity than CD; 2) it is stricter with detailed structures and significantly more computationally efficient than EMD; 3) its bounded value range encourages a more stable and reasonable evaluation over the whole test set. DCD can be used as both an evaluation metric and a training loss. We mainly validate its performance on point cloud completion in our paper.
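
For reference, the core single-neighbour form of DCD for two point sets of equal size can be sketched as follows (reconstructed from the calc_dcd() implementation in this repository, so read it as a summary rather than the paper's exact equation):

d_{DCD}(S_1, S_2) = \frac{1}{2|S_1|} \sum_{x \in S_1} \left(1 - \frac{1}{n_{\hat{y}}} e^{-\alpha\|x-\hat{y}\|_2}\right) + \frac{1}{2|S_2|} \sum_{y \in S_2} \left(1 - \frac{1}{n_{\hat{x}}} e^{-\alpha\|y-\hat{x}\|_2}\right)

Here \hat{y} = \arg\min_{y \in S_2} \|x - y\|_2 is the nearest neighbour of x in S_2, n_{\hat{y}} counts how many points of S_1 are matched to \hat{y}, and \alpha is a temperature (1000 by default in the code). Since n_{\hat{y}} \geq 1 and e^{-\alpha\|\cdot\|_2} \leq 1, every summand lies in [0, 1), which yields the bounded value range mentioned above.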

This repository includes:

  • Implementation of Density-aware Chamfer Distance (DCD).
  • Implementation of our method for point cloud completion and the pre-trained model.

Installation

Requirements

  • PyTorch 1.2.0
  • Open3D 0.9.0
  • Other dependencies are listed in requirements.txt.

Install

Install PyTorch 1.2.0 first, and then get the other requirements by running the following command:

bash setup.sh

Dataset

We use the MVP Dataset. Please download the train set and test set, then modify the data path in data/mvp_new.py to your own data location. Please refer to their codebase for further instructions.

Usage

Density-aware Chamfer Distance

The function for DCD calculation is implemented as calc_dcd() in utils/model_utils.py.

Users of higher PyTorch versions may try calc_dcd() in utils_v2/model_utils.py, which has been tested on PyTorch 1.6.0.
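
A minimal usage sketch, assuming the bundled CUDA extensions have been compiled (tensor shapes follow the benchmark script in the comments below; the alpha and n_lambda defaults come from the function signature):

import torch
from utils.model_utils import calc_dcd  # use utils_v2.model_utils on newer PyTorch

pred = torch.rand(8, 2048, 3, device="cuda", requires_grad=True)  # predicted point clouds
gt = torch.rand(8, 2048, 3, device="cuda")                        # ground-truth point clouds

dcd, cd_p, cd_t = calc_dcd(pred, gt, alpha=1000, n_lambda=1)  # per-sample DCD plus two CD variants
dcd.mean().backward()  # DCD is differentiable, so it can also serve as a training loss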

Model training and evaluation

  • To train a model: run python train.py ./cfgs/*.yaml, for example:
python train.py ./cfgs/vrc_plus.yaml
  • To test a model: run python train.py ./cfgs/*.yaml --test_only, for example:
python train.py ./cfgs/vrc_plus_eval.yaml --test_only
  • Config for each algorithm can be found in cfgs/.
  • run_train.sh and run_test.sh are provided for SLURM users.

We provide the following config files:

  • pcn.yaml: PCN trained with CD loss.
  • vrc.yaml: VRCNet trained with CD loss.
  • pcn_dcd.yaml: PCN trained with DCD loss.
  • vrc_dcd.yaml: VRCNet trained with DCD loss.
  • vrc_plus.yaml: training with our method.
  • vrc_plus_eval.yaml: evaluation of our method with guided down-sampling.

Attention: We empirically find that training with DataParallel (DP) or DistributedDataParallel (DDP) slightly hurts performance, so training on multiple GPUs is currently not well supported.

Pre-trained models

We provide the pre-trained model that reproduces the results in our paper. Download and extract it to the ./log/pretrained/ directory, then evaluate it with cfgs/vrc_plus_eval.yaml. The setting prob_sample: True turns on guided down-sampling. We also provide the model for VRCNet trained with DCD loss here.

Citation

If you find our code or paper useful, please cite our paper:

@inproceedings{wu2021densityaware,
  title={Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion},
  author={Tong Wu and Liang Pan and Junzhe Zhang and Tai Wang and Ziwei Liu and Dahua Lin},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

Acknowledgement

The code is based on the VRCNet implementation. We include the following third-party PyTorch libraries: ChamferDistancePytorch, emd, expansion_penalty, MDS, and Pointnet2.PyTorch. Thanks to these great projects.

Contact

Please contact @wutong16 with questions, comments, and bug reports.

Comments
  • ModuleNotFoundError: No module named 'pointnet2_cuda'

    Hi, can you help me? I come across this problem when I run train.py:

    Traceback (most recent call last):
      File "F:/github/Density_aware_Chamfer_Distance-main/train.py", line 287, in <module>
        train()
      File "F:/github/Density_aware_Chamfer_Distance-main/train.py", line 59, in train
        model_module = importlib.import_module('.%s' % args.model_name, 'models')
      File "D:\IDE\Anaconda\envs\dcd\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "F:\github\Density_aware_Chamfer_Distance-main\models\vrcnet.py", line 7, in <module>
        from utils.model_utils import *
      File "F:\github\Density_aware_Chamfer_Distance-main\utils\model_utils.py", line 13, in <module>
        import utils.Pointnet2_PyTorch.pointnet2.pointnet2_utils as pn2
      File "F:\github\Density_aware_Chamfer_Distance-main\utils\Pointnet2_PyTorch\pointnet2\pointnet2_utils.py", line 9, in <module>
        import pointnet2_cuda as pointnet2
    ModuleNotFoundError: No module named 'pointnet2_cuda'

    Process finished with exit code 1

    opened by huangqianhong 4
  • A typo in the paper

    Hi, congratulations on your great work! I think there is a typo in the paper. Under Equation (9), according to my understanding, it should be

    \begin{aligned}
    d_{D C D-E}\left(S_{1}, S_{2}\right) &=\frac{1}{2\left|S_{1}\right|} \sum_{x \in S_{1}}\left(1-\frac{1}{\max \left(n_{\hat{y}}/\eta, 1\right)} e^{-\alpha\|x-\hat{y}\|_{2}}\right) \\
    &+\frac{1}{2\left|S_{2}\right|} \sum_{y \in S_{2}}\left(1-\frac{1}{\bar{\eta} \cdot n_{\hat{x}}} \sum_{\hat{x} \in N(y)_{\bar{\eta}}} e^{-\alpha\|y-\hat{x}\|_{2}}\right) .
    \end{aligned}
    
    opened by 123456789asdfjkl 1
  • Batched weighting in `calc_dcd()`

    Thank you for sharing your work. In this PR, I modified the weighting part in calc_dcd() to run in a batched manner. Although memory consumption may increase, the speed is much faster! This PR only modifies utils_v2/ because I have an issue with compiling utils/.

    The attached scripts gave the following results on my machine.

    orig: 6.357818365097046 [s] # original
    new : 0.5714225769042969 [s] # with this PR
    dcd_orig=tensor([0.5886, 0.5782, 0.5761, 0.5773, 0.5795, 0.5833, 0.5820, 0.5802, 0.5821,
            0.5912, 0.5872, 0.5813, 0.5801, 0.5830, 0.5833, 0.5876],
           device='cuda:0')
    dcd_new=tensor([0.5886, 0.5782, 0.5761, 0.5773, 0.5795, 0.5833, 0.5820, 0.5802, 0.5821,
            0.5912, 0.5872, 0.5813, 0.5801, 0.5830, 0.5833, 0.5876],
           device='cuda:0')
    
    import time
    
    import torch
    
    from utils_v2.model_utils import calc_cd
    from utils_v2.model_utils import calc_dcd as calc_dcd_orig
    
    
    def calc_dcd_new(x, gt, alpha=1000, n_lambda=1, return_raw=False, non_reg=False):
        x = x.float()
        gt = gt.float()
        _, n_x, _ = x.shape
        _, n_gt, _ = gt.shape
        assert x.shape[0] == gt.shape[0]
    
        if non_reg:
            frac_12 = max(1, n_x / n_gt)
            frac_21 = max(1, n_gt / n_x)
        else:
            frac_12 = n_x / n_gt
            frac_21 = n_gt / n_x
        cd_p, cd_t, dist1, dist2, idx1, idx2 = calc_cd(x, gt, return_raw=True)
        # dist1 (batch_size, n_gt): a gt point finds its nearest neighbour x' in x;
        # idx1  (batch_size, n_gt): the idx of x' \in [0, n_x-1]
        # dist2 and idx2: vice versa
        exp_dist1, exp_dist2 = torch.exp(-dist1 * alpha), torch.exp(-dist2 * alpha)
    
        # batched!
        count1 = torch.zeros_like(idx2)
        count1.scatter_add_(1, idx1.long(), torch.ones_like(idx1))
        weight1 = count1.gather(1, idx1.long()).float().detach() ** n_lambda
        weight1 = (weight1 + 1e-6) ** (-1) * frac_21
        loss1 = (1 - exp_dist1 * weight1).mean(dim=1)
    
        # batched!
        count2 = torch.zeros_like(idx1)
        count2.scatter_add_(1, idx2.long(), torch.ones_like(idx2))
        weight2 = count2.gather(1, idx2.long()).float().detach() ** n_lambda
        weight2 = (weight2 + 1e-6) ** (-1) * frac_12
        loss2 = (1 - exp_dist2 * weight2).mean(dim=1)
    
        loss = (loss1 + loss2) / 2
    
        res = [loss, cd_p, cd_t]
        if return_raw:
            res.extend([dist1, dist2, idx1, idx2])
    
        return res
    
    
    pc_1 = torch.randn([16, 2048, 3]).cuda() * 0.1
    pc_2 = torch.randn([16, 2048, 3]).cuda() * 0.1
    
    torch.cuda.synchronize()
    start_t = time.time()
    for _ in range(1000):
        dcd_orig, _, _ = calc_dcd_orig(pc_1, pc_2)
    torch.cuda.synchronize()
    print(f"orig: {time.time() - start_t} [s]")
    
    torch.cuda.synchronize()
    start_t = time.time()
    for _ in range(1000):
        dcd_new, _, _ = calc_dcd_new(pc_1, pc_2)
    torch.cuda.synchronize()
    print(f"new : {time.time() - start_t} [s]")
    
    print(f"{dcd_orig=}")
    print(f"{dcd_new=}")
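
    A toy sketch of the counting trick used above: scatter_add_ accumulates, for each target point, how many source points picked it as nearest neighbour, and gather broadcasts that count (the n_y term of DCD) back to every source point. The values below are illustrative only:

    import torch

    idx = torch.tensor([[0, 2, 2, 1]])           # nearest-neighbour indices of 4 points into a 3-point set
    count = torch.zeros(1, 3, dtype=torch.long)  # per-target-point hit counts
    count.scatter_add_(1, idx, torch.ones_like(idx))
    print(count)                 # tensor([[1, 1, 2]]): target point 2 was chosen twice
    print(count.gather(1, idx))  # tensor([[1, 2, 2, 1]]): each query sees its target's count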
    
    opened by kazuto1011 1
  • A typo in the paper

    Hi, thanks for your instructive work! I think there is a typo in the paper. In the first line under Equation (4), \hat{y} = \min_{y \in S_{2}} \|x - y\|_{2}, I think it should be \arg\min instead.

    opened by bluestyle97 1
  • Eq. (9) in your paper

    Hi, thanks for your great work. When the two point sets have different numbers of points, does your calc_dcd() correspond to Equation 8 or Equation 9 in your paper? You state that "We use Eqn. 8 in training for the loss between the coarse shape with 1024 points and the ground truth with 2048 points for simplicity." If it corresponds to Equation 8, how do you deal with the case where the loss value is negative? Looking forward to your reply.

    opened by BingHan0458 0
  • Question about EMD

    Hello, I was wondering how exactly EMD is measured on point clouds. When it is computed on probability distributions, a transport plan's cost is the sum of the distances between pairs of points weighted by the "mass" transported between them, and EMD is the cost of the optimal transport plan.

    Since point clouds are not really probability distributions, I was wondering what the weights of the weighted sum are.

    opened by andrearosasco 0