DGCNN - Dynamic Graph CNN for Learning on Point Clouds


Dynamic Graph CNN for Learning on Point Clouds

We propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures.

[Project] [Paper]
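
A minimal PyTorch sketch of the EdgeConv idea (not the authors' exact module): for every point, gather its k nearest neighbors in feature space, build edge features [x_i, x_j - x_i], apply a shared MLP, and max-pool over the neighbors.

    import torch
    import torch.nn as nn

    def knn(x, k):
        """x: (B, C, N). Indices of the k nearest neighbors of every point,
        measured in the feature space of x; returns (B, N, k)."""
        inner = -2 * torch.matmul(x.transpose(2, 1), x)
        xx = torch.sum(x ** 2, dim=1, keepdim=True)
        dist = -xx - inner - xx.transpose(2, 1)        # negative squared distance
        return dist.topk(k=k, dim=-1)[1]

    class EdgeConv(nn.Module):
        def __init__(self, in_channels, out_channels, k=20):
            super().__init__()
            self.k = k
            self.mlp = nn.Sequential(
                nn.Conv2d(2 * in_channels, out_channels, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.LeakyReLU(negative_slope=0.2),
            )

        def forward(self, x):                          # x: (B, C, N)
            B, C, N = x.shape
            idx = knn(x, self.k)                       # (B, N, k) neighbor indices
            idx = idx + torch.arange(B, device=x.device).view(-1, 1, 1) * N
            neighbors = x.transpose(2, 1).reshape(B * N, C)[idx.view(-1)].view(B, N, self.k, C)
            center = x.transpose(2, 1).unsqueeze(2).expand(B, N, self.k, C)
            edge = torch.cat([center, neighbors - center], dim=3)   # (B, N, k, 2C)
            edge = edge.permute(0, 3, 1, 2)                         # (B, 2C, N, k)
            return self.mlp(edge).max(dim=-1)[0]                    # (B, out_channels, N)

Stacking several such blocks, with the kNN graph recomputed from each block's output features, gives the dynamic graph the method is named after.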

Overview

DGCNN is the author's re-implementation of Dynamic Graph CNN, which achieves state-of-the-art performance on point-cloud-related high-level tasks including category classification, semantic segmentation and part segmentation.

For further information, please contact Yue Wang and Yongbin Sun.

Author's Implementations

The classification experiments in our paper are done with the PyTorch implementation.

Other Implementations

Citation

Please cite this paper if you want to use it in your work,

@article{dgcnn,
  title={Dynamic Graph CNN for Learning on Point Clouds},
  author={Wang, Yue and Sun, Yongbin and Liu, Ziwei and Sarma, Sanjay E. and Bronstein, Michael M. and Solomon, Justin M.},
  journal={ACM Transactions on Graphics (TOG)},
  year={2019}
}

License

MIT License

Acknowledgement

The structure of this codebase is borrowed from PointNet.

Comments
  • "The number of GPUs to use" in sem_seg with train.py

    Hello, thank you for sharing this code, it's amazing! Sorry, I have a question about train.py in the sem_seg folder. When I run "sh +x train_job.sh", the terminal shows:

        Traceback (most recent call last):
          File "train.py", line 289, in <module>
            train()
          File "train.py", line 238, in train
            train_one_epoch(sess, ops, train_writer)
          File "train.py", line 271, in train_one_epoch
            ops['pointclouds_phs'][1]: current_data[start_idx_1:end_idx_1, :, :],
        IndexError: list index out of range

    I checked the train.py arguments and found what is probably the cause, the number of GPUs to use: parser.add_argument('--num_gpu', type=int, default=1, help='the number of GPUs to use [default: 2]'). I only have one NVIDIA 1050 Ti, so I changed default=2 to 1. Does that mean I have to buy more graphics cards to fix this? Thanks a lot!

    opened by suan0365006 12
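
    A possible fix for the IndexError above (a sketch, not the repository's exact code): the traceback indexes ops['pointclouds_phs'][1], so the training loop appears to feed one sub-batch per GPU through hard-coded placeholder indices, and with num_gpu=1 the second index goes out of range. Making the feed loop depend on num_gpu avoids the need for a second card; the placeholder names other than ops['pointclouds_phs'] are illustrative assumptions.

        def feed_dict_for_gpus(ops, current_data, current_label, start_idx, batch_size, num_gpu):
            """Split one batch across num_gpu towers. Assumes ops['pointclouds_phs'],
            ops['labels_phs'] and ops['is_training_phs'] each hold one entry per GPU
            (the last two key names are assumptions for this sketch)."""
            per_gpu = batch_size // num_gpu
            feed = {}
            for g in range(num_gpu):
                s = start_idx + g * per_gpu
                e = s + per_gpu
                feed[ops['pointclouds_phs'][g]] = current_data[s:e, :, :]
                feed[ops['labels_phs'][g]] = current_label[s:e]
                feed[ops['is_training_phs'][g]] = True
            return feed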
  • KeyError:

    KeyError: "Unable to open object (object 'data' doesn't exist)"

    Thanks for sharing your awesome code.

    I ran train.py following the README step by step, but when I run python train.py I get this error: KeyError: "Unable to open object (object 'data' doesn't exist)".

    I have solved all the dependency issues, but the error above keeps appearing.

    I can run pointnet (https://github.com/charlesq34/pointnet) without any errors; however, I cannot run dgcnn...

    Please help me so I can study dgcnn further.

    Sorry for my poor English.

    opened by hjsg1010 10
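
    A quick way to diagnose this KeyError (a sketch; the path below assumes the ModelNet40 HDF5 files live in data/modelnet40_ply_hdf5_2048): list what each file actually contains. A complete file exposes 'data' and 'label' datasets; if they are missing, the download was most likely interrupted or the wrong files are being opened.

        import glob
        import h5py

        for path in sorted(glob.glob('data/modelnet40_ply_hdf5_2048/*.h5')):
            with h5py.File(path, 'r') as f:
                print(path, list(f.keys()))   # expect 'data' and 'label' among the keys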
  • The code is running super slow?

    The code is running super slow?

    @WangYueFt @syb7573330 I could run the code successfully, but it is running super slow; the speed is about 10 epochs/day. Do you have any idea about this problem, or is this the normal speed for this code?

    opened by Haiyan-Chris-Wang 6
  • Potential discrepancy between training and testing for part segmentation

    Potential discrepancy between training and testing for part segmentation

    Dear Wang,

    I really liked your paper, and thanks for sharing your code. I think there is a potential discrepancy between the training and test setups for part segmentation. It would be great if you could have a look and clarify a few doubts I have.

    • In part_seg/test.py, the point cloud is normalized before being fed into the network, while I don't see this being done in part_seg/train_multi_gpu.py. I feel this might hurt performance. Am I missing something here? Source: https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/part_seg/test.py#L185

    • What is the purpose of pc_augment_to_point_num? I understand that you remove the extra points later, but won't the network prediction change when extra points are added? Source: https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/part_seg/test.py#L185

    Looking forward to your response. Best, Ankit

    opened by imankgoyal 5
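
    For reference, a common point-cloud normalization (a sketch; not necessarily the exact transform used in part_seg/test.py): center the cloud and scale it to fit inside the unit sphere. If training and testing disagree on whether this is applied, the network sees inputs at different scales, which is exactly the kind of discrepancy the comment raises.

        import numpy as np

        def normalize_to_unit_sphere(points):
            """points: (N, 3) array. Center the cloud and scale it so the
            farthest point lies on the unit sphere."""
            centered = points - points.mean(axis=0)
            scale = np.max(np.linalg.norm(centered, axis=1))
            return centered / scale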
  • reproduce the classification result with pytorch

    reproduce the classification result with pytorch

    I ran the PyTorch code with the script python main.py --exp_name=dgcnn_1024 --model=dgcnn --num_points=1024 --k=20 --use_sgd=True, and I always get results slightly worse than those reported in the paper. I used the best test result over the course of training. In particular, for the average accuracy (mean class accuracy), the gap from the reported numbers is larger. Are there any special settings or tricks for running the code? Thanks in advance.

    opened by cslxiao 5
  • accuracy about classification

    accuracy about classification

    Hi, when I run the TensorFlow code I only get an accuracy of 91.2%. I read the paper published in 2018, and this result is the same as the baseline. I would like to know the reason. Thanks!

    opened by grxiao 5
  • the difference between fixed knn graph and dynamic knn graph?

    the difference between fixed knn graph and dynamic knn graph?

    @WangYueFt I see that you compare the results with a baseline in the paper. As you mentioned, the baseline uses a fixed kNN graph rather than a dynamic graph. Could you help explain the difference between a fixed kNN graph and a dynamic kNN graph?

    opened by Haiyan-Chris-Wang 5
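
    A minimal illustration of the distinction (assuming PyTorch tensors of shape (batch, channels, num_points)): a fixed kNN graph is built once from the xyz coordinates and reused by every layer, while the dynamic graph recomputes the neighbors from each layer's output features, so the graph changes as the features evolve.

        import torch

        def knn(x, k):
            """x: (B, C, N) -> (B, N, k) neighbor indices, computed from pairwise
            distances in the space of x."""
            inner = -2 * torch.matmul(x.transpose(2, 1), x)
            xx = torch.sum(x ** 2, dim=1, keepdim=True)
            return (-xx - inner - xx.transpose(2, 1)).topk(k, dim=-1)[1]

        xyz = torch.rand(1, 3, 1024)       # input coordinates
        feats = torch.rand(1, 64, 1024)    # e.g. the output of the first EdgeConv

        fixed_idx = knn(xyz, 20)           # fixed graph: computed once, reused everywhere
        dynamic_idx = knn(feats, 20)       # dynamic graph: recomputed per layer from features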
  • InternalError (see above for traceback): Blas xGEMM launch failed

    InternalError (see above for traceback): Blas xGEMM launch failed

    Hello, thank you for your reply. When I try to run the sem_seg code I run into this problem, and I have one GPU (8 GB memory). Can you tell me how to solve it? Looking forward to your reply.

        InternalError (see above for traceback): Blas xGEMM launch failed : a.shape=[1,4096,3], b.shape=[1,3,4096], m=4096, n=4096, k=3
          [[Node: tower_0/MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](tower_0/ExpandDims_1, tower_0/transpose)]]

    opened by longmalongma 4
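
    Blas xGEMM launch failures in TensorFlow 1.x are usually a symptom of the GPU running out of memory or being held by another process, not a bug in the model itself. A common workaround, sketched below for a TF 1.x session (the repository's scripts may already set some of these options), is to let TensorFlow allocate GPU memory on demand; if that is not enough, reducing the batch size or the number of points per block also helps.

        import tensorflow as tf

        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True   # allocate GPU memory as needed
        config.allow_soft_placement = True       # fall back to CPU if an op can't run on GPU
        sess = tf.Session(config=config)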
  • part segmentation

    part segmentation

    Hi,

    I am trying to reproduce the results shown in the paper with your code, but I am not able to. Would you mind releasing your trained model for the ShapeNet part segmentation task?

    Thanks!

    opened by zaiweizhang 4
  • How did you calculate forward time for several models?

    How did you calculate forward time for several models?

    Hi, first of all, sorry for repeatedly asking about your research.

    I'm curious how the forward time (or operation time?) for the models shown in Table 3 of your paper was calculated.

    Does that value mean the computation time for one epoch?

    Also, what effect did you expect from including the categorical vector? I just wonder how you came up with this interesting idea.

    opened by hjsg1010 3
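
    For reference, a typical way to measure the per-forward-pass time of a PyTorch model (a sketch; not necessarily how Table 3 was produced). A value like this is usually the time of a single forward pass on one batch, not the time for an epoch. torch.cuda.synchronize() is needed because CUDA kernels run asynchronously.

        import time
        import torch

        def measure_forward_time(model, batch, warmup=10, iters=100):
            """Average seconds per forward pass of `model` on `batch`."""
            model.eval()
            with torch.no_grad():
                for _ in range(warmup):            # warm up kernels / cuDNN autotuning
                    model(batch)
                torch.cuda.synchronize()
                start = time.time()
                for _ in range(iters):
                    model(batch)
                torch.cuda.synchronize()
            return (time.time() - start) / iters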
  • How do you visualize your segmentation outputs?

    How do you visualize your segmentation outputs?

    Hi, I am impressed by your research and am studying it. I have a question about visualizing the segmentation outputs. I have built some classification deep learning models, but this is my first time working on segmentation.

    I want to visualize outputs like Figures 6 and 7 in your paper.

    Did you use a tool such as MATLAB?

    And what should I use as input for the visualization?

    Sorry for my poor English skills.

    Have a nice day

    opened by hjsg1010 3
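
    One simple way to visualize per-point predictions (a sketch; the figures in the paper may have been rendered with a different tool): color each point by its predicted part label with matplotlib's 3D scatter. The input is just the xyz coordinates plus the predicted label array produced by the evaluation script.

        from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)
        import matplotlib.pyplot as plt

        def show_segmentation(points, pred_labels):
            """points: (N, 3) xyz array; pred_labels: (N,) integer part labels."""
            fig = plt.figure()
            ax = fig.add_subplot(111, projection='3d')
            ax.scatter(points[:, 0], points[:, 1], points[:, 2],
                       c=pred_labels, cmap='tab20', s=2)
            ax.set_axis_off()
            plt.show()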
  • ValueError: need at least one array to concatenate

    ValueError: need at least one array to concatenate

    File "C:\Users\ianph\dgcnn\pytorch\main.py", line 225, in train(args, io) File "C:\Users\ianph\dgcnn\pytorch\main.py", line 40, in train train_loader = DataLoader(ModelNet40(partition='train', num_points=args.num_points), num_workers=8, File "C:\Users\ianph\dgcnn\pytorch\data.py", line 66, in init self.data, self.label = load_data(partition) File "C:\Users\ianph\dgcnn\pytorch\data.py", line 45, in load_data all_data = np.concatenate(all_data, axis=0) File "<array_function internals>", line 180, in concatenate

    opened by ianphoi 0
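
    This error usually means the glob inside load_data matched no HDF5 files, so np.concatenate received an empty list. A quick check (a sketch; it assumes the loader expects the files under data/modelnet40_ply_hdf5_2048 next to data.py):

        import glob
        import os

        data_dir = os.path.join('data', 'modelnet40_ply_hdf5_2048')
        files = glob.glob(os.path.join(data_dir, 'ply_data_train*.h5'))
        print(len(files), 'training files found in', os.path.abspath(data_dir))
        # If this prints 0, download/extract the ModelNet40 HDF5 files (or fix the path) first.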
  • Why is the accuracy rate so low?

    Why is the accuracy rate so low?

    Train 26, loss: 3.676545, train acc: 0.075407, train avg acc: 0.030953
    Test 26, loss: 3.640235, test acc: 0.042139, test avg acc: 0.026000
    Train 27, loss: 3.671733, train acc: 0.072358, train avg acc: 0.030758
    Test 27, loss: 3.637559, test acc: 0.044976, test avg acc: 0.027750
    Train 28, loss: 3.675745, train acc: 0.073272, train avg acc: 0.031713
    Test 28, loss: 3.636188, test acc: 0.068071, test avg acc: 0.042000
    Train 29, loss: 3.691305, train acc: 0.071545, train avg acc: 0.030454

    opened by sunhufei 1
  • Aborted (core dumped) if I process too many points at once

    Aborted (core dumped) if I process too many points at once

    I plugged the DGCNN model into my semantic segmentation framework, in which I use other models like PointNet or PointNet++ without problems. At training time everything is fine, and I get pretty good accuracies for my airborne LiDAR data (here I randomly sample 8192 points from each tile, so everything is good). However, at test time I want to predict all points inside one tile, and I get a memory error for tiles with more than 50000 points.

    Aborted (core dumped)

    I guess the problem is in the pairwise_distance function. This function calculates an adjacency matrix, and I think my GPU memory can't handle an array of shape 50000 x 50000. I understand that tf.matmul is very fast on the GPU, but I would like to try a workaround that computes the k nearest neighbors without this huge memory overhead. Is there anything like that? I know how to use a KDTree in plain Python, but I have not found a way yet to use it with TensorFlow placeholders...

    I know I can work around this problem by using smaller tiles or by downsampling my point clouds, but I would really like to fix this internally...

    I list some basic information about my implementation here:

    • python 3.6
    • tensorflow 1.15
    • batch size is already 1 at test time

    Thanks in advance for any tips!

    opened by fnardmann 2
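
    One possible workaround (a sketch, assuming TF 1.x): compute the neighbor indices on the CPU with a KD-tree and feed them in through an extra placeholder, instead of building the N x N distance matrix on the GPU inside pairwise_distance. Note this only covers a graph built from the input coordinates; the deeper EdgeConv layers build their graphs in feature space, so those would still need a chunked GPU computation or a fixed graph.

        import numpy as np
        from scipy.spatial import cKDTree

        def knn_indices_cpu(points, k):
            """points: (N, 3) numpy array. Returns (N, k) int32 indices of the k
            nearest neighbors of every point (excluding the point itself)."""
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k + 1)   # the first neighbor is the point itself
            return idx[:, 1:].astype(np.int32)

        # Feed the result through a placeholder of shape (batch, N, k) and use it
        # where the model normally takes the tf.nn.top_k of the pairwise-distance matrix.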
  • How to add more DGCNN layers in your implementation?

    How to add more DGCNN layers in your implementation?

    From my point of view, since your implementation doesn't use the updated node embeddings as input between epochs, it can be seen as a one-layer model, right?

    So how can I add more layers to your model? Have you done any experiments on the performance of different numbers of layers?

    opened by hibayesian 0
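
    For what it's worth, the released architectures already stack several EdgeConv blocks, and each block rebuilds its kNN graph from the previous block's output features within a single forward pass, so no update between epochs is needed. Adding depth is just a matter of appending another block. A sketch, reusing the hypothetical EdgeConv class from the sketch in the Overview section above:

        import torch.nn as nn

        class StackedDGCNN(nn.Module):
            def __init__(self, k=20):
                super().__init__()
                self.blocks = nn.ModuleList([
                    EdgeConv(3, 64, k),     # graph built from xyz
                    EdgeConv(64, 64, k),    # graph rebuilt from block-1 features
                    EdgeConv(64, 128, k),   # append or remove blocks here to change depth
                ])

            def forward(self, x):           # x: (B, 3, N)
                for block in self.blocks:
                    x = block(x)            # each block recomputes its own kNN graph
                return x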
  • LiDAR Point Cloud Classification results not good with real data

    LiDAR Point Cloud Classification results not good with real data

    Dear all,

    I am using DGCNN to classify LiDAR point clouds. I have trained the model using the ModelNet40 training data (2048 points, 250 epochs), and the results are good when I classify objects from the ModelNet40 test data.

    But when I try to classify real data collected by a Velodyne sensor, the predictions are mostly wrong; please find the attached example. Most of the time I get Plant, Guitar, or Stairs as the output. I have shifted my objects to the center of the coordinate frame and normalized the values to [-1, 1]. I have even tried to clean up the boundaries.

    Can somebody suggest what I could be doing wrong?

    opened by manishmaruthi 1
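
    A likely contributor to the misclassifications above is a preprocessing mismatch: the ModelNet40 training clouds are points sampled from complete CAD models and normalized to the unit sphere, while partial Velodyne scans scaled per-axis to [-1, 1] look quite different to the network. A sketch (assumed preprocessing, not part of the repository) of matching the ModelNet40-style input format:

        import numpy as np

        def preprocess_scan(points, num_points=1024):
            """points: (M, 3) raw scan. Resample to a fixed point count, center,
            and scale to the unit sphere (instead of a per-axis [-1, 1] box)."""
            choice = np.random.choice(len(points), num_points,
                                      replace=len(points) < num_points)
            pts = points[choice]
            pts = pts - pts.mean(axis=0)
            pts = pts / np.max(np.linalg.norm(pts, axis=1))
            return pts.astype(np.float32)

    Even with matched preprocessing, some domain gap between complete synthetic models and partial real scans remains, so a degree of misprediction is expected.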