3D AffordanceNet
This repository is the official implementation of the 3D AffordanceNet benchmark.
3D AffordanceNet is a 3D point cloud benchmark consisting of 23k shapes from 23 semantic object categories, annotated with 56k affordance annotations and covering 18 visual affordance categories.
This repository implements two baseline methods, PointNet++ and DGCNN, on the four proposed affordance understanding tasks: Full-Shape, Partial-View, Rotation-Invariant, and Semi-Supervised Affordance Estimation.
You can reproduce the results reported in the original paper by simply running the commands below.
[CVPR 2021 Paper] [Dataset Download Link] [Project Page]
Requirements
All code is tested in the following environment:
- Linux (tested on Ubuntu 16.04)
- Python 3.7+
- PyTorch 1.0.1
- Gorilla-Core
- CUDA 10.0 or higher
You can install the required packages by running the following command:
pip install -r requirement.txt
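After installation, you can quickly confirm that PyTorch and CUDA are visible to Python. This check uses only standard PyTorch attributes:

import torch

# Report the installed PyTorch version and whether a CUDA device is usable.
print('PyTorch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
print('CUDA version:', torch.version.cuda)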
To install the CUDA kernels, go to models/pointnet2_ops and run the following command:
python setup.py build_ext --inplace
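A quick way to confirm the build succeeded is to look for the compiled shared object; with --inplace, setuptools copies it into the source tree (the exact file name and subdirectory depend on the extension definition in setup.py):

find models/pointnet2_ops -name "*.so"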
Quick Start
The following setup is for DGCNN; you can switch to PointNet++ by changing the config path accordingly.
First, download the whole dataset from here and extract the files to data_root. Then modify the dataset data_root in the configuration (full-shape, for example). The dataset data_root should follow the structure below:
data_root
├── task_train_data.pkl
├── task_val_data.pkl
└── task_test_data.pkl
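To sanity-check the extracted files, you can load one of the pickles directly. This is only an inspection sketch: the file name uses the task placeholder from the tree above, and the exact record layout (point clouds, affordance labels, etc.) is defined by the dataset release, not by this snippet.

import pickle

# 'data_root' is the directory you extracted the dataset to; replace 'task'
# with the concrete task name, matching the file names shown in the tree above.
with open('data_root/task_train_data.pkl', 'rb') as f:
    data = pickle.load(f)

# Print only generic structure information, since the record layout
# depends on the dataset release.
print(type(data))
if hasattr(data, '__len__'):
    print('number of entries:', len(data))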
Then to train a model from scratch:
python train.py config/dgcnn/estimation_cfg.py --work_dir PATH_TO_LOG_DIR --gpu 0,1
After training, to test a model:
python test.py config/dgcnn/estimation_cfg.py --work_dir PATH_TO_LOG_DIR --gpu 0,1 --checkpoint PATH_TO_CHECKPOINT
Currently Supported
- Models
  - DGCNN
  - PointNet++
- Tasks
  - Full-Shape Affordance Estimation
  - Partial-View Affordance Estimation
  - Rotation-Invariant Affordance Estimation
  - Semi-Supervised Affordance Estimation