# An easier-to-use implementation of KPConv

This repo contains an easier-to-use implementation of KPConv based on PyTorch.
## Introduction
KPConv is a powerful point convolution for point cloud processing. However, the original PyTorch implementation of KPConv has the following drawbacks:
- It relies on heavy data preprocessing in the dataloader `collate_fn` to downsample the input point clouds, so one has to rewrite the `collate_fn` to work with KPConv. Moreover, the preprocessing runs on the CPU, which can be slow when the point clouds are large (e.g., KITTI).
- The network architecture and the KPConv configurations are fixed in the config file, and only a single-branch FCN architecture is supported. This makes it inflexible to build multi-branch networks for more complicated tasks.
To use KPConv in more complicated networks, we built this repo with the following modifications:
- GPU-based grid subsampling and radius neighbor searching. To accelerate kNN searching, we use KeOps (a kNN sketch follows this list). This enables us to decouple grid subsampling from data loading.
- Rebuilt KPConv interface. This enables us to insert KPConv anywhere in the network (see the interface sketch after this list). All KPConv modules are rewritten to accept four inputs:
  - `s_feats`: features of the support points.
  - `q_points`: coordinates of the query points.
  - `s_points`: coordinates of the support points.
  - `neighbor_indices`: the indices of the neighbors for the query points.
- Group normalization is used by default instead of batch normalization. As point clouds from a batch are stacked along the point dimension in KPConv, BN is hard to implement, so we use GN instead (see the normalization example after this list).
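
The snippet below is a minimal sketch of GPU-based kNN searching with KeOps, following the standard `LazyTensor` pattern from the KeOps documentation. The function name `knn_search` and its signature are illustrative assumptions, not necessarily the exact API of this repo:

```python
import torch
from pykeops.torch import LazyTensor

def knn_search(q_points: torch.Tensor, s_points: torch.Tensor, k: int) -> torch.Tensor:
    """Find the k nearest support points for each query point (hypothetical helper)."""
    x_i = LazyTensor(q_points[:, None, :])  # (M, 1, 3) query points
    y_j = LazyTensor(s_points[None, :, :])  # (1, N, 3) support points
    sq_dists = ((x_i - y_j) ** 2).sum(-1)   # (M, N) symbolic squared distances
    return sq_dists.argKmin(k, dim=1)       # (M, k) neighbor indices, reduced on GPU

# Usage: neighbor_indices = knn_search(q_points.cuda(), s_points.cuda(), k=16)
```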
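To illustrate the four-input calling convention, here is a hypothetical stand-in module. It is deliberately much simpler than the real kernel-point convolution (it just fuses neighbor features with relative coordinates and averages), but any module with this signature can be inserted anywhere in a network:

```python
import torch
import torch.nn as nn

class NeighborAvgConv(nn.Module):
    """A stand-in for KPConv (not the real kernel-point convolution)
    that follows the same four-input convention."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Project neighbor features concatenated with relative coordinates.
        self.linear = nn.Linear(in_channels + 3, out_channels)

    def forward(self, s_feats, q_points, s_points, neighbor_indices):
        # s_feats: (N, C_in), q_points: (M, 3), s_points: (N, 3), neighbor_indices: (M, K)
        neighbor_feats = s_feats[neighbor_indices]                   # (M, K, C_in)
        rel_pos = s_points[neighbor_indices] - q_points[:, None, :]  # (M, K, 3)
        fused = torch.cat([neighbor_feats, rel_pos], dim=-1)         # (M, K, C_in + 3)
        return self.linear(fused).mean(dim=1)                        # (M, C_out)
```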
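Because stacked point clouds have no fixed batch dimension, GN is a natural fit: it normalizes each point's features over channel groups, independently of how many clouds are stacked. A small example, assuming 64-channel features and 8 groups:

```python
import torch
import torch.nn as nn

# Two point clouds of different sizes, stacked along the point dimension.
feats_a = torch.randn(1000, 64)
feats_b = torch.randn(1500, 64)
stacked = torch.cat([feats_a, feats_b], dim=0)  # (2500, 64)

# GroupNorm needs no batch statistics, so the stacking layout is irrelevant.
norm = nn.GroupNorm(num_groups=8, num_channels=64)
out = norm(stacked)  # (2500, 64)
```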
More examples will be provided in the future.