Kernel Point Convolutions

Intro figure

Created by Hugues THOMAS

Introduction

Update 27/04/2020: a new PyTorch implementation is available, with SemanticKitti support and Windows compatibility.

This repository contains the implementation of Kernel Point Convolution (KPConv), a point convolution operator presented in our ICCV2019 paper (arXiv). If you find our work useful in your research, please consider citing:

@article{thomas2019KPConv,
    Author = {Thomas, Hugues and Qi, Charles R. and Deschaud, Jean-Emmanuel and Marcotegui, Beatriz and Goulette, Fran{\c{c}}ois and Guibas, Leonidas J.},
    Title = {KPConv: Flexible and Deformable Convolution for Point Clouds},
    Journal = {Proceedings of the IEEE International Conference on Computer Vision},
    Year = {2019}
}
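
For intuition, here is a minimal NumPy sketch of the rigid KPConv operator at a single query point, following the linear-correlation formulation of the paper. All names and shapes are illustrative, not the repo's API:

import numpy as np

def kpconv_rigid(q_pt, neighb_pts, neighb_feats, kernel_pts, weights, sigma):
    # q_pt: (3,) query point; neighb_pts: (N, 3); neighb_feats: (N, C_in)
    # kernel_pts: (K, 3) kernel point positions relative to the query point
    # weights: (K, C_in, C_out); sigma: influence radius of each kernel point
    rel = neighb_pts - q_pt
    # Linear correlation: influence decays with distance to each kernel point
    dists = np.linalg.norm(rel[:, None, :] - kernel_pts[None, :, :], axis=-1)  # (N, K)
    influence = np.maximum(0.0, 1.0 - dists / sigma)
    out = np.zeros(weights.shape[-1])
    for k in range(len(kernel_pts)):
        # Features aggregated toward kernel point k, then its weight matrix
        out += (influence[:, k] @ neighb_feats) @ weights[k]
    return out

# Example: 20 neighbors, 5-dim features, 15 kernel points, 8 output channels
rng = np.random.default_rng(0)
out = kpconv_rigid(np.zeros(3), rng.normal(size=(20, 3)), rng.normal(size=(20, 5)),
                   0.5 * rng.normal(size=(15, 3)), rng.normal(size=(15, 5, 8)), sigma=0.3)
print(out.shape)  # (8,)

In the actual network this is applied to every point with precomputed neighbor indices, and the deformable variant additionally predicts offsets for the kernel points.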

Update 03/05/2019: a bug was found with TF 1.13 and CUDA 10. We found an internal bug in the tf.matmul operation: it returns absurd values (as large as 1e12), causing NaNs to appear in the network. We advise using the code with CUDA 9.0 and TF 1.12. More info in issue #15.
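
If you have to run on an affected setup, one cheap safeguard (a sketch using the standard TF 1.x API, not something from this repo) is to wrap suspect tensors in tf.check_numerics, so the session fails as soon as a NaN or Inf appears instead of letting absurd values propagate:

import numpy as np
import tensorflow as tf  # TF 1.x API, matching the versions discussed here

a = tf.placeholder(tf.float32, shape=(None, 64))
w = tf.get_variable('w', shape=(64, 32))
# Raises InvalidArgumentError at run time if the matmul output contains NaN/Inf
out = tf.check_numerics(tf.matmul(a, w), message='NaN/Inf after matmul')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(out, feed_dict={a: np.random.randn(8, 64).astype(np.float32)})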

SemanticKitti Code: You can download the code used for the SemanticKitti submission here. It is not clean, has very few explanations, and could be buggy. Use it only if you are familiar with the KPConv implementation.

Installation

A step-by-step installation guide for Ubuntu 16.04 is provided in INSTALL.md. Windows is currently not supported, as the code uses custom TensorFlow operations.

Experiments

We provide scripts for many experiments. The instructions to run these experiments are in the doc folder.

  • Object Classification: Instructions to train KP-CNN on an object classification task (Modelnet40).

  • Object Segmentation: Instructions to train KP-FCNN on an object segmentation task (ShapeNetPart).

  • Scene Segmentation: Instructions to train KP-FCNN on several scene segmentation tasks (S3DIS, Scannet, Semantic3D, NPM3D).

  • New Dataset: Instructions to train KPConv networks on your own data.

  • Pretrained models: We provide pretrained weights and instructions to load them.

  • Visualization scripts: Instructions for the three scripts that visualize the learned features, the kernel deformations, and the Effective Receptive Fields.

Performance

The following tables report current performance on different tasks and datasets. Some scores have improved since the article was submitted.

Classification and segmentation of 3D shapes

Method          ModelNet40 OA   ShapeNetPart classes mIoU   ShapeNetPart instances mIoU
KPConv rigid    92.9%           85.0%                       86.2%
KPConv deform   92.7%           85.1%                       86.4%

Segmentation of 3D scenes

Method          Scannet mIoU   Sem3D mIoU   S3DIS mIoU   NPM3D mIoU
KPConv rigid    68.6%          74.6%        65.4%        72.3%
KPConv deform   68.4%          73.1%        67.1%        82.0%

Acknowledgment

Our code uses the nanoflann library.

License

Our code is released under MIT License (see LICENSE file for details).

Updates

  • 17/02/2020: Added a link to SemanticKitti code
  • 24/01/2020: Bug fixes
  • 01/10/2019: Added visualization scripts.
  • 23/09/2019: Added pretrained models for NPM3D and S3DIS datasets.
  • 03/05/2019: Bug found with TF 1.13 and CUDA 10.
  • 19/04/2019: Initial release.
Comments
  • NaN error during training


    Hi @HuguesTHOMAS, sorry to bother you. Have you ever encountered this kind of problem during training?

    [screenshot, 2019-10-03: training log showing the NaN error]

    My Configuration is:

    CUDA Version 9.0.176, TF1.12,  GTX 1080
    

    Like issue #15, this error also occurs randomly during training.

    opened by XuyangBai 16
  • Problems during training.


    Hi, thanks for your sharing. I have tried your code on my own dataset. Initially everything goes well, but after several epochs the training suddenly breaks down (the accuracy becomes 1 and the loss becomes 0). I use TF 1.12.0 with CUDA 9.0 and cuDNN 7.1.4.

    # conda list | grep tensorflow
    tensorflow-estimator      1.13.0                     py_0    anaconda
    tensorflow-gpu            1.12.0                   pypi_0    pypi
    tensorflow-tensorboard    0.4.0                    pypi_0    pypi
    

    Have you encountered this kind of problem? Another potential problem: sometimes the training takes 4400 MB of GPU memory (as reported by nvidia-smi), and sometimes more than 7000 MB, even though I do not change the batch size or the network architecture. I am quite confused by these problems. Could you give me some advice?
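
    On the memory question, one possibility (an assumption, not a confirmed diagnosis) is that TensorFlow's allocator reserves different amounts depending on the variable-size point batches it has seen. A standard TF 1.x setting that makes nvidia-smi reflect actual usage rather than an up-front reservation:

        import tensorflow as tf  # TF 1.x

        # Let the GPU allocator grow on demand instead of grabbing a large block
        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        sess = tf.Session(config=config)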

    opened by XuyangBai 15
  • Online evaluation


    Since the data preparation happens in batches before being sent to the network, I am wondering whether it would be reasonable to adapt it to run on demand as each new cloud arrives, or whether the processing of each new input would make it prohibitively costly. Thanks
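
    For a rough sense of feasibility, here is a hedged pure-Python sketch of on-demand preparation for one incoming cloud (grid subsampling followed by radius neighbor queries; the function name, radii, and the omitted per-layer pooling indices make this illustrative only, and the repo's C++ ops are much faster):

        import numpy as np
        from scipy.spatial import cKDTree

        def prepare_cloud(points, radius=0.06, first_subsampling_dl=0.03):
            # Grid subsampling: average the points falling in each voxel
            voxels = np.floor(points / first_subsampling_dl).astype(np.int64)
            _, inv = np.unique(voxels, axis=0, return_inverse=True)
            counts = np.bincount(inv)
            sub = np.stack([np.bincount(inv, weights=points[:, d]) / counts
                            for d in range(3)], axis=1)
            # Radius neighbors for the first convolution layer
            tree = cKDTree(sub)
            neighbors = tree.query_ball_point(sub, r=radius)
            return sub, neighbors

    Whether this is cheap enough depends on cloud size, but since the cost is dominated by the KD-tree build and queries, per-cloud preparation on arrival does not look prohibitive in principle.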

    opened by phil0stine 10
  • Question about running 'training_S3DIS.py'

    Hi @HuguesTHOMAS, firstly, thanks for your great work on KPConv. I have run into some problems when running 'training_S3DIS.py'. The error information is below:

    Traceback (most recent call last):
      File "/home/hwk/anaconda3/envs/py3.5/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1278, in _do_call
        return fn(*args)
      File "/home/hwk/anaconda3/envs/py3.5/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1263, in _run_fn
        options, feed_dict, fetch_list, target_list, run_metadata)
      File "/home/hwk/anaconda3/envs/py3.5/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1350, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.OutOfRangeError: End of sequence
      [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,3], [?,3], [?,3], [?,3], [?,3], ..., [?], [?,3], [?,3,3], [?], [?]], output_types=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_INT32, DT_FLOAT, DT_FLOAT, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
      [[Node: optimizer/gradients/KernelPointNetwork/layer_0/resnetb_1/conv2/concat_1_grad/GatherV2_2/axis/_222 = _HostSend[T=DT_INT32, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1469_...rV2_2/axis", _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/hwk/KPConv/utils/trainer.py", line 261, in train
        _, L_out, L_reg, L_p, probs, labels, acc = self.sess.run(ops, {model.dropout_prob: 0.5})
      (session.py frames raising the same OutOfRangeError with the same IteratorGetNext node dump as above)

    Caused by op 'IteratorGetNext', defined at:
      File "training_S3DIS.py", line 213, in <module>
        dataset.init_input_pipeline(config)
      File "/home/hwk/KPConv/datasets/common.py", line 749, in init_input_pipeline
        self.flat_inputs = iter.get_next()
      (TensorFlow iterator_ops.py, gen_dataset_ops.py, op_def_library.py, deprecation.py and ops.py frames)

    OutOfRangeError (see above for traceback): End of sequence (same IteratorGetNext node dump as above)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "training_S3DIS.py", line 244, in <module>
        trainer.train(model, dataset)
      File "/home/hwk/KPConv/utils/trainer.py", line 347, in train
        self.cloud_validation_error(model, dataset)
      File "/home/hwk/KPConv/utils/trainer.py", line 806, in cloud_validation_error
        preds = (sub_preds[dataset.validation_proj[i_val]]).astype(np.int32)
    IndexError: arrays used as indices must be of integer (or boolean) type

    I am looking forward to your reply.

    opened by Hlxwk 10
  • I want to do semantic segmentation on unlabeled point cloud data (xyzrgb) using a pre-trained model.


    Hello.

    I want to do semantic segmentation on unlabeled point cloud data (xyzrgb) using a pre-trained model; I will use a model pre-trained on S3DIS. What is the input format (rgbxyz)? I would like to know how to run semantic segmentation on my own data.

    Thank you.
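
    As a hedged illustration of the expected layout (the file name, scaling, and column order here are assumptions about your data, not the repo's exact API): the scene segmentation pipelines read each cloud as points plus colors, and labels are only needed for training/validation splits, so unlabeled data can go through a 'test'-style split with dummy labels.

        import numpy as np

        cloud = np.loadtxt('my_scan.txt')               # assumed columns: x y z r g b
        points = cloud[:, :3].astype(np.float32)
        colors = cloud[:, 3:6].astype(np.uint8)         # assumed 0-255 colors, as in the dataset loaders
        labels = np.zeros(len(points), dtype=np.int32)  # dummy labels for a test-style split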

    opened by Dsai1006 9
  • Reflections on improvement

    Dear @HuguesTHOMAS,

    I was wondering if you were going to keep working on improving the model.

    I think KPConv could be improved in at least 3 ways.

    1. The convolution definition doesn't use the distance to the central point within its formula (see the formulas after this list).

    [screenshot: KPConv convolution formula from the paper]

    It could have been (a): g(y_i) = sum_k( f(norm(y_i) / R) * h(y_i, x_k) * W_k ), or (b): g(y_i) = sum_k( f(y_i) * h(y_i, x_k) * W_k ). In (a), f takes a value between 0 and 1 and performs a guidance on it; in (b), f takes y_i directly and does the same.

    It would be similar to this one, by analogy with image convolution. [screenshot: analogous image convolution formula]

    2. Sigma could be variable, or the density of each point could be used, as in the works below:

    [screenshots: excerpts from density-based weighting papers]
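
    To make the discussion concrete, the paper's rigid kernel and the two proposed variants can be written in consistent notation (the baseline is reconstructed from the paper; f denotes the proposed modulating function, called g in the text above):

        % Baseline KPConv kernel (rigid):
        \[ g(y_i) = \sum_{k<K} h(y_i, \tilde{x}_k)\, W_k, \qquad
           h(y_i, \tilde{x}_k) = \max\!\big(0,\, 1 - \tfrac{\|y_i - \tilde{x}_k\|}{\sigma}\big) \]
        % Variant (a): modulate by the normalized distance to the center point
        \[ g_a(y_i) = \sum_{k<K} f\!\big(\tfrac{\|y_i\|}{R}\big)\, h(y_i, \tilde{x}_k)\, W_k \]
        % Variant (b): modulate by the offset itself
        \[ g_b(y_i) = \sum_{k<K} f(y_i)\, h(y_i, \tilde{x}_k)\, W_k \]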

    opened by tchaton 9
  • Poor performance using test_any_model.py on ShapeNet Part segmentation


    During training I get mIoU = 86% and mcIoU = 85%, but it seems that you do not use all of the data in the test set. I use test_any_model.py to test the trained model at epoch 500, but I get poor performance (mIoU = 40.7% and mcIoU = 34.3%). How can I fix this issue?

    opened by CZ-Wu 8
  • Question related to Input Preparation


    Hello, thank you for your sharing. I really appreciate your KPConv work, and I am working on implementing it in PyTorch. But I was confused when trying to understand the data preparation code, because there are so many encapsulations and wrappers. Would you please give me a high-level idea of the implementation? Basically, my questions are:

    1. About the flat_inputs variable: in my understanding, you build flat_inputs, which consists of the points and their neighbors for each layer, before the session runs. But how can you calculate the neighbors of each layer before the specific point cloud is sent into the network?
    2. For the convolutional part, I already know how to build the matrix and calculate the output features for a batched input, but before the convolution I need the neighborhood indices of each point. I have no idea how to calculate the neighbors for each point cloud in parallel; obviously, calculating the neighbor indices one by one and then concatenating them would be very time consuming. How do you solve this problem in your implementation? (A sketch of one possible approach appears below.)
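
    For the record, a hedged sketch of one way to batch the radius-neighbor computation outside the network (the stacked-cloud layout mirrors the idea used here, but this helper is illustrative, not the actual implementation, which uses custom C++ ops on the stacked points and stack lengths):

        import numpy as np
        from scipy.spatial import cKDTree

        def batch_radius_neighbors(clouds, radius, max_neighbors=40):
            # Stack all clouds into one array and remember per-cloud lengths
            stacked = np.concatenate(clouds, axis=0)
            lengths = np.array([len(c) for c in clouds])
            n_total = len(stacked)
            # Pad missing neighbors with index n_total (a 'shadow point' convention,
            # so gathered shadow features can be defined as zero)
            neighb_inds = np.full((n_total, max_neighbors), n_total, dtype=np.int64)
            offset = 0
            for cloud in clouds:
                tree = cKDTree(cloud)  # one KD-tree per cloud, queried in bulk
                for i, inds in enumerate(tree.query_ball_point(cloud, r=radius)):
                    inds = np.asarray(inds[:max_neighbors], dtype=np.int64) + offset
                    neighb_inds[offset + i, :len(inds)] = inds
                offset += len(cloud)
            return stacked, lengths, neighb_inds

    Because the indices are global into the stacked array, a single gather per layer serves the whole batch, which is why the neighbors can be precomputed in the input pipeline before the session runs.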
    opened by XuyangBai 8
  • About online testing on Semantic3D

    What is the file format of the prediction results that need to be uploaded when testing on the Semantic3D dataset online? Can you describe it in detail? The website description is very unclear. Looking forward to your reply.

    opened by longmalongma 7
  • IndexError: index 3 is out of bounds for axis 0 with size 3


    Thanks for your great work. After training, I run python visualize_features.py and meet this problem:

        File "/KPConv-master2/utils/visualizer.py", line 220, in top_relu_activations
          l = np.argmax(np.bincount(labels[b]))
        IndexError: index 3 is out of bounds for axis 0 with size 3

    Can you help me? Looking forward to your reply.

    opened by longmalongma 7
  • tensorflow.python.framework.errors_impl.InvalidArgumentError: targets[872] is out of range


    Thanks for your great work and nice code. I met some problems when using the project on my own dataset, and I have no idea how to deal with them. I hope you can help me.

        Caught a NaN error : 3
        targets[872] is out of range
          [[node results/in_top_k/InTopKV2 (defined at /home/sanwei/下载/beifen/KPConvmy/utils/trainer.py:191) = InTopKV2[T=DT_INT32, _device="/job:localhost/replica/task:0/device:CPU:0"](KernelPointNetwork/softmax/GatherNd, IteratorGetNext:24, optimizer/gradients/range_9/delta)]]
        name: "results/in_top_k/InTopKV2" op: "InTopKV2" input: "KernelPointNetwork/softmax/GatherNd" input: "IteratorGetNext:24" input: "results/in_top_k/InTopKV2/k" attr { key: "T" value { type: DT_INT32 } }

        results/in_top_k/InTopKV2 ['KernelPointNetwork/softmax/GatherNd:0', 'IteratorGetNext:24', 'results/in_top_k/InTopKV2/k:0'] ['results/in_top_k/InTopKV2:0']

        Traceback (most recent call last):
          File "/home/sanwei/anaconda3/envs/KPconv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
            return fn(*args)
          File "/home/sanwei/anaconda3/envs/KPconv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
            options, feed_dict, fetch_list, target_list, run_metadata)
          File "/home/sanwei/anaconda3/envs/KPconv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
            run_metadata)
        tensorflow.python.framework.errors_impl.InvalidArgumentError: targets[872] is out of range
          [[{{node results/in_top_k/InTopKV2}} = InTopKV2[T=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](KernelPointNetwork/softmax/GatherNd, IteratorGetNext:24, optimizer/gradients/range_9/delta)]]
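
    For anyone hitting the same message: 'targets[i] is out of range' from in_top_k usually means a label index outside [0, num_classes). A hedged sanity check over your own labels (the file name and class count are assumptions about your dataset):

        import numpy as np

        labels = np.load('my_labels.npy')   # assumed: per-point integer labels
        num_classes = 8                     # assumed: number of classes your model predicts
        bad = np.unique(labels[(labels < 0) | (labels >= num_classes)])
        print('out-of-range label values:', bad)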

    opened by camelliawb 7
  • More features in KPCONV


    Hello,

    Thank you for sharing your KPConv code! I'd like to ask whether it is possible to add more features to the computation (e.g. XYZ, RGB, intensity or reflectance, number of returns, return number). I do not know if I am thinking about this correctly, but the changes should be done in datasets/Semantic3D.py (or another dataset file) at these lines:

    https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L214-L236

    by changing the structure of the data file to XYZ, RGB, other values, and:

        data = np.loadtxt(txt_file)

        points = data[:, :3].astype(np.float32)
        colors = data[:, 3:6].astype(np.uint8)
        addvalues = data[:, 6:9].astype(np.float32)
    

    and

            sub_points, sub_colors, sub_prop, sub_labels = grid_subsampling(points,
                                                                            features=colors,
                                                                            prop=addvalues,
                                                                            labels=labels,
                                                                            sampleDl=0.01)
            # Write the subsampled ply file
            write_ply(ply_file_full, (sub_points, sub_colors, sub_prop, sub_labels),
                      ['x', 'y', 'z', 'red', 'green', 'blue', 'reflectance', 'noreturn', 'returnno', 'class'])
        else:
            # Write the full ply file
            write_ply(ply_file_full, (points, colors, addvalues),
                      ['x', 'y', 'z', 'red', 'green', 'blue', 'reflectance', 'noreturn', 'returnno'])
    

    then at https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L288-L315 one line should be added:

            data = read_ply(sub_ply_file)
            sub_colors = np.vstack((data['red'], data['green'], data['blue'])).T
            sub_prop = np.vstack((data['reflectance'], data['noreturn'], data['returnno'])).T
    

    and likewise in the else branch:

            else:
                # Read ply file
                data = read_ply(file_path)
                points = np.vstack((data['x'], data['y'], data['z'])).T
                colors = np.vstack((data['red'], data['green'], data['blue'])).T
                sub_prop = np.vstack((data['reflectance'], data['noreturn'], data['returnno'])).T
                if cloud_split == 'test':
    

    and in:

            # Subsample cloud
            sub_data = grid_subsampling(points,
                                        features=colors,
                                        prop=addvalues,
                                        labels=int_features,
                                        sampleDl=subsampling_parameter)
    

    But what about rescaling?

            # Rescale float color and squeeze label
            sub_prop = sub_data[2] / 50  # if the reflectance is used?

    The next step is to change the lines at https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L327-L337 like this:

            # Save ply
            if cloud_split == 'test':
                sub_labels = None
                write_ply(sub_ply_file,
                          [sub_data[0], sub_colors, sub_prop],
                          ['x', 'y', 'z', 'red', 'green', 'blue', 'reflectance', 'noreturn', 'returnno'])
            else:
                sub_labels = np.squeeze(sub_data[3])
                write_ply(sub_ply_file,
                          [sub_data[0], sub_colors, sub_prop, sub_labels],
                          ['x', 'y', 'z', 'red', 'green', 'blue', 'reflectance', 'noreturn', 'returnno', 'class'])
    

    and: https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L339-L343

            # Fill data containers
            self.input_trees[cloud_split] += [search_tree]
            self.input_colors[cloud_split] += [sub_colors]
            self.input_addvalues[cloud_split] += [sub_prop]
            if cloud_split in ['training', 'validation']:
                self.input_labels[cloud_split] += [sub_labels]
    

    then at https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L581-L585:

            # Collect points and colors
            input_points = (points[input_inds] - pick_point).astype(np.float32)
            input_colors = self.input_colors[split][cloud_ind][input_inds]
            input_addvalues = self.input_addvalues[split][cloud_ind][input_inds]
            input_labels = self.input_labels[split][cloud_ind][input_inds]
            input_labels = np.array([self.label_to_idx[l] for l in input_labels])
    

    at https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L606:

        c_list += [np.hstack((input_colors, input_addvalues, input_points + pick_point))]

    in https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L675-L677:

            # Collect points and colors
            input_points = (points[input_inds] - pick_point).astype(np.float32)
            input_colors = self.input_colors[data_split][cloud_ind][input_inds]
            input_addvalues = self.input_addvalues[data_split][cloud_ind][input_inds]
    

    and at https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L703:

        c_list += [np.hstack((input_colors, input_addvalues, input_points + pick_point))]

    Do the types and shapes have to be changed? https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L736-L738

    and the last part: https://github.com/HuguesTHOMAS/KPConv/blob/16bfbb96ac9af48a3c829d1f8123152d35b63862/datasets/Semantic3D.py#L745-L792

        def tf_map(stacked_points, stacked_colors, stacked_addvalues, point_labels, stacks_lengths, point_inds, cloud_inds):

            # Get batch indices for each point
            batch_inds = self.tf_get_batch_inds(stacks_lengths)

            # Augment input points
            stacked_points, scales, rots = self.tf_augment_input(stacked_points,
                                                                 batch_inds,
                                                                 config)

            # First add a column of 1 as feature for the network to be able to learn 3D shapes
            stacked_features = tf.ones((tf.shape(stacked_points)[0], 1), dtype=tf.float32)

            # Get coordinates and colors
            stacked_original_coordinates = stacked_colors[:, 3:]
            stacked_colors = stacked_colors[:, :3]
            # Get the additional values (stacked_addvalues already has just 3 columns,
            # so no coordinate columns need to be split off, unlike stacked_colors)
            stacked_addvalues = stacked_addvalues[:, :3]

            # Augmentation: randomly drop colors
            if config.in_features_dim in [4, 5]:
                num_batches = batch_inds[-1] + 1
                s = tf.cast(tf.less(tf.random_uniform((num_batches,)), config.augment_color), tf.float32)
                stacked_s = tf.gather(s, batch_inds)
                stacked_colors = stacked_colors * tf.expand_dims(stacked_s, axis=1)

            # Then use positions or not
            if config.in_features_dim == 1:
                pass
            elif config.in_features_dim == 2:
                stacked_features = tf.concat((stacked_features, stacked_original_coordinates[:, 2:]), axis=1)
            elif config.in_features_dim == 3:
                stacked_features = stacked_colors
            elif config.in_features_dim == 4:
                stacked_features = tf.concat((stacked_features, stacked_colors), axis=1)
            elif config.in_features_dim == 5:
                stacked_features = tf.concat((stacked_features, stacked_colors, stacked_original_coordinates[:, 2:]), axis=1)
            elif config.in_features_dim == 7:
                stacked_features = tf.concat((stacked_features, stacked_colors, stacked_points), axis=1)
            elif config.in_features_dim == 10:
                stacked_features = tf.concat((stacked_features, stacked_colors, stacked_addvalues, stacked_points), axis=1)
            else:
                raise ValueError('Only accepted input dimensions are 1, 3, 4, 7 and 10 (without and with rgb/xyz)')
    

    Is there anything else that should be changed, or should the changes be done in a different way?

    opened by ponkaz 0
  • Do you have the pretrained model for the Semantic3D dataset?


    I would like to test KPConv with a pretrained model on the Semantic3D dataset (either semantic-8 or reduced-8). Could you please share the pretrained model if you have it?

    Thanks

    opened by Varatharajan-Raja 2
  • ModelNet40 implementation - (\x00\x00\x00\x00\x00)


    Hi Hugues, I am trying to train KPConv on the ModelNet40 dataset, but after the messages "Dataset Preparation" and "Loading training points", it prints the message below and finishes the whole process without giving any error. Do you have any idea why this might happen?

    [screenshot: console output showing the \x00 characters from the title]

    opened by sara-y-m 0