Style-based Point Generator with Adversarial Rendering for Point Cloud Completion (CVPR 2021)

Overview

An efficient PyTorch library for Point Cloud Completion.

Project page | Paper | Video

Chulin Xie*, Chuxin Wang*, Bo Zhang, Hao Yang, Dong Chen, and Fang Wen. (*Equal contribution)

Abstract

We propose a novel Style-based Point Generator with Adversarial Rendering (SpareNet) for point cloud completion. Firstly, we present the channel-attentive EdgeConv to fully exploit the local structures as well as the global shape in point features. Secondly, we observe that the concatenation manner used by vanilla foldings limits their potential to generate complex and faithful shapes. Enlightened by the success of StyleGAN, we regard the shape feature as a style code that modulates the normalization layers during the folding, which considerably enhances its capability. Thirdly, we realize that existing point supervisions, e.g., Chamfer Distance or Earth Mover’s Distance, cannot faithfully reflect the perceptual quality of the reconstructed points. To address this, we propose to project the completed points to depth maps with a differentiable renderer and apply adversarial training to advocate the perceptual realism under different viewpoints. Comprehensive experiments on ShapeNet and KITTI prove the effectiveness of our method, which achieves state-of-the-art quantitative performance while offering superior visual quality.

Installation

  1. Create a virtual environment via conda.

    conda create -n sparenet python=3.7
    conda activate sparenet
  2. Install torch and torchvision.

    conda install pytorch cudatoolkit=10.1 torchvision -c pytorch
  3. Install requirements.

    pip install -r requirements.txt
  4. Install the CUDA extensions.

    sh setup_env.sh

Dataset

  • Download the processed ShapeNet dataset generated by GRNet, and the KITTI dataset.

  • Update the file path of the datasets in configs/base_config.py:

    __C.DATASETS.shapenet.partial_points_path = "/path/to/datasets/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd"
    __C.DATASETS.shapenet.complete_points_path = "/path/to/datasets/ShapeNetCompletion/%s/complete/%s/%s.pcd"
    __C.DATASETS.kitti.partial_points_path = "/path/to/datasets/KITTI/cars/%s.pcd"
    __C.DATASETS.kitti.bounding_box_file_path = "/path/to/datasets/KITTI/bboxes/%s.txt"
    
    # Dataset Options: ShapeNet, ShapeNetCars, KITTI
    __C.DATASET.train_dataset = "ShapeNet"
    __C.DATASET.test_dataset = "ShapeNet"
    
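The `%s` and `%02d` fields in these templates are printf-style placeholders filled in by the dataset loader. Below is a hedged sketch of the expansion, assuming the standard GRNet ShapeNetCompletion layout of subset, category id, model id, and a two-digit partial-view index (`model_id` is a hypothetical placeholder, not a real sample name):

```python
# Hypothetical illustration of how the path templates from
# configs/base_config.py are expanded; the exact field order is assumed.
partial_template = "/path/to/datasets/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd"
complete_template = "/path/to/datasets/ShapeNetCompletion/%s/complete/%s/%s.pcd"

# "02691156" is the airplane taxonomy id; "model_id" stands in for a sample name.
subset, category_id, model_id, view_idx = "train", "02691156", "model_id", 0
partial_path = partial_template % (subset, category_id, model_id, view_idx)
complete_path = complete_template % (subset, category_id, model_id)

print(partial_path)   # .../train/partial/02691156/model_id/00.pcd
print(complete_path)  # .../train/complete/02691156/model_id.pcd
```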

Get Started

Inference Using Pretrained Model

The pretrained models:

Train

All files produced during training, such as log messages and checkpoints, will be saved to the work directory.

  • run

    python train.py --gpu ${GPUS} \
             --work_dir ${WORK_DIR} \
             --model ${network} \
             --weights ${path to checkpoint}
  • example

    python train.py --gpu 0,1,2,3 --work_dir /path/to/logfiles --model sparenet --weights /path/to/checkpoint

Differentiable Renderer

A fully differentiable point renderer that enables end-to-end rendering from 3D point cloud to 2D depth maps. See the paper for details.
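To make the idea concrete, here is a minimal z-buffer sketch of projecting a point cloud to a depth map. It is purely illustrative: the `naive_depth_map` function is hypothetical, not part of the repository, and unlike SpareNet's renderer its hard per-pixel assignment is not differentiable with respect to point positions.

```python
import torch

def naive_depth_map(points, image_size=32):
    # Hypothetical hard z-buffer projection of points in [-1, 1]^3 to a
    # depth map. SpareNet's actual renderer instead splats each point with
    # a soft kernel so that gradients flow back to the point coordinates.
    depth = torch.zeros(image_size, image_size)
    # Map x, y from [-1, 1] to pixel coordinates.
    xy = ((points[:, :2] + 1) * 0.5 * (image_size - 1)).long().clamp(0, image_size - 1)
    # Shift z so that closer points have smaller values and 0 marks empty pixels.
    z = points[:, 2] + 2.0
    for (x, y), d in zip(xy.tolist(), z.tolist()):
        if depth[y, x] == 0 or d < depth[y, x]:
            depth[y, x] = d  # keep the nearest point per pixel
    return depth
```

A differentiable version would replace the hard per-pixel minimum with a smooth aggregation of per-point kernel contributions, which is what enables the adversarial training on rendered depth maps.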

Usage of Renderer

The renderer takes a point cloud (pcd), a view index, and point radii as inputs, and outputs depth maps.

  • example
    # `projection_mode`: a str, either "perspective" or "orthogonal"
    # `eyepos_scale`: a float defining the distance of the eyes to (0, 0, 0)
    # `image_size`: an int defining the output image size
    renderer = ComputeDepthMaps(projection_mode, eyepos_scale, image_size)
    
    # `data`: a tensor of shape [batch_size, num_points, 3]
    # `view_id`: the index of the selected view, satisfying 0 <= view_id < 8
    # `radius_list`: a list of floats defining the kernel radius used to render each point
    depthmaps = renderer(data, view_id, radius_list)

License

The code and the pretrained models in this repository are released under the MIT license, as specified in the LICENSE file.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

BibTex

If you use the codebase or models in your research, please cite our work as follows.

@inproceedings{xie2021stylebased,
      title={Style-based Point Generator with Adversarial Rendering for Point Cloud Completion}, 
      author={Chulin Xie and Chuxin Wang and Bo Zhang and Hao Yang and Dong Chen and Fang Wen},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      year={2021},
}
Comments
  • Bug: Segmentation fault (core dumped)

    Hi, following the instructions strictly, I ran test.py and got the following report:

    INFO: - Finish building dataset.
    Residualnet
    generating 2D grid
    DEBUG: - Parameters in net_G: 82562191.
    INFO: - Recovering from weight/SpareNet.pth ...
    INFO: - Recover complete. Current epoch = #150; best metrics = {'F-Score': 0.6606881591485739, 'ChamferDistance': 0.5150378787751227, 'EMD': 1.8626600753050297}.
    INFO: - Start validating.
    INFO: - Test[1/1200] Taxonomy = 02691156 Sample = e431f79ac9f0266bca677733d59db4df0 Losses = ['20.0666', '17.4211'] Metrics = ['0.6629', '0.4095', '1.7410']
    Segmentation fault (core dumped)
    
    opened by LitterWindwind 8
  • Cannot Train with Depth Maps without Chamfer Distance

    Hi, thanks for releasing the code. I'm interested in your method of training with depth maps, so I designed a toy problem where I train only with the depth-map loss, but the points become NaN. Do you have any suggestions? Does this mean the depth maps cannot help training?

    import torch
    from p2i_utils import ComputeDepthMaps
    
    compute_depth_maps = ComputeDepthMaps(projection="perspective", eyepos_scale=1.0, image_size=224).float()
    
    def get_depth_render(points, requires_grad=True):
        # Render the point cloud from all 8 views and stack the depth maps.
        depth_list = []
        for view_id in range(8):
            _depth = compute_depth_maps(points, view_id=view_id)
            depth_list.append(_depth)
        depth = torch.cat(depth_list, dim=1)
        return depth if requires_grad else depth.detach()
    
    # `load_from_file` and `device` are defined elsewhere in my script.
    trg_points = load_from_file()
    input_points = torch.full([1, 2500, 3], 0.0, device=device, requires_grad=True)
    optimizer = torch.optim.SGD([input_points], lr=1.0, momentum=0.9)
    Niter = 2000
    loss_list = []
    
    for i in range(Niter):
        optimizer.zero_grad()
        gt_depth = get_depth_render(trg_points, requires_grad=False)
        pred_depth = get_depth_render(input_points, requires_grad=True)
        loss = torch.nn.L1Loss()(pred_depth, gt_depth)
        loss_list.append(loss.detach().cpu().item())
        loss.backward()
        optimizer.step()
    
    
    opened by Wi-sc 4
  • RuntimeError: Expected all tensors to be on the same device, but found at least two devices

    Thank you for sharing. Following the instructions strictly, I ran train.py; an error occurred during validation after training for one epoch, reported as follows:

    INFO: - [Epoch 1/150][Batch 1205/1207] BatchTime = 4.602 (s) Losses = ['39.1348', '37.8132']
    INFO: - [Epoch 1/150][Batch 1206/1207] BatchTime = 4.770 (s) Losses = ['38.2816', '36.3361']
    INFO: - [Epoch 1/150][Batch 1207/1207] BatchTime = 4.825 (s) Losses = ['35.4861', '34.3442']
    INFO: - [Epoch 1/150] EpochTime = 6249.984 (s) Losses = ['50.3621', '49.7149']
    INFO: - Start validating.
    Traceback (most recent call last):
      File "train.py", line 79, in <module>
        main()
      File "train.py", line 75, in main
        model.runner()
      File "/home/cvlab/lxz/sparenet/runners/base_runner.py", line 338, in runner
        self.val()
      File "/home/cvlab/lxz/sparenet/runners/base_runner.py", line 209, in val
        self.val_step(items)
      File "/home/cvlab/lxz/sparenet/runners/sparenet_runner.py", line 63, in val_step
        _, refine_ptcloud, _, _, refine_loss, coarse_loss = self.completion(data)
      File "/home/cvlab/lxz/sparenet/runners/sparenet_runner.py", line 102, in completion
        _loss = coarse_loss + middle_loss + refine_loss + expansion_penalty.mean() * 0.1
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:4!

    Could you please provide some suggestions about this?

    opened by Francisco-liu 4
  • GPU memory leakage (CUDA out of memory after 2 epochs)

    Hi, thank you for this repo. Unfortunately, I am unable to train for more than ~2 epochs; I keep getting CUDA out of memory. Tracking GPU memory consumption shows a slow, steady increase. To check, I decreased BATCH_SIZE to 6 while running on 3 RTX-2080 (10.7 GB) GPUs, using the nvidia docker image cuda:10.1-cudnn8-devel-ubuntu18.04 and PyTorch 1.8 (CUDA 10.1). Running code:

    python train.py --gpu 0,1,2 --workdir /temp_out --model sparenet
    

    Have you encountered such a problem? Is some variable history not released?

    opened by orenkatzir 3
  • Unable to train

    I ran python train.py --gpu 0 --workdir ./output/logs/ --model sparenet --weights ./output/logs/checkpoints. The results are as follows (partial output):
    
    INFO: - Collecting files of Taxonomy [ID=04256520, Name=sofa]
    INFO: - Collecting files of Taxonomy [ID=04379243, Name=table]
    INFO: - Collecting files of Taxonomy [ID=04530566, Name=watercraft]
    INFO: - Complete collecting files of the dataset. Total files: 231792
    INFO: - Collecting files of Taxonomy [ID=02691156, Name=airplane]
    INFO: - Collecting files of Taxonomy [ID=02933112, Name=cabinet]
    INFO: - Collecting files of Taxonomy [ID=02958343, Name=car]
    INFO: - Collecting files of Taxonomy [ID=03001627, Name=chair]
    INFO: - Collecting files of Taxonomy [ID=03636649, Name=lamp]
    INFO: - Collecting files of Taxonomy [ID=04256520, Name=sofa]
    INFO: - Collecting files of Taxonomy [ID=04379243, Name=table]
    INFO: - Collecting files of Taxonomy [ID=04530566, Name=watercraft]
    INFO: - Complete collecting files of the dataset. Total files: 1200
    DEBUG: - update config NUM_CLASSES: 8.
    INFO: - Finish building dataset.
    Residualnet
    generating 2D grid
    DEBUG: - Parameters in net_G: 82562191.
    INFO: - Recovering from ./output/logs/checkpoints ...
    INFO: - Recover complete. Current epoch = #150; best metrics = {'F-Score': 0.6606881591485739, 'ChamferDistance': 0.5150378787751227, 'EMD': 1.8626600753050297}.
    INFO: - runner time: 0.000025 (Sparenet)
    medialab@medialab-7910:~/workspace/YZQ/SpareNetNew$

    Please give advice or comments; the training process never starts.

    opened by 17353303313 3
  • Depth Maps cannot Supervise Training

    Hi, thanks for releasing the code. I'm interested in your method of training with depth maps. I tried to train an auto-encoder where the encoder and decoder are both PointNet; the input and output are point clouds. Training with Chamfer distance works well, but replacing the loss function with the L1 distance between depth maps does not work. Do you have any suggestions?

    opened by Wi-sc 2
  • RuntimeError: CUDA error: out of memory

    I train the network through the following command: python train.py --gpu 0,,1 --workdir log --model msn

    When I execute the command, the program stops as follows:

    THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1616554800319/work/aten/src/THC/THCCachingHostAllocator.cpp line=278 error=2 : out of memory
    Traceback (most recent call last):
      File "train.py", line 83, in <module>
        main()
      File "train.py", line 79, in main
        model.runner()
      File "/home/pjw/projects5/SpareNet/runners/base_runner.py", line 338, in runner
        self.val()
      File "/home/pjw/projects5/SpareNet/runners/base_runner.py", line 209, in val
        self.val_step(items)
      File "/home/pjw/projects5/SpareNet/runners/msn_runner.py", line 54, in val_step
        _, (_, _, _, data) = items
      File "/home/pjw/projects5/SpareNet/utils/misc.py", line 18, in var_or_cuda
        x = x.cuda(non_blocking=True)
    RuntimeError: CUDA error: out of memory

    opened by peng666 2
  • FPD Metrics

    Hi @zhangmozhe, @YANG-H,

    Congratulations, and thanks for your code!

    It seems your current code does not include the computation of FPD Metrics. Could you give some examples about FPD metric?

    Best, Yingjie CAI

    opened by yjcaimeow 2
  • Some problems with setup.py

    I ran sh setup_env.sh but hit the following problems. Env: Ubuntu 20.10, CUDA 11.6.
    
    (sparenet) changjunlin@changjunlin-System-Product-Name:~/Downloads/SpareNet-main$ sh setup_env.sh
    [sudo] password for changjunlin:
    [apt output, translated: vim and tmux are already the newest versions; 0 packages installed, upgraded, or removed; 142 packages not upgraded]
    No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-11.6'
    running install
    running bdist_egg
    running egg_info
    writing emd.egg-info/PKG-INFO
    /home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py:381: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
    installing library code to build/bdist.linux-x86_64/egg
    running install_lib
    running build_ext
    Traceback (most recent call last):
      File "setup.py", line 7, in <module>
        cmdclass={"build_ext": BuildExtension},
      ...
      File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 404, in build_extensions
        self._check_cuda_version()
      File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 777, in _check_cuda_version
        torch_cuda_version = packaging.version.parse(torch.version.cuda)
      File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py", line 56, in parse
        return Version(version)
      File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py", line 275, in __init__
        match = self._regex.search(version)
    TypeError: expected string or bytes-like object
    
    The same "No CUDA runtime is found" warning and an identical TypeError traceback repeat for each extension build (emd, expansion_penalty, MDS, cubic_feature_sampling, gridding); the log is truncated.
"/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/command/build_ext.py", line 340, in run self.build_extensions() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 404, in build_extensions self._check_cuda_version() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 777, in _check_cuda_version torch_cuda_version = packaging.version.parse(torch.version.cuda) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py", line 56, in parse return Version(version) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py", line 275, in __init__ match = self._regex.search(version) TypeError: expected string or bytes-like object No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-11.6' running install running bdist_egg running egg_info writing gridding_distance.egg-info/PKG-INFO writing dependency_links to gridding_distance.egg-info/dependency_links.txt writing top-level names to 
gridding_distance.egg-info/top_level.txt /home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py:381: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend. warnings.warn(msg.format('we could not find ninja.')) reading manifest file 'gridding_distance.egg-info/SOURCES.txt' writing manifest file 'gridding_distance.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_ext Traceback (most recent call last): File "setup.py", line 15, in <module> cmdclass={"build_ext": BuildExtension}, File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup return distutils.core.setup(**attrs) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run self.do_egg_install() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install self.run_command('bdist_egg') File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 164, in run cmd = self.call_command('install_lib', warn_dir=0) File 
"/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command self.run_command(cmdname) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/distutils/command/build_ext.py", line 340, in run self.build_extensions() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 404, in build_extensions self._check_cuda_version() File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 777, in _check_cuda_version torch_cuda_version = packaging.version.parse(torch.version.cuda) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py", line 56, in parse return Version(version) File "/home/changjunlin/anaconda3/envs/sparenet/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/version.py", line 275, in __init__ match = self._regex.search(version) TypeError: expected string or 
bytes-like object

    opened by 123456789asdfjkl 1
  • Unable to Reproduce Results


    Hi, great paper! I am trying to recreate your results using selected chunks from the ScanNet dataset. I'm unable to reproduce your results with the pretrained SpareNet model, even for shape categories that were included in the training data. I processed the points as follows:

    diag = points.max(dim=1).values - points.min(dim=1).values  # points: 1 x N x 3, diag: 1 x 3
    norm = 1 / torch.linalg.norm(diag)
    c = points.mean(dim=1)
    points = (points - c) * norm
    

    It works fine for points from the ShapeNet val or train set, but not for clouds such as this one: input_chair. I sampled ~1000 points, and it always gives this kind of reconstruction for the coarse point cloud: Capture_res_chair. Is there some preprocessing step I'm missing? Thank you in advance.
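For anyone comparing against this preprocessing, the same diagonal-norm scaling can be sanity-checked in NumPy. The helper name `normalize_diag` is hypothetical, and it operates on a single `(N, 3)` cloud rather than the batched `1 x N x 3` tensor above; after normalization, the bounding-box diagonal should come out with unit length:

```python
import numpy as np

def normalize_diag(points: np.ndarray) -> np.ndarray:
    """Center an (N, 3) cloud and scale it so the bounding-box diagonal has unit length."""
    diag = points.max(axis=0) - points.min(axis=0)   # per-axis extent, shape (3,)
    scale = 1.0 / np.linalg.norm(diag)               # inverse diagonal length
    center = points.mean(axis=0)
    return (points - center) * scale

# Centering does not change the extents, so the new diagonal length is
# ||diag|| * (1 / ||diag||) = 1 exactly (up to float rounding).
cloud = np.random.default_rng(0).uniform(-5.0, 5.0, size=(1000, 3))
out = normalize_diag(cloud)
diag_len = np.linalg.norm(out.max(axis=0) - out.min(axis=0))
print(round(diag_len, 6))  # → 1.0
```

If a pretrained model still fails on such a cloud, the mismatch is more likely in the sampling density or the canonical orientation than in this scaling step.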

    opened by chekirou 1
  • evaluation code for real dataset


    Hi, @YANG-H @msftgits @dnfclas @zhangmozhe @msftdata,

    Table 4 in the main paper shows a quantitative comparison on the KITTI dataset in terms of consistency, fidelity, and minimum matching distance (MMD). Could you please share the code for the evaluation on the real dataset?

    Also, is the reported MMD multiplied by 10000, by 1000, or not scaled at all?

    Best, Yingjie

    opened by yjcaimeow 1
  • EMD module giving error


    When I run the inference script with the following command, python3 test.py --gpu 0 --workdir ./ --model sparenet --weights SpareNet.pth --test_mode default, I get this error:

    emd.forward(
    AttributeError: module 'emd' has no attribute 'forward'

    My environment is as follows: CUDA Version: 11.6, Python 3.8.10, PyTorch 1.13.0+cu116
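Not an answer, but a debugging aid: this AttributeError usually means the `emd` extension that got imported is a stale or incompatibly built artifact rather than the freshly compiled one. A small guard can turn the bare AttributeError into an actionable message; `load_op` is a hypothetical helper, demonstrated with a stdlib module so the sketch runs anywhere:

```python
import importlib

def load_op(module_name: str, symbol: str):
    """Import a compiled-op module and verify it actually exposes the symbol
    we are about to call, failing early with a rebuild hint instead of a bare
    AttributeError deep inside the forward pass."""
    mod = importlib.import_module(module_name)
    if not hasattr(mod, symbol):
        raise ImportError(
            f"{module_name!r} was imported but lacks {symbol!r}; "
            "rebuild the extension against the installed torch/CUDA versions"
        )
    return getattr(mod, symbol)

# Demo with a stdlib module so the sketch is self-contained:
sqrt = load_op("math", "sqrt")
print(sqrt(9.0))  # → 3.0
```

In the real setting one would call something like `load_op("emd", "forward")` right after building the CUDA extensions, so a bad build is caught at import time.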

    opened by Poulami-Sarkar 0
  • [Bug?] Issues about the dataloader and dataset.


    I read the code in this repo and have some questions about the dataloader part. The dataloader code seems to be largely taken from the GRNet repo, but the author appears to have changed some of its behaviour.

    Q1: The validation set of ShapeNet is never used, while the test set is used during training: https://github.com/microsoft/SpareNet/blob/main/datasets/data_loaders.py#L49

    val_data_loader = torch.utils.data.DataLoader(
        dataset=test_dataset_loader.get_dataset(DatasetSubset.TEST),
        batch_size=1,
        num_workers=cfg.CONST.num_workers,
        collate_fn=collate_fn,
        pin_memory=True,
        shuffle=False,
    )
    

    Q2: Only the last view of the point cloud is used when training on ShapeNet with the GRNet version: https://github.com/microsoft/SpareNet/blob/main/datasets/data_loaders.py#L148

    Dataset(
        {
            "required_items": ["partial_cloud", "gtcloud"],
            "shuffle": subset == DatasetSubset.TRAIN,
        },
        file_list,
        transforms,
    )
    

    The n_renderings param is never set, so rand_idx in Dataset.__getitem__ is always -1, which refers to the last view of the point cloud.
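The -1 behaviour described above is just Python's negative indexing: index -1 always selects the final element, so every epoch reads the same last rendering. A minimal sketch, with a hypothetical list of view filenames:

```python
# Hypothetical stand-in for a model's list of rendered partial views.
views = [f"rendering_{i:02d}.pcd" for i in range(8)]

rand_idx = -1           # the value rand_idx keeps when n_renderings is never set
print(views[rand_idx])  # → rendering_07.pcd (always the last view)
```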

    opened by AlexsaseXie 0
  • Error with KITTI dataset.


    Hi, thanks for your amazing work. We are trying to run it on the KITTI dataset; however, we are now hitting some errors. We run it like this: python test.py --gpu 0 --workdir ./output --model grnet --weights ./models/GRNet-KITTI.pth --test_mode kitti

    It returns:

    INFO: - Finish building dataset.
    DEBUG: - Parameters in net_G: 76707626.
    INFO: - Recovering from ./models/GRNet-KITTI.pth ...
    Traceback (most recent call last):
      File "test.py", line 80, in <module>
        main()
      File "test.py", line 74, in main
        model = getattr(module, args.model + "Runner")(cfg, logger)
      File "/space2/home/chensj/projects/SpareNet/runners/grnet_runner.py", line 19, in __init__
        super().__init__(config, logger)
      File "/space2/home/chensj/projects/SpareNet/runners/base_runner.py", line 79, in __init__
        self.models_load()
      File "/space2/home/chensj/projects/SpareNet/runners/base_runner.py", line 108, in models_load
        self.init_epoch, self.best_metrics = um.model_load(self.config, self.models)
      File "/space2/home/chensj/projects/SpareNet/utils/misc.py", line 75, in model_load
        net_G.load_state_dict(checkpoint["net_G"])  # change into net_G!!
    KeyError: 'net_G'

    Could you please give me some advice?
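A quick way to diagnose this kind of KeyError: once torch.load returns, a checkpoint is an ordinary Python dict, so one can print its top-level keys and fall back instead of indexing blindly. The checkpoint contents below are invented for illustration, mimicking a file that stores weights under a name other than 'net_G':

```python
# A loaded checkpoint is a plain dict; these keys are made up for the sketch.
checkpoint = {"grnet": {"conv1.weight": "..."}, "epoch_index": 120}

print(sorted(checkpoint))  # inspect what the file actually contains

# Fall back gracefully instead of crashing with KeyError: 'net_G'.
state_dict = checkpoint.get("net_G") or checkpoint.get("grnet")
if state_dict is None:
    raise KeyError("no recognizable weights key in checkpoint")
print("loaded weights under:", "net_G" if "net_G" in checkpoint else "grnet")
```

Whether the GRNet-KITTI weights can be consumed by this repo's model_load at all is a separate question, but inspecting the keys first shows exactly which rename is needed.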

    opened by csj777 0
  • problem on python setup.py install


    When I cd into cuda/cubic_feature_sampling and run python setup.py install --user, I get these problems:

    (sparenet) root@98a664fa2d1b:/data/sparenet/cuda/cubic_feature_sampling# python setup.py install --user
    running install
    running bdist_egg
    ...
    running build_ext
    building 'cubic_feature_sampling' extension
    Emitting ninja build file /data/sparenet/cuda/cubic_feature_sampling/build/temp.linux-x86_64-3.7/build.ninja...
    [1/2] /usr/local/cuda/bin/nvcc ... -c /data/sparenet/cuda/cubic_feature_sampling/cubic_feature_sampling.cu ...
    FAILED: /data/sparenet/cuda/cubic_feature_sampling/build/temp.linux-x86_64-3.7/cubic_feature_sampling.o
    /bin/sh: /usr/local/cuda/bin/nvcc: No such file or directory
    [2/2] c++ ... -c /data/sparenet/cuda/cubic_feature_sampling/cubic_feature_sampling_cuda.cpp ...
    FAILED: /data/sparenet/cuda/cubic_feature_sampling/build/temp.linux-x86_64-3.7/cubic_feature_sampling_cuda.o
    cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
    In file included from /data/sparenet/cuda/cubic_feature_sampling/cubic_feature_sampling_cuda.cpp:9:0:
    .../torch/include/ATen/cuda/CUDAContext.h:5:10: fatal error: cuda_runtime_api.h: No such file or directory
     #include <cuda_runtime_api.h>
    compilation terminated.
    ninja: build stopped: subcommand failed.
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last): File "setup.py", line 19, in cmdclass={"build_ext": BuildExtension}, File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/init.py", line 153, in setup return distutils.core.setup(**attrs) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run self.do_egg_install() File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install self.run_command('bdist_egg') File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 164, in run cmd = self.call_command('install_lib', warn_dir=0) File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command self.run_command(cmdname) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File 
"/root/miniconda3/envs/sparenet/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/command/build_ext.py", line 340, in run self.build_extensions() File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 708, in build_extensions build_ext.build_extensions(self) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions self._build_extensions_serial() File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial self.build_extension(ext) File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 202, in build_extension _build_ext.build_extension(self, ext) File "/root/miniconda3/envs/sparenet/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension depends=ext.depends) File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 538, in unix_wrap_ninja_compile with_cuda=with_cuda) File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1359, in _write_ninja_file_and_compile_objects error_prefix='Error compiling objects for extension') File "/root/miniconda3/envs/sparenet/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension

    opened by Peter199709 0
  • Error while Running with KITTI dataset.


    Hello, thank you for your amazing work. We are students trying to run inference on the KITTI dataset. However, we keep running into the following issue when we run this command.

    !python test.py --gpu 0 --workdir /content/drive/MyDrive/ShapeNetResults9 --model grnet --weights /content/drive/MyDrive/GRNet-KITTI.pth --test_mode kitti

    Error: File "test.py", line 80, in main() File "test.py", line 76, in main model.test() File "/content/SpareNet/runners/base_runner.py", line 351, in test self.val() File "/content/SpareNet/runners/base_runner.py", line 209, in val self.val_step(items) File "/content/SpareNet/runners/grnet_runner.py", line 59, in val_step _, _, refine_ptcloud, coarse_loss, refine_loss = self.completion(data) File "/content/SpareNet/runners/grnet_runner.py", line 80, in completion coarse_loss = self.chamfer_dist_mean(coarse_ptcloud, data["gtcloud"]).mean() KeyError: 'gtcloud'

    We noticed that data contains only "partial box" and "bounding box", and no ground truth. Is there anything we must change?

    opened by terrybelinda 1
Owner
Microsoft
Open source projects and samples from Microsoft