[CVPR 2021] Unsupervised 3D Shape Completion through GAN Inversion

Overview

ShapeInversion

Paper

Junzhe Zhang, Xinyi Chen, Zhongang Cai, Liang Pan, Haiyu Zhao, Shuai Yi, Chai Kiat Yeo, Bo Dai, and Chen Change Loy, "Unsupervised 3D Shape Completion through GAN Inversion", CVPR 2021.

Results

Setup

Environment

conda create -n shapeinversion python=3.7
conda activate shapeinversion
pip install torch==1.2.0 torchvision==0.4.0
pip install plyfile h5py Ninja matplotlib scipy

Datasets

Our work is extensively evaluated on several existing datasets. For the virtual scan benchmark (derived from ShapeNet), we use CRN's dataset; we suggest getting started with this one. For ball-holed partial shapes, we refer to PF-Net. The PartNet dataset is downloaded from MPC. Real scans processed from KITTI, MatterPort3D, and ScanNet are obtained from pcl2pcl.
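
If you prepare or inspect the data yourself (the CRN benchmark reportedly comes as .h5 files, see the issues below), the short h5py sketch here may help. The key names used ('incomplete_pcds', 'complete_pcds', 'labels') are assumptions rather than documented names, so print the actual keys first and adapt.

# Minimal sketch for inspecting a CRN-style .h5 split.
# NOTE: the key names are assumptions; check list(f.keys()) against your files.
import h5py
import numpy as np

def load_crn_split(h5_path):
    with h5py.File(h5_path, 'r') as f:
        print('keys:', list(f.keys()))
        partial = np.asarray(f['incomplete_pcds'])   # assumed shape: (num_shapes, 2048, 3)
        complete = np.asarray(f['complete_pcds'])    # assumed shape: (num_shapes, 2048, 3)
        labels = np.asarray(f['labels'])             # assumed: per-shape class index
    return partial, complete, labels

partial, complete, labels = load_crn_split('<your_dataset_directory>/<split_file>.h5')
print(partial.shape, complete.shape, labels.shape)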

Get started

We provide pretrained tree-GAN models so that you can start directly from the inversion stage. Download them from Google Drive or Baidu Cloud (password: w1n9) and put them in the pretrained_models folder.

Shape completion

You can specify other classes and other datasets, such as the real scans provided by pcl2pcl.

python trainer.py \
--dataset CRN \
--class_choice chair \
--inversion_mode completion \
--mask_type k_mask \
--save_inversion_path ./saved_results/CRN_chair \
--ckpt_load pretrained_models/chair.pt \
--dataset_path <your_dataset_directory>
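
Completed shapes are written under the directory given by --save_inversion_path. As a quick way to eyeball a result with the dependencies installed above (plyfile, matplotlib), a hedged sketch is shown here; the exact output file format and naming are assumptions, so adapt read_points to whatever files you find in the folder.

# Minimal visualization sketch; the saved file format is an assumption
# (.ply read via plyfile, otherwise whitespace-separated x y z rows via numpy).
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection
from plyfile import PlyData

def read_points(path):
    if path.endswith('.ply'):
        v = PlyData.read(path)['vertex']
        return np.stack([v['x'], v['y'], v['z']], axis=1)
    return np.loadtxt(path)[:, :3]  # assumed: one "x y z" point per row

def show_points(path):
    pts = read_points(path)
    ax = plt.figure().add_subplot(projection='3d')
    ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=1)
    ax.set_title(path)
    plt.show()

show_points('./saved_results/CRN_chair/<some_output_file>')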

Evaluating completion results

For datasets with ground truth (GT), such as CRN_chair above:

python eval_completion.py \
--eval_with_GT true \
--saved_results_path saved_results/CRN_chair

For datasets without GT:

python eval_completion.py \
--eval_with_GT false \
--saved_results_path <your_results_on_KITTI>
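
Evaluation with GT is driven by the Chamfer distance (CD) from the bundled ChamferDistancePytorch. As a reference for what is being measured, here is a minimal dense PyTorch sketch of the symmetric CD; it mirrors the naive fallback (chamfer_python.py) rather than the optimized CUDA kernel, and the exact reduction and scaling used by eval_completion.py may differ.

# Minimal sketch of symmetric Chamfer distance (dense, O(N*M) memory).
# For illustration only; the CUDA kernel in external/ is far more efficient.
import torch

def chamfer_distance(x, y):
    # x: (B, N, 3), y: (B, M, 3)
    xx = (x ** 2).sum(dim=2, keepdim=True)               # (B, N, 1)
    yy = (y ** 2).sum(dim=2, keepdim=True)               # (B, M, 1)
    xy = torch.bmm(x, y.transpose(2, 1))                 # (B, N, M)
    d = (xx - 2 * xy + yy.transpose(2, 1)).clamp(min=0)  # squared pairwise distances
    d_xy = d.min(dim=2)[0].mean(dim=1)                   # each x-point to nearest y-point
    d_yx = d.min(dim=1)[0].mean(dim=1)                   # each y-point to nearest x-point
    return d_xy + d_yx                                    # per-sample symmetric CD, shape (B,)

a = torch.rand(4, 2048, 3)
b = torch.rand(4, 2048, 3)
print(chamfer_distance(a, b))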

Giving multiple valid outputs

ShapeInversion can provide multiple valid complete shapes, especially when extreme incompleteness causes ambiguity.

python trainer.py \
--dataset CRN \
--class_choice chair \
--inversion_mode diversity \
--save_inversion_path ./saved_results/CRN_chair_diversity \
--ckpt_load pretrained_models/chair.pt \
--dataset_path <your_dataset_directory>

Shape jittering

ShapeInversion is able to change an object into other plausible shapes of different geometries.

python trainer.py \
--dataset CRN \
--class_choice plane \
--save_inversion_path ./saved_results/CRN_plane_jittering  \
--ckpt_load pretrained_models/plane.pt \
--inversion_mode jittering \
--iterations 30 30 30 30 \
--dataset_path <your_dataset_directory>

Shape morphing

ShapeInversion enables morphing between two shapes.

python trainer.py \
--dataset CRN \
--class_choice chair \
--save_inversion_path ./saved_results/CRN_chair_morphing  \
--ckpt_load pretrained_models/chair.pt \
--inversion_mode morphing \
--dataset_path <your_dataset_directory>

Pretraining

You can also pretrain tree-GAN yourself.

python pretrain_treegan.py \
--split train \
--class_choice chair \
--FPD_path ./evaluation/pre_statistics_chair.npz \
--ckpt_path ./pretrain_checkpoints/chair \
--knn_loss True \
--dataset_path <your_dataset_directory>

NOTE:

  • The inversion stage supports distributed training by simply adding --dist. It has also been tested on Slurm.
  • The provided hyperparameters may not be optimal; feel free to tune them.
  • A smaller batch size for pretraining is fine.

Acknowledgement

The code is built in part on tree-GAN and DGP. The CD and EMD implementations are borrowed from ChamferDistancePytorch and MSN, respectively; both are included in the external folder for convenience.

Citation

@inproceedings{zhang2021unsupervised,
    title = {Unsupervised 3D Shape Completion through GAN Inversion},
    author = {Zhang, Junzhe and Chen, Xinyi and Cai, Zhongang and Pan, Liang and Zhao, Haiyu 
    and Yi, Shuai and Yeo, Chai Kiat and Dai, Bo and Loy, Chen Change},
    booktitle = {CVPR},
    year = {2021}}

Comments
  • About pretrained weights

    Hi, thanks for sharing your pretrained weights. I ran the code with your weights and followed your steps, but I can't get the same CD loss as reported in your paper. In your paper the CD loss on table is 16.2, but I got 20.8. Is this reasonable? Thanks in advance for your reply.

    opened by xljh0520 6
  • Training on new class/dataset

    Hello, thank you for releasing this package. The work is very interesting.

    I want to train the network for shape completion on my own data. I have partial and full point-clouds for a large number of samples, but don't understand how the currently supported formats are generated. For example, CRN seems to come as .h5 files.

    What is the best way to go from partial and full point clouds (.ply or .obj or other) to the proper input format? After that point I'd presumably run the trainer and point it to my new class choice and dataset location.

    opened by marcusabate 6
  • real-world dataset training and evaluation

    Dear Dr. Zhang, Thanks for sharing your interesting work. I'm currently working on the same task and want to compare the performance with your ShapeInversion model. I'd like to know when training and evaluating your model on a real-world dataset, are you sampling the same number of 2048 points for each shape?

    opened by lx709 4
  • CUDA out of memory when evaluating result

    Hi, I've successfully run the training process, but encountered a problem during evaluation. I'm not sure if it is just because I don't have a large enough GPU. Here is the traceback:

    (shapeinversion) xieyuqiu@SYS-4029GP-TRT:~/shape-inversion$ python eval_completion.py --eval_with_GT true --saved_results_path saved_results/CRN_chair
    Traceback (most recent call last):
      File "eval_completion.py", line 202, in <module>
        eval_completion_with_gt(args.saved_results_path)
      File "eval_completion.py", line 160, in eval_completion_with_gt
        cd_ls, acc_ls, comp_ls, f1_ls = compute_4_metrics(ours_gt, ours_output)
      File "eval_completion.py", line 56, in compute_4_metrics
        dist1, dist2 , _, _ = distChamfer(pcn_gt, pcn_output)
      File "/data2/xieyuqiu/shape-inversion/external/ChamferDistancePytorch/chamfer_python.py", line 35, in distChamfer
        zz = torch.bmm(x, y.transpose(2, 1))
    RuntimeError: CUDA out of memory. Tried to allocate 4.69 GiB (GPU 0; 11.77 GiB total capacity; 29.38 MiB already allocated; 3.85 GiB free; 40.00 MiB reserved in total by PyTorch)

    opened by satoko5793 2
  • A naive question

    Hello, I have some questions about the paper. What is the input to your method during testing? Is it a partial shape? And how is it encoded into a latent code z to generate a completed shape?

    opened by duzhenjiang113 2
  • CUBLAS_STATUS_EXECUTION_FAILED while training the model

    I was trying to train the model with this code:

    python trainer.py \
    --dataset CRN \
    --class_choice chair \
    --inversion_mode completion \
    --mask_type k_mask \
    --save_inversion_path ./saved_results/CRN_chair \
    --ckpt_load pretrained_models/chair.pt \
    --dataset_path data_dir/CRN/
    

    Then, I get the following error:

    Traceback (most recent call last):
      File "trainer.py", line 300, in <module>
        trainer.run()
      File "trainer.py", line 90, in run
        self.train()
      File "trainer.py", line 133, in train
        self.model.select_z(select_y=False)
      File "/home/$USER/shape-inversion/shape_inversion.py", line 279, in select_z
        x = self.G(tree)
      File "/home/$USER/miniconda3/envs/shapeinversion/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/$USER/shape-inversion/model/treegan_network.py", line 63, in forward
        feat = self.gcn(tree)
      File "/home/$USER/miniconda3/envs/shapeinversion/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/$USER/miniconda3/envs/shapeinversion/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
        input = module(input)
      File "/home/$USER/miniconda3/envs/shapeinversion/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/$USER/shape-inversion/model/gcn.py", line 56, in forward
        root_node = self.W_root[inx](tree[inx])
      File "/home/$USER/miniconda3/envs/shapeinversion/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/$USER/miniconda3/envs/shapeinversion/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
        return F.linear(input, self.weight, self.bias)
      File "/home/$USER/miniconda3/envs/shapeinversion/lib/python3.7/site-packages/torch/nn/functional.py", line 1371, in linear
        output = input.matmul(weight.t())
    RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
    

    Environment: Ubuntu 20.04.3, Python 3.7.11, PyTorch 1.2.0, CUDA 11.4, gcc 9.3.0.

    Any help will be greatly appreciated! Thank you!

    opened by nama1arpit 1
  • ChamferDistancePytorch functions require too much memory

    When trying to run the trainer in completion mode, CUDA runs out of memory very quickly. I'm running this on an 8 GB GPU, but CUDA is asking for over 15 GB. This happens whenever calls to distChamfer and distChamfer_raw are made.

    Is there a recommended setting for running shape-inversion on smaller machines before moving to a larger compute cluster? It would be great if I could train remotely and then complete shapes locally, even if the full evaluation isn't done in the loop, since I can always evaluate afterwards.

    Thank you.

    opened by marcusabate 1
  • CRN dataset scale error

    Hi, @junzhezhang

    Have you noticed the scale problem in the CRN paper's data? The scales of the partial and complete shapes do not match.

    How do you deal with this problem, since your main dataset is from CRN?

    Here is the issue you can reference. https://github.com/xiaogangw/cascaded-point-completion/issues/11

    Best, Yingjie CAI

    opened by yjcaimeow 1