The official implementation of CSG-Stump: A Learning Friendly CSG-Like Representation for Interpretable Shape Parsing

Overview

CSGStumpNet

Paper | Project page

Citation

If you find our work interesting and beneficial to your research, please consider citing:

@inproceedings{ren2021csgstump,
  title={CSG-Stump: A Learning Friendly CSG-Like Representation for Interpretable Shape Parsing},
  author={Ren, Daxuan and Zheng, Jianmin and Cai, Jianfei and Li, Jiatong and Jiang, Haiyong and Cai, Zhongang and Zhang, Junzhe and Pan, Liang and Zhang, Mingyuan and Zhao, Haiyu and Yi, Shuai},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

Setup

Install environment:

We recommend using Anaconda to set up the environment. Once Anaconda is installed, run the following commands:

conda create --name CSGStumpNet python=3.7
conda activate CSGStumpNet
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
conda install -c open3d-admin open3d=0.9
conda install numpy
conda install pymcubes
conda install tensorboard
conda install scipy
pip install tqdm
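
To check that the environment works, you can run a quick sanity script like the one below (a minimal sketch, not part of the repo); it only confirms that PyTorch, Open3D, and PyMCubes import correctly and reports whether CUDA is visible:

# Minimal environment sanity check (not part of the repo)
import numpy as np
import torch
import open3d as o3d
import mcubes

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("Open3D:", o3d.__version__)

# Run marching cubes on a small sphere SDF to confirm PyMCubes works
x, y, z = np.mgrid[:32, :32, :32]
sphere_sdf = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 - 10 ** 2
vertices, triangles = mcubes.marching_cubes(sphere_sdf, 0)
print("Marching cubes:", len(vertices), "vertices,", len(triangles), "triangles")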

Datasets and pre-trained weights

Dataset

We use the pre-processed dataset from OccNet (please consider citing their work as well). You can download the data as follows:

mkdir data
cd data
wget https://s3.eu-central-1.amazonaws.com/avg-projects/occupancy_networks/data/dataset_small_v1.1.zip
unzip dataset_small_v1.1.zip
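
After extraction, you can sanity-check the dataset layout with a sketch like the one below. The directory name used here (data/ShapeNet) is an assumption based on the OccNet release and may differ on your machine:

# Rough dataset check; the ShapeNet directory name is an assumption
import os

data_root = os.path.join("data", "ShapeNet")  # adjust if the archive extracts elsewhere
if not os.path.isdir(data_root):
    raise SystemExit("Expected dataset directory not found: " + data_root)

categories = sorted(d for d in os.listdir(data_root)
                    if os.path.isdir(os.path.join(data_root, d)))
print("Found", len(categories), "categories under", data_root)
for category in categories:
    n_entries = len(os.listdir(os.path.join(data_root, category)))
    print(" ", category, ":", n_entries, "entries")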

If you want to prepare the data yourself (e.g., to generate the watertight meshes), please refer to this link.

Pre-Trained Weights

The originally trained weights are no longer compatible with the restructured code in this repo :( Re-training is in progress (this may take some time).

Evaluate using pre-trained weights

python eval.py --config_path ./configs/plane_256_256.json

Train from scratch

python train.py --config_path ./configs/plane_256_256.json

Evaluation

python metrics.py --config_path ./configs/plane_256_256.json
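
All three scripts read their settings from the JSON file passed via --config_path. The snippet below only illustrates that pattern (the actual keys live in the files under configs/, e.g. configs/plane_256_256.json, and are not reproduced here):

# Illustrative only: load a config the same way the scripts take --config_path
import argparse
import json
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--config_path", type=str, default="./configs/plane_256_256.json")
args = parser.parse_args()

with open(args.config_path) as f:
    config = json.load(f)  # a plain dict of experiment settings

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Loaded", len(config), "config entries from", args.config_path)
print("Running on", device)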

License

This project is licensed under the terms of the MIT license (see LICENSE for details).

Comments
  • Release code of complement layer

    Hi, I am interested in your work.

    However, I find that the code for the first complement layer is missing from model.py.

    Will you plan to release the code of that part?

    Looking forward to your reply, thank you!

    Best.

    opened by FENGGENYU 5
  • Why do you use infinitely long cylinders and cones?

    Hi, thanks for your excellent work! But I'm confused about why you use infinitely long cylinders and cones when computing the SDF. Why is there no "height" parameter associated with these two types of primitives? I think they need to be bounded to form the final shape. Could you please explain this to me? Thank you in advance! (A sketch after this comments list illustrates how intersection can bound such a primitive.)

    opened by bluestyle97 4
  • Release pretrained weights

    Hi,

    This is an interesting work and thanks for sharing your code!

    I just want to know: when will you share the pre-trained weights? I'd like to try them in my project!

    Thank you!

    opened by Christine0924 4
  • Question about complement layer

    Hi Daxuan, I noticed that in model.py the complement layer is not applied after the connection decoder. Is this a trick to make the end-to-end method perform better on the ShapeNet dataset, or just an omission? I'm asking because my own dataset of mechanical parts has many through-hole structures, which are not topologically similar to those in ShapeNet, and I'm trying to figure out why training always ignores the hole structures. So far, I've ruled out the sampling strategy and the loss function as the cause. Do you think adding the complement layer would help?

    opened by Farewell-ME 3
  • question about avg_test_loss_recon

    Hi Daxuan, why is the value avg_test_loss_recon divided by test_iter twice when it is printed to the console as "loss_recon"? This occurs in both train.py and eval.py. Best regards

    opened by Farewell-ME 2
  • Sharing models with the Hugging Face Hub

    Hi there!

    CSGStumpNet seems interesting! I see you currently save your model checkpoint in this repo. Would you be interested in sharing the model in the Hugging Face Hub?

    The Hub offers free hosting of over 20K models, and it would make your work more accessible and visible to the rest of the ML community. Some of the benefits of sharing your models would be:

    • versioning
    • commit history and diffs
    • repos provide useful metadata about their tasks, languages, metrics, etc
    • we could add a widget for users to try the model directly in the browser

    Creating the repos and adding new models should be a relatively straightforward process if you've used Git before. This is a step-by-step guide explaining the process in case you're interested. Please let us know if you would be interested and if you have any questions.

    Happy to hear your thoughts, Omar and the Hugging Face team

    opened by osanseviero 1
  • Question about Primitive Loss

    Hi, thank you for this interesting work and congrats on publishing at ICCV.

    I have a question about the implementation of the primitive loss term. In the paper, it is (writing $SDF_k(p)$ for the SDF of primitive $k$ at testing point $p$) roughly $\frac{1}{|P|}\sum_{p \in P} \min_k \big(SDF_k(p)\big)^2$.

    But in the implementation it is: https://github.com/kimren227/CSGStumpNet/blob/e23c6fbd20097000b886221abd70def34ac2a15b/loss.py#L11

    Which seems like it is this expression instead: $\frac{1}{|P|}\sum_{p \in P} \big(\min_k SDF_k(p)\big)^2$.

    I suppose this is equivalent to the first equation when all testing points are outside the primitives, but once there are testing points inside, this loss term seems to pull the surface so that no testing point is too far inside (since it pushes the most negative value toward 0). Is this intentional?

    Might the correct implementation be as follows?

    primitive_loss = torch.mean((primitive_sdf**2).min(dim=1)[0]) * self.scale
    
    opened by haoala 10
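
On the comment above about infinitely long cylinders and cones: in a CSG-style representation, an unbounded primitive can still yield a bounded shape once it is intersected with other primitives, because intersection corresponds to taking the element-wise maximum of the SDFs. The snippet below is not from this repo and uses made-up parameters; it only illustrates that idea numerically:

# Not from the repo: shows an infinite cylinder becoming finite once it is
# intersected (SDF max) with two half-space "cap" planes.
import numpy as np

def sdf_infinite_cylinder(points, radius=0.3):
    # Distance to an infinite cylinder around the z-axis
    return np.linalg.norm(points[:, :2], axis=1) - radius

def sdf_halfspace(points, normal, offset):
    # Signed distance to the plane normal . x = offset (positive outside)
    return points @ np.asarray(normal, dtype=float) - offset

points = np.random.uniform(-1.0, 1.0, size=(100000, 3))
cylinder = sdf_infinite_cylinder(points)
caps = np.maximum(sdf_halfspace(points, [0.0, 0.0, 1.0], 0.5),
                  sdf_halfspace(points, [0.0, 0.0, -1.0], 0.5))
capped_cylinder = np.maximum(cylinder, caps)  # standard CSG intersection of SDFs

inside = capped_cylinder < 0
print("Fraction of samples inside the capped cylinder:", inside.mean())
print("Max |z| among inside samples:", np.abs(points[inside, 2]).max())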