Overview

HPNet

This repository contains the PyTorch implementation of the paper HPNet: Deep Primitive Segmentation Using Hybrid Representations.

(Figure: the HPNet pipeline.)

Installation

The main experiments are implemented with PyTorch 1.7.0 and TensorFlow 1.15.0. Please install the dependency packages with pip install -r requirements.txt.
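
If you are setting up from scratch, a minimal environment sketch is shown below; conda and Python 3.7 are assumptions (any environment manager works), so defer to requirements.txt for the exact pinned versions:

    # optional: create an isolated environment (conda and Python 3.7 are assumptions)
    conda create -n hpnet python=3.7
    conda activate hpnet

    # install the framework versions named above, then the remaining dependencies
    pip install torch==1.7.0 tensorflow==1.15.0
    pip install -r requirements.txt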

Dataset

ABCParts Dataset

The ABCParts dataset was created by ParseNet. We reorganize it to fit our dataloader and clean up some incorrect labels. Please download the dataset here (69 GB) and put it under the data/ABC folder.

Usage

To train our model on the ABC dataset, run:

python train.py --data_path=./path/to/dataset
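
For example, assuming the dataset was placed under data/ABC as described above (the exact --data_path value and the GPU index are assumptions; adjust them to your setup):

    CUDA_VISIBLE_DEVICES=0 python train.py --data_path=./data/ABC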

To evaluate our model on the ABC dataset, run:

python train.py --eval --checkpoint_path=./path/to/pretrained/model --val_skip=100

This evaluates on a subset of the test set (every 100th sample). To test on the full dataset, simply set --val_skip=1.
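
As a rough illustration of the effect of --val_skip, here is a hypothetical sketch of the every-Nth subsampling described above; it is not the repository's actual dataloader code, and the function and variable names are invented for illustration:

    # hypothetical sketch: keep every val_skip-th test sample
    def subsample_test_set(test_ids, val_skip=100):
        """Return every val_skip-th id; val_skip=1 keeps the full test set."""
        return test_ids[::val_skip]

    ids = list(range(4000))                    # the ABC test split has about 4000 samples
    print(len(subsample_test_set(ids, 100)))   # -> 40 samples evaluated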

Pretrained models

We provide a pre-trained model on the ABC dataset here. It should reproduce the results reported in the paper.

Acknowledgements

We would like to thank and acknowledge the following codebases:

  1. ParseNet: https://github.com/Hippogriff/parsenet-codebase.

  2. DGCNN: https://github.com/WangYueFt/dgcnn.

Citations

If you find this repository useful in your research, please cite:

@article{yan2021hpnet,
  title={HPNet: Deep Primitive Segmentation Using Hybrid Representations},
  author={Yan, Siming and Yang, Zhenpei and Ma, Chongyang and Huang, Haibin and Vouga, Etienne and Huang, Qixing},
  journal={arXiv preprint arXiv:2105.10620},
  year={2021}
}
Comments
  • pretrained model broken

    The provided pretrained model (https://drive.google.com/file/d/1fj84kyD9CGT8j61IW-xSWZ5q4q5IpoYx/view?usp=sharing) seems to be broken; could you please upload a complete one?

    opened by guohaoxiang 8
  • Training issues

    Thanks for the code and the great work. I hope you can explain some training details.

    1. Was data augmentation used during training? The training setting in readme.md differs from the supplementary material.
    2. Are there any settings that have a large impact on the results? I rebuilt the network and trained it for 100 epochs with an initial lr of 1e-3 and a decay step of 40, but both $L_{emb}$ and $L_{nnl}$ are worse than the provided pretrained model on the test set. Thanks for your answer.
    opened by Im-fengyin 3
  • Some questions about the paper.

    Hi, thanks for the code and the great work. There are several places in the paper that confuse me; I hope you can clarify:

    1. What does R ∈ O(K) mean in Equation 1?
    2. Is Ac = Ag_c + E?
    3. What does e_i mean in Section 4.3?
    4. In the Weighting sub-module subsection, does F_l mean that each point has L = 1 + d_c + d_s (with d_c = K) features?
    5. Equations 6 and 7 are confusing; is k < k' different from references [1, 34]?
    6. Regarding the seg_iou evaluation metric: the paper does not seem to mention how the membership matrix W is predicted.

    opened by zou-longkun 3
  • questions about num of test samples

    Hi Siming, thanks for the code and the dataset. I noticed that there are 4000 samples in your test dataset, but the test set in your code takes every 100th test sample. I would like to ask whether the results in your paper are based on the full test set or on a partial subset of samples.

    opened by Frank-Wang-wow 2
  • Questions about primitive parameter in dataset

    Hi Siming, I wonder where the primitive parameter information in your dataset comes from. I didn't find it in the original ABC dataset; did you pre-process it from the ABC dataset? I am also curious about the accuracy of directly predicting primitive parameters with the network: is it close to the ground-truth primitive parameters? Thanks in advance for your reply and the wonderful code. It helps me a lot.

    opened by LuciusPennyworth 2
  • Can not unzip pre-trained models

    Hello,

    Thanks for making your code available.

    I downloaded your pre-trained model, abc_normal.tar, from the given link, but I cannot extract that file. Can you check whether the file is broken?

    best. Mulin

    opened by MulinYu 2
  • CUDA out of memory

    Thanks for your great work!

    I'm running into a problem during training: it reports CUDA out of memory, but there is no other program running on the GPU I selected, and the memory should be sufficient.

    Here are the error message and the information about my GPU. I wonder how to solve this and look forward to your reply!

    (Screenshots attached: the error message and GPU status, 2022-12-12.)

    opened by Tomoki-0526 1
  • MeanShift speed

    Hi Siming, I tried to evaluate the network on a server with four 1080P GPUs and 16 CPUs using the command: CUDA_VISIBLE_DEVICES=1 python train.py --eval --checkpoint_path=abc_normal.tar --val_skip=100 --input_normal=1

    However, I found that the mean-shift clustering step is rather slow: it takes over 3 minutes to compute the clustering for a single model, and the utilization of all 16 CPUs reaches 100%. Do you have any suggestions for speeding up this step? Thanks a lot.

    opened by guohaoxiang 1
  • ABC dataset id correspondence

    Hi Siming, thanks for providing the dataset. Can you tell me where I can find the correspondence between your data IDs and the IDs of the ABC dataset? That would be very helpful. Thanks.

    opened by guohaoxiang 1
  • Meaning of "T_param" in the dataset

    Hello,

    Can you explain a little what each dimension of "T_param" means?

    I know it holds the parameters of planes, cones, spheres, and cylinders, and I think you follow the parameterization defined in SPFN, but can you describe it more precisely? For example, are the first 3 dimensions 'a' of a plane, or is the first dimension 'd' of a plane?

    Thanks in advance.

    Best. Mulin

    opened by MulinYu 1
  • Test results not as good as those in paper

    Hello Siming,

    I ran 'python train.py --eval --checkpoint_path=./path/to/pretrained/model --val_skip=100' to test your code with the provided data and pre-trained model. The results are 'feat_loss: 0.770812 miou: 0.384534 nnl_loss: 2.341513 param_loss: 0.048104 type_miou: 0.170333', which are far from the results reported in the paper. I also visualized several results, and they are not well segmented. I want to figure out where the problem is, since I didn't modify your code and used the default parameters. Can you check the pre-trained model?

    Thanks in advance. Best regards and have a nice day. Best, Mulin

    opened by MulinYu 1
Owner
Siming Yan
CS Ph.D. Student at UT-Austin