PyTorch implementation of SQN based on CloserLook3D's encoder

Overview

SQN_pytorch

This repo is an implementation of the Semantic Query Network (SQN) in PyTorch, using CloserLook3D's encoder. For a TensorFlow implementation, check our SQN_tensorflow repo.
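
For context, SQN's key idea is to predict semantics at a handful of weakly labeled points: per-point features are queried from the encoder at those locations, and the training loss is computed on them alone. A minimal conceptual sketch of that masked loss (illustrative only; tensor names and shapes are my assumptions, not this repo's API):

import torch
import torch.nn.functional as F

def weak_supervision_loss(point_logits, point_labels, weak_mask):
    # point_logits: (N, num_classes) scores predicted at the queried points
    # point_labels: (N,) ground-truth labels; only entries where weak_mask
    #               is True are assumed available during training
    # weak_mask:    (N,) boolean mask marking the sparse labeled points
    return F.cross_entropy(point_logits[weak_mask], point_labels[weak_mask])

# toy usage: 10,000 points, 13 S3DIS classes, ~0.1% weak labels
N, C = 10000, 13
logits, labels = torch.randn(N, C), torch.randint(0, C, (N,))
mask = torch.rand(N) < 0.001
loss = weak_supervision_loss(logits, labels, mask)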

Caution: this repo does not yet match the results reported in the SQN paper. For details, check the Performance on S3DIS section.

The repo is still under development, with the aim of reaching the level of performance reported in the SQN paper. (Note: our SQN_tensorflow repo currently performs slightly better than this PyTorch repo.)

Please open an issue if you have any comments or suggestions for improving the model performance.

TODOs

  • Implement the training strategy mentioned in the Appendix of the paper.
  • Run ablation studies.
  • Benchmark weak supervision.

Install Python packages

The latest code has been tested on two Ubuntu setups:

  • Ubuntu 18.04, Nvidia 1080, CUDA 10.1, PyTorch 1.4 and Python 3.6
  • Ubuntu 18.04, Nvidia 3090, CUDA 11.3, PyTorch 1.4 and Python 3.6

For details on setting up the development environment, check the CloserLook3D PyTorch version. For convenience, I also provide my own bash script (install.sh) below, which creates a conda environment for this repo from scratch. (You may need to tailor this script to your own system.)

#!/bin/bash
ENV_NAME='closerlook'
conda create -n $ENV_NAME python=3.6.10 -y
source activate $ENV_NAME
conda install -c anaconda pillow=6.2 -y
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch -y
conda install -c conda-forge opencv -y
pip3 install termcolor tensorboard h5py easydict
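
After the environment is created, a quick sanity check inside the activated environment confirms the expected PyTorch build and CUDA visibility:

import torch

print(torch.__version__)          # expect 1.4.0
print(torch.cuda.is_available())  # should print True on a CUDA machine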

Datasets

Take S3DIS as an example.

Scene Segmentation on S3DIS

You can download the S3DIS dataset from here (4.8 GB). You only need to download the file named Stanford3dDataset_v1.2.zip; unzip it and move (or link) it to data/S3DIS/Stanford3dDataset_v1.2 (the same setting as the CloserLook3D repo).

The file structure should look like:

<root>
├── cfgs
│   └── s3dis
├── data
│   └── S3DIS
│       └── Stanford3dDataset_v1.2
│           ├── Area_1
│           ├── Area_2
│           ├── Area_3
│           ├── Area_4
│           ├── Area_5
│           └── Area_6
├── init.sh
├── datasets
├── function
├── models
├── ops
└── utils

Run prepare-s3dis-sqn.sh to preprocess the S3DIS dataset. The script generates a processed folder (structure shown below) containing five types of data: raw point clouds, sub-sampled point clouds for each area, KD-trees for each sub-sampled area, projection indices for each raw point over the sub-sampled area, and weak labels for the raw and sub-sampled point clouds (at different weak proportions of the dataset, e.g., 0.1, 0.01, 0.001). For details, check datasets/S3DIS_sqn.py and my summary notes in that file; a simplified weak-label sketch follows the folder tree below.

The processed folder is organized as follows:

<root>
├── data
│   └── S3DIS
│       └── Stanford3dDataset_v1.2
│           ├── Area_1
│           ├── Area_2
│           ├── Area_3
│           ├── Area_4
│           ├── Area_5
│           ├── Area_6
│           └── processed
│               ├── weak_label_0.01
│               ├── weak_label_1.0
│               ├── Area_1_0.040_sub.pkl
│               ├── Area_1.pkl
│               └── ... (many other .pkl files)
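
For reference, the weak labels boil down to a boolean mask that marks a fixed fraction of points as labeled. The snippet below is a simplified sketch (the function and array names are mine, and the real script may sample differently, e.g., per class); the actual logic lives in datasets/S3DIS_sqn.py:

import numpy as np

def make_weak_label_mask(num_points, weak_ratio=0.01, seed=42):
    # mark roughly `weak_ratio` of the points as labeled, keeping at least one
    rng = np.random.default_rng(seed)
    num_weak = max(1, int(num_points * weak_ratio))
    mask = np.zeros(num_points, dtype=bool)
    mask[rng.choice(num_points, size=num_weak, replace=False)] = True
    return mask

# e.g., a 1% mask for a sub-sampled area with 500,000 points
mask = make_weak_label_mask(500000, weak_ratio=0.01)
print(mask.sum(), mask.size)  # ~5000 labeled points out of 500000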

Compile custom CUDA operators

sh init.sh
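
The main custom op is the trilinear (three-nearest-neighbor) interpolation used to query encoder features at arbitrary points, modified from Pointnet2_PyTorch (see Acknowledgements). As a rough functional reference, and a way to sanity-check the compiled op, an equivalent pure-PyTorch version of the inverse-distance-weighted interpolation might look like this (argument names are mine):

import torch

def three_nn_interpolate(unknown_xyz, known_xyz, known_feats, eps=1e-8):
    # unknown_xyz: (B, n, 3) query coordinates (e.g., weakly labeled points)
    # known_xyz:   (B, m, 3) coordinates of the encoder's sub-sampled points
    # known_feats: (B, m, C) features at the sub-sampled points
    # returns:     (B, n, C) inverse-distance-weighted features
    dist = torch.cdist(unknown_xyz, known_xyz)          # (B, n, m)
    dist3, idx3 = dist.topk(3, dim=-1, largest=False)   # 3 nearest neighbors
    weight = 1.0 / (dist3 + eps)
    weight = weight / weight.sum(dim=-1, keepdim=True)  # (B, n, 3)
    batch = torch.arange(unknown_xyz.shape[0]).view(-1, 1, 1)
    neighbor_feats = known_feats[batch, idx3]           # (B, n, 3, C)
    return (weight.unsqueeze(-1) * neighbor_feats).sum(dim=2)

# toy check: interpolate 64-dim features at 4096 query points
out = three_nn_interpolate(torch.rand(2, 4096, 3),
                           torch.rand(2, 1024, 3),
                           torch.rand(2, 1024, 64))
print(out.shape)  # torch.Size([2, 4096, 64])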

Run

Use the run-sqn.sh script for training or evaluation.

The core training command is as follows:

python -m torch.distributed.launch \
--master_port 12345 \
--nproc_per_node ${num_gpu} \
function/train_s3dis_dist_sqn.py \
--dataset_name ${dataset_name} \
--cfg cfgs/${dataset_name}/pospool_xyz_avg_sqn.yaml \
--num_points ${num_points} \
--batch_size ${batch_size} \
--val_freq 20 \
--weak_ratio ${weak_ratio}
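
Here num_gpu, dataset_name, num_points, batch_size, and weak_ratio are shell variables set in run-sqn.sh; for example, dataset_name=s3dis with weak_ratio=0.001 corresponds to the 1/1000 setting in the performance table below.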

The core evaluation command is as follows:

python -m torch.distributed.launch \
--master_port 12346 \
--nproc_per_node 1 \
function/evaluate_s3dis_dist_sqn.py \
--cfg cfgs/s3dis/pospool_xyz_avg_sqn.yaml \
--load_path <checkpoint>
[--log_dir <log directory>]

Performance on S3DIS

The experiments are still in progress due to my slow GPU.

| Model | Weak ratio | Performance (mIoU, %) | Description |
| --- | --- | --- | --- |
| Official RandLA-Net | 100% | 63.0 | Fully supervised method trained with full labels. |
| Official SQN | 1/1000 | 61.4 | The official SQN uses additional techniques to improve performance that our replicated SQN does not investigate yet; it also does not report S3DIS results at weak ratios of 1/10 and 1/100. |
| Our replicated SQN | 1/10 | 51.4 | Uses PosPool (s) as the encoder (width=36) due to limited GPU resources; active learning is not used yet. |
| Our replicated SQN | 1/100 | 25.22 | Same setting as above. |
| Our replicated SQN | 1/1000 | 21.10 | Same setting as above. |

Acknowledgements

Our PyTorch code borrows heavily from CloserLook3D, and the custom trilinear interpolation CUDA ops are modified from erikwijmans's Pointnet2_PyTorch.

Citation

If you find our work useful in your research, please consider citing:

@misc{yin2021sqn_pytorch,
    Author = {Yin, Chao},
    Title = {SQN PyTorch implementation based on CloserLook3D's encoder},
    Howpublished = {\url{https://github.com/PointCloudYC/SQN_pytorch}},
    Year = {2021}
}

@article{hu2021sqn,
    title={SQN: Weakly-Supervised Semantic Segmentation of Large-Scale 3D Point Clouds with 1000x Fewer Labels},
    author={Hu, Qingyong and Yang, Bo and Fang, Guangchi and Guo, Yulan and Leonardis, Ales and Trigoni, Niki and Markham, Andrew},
    journal={arXiv preprint arXiv:2104.04891},
    year={2021}
  }