Hierarchical Aggregation for 3D Instance Segmentation (ICCV 2021)


by Shaoyu Chen, Jiemin Fang, Qian Zhang, Wenyu Liu, Xinggang Wang*. (*) Corresponding author. [arXiv]


Introduction

  • HAIS is an efficient and concise bottom-up framework (NMS-free and single-forward) for point cloud instance segmentation. It adopts hierarchical aggregation (point aggregation followed by set aggregation) to generate instances, and intra-instance prediction for outlier filtering and mask quality scoring; a toy sketch of this pipeline follows below.
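
To make the pipeline concrete, below is a small, self-contained toy sketch of the two aggregation steps. It is illustrative pseudocode under assumed names, radii, and thresholds (requires numpy and scipy), not the official implementation, which lives in model/hais/hais.py and lib/hais_ops and additionally applies the learned intra-instance network for outlier filtering and mask scoring.

import numpy as np
from scipy.spatial import cKDTree

def point_aggregation(shifted_xyz, semantic_labels, radius=0.03):
    """Greedily group points of the same semantic class whose offset-shifted
    coordinates are transitively within `radius` of each other."""
    n = len(shifted_xyz)
    instance_ids = np.full(n, -1, dtype=np.int64)
    tree = cKDTree(shifted_xyz)
    next_id = 0
    for seed in range(n):
        if instance_ids[seed] != -1:
            continue
        instance_ids[seed] = next_id
        stack = [seed]
        while stack:  # traverse the radius graph, restricted to one semantic class
            p = stack.pop()
            for q in tree.query_ball_point(shifted_xyz[p], radius):
                if instance_ids[q] == -1 and semantic_labels[q] == semantic_labels[seed]:
                    instance_ids[q] = next_id
                    stack.append(q)
        next_id += 1
    return instance_ids

def set_aggregation(xyz, instance_ids, min_primary_size=50, merge_radius=0.5):
    """Absorb small fragments into the nearest sufficiently large primary set.
    (The real method makes the merge radius depend on the primary's class and size.)"""
    ids = np.unique(instance_ids)
    sizes = {i: int(np.sum(instance_ids == i)) for i in ids}
    primaries = [i for i in ids if sizes[i] >= min_primary_size]
    if not primaries:
        return instance_ids
    centers = np.stack([xyz[instance_ids == i].mean(0) for i in primaries])
    out = instance_ids.copy()
    for i in ids:
        if sizes[i] >= min_primary_size:
            continue
        frag_center = xyz[instance_ids == i].mean(0)
        dists = np.linalg.norm(centers - frag_center, axis=1)
        if dists.min() < merge_radius:  # merge the fragment into the closest primary
            out[instance_ids == i] = primaries[int(dists.argmin())]
    return out

if __name__ == "__main__":
    xyz = np.random.rand(1000, 3)                 # stand-in point cloud
    offsets = np.zeros_like(xyz)                  # stand-in predicted center offsets
    sem = np.random.randint(0, 3, size=len(xyz))  # stand-in semantic predictions
    inst = point_aggregation(xyz + offsets, sem, radius=0.05)
    inst = set_aggregation(xyz, inst, min_primary_size=30)
    print("instances:", len(np.unique(inst)))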

Framework

Leaderboard

  • High speed. Thanks to the NMS-free and single-forward inference design, HAIS achieves the best inference speed among all existing methods. It takes only 206 ms per frame on RTX 3090 and 339 ms on TITAN X.
Method        Per-frame latency on TITAN X
ASIS          181913 ms
SGPN          158439 ms
3D-SIS        124490 ms
GSPN          12702 ms
3D-BoNet      9202 ms
GICN          8615 ms
OccuSeg       1904 ms
PointGroup    452 ms
HAIS          339 ms

[ICCV21 presentation]

Update

2021.9.30:

  • Code is released.
  • With better CUDA optimization, HAIS now only takes 339 ms on TITAN X, much better than the latency reported in the paper (410 ms on TITAN X).

Installation

1) Environment

  • Python 3.x
  • PyTorch 1.1 or higher
  • CUDA 9.2 or higher
  • gcc-5.4 or higher

Create a conda virtual environment and activate it.

conda create -n hais python=3.7
conda activate hais

2) Clone the repository.

git clone https://github.com/hustvl/HAIS.git --recursive

3) Install the requirements.

cd HAIS
pip install -r requirements.txt
conda install -c bioconda google-sparsehash 

4) Install spconv

  • Verify the version of spconv.

    spconv 1.0, compatible with CUDA < 11 and pytorch < 1.5, is already recursively cloned in HAIS/lib/spconv in step 2) by default.

    For higher versions of CUDA and PyTorch, spconv 1.2 is suggested. Replace HAIS/lib/spconv with this fork of spconv.

git clone https://github.com/outsidercsy/spconv.git --recursive
  Note: In the provided spconv 1.0 and 1.2, spconv/spconv/functional.py is modified to make grad_output contiguous. Make sure you use the modified spconv rather than the original one; otherwise the optimization may break. (A tiny stand-alone illustration of this change follows after this step.)
  • Install the dependent libraries.
conda install libboost
conda install -c daleydeng gcc-5 # (optional, install gcc-5.4 in conda env)
  • Compile the spconv library.
cd HAIS/lib/spconv
python setup.py bdist_wheel
  • Install the generated .whl file.
cd HAIS/lib/spconv/dist
pip install {wheel_file_name}.whl
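
For reference, the essence of the functional.py modification mentioned above is only that the incoming gradient tensor is made contiguous before it reaches the CUDA kernels. A tiny stand-alone illustration (not the spconv source):

import torch

# Some CUDA ops assume a dense, contiguous layout, while autograd may hand back
# a non-contiguous view; calling .contiguous() is the kind of fix the fork adds.
grad_output = torch.randn(4, 8).t()        # transposed view -> non-contiguous
assert not grad_output.is_contiguous()
grad_output = grad_output.contiguous()
assert grad_output.is_contiguous()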

5) Compile the external C++ and CUDA ops.

cd HAIS/lib/hais_ops
export CPLUS_INCLUDE_PATH={conda_env_path}/hais/include:$CPLUS_INCLUDE_PATH
python setup.py build_ext develop

{conda_env_path} is the location of the created conda environment, e.g., /anaconda3/envs.

Data Preparation

1) Download the ScanNet v2 dataset.

2) Put the data in the corresponding folders.

  • Copy the files [scene_id]_vh_clean_2.ply, [scene_id]_vh_clean_2.labels.ply, [scene_id]_vh_clean_2.0.010000.segs.json and [scene_id].aggregation.json into the dataset/scannetv2/train and dataset/scannetv2/val folders according to the ScanNet v2 train/val split.

  • Copy the files [scene_id]_vh_clean_2.ply into the dataset/scannetv2/test folder according to the ScanNet v2 test split.

  • Put the file scannetv2-labels.combined.tsv in the dataset/scannetv2 folder.

The dataset files are organized as follows.

HAIS
├── dataset
│   ├── scannetv2
│   │   ├── train
│   │   │   ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│   │   ├── val
│   │   │   ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.labels.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json
│   │   ├── test
│   │   │   ├── [scene_id]_vh_clean_2.ply 
│   │   ├── scannetv2-labels.combined.tsv

3) Generate input files [scene_id]_inst_nostuff.pth for instance segmentation.

cd HAIS/dataset/scannetv2
python prepare_data_inst.py --data_split train
python prepare_data_inst.py --data_split val
python prepare_data_inst.py --data_split test
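
If you want to sanity-check the generated files, they can be opened with torch.load. A minimal sketch, assuming the PointGroup-style layout of a (coords, colors, semantic_labels, instance_labels) tuple, with test-split files containing only coords and colors; the scene id in the path is just an example:

import torch

data = torch.load("dataset/scannetv2/val/scene0011_00_inst_nostuff.pth")
coords, colors = data[0], data[1]   # Nx3 xyz (zero-centered), Nx3 colors scaled to [-1, 1]
print(coords.shape, colors.shape)
if len(data) > 2:                   # train/val files also carry the labels
    sem_labels, inst_labels = data[2], data[3]
    print(sem_labels.shape, inst_labels.shape)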

Training

CUDA_VISIBLE_DEVICES=0 python train.py --config config/hais_run1_scannet.yaml 

Inference

1) To evaluate on the validation set:

  • prepare the .txt instance ground-truth files as follows.
cd dataset/scannetv2
python prepare_data_inst_gttxt.py
  • set split to val and eval to True in the config file.

  • Run the inference and evaluation code.

CUDA_VISIBLE_DEVICES=0 python test.py --config config/hais_run1_scannet.yaml --pretrain $PATH_TO_PRETRAIN_MODEL$

Pretrained model: Google Drive / Baidu Cloud [code: sh4t]. mAP/mAP50/mAP25 is 44.1/64.4/75.7.

2) To evaluate on the test set:

  • Set (split, eval, save_instance) as (test, False, True).
  • Run the inference code. Prediction results are saved in HAIS/exp by default (a sketch for reading them follows below).
CUDA_VISIBLE_DEVICES=0 python test.py --config config/hais_run1_scannet.yaml --pretrain $PATH_TO_PRETRAIN_MODEL$
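
The saved files typically follow the ScanNet benchmark submission format: one summary .txt per scene, each line pointing to a per-point binary mask file together with a semantic label id and a confidence. This layout is an assumption (check the actual contents of HAIS/exp); a hedged sketch for reading one scene, with a hypothetical result directory and scene id:

import os
import numpy as np

pred_dir = "exp/scannetv2/hais/hais_run1_scannet/result"  # hypothetical; adjust to your run
scene_txt = os.path.join(pred_dir, "scene0707_00.txt")
with open(scene_txt) as f:
    for line in f:
        mask_rel_path, label_id, conf = line.split()
        mask = np.loadtxt(os.path.join(pred_dir, mask_rel_path), dtype=np.int32)
        print(mask_rel_path, int(label_id), float(conf), "points in mask:", int(mask.sum()))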

Visualization

We provide visualization tools based on Open3D (tested on Open3D 0.8.0).

pip install open3d==0.8.0
python visualize_open3d.py --data_path {} --prediction_path {} --data_split {} --room_name {} --task {}

Please refer to visualize_open3d.py for more details.
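
If you only want to eyeball a processed scene outside of that script, a minimal stand-alone snippet using the Open3D 0.8.x API looks like this (the file path and the (coords, colors, ...) tuple layout are assumptions, matching the data-preparation step above):

import numpy as np
import open3d as o3d
import torch

coords, colors = torch.load("dataset/scannetv2/val/scene0011_00_inst_nostuff.pth")[:2]
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.asarray(coords))
pcd.colors = o3d.utility.Vector3dVector((np.asarray(colors) + 1.0) / 2.0)  # map [-1, 1] to [0, 1]
o3d.visualization.draw_geometries([pcd])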

Acknowledgement

The code is based on PointGroup and spconv.

Contact

If you have any questions or suggestions about this repo, please feel free to contact me ([email protected]).

Citation

@InProceedings{Chen_2021_ICCV,
    author    = {Chen, Shaoyu and Fang, Jiemin and Zhang, Qian and Liu, Wenyu and Wang, Xinggang},
    title     = {Hierarchical Aggregation for 3D Instance Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {15467-15476}
}
Comments
  • Tuning config for large-scale point clouds

    Tuning config for large-scale point clouds

    Hi! I am testing HAIS performance on my custom data. For small point clouds it works OK, but for large ones (20 million points) it throws an error:

    RuntimeError: /HAIS/lib/spconv/src/spconv/spconv_ops.cc 40
    batchSize * outputVolume < std::numeric_limits::max() assert faild. due to limits of cuda hash, the volume of dense space include batch size must less than std::numeric_limits::max() = 2e9
    

    I guess I should reduce the batch size and other parameters accordingly. Could you please tell me what the scale, full_scale, and max_npoint parameters in the config file mean?

      input_channel: 3
      scale: 50   # voxel_size = 1 / scale, scale 50 -> voxel_size 0.02m
      batch_size: 4
      full_scale: [128, 512]
      max_npoint: 250000
      mode: 4 # 4=mean
    

    As far as I can see, full_scale is used in clusters_voxelization() – why is it [128, 512]?

    opened by Ritchizh 9
  • [Discuss on consistency] Custom data on Realsense L515+ORB-SLAM3+Open3D Reconstruction

    [Discuss on consistency] Custom data on Realsense L515+ORB-SLAM3+Open3D Reconstruction

    Hi Shaoyu,

    Thanks for your work. I'm interested in how HAIS can help the SLAM system extract persistent semantic landmarks.

    I have tested HAIS on my own dataset,

    • Hardware: Intel Realsense L515 RGB-D camera
    • Localization: ORB-SLAM3 with purely RGB-D input
    • Reconstruction: Open3D RGB-D integration, which is based on TSDF and extracts a point cloud after the entire scan is finished.

    Here are some results (colored by raw RGB and by semantic segmentation):

    a. living room

    b. dining room

    c. study room

    The quantitative results show quite a lot of over-segmentation. In the ScanNet test results I can also see a few over-segmentation problems, but they are far less frequent than on my customized dataset. Moreover, over-segmentation in the ScanNet dataset normally occurs in poorly reconstructed sub-volumes, while in the customized data it occurs in well-reconstructed spaces, such as the living room scan.

    So, what is actually affecting the segmentation performance in the self-collected dataset?

    • Previous issue #19 also discusses this, and some say that class_numpoint_mean_dict should be modified. But I'm running on indoor scenes similar to ScanNet, so mean_numpoint and mean_radius should differ only slightly; that dictionary parameter should not affect the consistency across datasets in my case.
    • How well does HAIS handle domain shift? What kinds of issues can affect consistency across different datasets?

    Thanks Chuhao

    opened by glennliu 7
  • S3DIS usage

    S3DIS usage

    Hello, I see that your code looks similar to SoftGroup (they probably copied from you, but let's put that aside). I want to test HAIS on the S3DIS dataset, but I see no readme that shows how to do so. I could try to modify it myself, but since you also report benchmark results on S3DIS, it would be nice to get the files needed to make it work (and instructions on how). Finally, I am curious about the visualisation for S3DIS; do you have something for this as well? I look forward to hearing from you! Kind regards, Goose.

    opened by GooseFather990 5
  • About mAP & AP50

    About mAP & AP50

    Thank you for your contribution; this is great work. First of all, no offense intended, but is the test set simpler than the validation set? The test-set AP50 is 69.9, while the validation-set AP50 is 64.4. I'm currently studying semantic and instance segmentation on ScanNet v2, but I can only upload results every two weeks, so I want to know whether I missed something that could improve the accuracy on the test set. Thank you.

    opened by weiguangzhao 5
  • The directory HAIS/lib/spconv does not exist in the repo.

    The directory HAIS/lib/spconv does not exist in the repo.

    Hi, thanks for uploading the code. I am following your guideline to install spconv. At this step, the directory HAIS/lib/spconv does not exist. Is the spconv version used in HAIS the same as the one used in PointGroup (spconv @ 740a5b7)?

    cd HAIS/lib/spconv
    python setup.py bdist_wheel

    opened by anhtuanhsgs 3
  • gcc-5 Install error

    gcc-5 Install error

    Hi. Thank you for the great work. I'm using Colab to run HAIS (I installed condacolab in Colab).

    When I run this command: conda install -c daleydeng gcc-5

    I got this error.

    The following NEW packages will be INSTALLED:

    gcc-5  daleydeng/linux-64::gcc-5-5.4.0-2
    gmp    conda-forge/linux-64::gmp-6.2.1-h58526e2_0
    isl    daleydeng/linux-64::isl-0.17.1-0
    mpc    conda-forge/linux-64::mpc-1.2.1-h9f54685_0
    mpfr   conda-forge/linux-64::mpfr-4.1.0-h9202a9a_1

    The following packages will be SUPERSEDED by a higher-priority channel:

    certifi  anaconda::certifi-2021.10.8-py37h06a4~ --> conda-forge::certifi-2021.10.8-py37h89c1867_2
    conda    anaconda::conda-4.12.0-py37h06a4308_0 --> conda-forge::conda-4.12.0-py37h89c1867_0

    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: done
    ERROR conda.core.link:_execute(732): An error occurred while installing package 'daleydeng::gcc-5-5.4.0-2'. Rolling back transaction: done

    LinkError: post-link script failed for package daleydeng::gcc-5-5.4.0-2
    location of failed script: /usr/local/bin/.gcc-5-post-link.sh
    ==> script messages <==
    ==> script output <==
    stdout: Installation failed: gcc is not able to compile a simple 'Hello, World' program.

    stderr: ln: failed to create symbolic link '/usr/local/lib/gcc/x86_64-unknown-linux-gnu/5.4.0/crt1.o': File exists
    ln: failed to create symbolic link '/usr/local/lib/gcc/x86_64-unknown-linux-gnu/5.4.0/crti.o': File exists
    ln: failed to create symbolic link '/usr/local/lib/gcc/x86_64-unknown-linux-gnu/5.4.0/crtn.o': File exists
    /usr/local/bin/../gcc/libexec/gcc/x86_64-unknown-linux-gnu/5.4.0/cc1: error while loading shared libraries: libmpfr.so.4: cannot open shared object file: No such file or directory
    return code: 1

    How can I solve this? Any help would be appreciated.

    opened by sobikim 2
  • sub training set

    sub training set

    Hi, thanks for your great work! I'm trying to use your code but I don't have much time for training, so I'd like to make a smaller sub training & validation set for algorithm checking. Do you have any suggestions, or did you use any subsets for quick checking?

    Here is what I did: I randomly chose 300 scans out of the 1201 training scans and scaled down the training epochs (500 -> 125) along with related settings such as the preparing epochs (100 -> 25) and cal_iou_based_on_mask_start_epoch (200 -> 50). I still used the 312-scan val set for validation. The val result is very poor: AP/AP50/AP25 is about 0.005/0.014/0.043. I think there should be a better choice of subset and config parameters for a quick check; otherwise, several days of training are needed just to check a new modification.

    opened by argosdh 2
  • fix: initialize allocated cuda memory to 0

    fix: initialize allocated cuda memory to 0

    The allocated CUDA memory was not initialized but was assumed to be all zeros, which may cause CUDA kernel failed: an illegal memory access was encountered. A cudaMemset call was added to fix this problem.

    opened by eamonn-zh 1
  • Can we use it in points only?

    Can we use it in points only?

    I have point clouds without RGB information and I want to segment them. Can I use this method for point cloud segmentation? If yes, could you give some advice or suggestions on where to modify your code? Thanks.

    opened by AI-Hunter 1
  • config parameters meaning

    config parameters meaning

    Hi! Could you please clarify the meaning of the following parameters in the config:

    1. point_aggr_radius: 0.03 -- this must be the r_point bandwidth mentioned in the paper. How does it work? Is it correct that, for every pair of 2 cm voxels with the same semantic label, the two are united into one set if their spatial distance is smaller than 3 cm?
    2. What is cluster_shift_meanActive?
    3. Why is score_scale different from the data scale? What are score_scale: 50 and score_fullscale: 20?
    4. In hierarchical_aggregation, for class_numpoint_mean_dict and class_radius_mean, do you intentionally set a [-1] value for the floor and wall classes to exclude them from instance prediction?
    opened by Ritchizh 0
  • Fail to load .ply file?(sorry!)

    Fail to load .ply file?(sorry!)

    My PyTorch version is 1.10. Why does it fail to load the .ply file? It gives the following error:

    ................
      File "/anaconda3/envs/hais/lib/python3.7/site-packages/torch/serialization.py", line 608, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/anaconda3/envs/hais/lib/python3.7/site-packages/torch/serialization.py", line 777, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    _pickle.UnpicklingError: unpickling stack underflow
    

    Is there any solution?

    opened by AI-Hunter 0
  • Inference on ScanNet v2 RuntimeError: CUDA error: invalid argument

    Inference on ScanNet v2 RuntimeError: CUDA error: invalid argument

    Hi, thanks for your excellent work. I tried to reproduce the results of the pretrained model on the validation set of ScanNet v2. I followed the instructions in your repo for data processing and installation, but I faced the issue shown in the attached screenshot.

    Does anyone have any suggestions about the root of this issue? Thank you so much.

    My environment:

    • CUDA 10.2.
    • spconv 1.0
    • pytorch 1.1.0
    • python 3.7
    opened by hazelAutumn 0
  • How to run inference on the test set of the STPLS3D dataset?

    How to run inference on the test set of the STPLS3D dataset?

    How do I run inference on the test set of STPLS3D? In other words, is there an inference script for the unlabeled data in STPLS3D (i.e., 26_points_gtv3.txt, 27_points_gtv3.txt, 28_points_gtv3.txt)?

    opened by YellowPuppy 0
  • Assertion `primary_num <= MAX_PRIMARY_NUM' failed on ScanNet V2 val

    Assertion `primary_num <= MAX_PRIMARY_NUM' failed on ScanNet V2 val

    Hi there,

    I encountered an AssertionError when running evaluation after instance iter. 272/312: hierarchical_aggregation.cu:150: void hierarchical_aggregation_cuda(int, int, int*, int*, float*, int, int, int*, int*, float*, int*, int*): Assertion 'primary_num <= MAX_PRIMARY_NUM' failed.

    Is there a way to handle this?

    Regards

    opened by leimeng86 1
  • Test the pretrained model with own point cloud data

    Test the pretrained model with own point cloud data

    Hi! I tried to use my own test .ply data (all renamed to the expected format), uploaded to the directory dataset/scannetv2/test, to run the test with the pretrained model, and I met some bugs when running test.py. More information is listed below:

    CUDA_VISIBLE_DEVICES=0 python test.py --config config/hais_run1_scannet.yaml --pretrain ~/HAIS/HAIS/pretrain/hais_ckpt.pth /home/harris_huang/HAIS/HAIS/util/config.py:22: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details. config = yaml.load(f) [2022-07-08 13:12:37,754 INFO log.py line 39 8284] ************************ Start Logging ************************ [2022-07-08 13:12:37,768 INFO test.py line 27 8284] Namespace(TEST_NMS_THRESH=0.3, TEST_NPOINT_THRESH=100, TEST_SCORE_THRESH=0.09, batch_size=4, bg_thresh=0.0, block_reps=2, block_residual=True, cal_iou_based_on_mask=True, cal_iou_based_on_mask_start_epoch=200, classes=20, cluster_shift_meanActive=300, config='config/hais_run1_scannet.yaml', data_root='dataset', dataset='scannetv2', dataset_dir='data/scannetv2_inst.py', dist=False, epochs=500, eval=False, exp_path='exp/scannetv2/hais/hais_run1_scannet', fg_thresh=1.0, filename_suffix='_inst_nostuff.pth', fix_module=[], full_scale=[128, 512], ignore_label=-100, input_channel=3, local_rank=0, loss_weight=[1.0, 1.0, 1.0, 1.0], lr=0.001, manual_seed=123, mask_filter_score_feature_thre=0.5, max_npoint=250000, max_proposal_num=200, mode=4, model_dir='model/hais/hais.py', model_name='hais', momentum=0.9, multiplier=0.5, optim='Adam', point_aggr_radius=0.03, prepare_epochs=100, pretrain='/home/harris_huang/HAIS/HAIS/pretrain/hais_ckpt.pth', pretrain_module=[], pretrain_path=None, save_dir='exp', save_freq=16, save_instance=True, save_pt_offsets=False, save_semantic=False, scale=50, score_fullscale=20, score_mode=4, score_scale=50, split='test', step_epoch=200, task='test', test_epoch=500, test_mask_score_thre=-0.5, test_seed=567, test_workers=16, train_workers=8, use_coords=True, use_mask_filter_score_feature=True, use_mask_filter_score_feature_start_epoch=200, using_NMS=False, using_set_aggr_in_testing=True, using_set_aggr_in_training=False, weight_decay=0.0001, width=32) [2022-07-08 13:12:37,769 INFO test.py line 242 8284] => creating model ... [2022-07-08 13:12:37,769 INFO test.py line 243 8284] Classes: 20 [2022-07-08 13:12:38,722 INFO test.py line 255 8284] cuda available: True [2022-07-08 13:12:42,480 INFO test.py line 259 8284] #classifier parameters (model): 30837785 [2022-07-08 13:12:42,563 INFO utils.py line 67 8284] Restore from /home/harris_huang/HAIS/HAIS/pretrain/hais_ckpt.pth [2022-07-08 13:12:42,825 INFO test.py line 36 8284] >>>>>>>>>>>>>>>> Start Evaluation >>>>>>>>>>>>>>>> [2022-07-08 13:12:42,882 INFO scannetv2_inst.py line 95 8284] Testing samples (test): 0 Traceback (most recent call last): File "test.py", line 268, in test(model, model_fn, data_name, cfg.test_epoch) File "test.py", line 179, in test logger.info("whole set inference time: {:.2f}s, latency per frame: {:.2f}ms".format(total_end1, total_end1 / len(dataloader) * 1000)) ZeroDivisionError: float division by zero

    It seems like no test data is loaded. How can I solve this issue? Thanks!

    opened by harris56 0
  • RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:259

    RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:259

    Hi! I met a bug when I ran test.py. More information is listed below:

    [2022-06-05 20:51:56,158 INFO scannetv2_inst.py line 97 3304] Testing samples (test): 1
    Traceback (most recent call last):
      File "test.py", line 269, in <module>
        test(model, model_fn, data_name, cfg.test_epoch)
      File "test.py", line 57, in test
        preds = model_fn(batch, model, epoch)
      File "/home/yux/yux/pointCloud/HAIS/model/hais/hais.py", line 371, in test_model_fn
        ret = model(input_, p2v_map, coords_float, coords[:, 0].int(), batch_offsets, epoch, 'test')
      File "/home/yux/anaconda3/envs/hais/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/yux/yux/pointCloud/HAIS/model/hais/hais.py", line 265, in forward
        output = self.input_conv(input)
      File "/home/yux/anaconda3/envs/hais/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/yux/anaconda3/envs/hais/lib/python3.7/site-packages/spconv/modules.py", line 123, in forward
        input = module(input)
      File "/home/yux/anaconda3/envs/hais/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/yux/anaconda3/envs/hais/lib/python3.7/site-packages/spconv/conv.py", line 157, in forward
        outids.shape[0])
      File "/home/yux/anaconda3/envs/hais/lib/python3.7/site-packages/spconv/functional.py", line 83, in forward
        return ops.indice_conv(features, filters, indice_pairs, indice_pair_num, num_activate_out, False, True)
      File "/home/yux/anaconda3/envs/hais/lib/python3.7/site-packages/spconv/ops.py", line 112, in indice_conv
        int(inverse), int(subm))
    RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:259

    Environment: PyTorch 1.1.0, CUDA 10.2, cuDNN 8.2.2, spconv 1.0.

    @outsidercsy

    opened by yuxgis 0
Owner

Hust Visual Learning Team, which belongs to the Artificial Intelligence Research Institute in the School of EIC at HUST.