[CVPR 2021] Scan2Cap: Context-aware Dense Captioning in RGB-D Scans

Introduction

We introduce the task of dense captioning in 3D scans from commodity RGB-D sensors. As input, we assume a point cloud of a 3D scene; the expected output is bounding boxes along with descriptions for the underlying objects. To address the 3D object detection and description problems, we propose Scan2Cap, an end-to-end trained method that detects objects in the input scene and describes them in natural language. We use an attention mechanism that generates descriptive tokens while referring to the related components in the local context. To reflect object relations (i.e. relative spatial relations) in the generated captions, we use a message passing graph module to facilitate learning object relation features. Our method can effectively localize and describe 3D objects in scenes from the ScanRefer dataset, outperforming 2D baseline methods by a significant margin (27.61% CIDEr@0.5IoU improvement).
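To give a rough sense of the relational graph component, the sketch below shows one round of message passing over object proposal features. This is a conceptual illustration only, not the actual Scan2Cap module: the class name, feature dimension, fully-connected graph, and mean aggregation are assumptions made for exposition.

import torch
import torch.nn as nn

class RelationMessagePassing(nn.Module):
    """One illustrative round of message passing between object proposals."""

    def __init__(self, feat_dim=128):
        super().__init__()
        # Edge messages are computed from (sender, receiver) feature pairs.
        self.message_fn = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())
        # The node update combines the original feature with the aggregated messages.
        self.update_fn = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())

    def forward(self, node_feats):
        # node_feats: (num_proposals, feat_dim)
        n = node_feats.size(0)
        senders = node_feats.unsqueeze(0).expand(n, n, -1)
        receivers = node_feats.unsqueeze(1).expand(n, n, -1)
        messages = self.message_fn(torch.cat([senders, receivers], dim=-1))
        aggregated = messages.mean(dim=1)  # average incoming messages per node
        return self.update_fn(torch.cat([node_feats, aggregated], dim=-1))

# Example: 16 proposals with 128-dimensional features.
updated = RelationMessagePassing(128)(torch.randn(16, 128))
print(updated.shape)  # torch.Size([16, 128])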

Please also check out the project website here.

For additional detail, please see the Scan2Cap paper:
"Scan2Cap: Context-aware Dense Captioning in RGB-D Scans"
by Dave Zhenyu Chen, Ali Gholami, Matthias Nießner and Angel X. Chang
from Technical University of Munich and Simon Fraser University.

Data

ScanRefer

If you would like to access the ScanRefer dataset, please fill out this form. Once your request is accepted, you will receive an email with the download link.

Note: In addition to the language annotations in the ScanRefer dataset, you also need access to the original ScanNet dataset. Please refer to the ScanNet Instructions for more details.

Download the dataset by simply executing the wget command:

wget <download_link>

Scan2CAD

As learning the relative object orientations in the relational graph requires the CAD model alignment annotations in Scan2CAD, please refer to the Scan2CAD official release (you need ~8MB on your disk). Once the data is downloaded, extract the zip file under data/ and change the path to the Scan2CAD annotations (CONF.PATH.SCAN2CAD) in lib/config.py. As Scan2CAD doesn't cover all instances in ScanRefer, please download the mapping file and place it under CONF.PATH.SCAN2CAD. Parse the raw Scan2CAD annotations with the following command:

python scripts/Scan2CAD_to_ScanNet.py

Setup

Please execute the following command to install PyTorch 1.8:

conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch

Install the necessary packages listed in requirements.txt:

pip install -r requirements.txt

Also, don't forget to refer to PyTorch Geometric to install the graph support.
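Before compiling the CUDA extensions in the next step, it can help to verify the environment. A minimal standalone check (assuming the versions installed above; not part of the repo):

import torch

print(torch.__version__)          # expected: 1.8.0
print(torch.version.cuda)         # expected: 10.2
print(torch.cuda.is_available())  # should be True if a GPU is visible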

After all packages are properly installed, please run the following commands to compile the CUDA modules for the PointNet++ backbone:

cd lib/pointnet2
python setup.py install

Before moving on to the next step, please don't forget to set the project root path as CONF.PATH.BASE in lib/config.py.
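For reference, the relevant entries could look roughly like the sketch below. This is illustrative only; the actual lib/config.py may be organized differently, and the paths must be adjusted to your machine:

from easydict import EasyDict

CONF = EasyDict()
CONF.PATH = EasyDict()
CONF.PATH.BASE = "/path/to/Scan2Cap"                    # project root of this repo
CONF.PATH.SCAN2CAD = "/path/to/Scan2Cap/data/Scan2CAD"  # Scan2CAD annotations (see the Data section)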

Data preparation

  1. Download the ScanRefer dataset and unzip it under data/ - You might want to run python scripts/organize_scanrefer.py to organize the data a bit.
  2. Download the preprocessed GloVe embeddings (~990MB) and put them under data/.
  3. Download the ScanNetV2 dataset and put (or link) scans/ under (or to) data/scannet/scans/ (Please follow the ScanNet Instructions for downloading the ScanNet dataset).

After this step, there should be folders containing the ScanNet scene data under data/scannet/scans/ with names like scene0000_00.
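As a quick sanity check (a standalone snippet, not part of the repo), you can confirm that the scans are where the pipeline expects them:

from pathlib import Path

scans_dir = Path("data/scannet/scans")
scenes = sorted(p.name for p in scans_dir.iterdir() if p.is_dir())
print(f"{len(scenes)} scenes found, e.g. {scenes[:3]}")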

  4. Pre-process ScanNet data. A folder named scannet_data/ will be generated under data/scannet/ after running the following command. Roughly 3.8GB of free space is needed for this step:
cd data/scannet/
python batch_load_scannet_data.py

After this step, you can check if the processed scene data is valid by running:

python visualize.py --scene_id scene0000_00
  5. (Optional) Pre-process the multiview features from ENet.

    a. Download the ENet pretrained weights (1.4MB) and put it under data/

    b. Download and decompress the extracted ScanNet frames (~13GB).

    c. Change the data paths in config.py marked with TODO accordingly.

    d. Extract the ENet features:

    python scripts/compute_multiview_features.py

    e. Project ENet features from ScanNet frames to point clouds; you need ~36GB to store the generated HDF5 database (a quick snippet for inspecting the database is given after these steps):

    python scripts/project_multiview_features.py --maxpool

    You can check if the projections make sense by projecting the semantic labels from the images onto the target point cloud:

    python scripts/project_multiview_labels.py --scene_id scene0000_00 --maxpool
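To quickly inspect the projected multiview feature database from step 5e, a standalone snippet like the following can be used (the file name below is an assumption; use whatever path project_multiview_features.py actually writes):

import h5py

db_path = "data/enet_feats_maxpool.hdf5"  # placeholder name, adjust to the actual output
with h5py.File(db_path, "r") as f:
    keys = list(f.keys())
    print(f"{len(keys)} entries, e.g. {keys[:3]}")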

Usage

End-to-End training for 3D dense captioning

Run the following script to start the end-to-end training of the Scan2Cap model using multiview features and normals. For more training options, please run scripts/train.py -h:

python scripts/train.py --use_multiview --use_normal --use_topdown --use_relation --use_orientation --num_graph_steps 2 --num_locals 10 --batch_size 12 --epoch 50

The trained model as well as the intermediate results will be dumped into outputs/. To evaluate the model (@0.5IoU), please run the following script with <output_folder> changed accordingly, and note that the arguments must match the ones used for training:

python scripts/eval.py --folder <output_folder> --use_multiview --use_normal --use_topdown --use_relation --num_graph_steps 2 --num_locals 10 --eval_caption --min_iou 0.5

Evaluating the detection performance:

python scripts/eval.py --folder <output_folder> --use_multiview --use_normal --use_topdown --use_relation --num_graph_steps 2 --num_locals 10 --eval_detection

You can even evaluate the pretrained object detection backbone:

python scripts/eval.py --folder <output_folder> --use_multiview --use_normal --use_topdown --use_relation --num_graph_steps 2 --num_locals 10 --eval_detection --eval_pretrained

If you want to visualize the results, please run this script to generate the bounding boxes and descriptions for the specified scene to outputs/:

python scripts/visualize.py --folder <output_folder> --scene_id <scene_id> --use_multiview --use_normal --use_topdown --use_relation --num_graph_steps 2 --num_locals 10

Note that you need to run python scripts/export_scannet_axis_aligned_mesh.py first to generate axis-aligned ScanNet mesh files.

3D dense captioning with ground truth bounding boxes

For experimenting with the captioning performance using ground truth bounding boxes, you need to extract the box features with a pre-trained feature extractor. Pretrained extractors are already provided in pretrained/, but if you want to train a new one from scratch, run the following script:

python scripts/train_maskvotenet.py --batch_size 8 --epoch 200 --lr 1e-3 --wd 0 --use_multiview --use_normal

The pretrained model will be stored under outputs/. Before we proceed, you need to move the output folder to pretrained/ and change the name of the folder to XYZ_MULTIVIEW_NORMAL_MASKS_VOTENET, which must reflect the features used during training, e.g. MULTIVIEW -> --use_multiview.

After that, run the following script to extract the features for the ground truth bounding boxes. Note that the feature options must match the ones in the previous steps:

python scripts/extract_gt_features.py --batch_size 16 --epoch 100 --use_multiview --use_normal --train --val

The extracted features will be stored as an HDF5 database under /gt_ _features; you need ~610MB of space on your disk.

Now the box features are ready - we're good to go! Next step: run the following command to start training the dense captioning pipeline with the extracted ground truth box features:

python scripts/train_pretrained.py --mode gt --batch_size 32 --use_topdown --use_relation --use_orientation --num_graph_steps 2 --num_locals 10

For evaluating the model, run the following command:

python scripts/eval_pretrained.py --folder <output_folder> --mode gt --use_topdown --use_relation --use_orientation --num_graph_steps 2 --num_locals 10

3D dense captioning with pre-trained VoteNet bounding boxes

If you would like to play around with the pre-trained VoteNet bounding boxes, you can directly use the pre-trained VoteNet in pretrained. After picking the model you like, run the following command to extract the bounding boxes and associated box features:

python scripts/extract_votenet_features.py --batch_size 16 --epoch 100 --use_multiview --use_normal --train --val

Now the box features are ready. Next step: run the following command to start training the dense captioning pipeline with the extracted VoteNet boxes:

python scripts/train_pretrained.py --mode votenet --batch_size 32 --use_topdown --use_relation --use_orientation --num_graph_steps 2 --num_locals 10

For evaluating the model, run the following command:

python scripts/eval_pretrained.py --folder <output_folder> --mode votenet --use_topdown --use_relation --use_orientation --num_graph_steps 2 --num_locals 10

Experiments on ReferIt3D

Yes, of course you can use the ReferIt3D dataset for training and evaluation. Simply download the ReferIt3D dataset and unzip it under data/, then run the following command to convert it to ScanRefer format:

python scripts/organize_referit3d.py

Then you can simply specify the dataset you would like to use by --dataset ReferIt3D in the aforementioned steps. Have fun!

2D Experiments

Please refer to Scan2Cap-2D for more information.

Citation

If you found our work helpful, please kindly cite our paper via:

@inproceedings{chen2021scan2cap,
  title={Scan2Cap: Context-aware Dense Captioning in RGB-D Scans},
  author={Chen, Zhenyu and Gholami, Ali and Nie{\ss}ner, Matthias and Chang, Angel X},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3193--3203},
  year={2021}
}

License

Scan2Cap is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Copyright (c) 2021 Dave Zhenyu Chen, Ali Gholami, Matthias Nießner, Angel X. Chang

Comments
  • Training time too long

    Thanks for releasing the code! I really appreciate your hard work!

    When I followed the instructions and ran the training command python scripts/train.py --use_multiview --use_normal --use_topdown --use_relation --use_orientation --num_graph_steps 2 --num_locals 10 --batch_size 12 --epoch 50, I found that it would take nearly 14 hours (as shown in the attached screenshot) just to validate on the training set. Is it normal for validation to take this long? I use one GeForce RTX 2080 GPU with 11GB of memory.

    opened by xwhjy 13
  • Benchmark Challenge

    Hi, I'm really interested in the Scan2Cap Dense Captioning Benchmark, but I have some problems.

    I tried to run the following command: python benchmark/predict.py --folder <output_folder> --use_multiview --use_normal --use_topdown --use_relation --num_graph_steps 2 --num_locals 10

    Then it encountered an error (see the attached screenshot).

    I guess the reason for this error is that the ScanNet dataset does not provide point cloud data for scenes 707 to 806.

    The file structure of the original ScanNet dataset I downloaded is shown in the attached screenshot.

    Is the ScanNet dataset I downloaded incomplete, or is there something wrong with the code?

    opened by yvfengZhong 10
  • Caught KeyError in DataLoader worker process 0.

    Hello, I am following the steps in "Setup" to try to run the code, but I encountered the following problem; can you help me solve it? My machine environment is: Ubuntu 18.04, 8GB machine memory, 12GB GPU (Tesla K80), CUDA 10.2, Python 3.7.0, PyTorch 1.8.0, etc.

    python ./scripts/train.py --debug --use_color --use_normal --use_topdown --use_relation --use_orientation --num_graph_steps 2 --num_locals 10 --batch_size 12 --epoch 50

    I typed the above command on the command line just for debugging and got the error shown in the attached screenshot.

    In addition, I find that my machine memory is a bit small, which may be one of the reasons why the program can't run. But, confusingly, the above problem still occurs in debug mode, where only one piece of data is loaded. As shown in the attached screenshot, I did not download the data in Step 5. Is that the cause of the problem? Note that I replaced "--use_multiview" with "--use_color" in the command.

    I am looking forward to your reply. Thank you very much.

    opened by yang-Michael 7
  • Training time suddenly becomes longer.

    Thank you very much for your work. I have encountered some problems here.

    In the first 19 epochs, the mean_iter_time stays at about 2 seconds.

    But in the 20th epoch, the mean_iter_time suddenly became about a minute.

    This puzzles me.

    I have read another related answer about "Training time too long", but I don't know why there is such a big change after the 20th epoch.

    I look forward to your answer and wish you success in your work.

    opened by yvfengZhong 4
  • Unable to train

    Hello, thanks for your work and code!

    When I followed the instructions and ran the training command python scripts/train.py --use_multiview --use_normal --use_topdown --use_relation --use_orientation --num_graph_steps 2 --num_locals 10 --batch_size 12 --epoch 50, the output log still stayed at "epoch 1 starting..." after a week of training.

    I think this is obviously unreasonable, but I don't know what the problem is. I use three GeForce RTX 2080 GPUs with 11GB of memory each.

    opened by yvfengZhong 4
  • About the results

    I'm curious about the metrics you reported in your paper.

    Did the four reported metrics (BLEU-4, CIDEr, METEOR, ROUGE-L) come from the model with the best CIDEr score on the validation set, or from the best value of each metric on the validation set?

    opened by ch3cook-fdu 1
  • about visualization

    Hello! Thanks for your work. Could you please tell me how to visualize the scene as shown in the README? Should I load the .ply file into MeshLab, or use other software?

    opened by jiaminglei-lei 1
  • GPU configuration

    Your work is really interesting! May I ask what the GPU configuration was in your experiments? How many GPUs, and how much memory, are required to run the experiments? Looking forward to your reply!

    opened by xwhjy 1
  • Update load_scannet_data.py

    The aligned_instance_bboxes is computed using the axis-aligned mesh vertices, which means it is different from the bboxes computed using vertices from the original mesh (.ply) file. So aligned_instance_bboxes should be the array passed to be saved as output_file+'_aligned_bbox.npy'.

    opened by databaaz 0
  • about “3D dense captioning with ground truth bounding boxes”

    Hi, I have a question about using maskvotenet to get the visual features of the GT bboxes. In your code, you only get one target object's feature. Do you know how to get the features of all GT bboxes? Of course, for the Scan2Cap task only one target object's feature is needed, but for the visual grounding task we need the features of all GT bboxes. Thank you!

    opened by Samchengjiaming 0
  • different experiment settings between training end to end scan2cap and the fixed-detector scan2cap

    In end-to-end Scan2Cap, the relation graph's input is the original proposals without NMS. However, in fixed-detector Scan2Cap, the relation graph's input is the original proposals with NMS. I think that's not a fair comparison. I've performed experiments with pre-fetched VoteNet features without NMS, and used train_pretrained.py to measure the fixed detector's performance. The result shows that the fixed-detector model actually outperforms the end-to-end one.

    opened by ch3cook-fdu 0
  • About output info "error occurs when dealing with graph, skipping..." during training

    Thanks for your great work on 3D dense captioning! I found that the message "error occurs when dealing with graph, skipping..." occasionally appears during the training phase. I am wondering what happened and whether it will affect the model's performance.

    opened by SxJyJay 0
  • How could we get folder render-based/ used in Scan2Cap-2d

    Hi Scan2Cap team,

    I've noticed that several folders like "projected-based/renders" and "render-based/renders" appear in conf.py, but I couldn't find these images in ScanNet, ScanRefer, or Scan2Cap. Is there a script that generates these images, or does another project generate them?

    Also, are rendered pictures with bounding boxes drawn on them used to generate the global feature? It's hard to tell without running the code.

    Thanks, Yui

    opened by YuiTH 2
  • How to learn relative orientations

    Hi, your work is great. Could you explain in detail why you use the "CAD model alignment annotations" to learn the relative object orientations in the relational graph? I would appreciate it if you could answer me.

    opened by cuiheng1234 0
Owner
Dave Z. Chen
PhD candidate at TUM