[ICCV21] Code for RetrievalFuse: Neural 3D Scene Reconstruction with a Database

Overview

RetrievalFuse

Paper | Project Page | Video

RetrievalFuse: Neural 3D Scene Reconstruction with a Database
Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
ICCV 2021

This repository contains the code for the ICCV 2021 paper RetrievalFuse, a novel approach for 3D reconstruction from low-resolution distance field grids and from point clouds.

In contrast to traditional learned generative models, which encode the full generative process in a neural network and can struggle to maintain local detail at the scene level, we introduce a new method that directly leverages scene geometry from the training database.

Files and Folders


Broad code structure is as follows:

| File / Folder | Description |
| --- | --- |
| `config/super_resolution` | Super-resolution experiment configs |
| `config/surface_reconstruction` | Surface reconstruction experiment configs |
| `config/base` | Defaults for configurations |
| `config/config_handler.py` | Config file parser |
| `data/splits` | Training and validation splits for different datasets |
| `dataset/scene.py` | `SceneHandler` class for managing access to scene data samples |
| `dataset/patched_scene_dataset.py` | PyTorch dataset class for scene data |
| `external/ChamferDistancePytorch` | For calculating a rough chamfer distance between prediction and target while training |
| `model/attention.py` | Attention, folding and unfolding modules |
| `model/loss.py` | Loss functions |
| `model/refinement.py` | Refinement network |
| `model/retrieval.py` | Retrieval network |
| `model/unet.py` | U-Net model used as a backbone in the refinement network |
| `runs/` | Checkpoints and visualizations for experiments are dumped here |
| `trainer/train_retrieval.py` | Lightning module for training the retrieval network |
| `trainer/train_refinement.py` | Lightning module for training the refinement network |
| `util/arguments.py` | Argument parsing (additional arguments apart from those in config) |
| `util/filesystem_logger.py` | For copying source code for each run into the experiment log directory |
| `util/metrics.py` | Rough metrics for logging during training |
| `util/mesh_metrics.py` | Final metrics on meshes |
| `util/retrieval.py` | Script to dump retrievals once retrieval networks have been trained; needed for training refinement |
| `util/visualizations.py` | Utility scripts for visualizations |

Further, the `data/` directory has the following layout:

```
data                    # root data directory
├── sdf_008             # low-res (8^3) distance fields
│   ├── <dataset>
│   │   ├── <sample>
│   │   ├── <sample>
│   │   ├── <sample>
│   │   ...
│   ├── <dataset>
│   │   ...
├── sdf_016             # low-res (16^3) distance fields
│   ├── <dataset>
│   │   ├── <sample>
│   │   ├── <sample>
│   │   ├── <sample>
│   │   ...
│   ├── <dataset>
│   │   ...
├── sdf_064             # high-res (64^3) distance fields
│   ├── <dataset>
│   │   ├── <sample>
│   │   ├── <sample>
│   │   ├── <sample>
│   │   ...
│   ├── <dataset>
│   │   ...
├── pc_20K              # point cloud inputs
│   ├── <dataset>
│   │   ├── <sample>
│   │   ├── <sample>
│   │   ├── <sample>
│   │   ...
│   ├── <dataset>
│   │   ...
├── splits              # train/val splits
├── size                # data needed by SceneHandler class (autocreated on first run)
├── occupancy           # data needed by SceneHandler class (autocreated on first run)
```
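
As a quick sanity check of this layout, one can count the sample files per dataset folder. The following is an illustrative sketch only; the top-level directory names come from the tree above, everything else is an assumption.

```python
# Illustrative sanity check of the data/ layout above (not part of the repository).
from pathlib import Path

root = Path("data")
for grid in ["sdf_008", "sdf_016", "sdf_064", "pc_20K"]:
    grid_dir = root / grid
    if not grid_dir.is_dir():
        print(f"missing: {grid_dir}")
        continue
    # One subdirectory per dataset, each containing the individual sample files.
    for dataset_dir in sorted(p for p in grid_dir.iterdir() if p.is_dir()):
        num_samples = sum(1 for _ in dataset_dir.iterdir())
        print(f"{grid}/{dataset_dir.name}: {num_samples} samples")
```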

Dependencies


Install the dependencies using pip:

```bash
pip install -r requirements.txt
```

Be sure that you pull the `ChamferDistancePytorch` submodule in `external`.

Data Preparation


For ShapeNetV2 and Matterport3D, get the appropriate meshes from the respective datasets. For 3DFRONT, get the 3DFUTURE meshes and the 3DFRONT scripts, then use our fork of 3D-FRONT-ToolBox to create the room meshes.

Once you have the meshes, use our fork of sdf-gen to create the low-res distance field inputs and the high-res targets. For creating point cloud inputs, simply use `trimesh.sample.sample_surface` (check `util/misc/sample_scene_point_clouds`); a minimal sampling sketch is shown after the list below. Place the processed data in the appropriate directories:

  • data/sdf_008/ or data/sdf_016/ for low-res inputs

  • data/pc_20K/ for point cloud inputs

  • data/sdf_064/ for targets
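
Below is a minimal sketch of how the point cloud inputs might be created with trimesh, sampling 20K points per scene to match `pc_20K`. It is not the repository's `util/misc/sample_scene_point_clouds` script; the input/output paths and the npz key name are illustrative assumptions.

```python
# Minimal point cloud sampling sketch (illustrative; the repo's own script is
# util/misc/sample_scene_point_clouds). Paths and the "points" key are assumptions.
from pathlib import Path

import numpy as np
import trimesh

mesh = trimesh.load("scene_mesh.obj", force="mesh")      # hypothetical processed scene mesh
points, _ = trimesh.sample.sample_surface(mesh, 20000)   # 20K surface samples, matching pc_20K

out_path = Path("data/pc_20K/MyDataset/scene_000.npz")   # hypothetical dataset/sample names
out_path.parent.mkdir(parents=True, exist_ok=True)
np.savez_compressed(out_path, points=np.asarray(points, dtype=np.float32))
```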

Training the Retrieval Network


To train retrieval networks, use the following command:

```bash
python trainer/train_retrieval.py --config config/<config> --val_check_interval 5 --experiment retrieval --wandb_main --sanity_steps 1
```

We provide some sample configurations for retrieval.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
  • config/super_resolution/3DFront/retrieval_008_064.yaml
  • config/super_resolution/Matterport3D/retrieval_016_064.yaml

For surface-reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/retrieval_128_064.yaml
  • config/surface_reconstruction/3DFront/retrieval_128_064.yaml
  • config/surface_reconstruction/Matterport3D/retrieval_128_064.yaml

Once trained, create the retrievals for the train/validation sets using the following commands:

```bash
python util/retrieval.py --mode map --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
python util/retrieval.py --mode compose --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
```

Training the Refinement Network


Use the following command to train the refinement network:

```bash
python trainer/train_refinement.py --config <config> --val_check_interval 5 --experiment refinement --sanity_steps 1 --wandb_main --retrieval_ckpt <retrieval_ckpt>
```

Again, sample configurations for refinement are provided in the config directory.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/refinement_008_064.yaml
  • config/super_resolution/3DFront/refinement_008_064.yaml
  • config/super_resolution/Matterport3D/refinement_016_064.yaml

For surface-reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/refinement_128_064.yaml
  • config/surface_reconstruction/3DFront/refinement_128_064.yaml
  • config/surface_reconstruction/Matterport3D/refinement_128_064.yaml

Visualizations and Logs


Visualizations and checkpoints are dumped in the `runs/` directory. Logs are uploaded to the user's [Weights&Biases](https://wandb.ai/site) dashboard.

Citation


If you find our work useful in your research, please consider citing:

```bibtex
@inproceedings{siddiqui2021retrievalfuse,
  title = {RetrievalFuse: Neural 3D Scene Reconstruction with a Database},
  author = {Siddiqui, Yawar and Thies, Justus and Ma, Fangchang and Shan, Qi and Nie{\ss}ner, Matthias and Dai, Angela},
  booktitle = {Proc. International Conference on Computer Vision (ICCV)},
  month = oct,
  year = {2021}
}
```

License


The code from this repository is released under the MIT license.
Comments
  • About evaluation for mesh.

    Hi, thanks for your work on reconstruction. I'm interested in the mesh evaluation, but when I perform the evaluation with retrieval-fuse/util/mesh_metrics.py (https://github.com/nihalsid/retrieval-fuse/blob/fce90fa6adf349a3c7bb5eb4b57d387d4f6ff46c/util/mesh_metrics.py) on the GT mesh, i.e. the same mesh as prediction and target, the chamferL1 is 0.017 and the normals_correctness is 0.801, which should theoretically be 0 and 1 to my understanding. What should I do to get correct 3D mesh evaluation results?

    opened by guangkaixu 6
  • Marching cubes missing

    Hi, the util/marching_cubes module is missing from this repo, making import marching_cubes as mc fail. I can see that "util/marching_cubes/*.pyd" is included in the .gitignore, so would it be possible to provide this file, or let me know which marching cubes implementation this project requires?

    Thanks!

    opened by DeltaMarine101 2
  • Attention and Fuse seem to not improve performance in Shapenet Dataset

    Hi,

    I've trained and evaluated the network on the ShapeNet data. Somehow the performance decreases in the retrieval and attention phases; is that supposed to happen? The performance increases again in phase 3 (the full forward phase). However, after training all phases, I compared the model's prediction based on attention with retrievals against just the U-Net prediction, and they seem to be roughly the same, i.e. the retrievals do not improve the performance. Did you evaluate that as well on the ShapeNet dataset, or is there something that might have gone wrong?

    Kind regards

    opened by robinp456 8
  • Problem setting up data

    Hi, I was hoping you could help me understand what data goes where in the /data folder. After running sdf-gen, there are 6 output folders: --df_lowres_dir, --df_highres_dir, --df_if_dir, --chunk_lowres_dir, --chunk_highres_dir, and --chunk_if_dir. I think I've gathered that I should use --chunk_lowres_dir in the /sdf_008 folder, and --chunk_highres_dir in the /sdf_064 folder. I have a script for sampling a point cloud from these using your function 'misc.sample_scene_point_clouds', and since the folder is called /pc_20K I assume I should sample 20k points?

    After all this, though, trying to train for retrieval using trainer/train_retrieval.py gives the error

    No such file or directory: 'data\sdf_064\3DFront\00004f89-9aa5-43c2-ae3c-129586be8aaa__MasterBedroom-5863__64__000_000_000.npz'

    Where do these .npz files come from, since sdf-gen returns .npy files? I did get .npz when sampling the point cloud for /pc_20K, but I'm really not sure where they come from for the SDFs.

    Any help in this matter would be greatly appreciated!

    opened by DeltaMarine101 6
  • Some questions about the checkpoints for retrieval and refinement for Matterport3D and test on the real data

    Thanks for your great work. I want to test on real indoor scenes and reconstruct 3D models of real data. Could you release the checkpoints for retrieval and refinement for Matterport3D? How can I use the ShapeNet checkpoints to reconstruct my own data? Thank you.

    opened by asedfrgt 1