Sensor-Guided Optical Flow

Demo code for "Sensor-Guided Optical Flow", ICCV 2021

This code is provided to replicate results with flow hints obtained from LiDAR data.

At the moment, we do not plan to release training code.

[Project page] - [Paper] - [Supplementary]

Reference

If you find this code useful, please cite our work:

@inproceedings{Poggi_ICCV_2021,
  title     = {Sensor-Guided Optical Flow},
  author    = {Poggi, Matteo and
               Aleotti, Filippo and
               Mattoccia, Stefano},
  booktitle = {IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2021}
}

Contents

  1. Introduction
  2. Installation
  3. Data
  4. Weights
  5. Usage
  6. Contacts
  7. Acknowledgments

Introduction

This paper proposes a framework to guide an optical flow network with external cues to achieve superior accuracy on both known and unseen domains. Given the availability of sparse yet accurate optical flow hints from an external source, these are injected to modulate the correlation scores computed by a state-of-the-art optical flow network and guide it towards more accurate predictions. Although no real sensor can provide sparse flow hints directly, we show how these can be obtained by combining depth measurements from active sensors with geometry and hand-crafted optical flow algorithms, leading to hints accurate enough for our purpose. Experimental results with a state-of-the-art flow network on standard benchmarks support the effectiveness of our framework, both in simulated and real conditions.
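
To give a flavour of the idea, the sketch below shows one plausible way sparse hints could modulate a correlation volume: at pixels where a hint is available, scores are multiplied by a Gaussian peaked on the hinted displacement. This is an illustration only, not the module released in this repository; the tensor layout, the (k, c) parameters and the function name are assumptions.

import torch

def modulate_correlation(corr, hints, valid, k=10.0, c=1.0):
    # Illustrative sketch: boost correlation scores around the hinted displacement.
    #   corr  : (B, H, W, D, D) correlation scores over a (2R+1)x(2R+1) search window
    #   hints : (B, 2, H, W)    sparse flow hints (u, v); arbitrary where no hint exists
    #   valid : (B, 1, H, W)    1 where a hint is available, 0 elsewhere
    #   k, c  : peak height and width of the modulating Gaussian (assumed values)
    B, H, W, D, _ = corr.shape
    R = (D - 1) // 2
    d = torch.arange(-R, R + 1, device=corr.device, dtype=corr.dtype)
    du, dv = torch.meshgrid(d, d)                      # (D, D) displacement grid
    u = hints[:, 0, :, :, None, None]                  # (B, H, W, 1, 1)
    v = hints[:, 1, :, :, None, None]
    # Gaussian centred on the hinted displacement
    gauss = 1.0 + k * torch.exp(-((du - u) ** 2 + (dv - v) ** 2) / (2.0 * c ** 2))
    m = valid[:, 0, :, :, None, None]                  # broadcast the validity mask
    return corr * (m * gauss + (1.0 - m))              # leave unhinted pixels unchanged

In the released code this operation is handled by the compiled guided_flow module (see Installation); the Python sketch above is only meant to convey the idea.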

Installation

Install the project requirements in a new Python 3 environment:

virtualenv -p python3 guided_flow_env
source guided_flow_env/bin/activate
pip install -r requirements.txt

Compile the guided_flow module, written in C (required for guided flow modulation):

cd external/guided_flow
bash compile.sh
cd ../..
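
As an optional sanity check (not part of the repository's instructions), you can verify that the environment sees PyTorch and a GPU before running the demo:

import torch

# Results in the paper were obtained with PyTorch 1.7.0 on a Titan Xp GPU.
print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())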

Data

Download the KITTI 2015 optical flow training set and the precomputed flow hints. Place them under the data folder as follows:

data
├──training
    ├──image_2
        ├── 000000_10.png
        ├── 000000_11.png
        ├── 000001_10.png
        ├── 000001_11.png
        ...
    ├──flow_occ
        ├── 000000_10.png
        ├── 000000_11.png
        ├── 000001_10.png
        ├── 000001_11.png
        ...
    ├──hints
        ├── 000002_10.png
        ├── 000002_11.png
        ├── 000003_10.png
        ├── 000003_11.png
        ...
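
The flow_occ ground truth uses the standard 16-bit KITTI flow PNG encoding, and presumably the hints do as well. The sketch below shows how such a file could be read; it is an illustrative loader, not the one used by the demo, and the KITTI-style encoding of the hints is an assumption.

import cv2
import numpy as np

def read_kitti_flow(path):
    # 16-bit PNG storing (u, v, valid) in RGB order, with u and v encoded
    # as (value - 2**15) / 64, as in the KITTI devkit. OpenCV returns BGR,
    # hence the channel reordering below.
    img = cv2.imread(path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
    valid = img[:, :, 0].astype(np.float32)          # B channel = validity mask
    flow = img[:, :, 2:0:-1].astype(np.float32)      # R, G channels = (u, v)
    flow = (flow - 2 ** 15) / 64.0
    return flow, valid

# e.g. ground truth and hints for the same pair (paths from the tree above)
gt_flow, gt_valid = read_kitti_flow("data/training/flow_occ/000002_10.png")
hints, hints_valid = read_kitti_flow("data/training/hints/000002_10.png")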

Weights

We provide the QRAFT models tested in Tab. 4 of the paper. Download the weights and unzip them under the weights folder as follows:

weights
├──raw
    ├── C.pth
    ├── CT.pth
    ...
├──guided
    ├── C.pth
    ├── CT.pth
    ...    
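
The .pth files are regular PyTorch checkpoints; if you want a quick look at what one contains (purely illustrative, demo_kitti142.py takes care of loading), something like the following works:

import torch

# Inspect a checkpoint on CPU; whether it is a bare state_dict or a wrapped
# dictionary depends on how the models were exported, hence the type check.
state = torch.load("weights/raw/C.pth", map_location="cpu")
print(type(state))
if isinstance(state, dict):
    print(list(state.keys())[:5])   # first few entries / parameter names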

Usage

You are now ready to run the demo_kitti142.py script:

python demo_kitti142.py --model CTK --guided --out_dir results_CTK_guided/

Use --model to specify which weights to load, choosing among C, CT, CTS and CTK. By default, raw models are loaded; specify --guided to load guided weights and enable sensor-guided optical flow.

Note: occasionally, the demo may run out of memory on ~12GB GPUs. Intermediate results are saved in --out_dir, so you can run the script again and it will skip all images whose intermediate results are already stored there, loading them from the folder. Remember to select a brand new --out_dir when starting an experiment from scratch.
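
The resume behaviour amounts to a simple existence check on the intermediate files. Here is a hypothetical sketch of the pattern; the file names and formats are assumptions, and the actual ones used by demo_kitti142.py may differ.

import os
import numpy as np

def compute_or_load(out_dir, name, compute_fn):
    # Reuse a cached intermediate result if it is already in out_dir,
    # otherwise compute and store it for the next run.
    path = os.path.join(out_dir, name + ".npy")
    if os.path.exists(path):
        return np.load(path)
    result = compute_fn()
    os.makedirs(out_dir, exist_ok=True)
    np.save(path, result)
    return result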

Once it completes, the command above should print:

Validation KITTI: 2.08, 5.97

Numbers in Tab. 4 are obtained by running this code on a Titan Xp GPU, with PyTorch 1.7.0. We observed slight fluctuations in the numbers when running on different hardware (e.g., 3090 GPUs), mostly on raw models.

Contacts

m [dot] poggi [at] unibo [dot] it

Acknowledgments

Thanks to Zachary Teed for sharing the RAFT code, used as the codebase for our project.
