Self-Supervised Learning of Event-based Optical Flow with Spiking Neural Networks

Overview

Work accepted at NeurIPS'21 [paper, video].

If you use this code in an academic context, please cite our work:

@article{hagenaarsparedesvalles2021ssl,
  title={Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks},
  author={Hagenaars, Jesse and Paredes-Vall\'es, Federico and de Croon, Guido},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}

This code allows for the reproduction of the experiments leading to the results in Section 4.1.

Usage

This project uses Python >= 3.7.3 and we strongly recommend the use of virtual environments. If you don't have an environment manager yet, we recommend pyenv. It can be installed via:

curl https://pyenv.run | bash

Make sure your ~/.bashrc file contains the following:

export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

After that, restart your terminal and run:

pyenv update

To set up your environment with pyenv, first install the required Python distribution and make sure the installation is successful (i.e., no errors or warnings):

pyenv install -v 3.7.3

Once this is done, set up the environment and install the required libraries:

pyenv virtualenv 3.7.3 event_flow
pyenv activate event_flow

pip install --upgrade pip==20.0.2

cd event_flow/
pip install -r requirements.txt
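
To quickly sanity-check the installation (an optional step; this assumes PyTorch is among the pinned requirements), you can run:

python -c "import torch; print(torch.__version__)"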

Download datasets

In this work, we use multiple datasets; for evaluation, these include MVSEC, ECD, and HQF (see the Inference section below).

These datasets can be downloaded in the expected HDF5 data format from here, and are expected at event_flow/datasets/data/.

Download size: 19.4 GB. Uncompressed size: 94 GB.

Details about the structure of these files can be found in event_flow/datasets/tools/.
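
If you want a quick look at the contents of one of the downloaded files, a small h5py snippet such as the one below prints its group/dataset layout. The file path is only a placeholder; the exact group and dataset names are documented in event_flow/datasets/tools/.

import h5py

# Print every group and dataset in the file, with shapes and dtypes.
def print_item(name, obj):
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    else:
        print(f"{name}/")

# Placeholder path: point this at any file under event_flow/datasets/data/.
with h5py.File("datasets/data/<sequence>.h5", "r") as f:
    f.visititems(print_item)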

Download models

The pretrained models can be downloaded from here, and are expected at event_flow/mlruns/.

In this project we use MLflow to keep track of the experiments. To visualize the models that are available, alongside other useful details and evaluation metrics, run the following from the home directory of the project:

mlflow ui

and access http://127.0.0.1:5000 from your browser of choice.
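
If you prefer the terminal over the web UI, a short snippet like the one below lists the available runs and their names. It assumes you run it from the project root (so that ./mlruns is picked up) and that the runs live under the default experiment; the exact columns may vary with your MLflow version.

import mlflow

# Search the runs of the default experiment (ID "0"); adjust the IDs if your
# runs live under a different experiment.
runs = mlflow.search_runs(experiment_ids=["0"])
cols = [c for c in ["run_id", "tags.mlflow.runName"] if c in runs.columns]
print(runs[cols])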

Inference

To estimate optical flow from event sequences from the MVSEC dataset and compute the average endpoint error and percentage of outliers, run:

python eval_flow.py <model_name> --config configs/eval_MVSEC.yml

# for example:
python eval_flow.py LIFFireNet --config configs/eval_MVSEC.yml

where <model_name> is the name of the MLflow run to be evaluated. Note that, if a run does not have a name (as is the case for your own trained models), you can evaluate it through its run ID (also visible through MLflow).
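
For reference, the two metrics reported on MVSEC are the average endpoint error (AEE) and the percentage of outliers. A minimal NumPy sketch of these metrics is shown below; it is illustrative rather than the repository's implementation, and it assumes the common definition of an outlier as an endpoint error above 3 pixels (the script's exact masking and thresholds may differ).

import numpy as np

def aee_and_outliers(flow_pred, flow_gt, valid_mask, outlier_thresh=3.0):
    """flow_pred, flow_gt: (H, W, 2) flow maps; valid_mask: (H, W) boolean."""
    # Per-pixel endpoint error, restricted to pixels with valid ground truth.
    epe = np.linalg.norm(flow_pred - flow_gt, axis=-1)[valid_mask]
    aee = epe.mean()
    outlier_pct = 100.0 * (epe > outlier_thresh).mean()
    return aee, outlier_pct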

To estimate optical flow from event sequences from the ECD or HQF datasets, run:

python eval_flow.py <model_name> --config configs/eval_ECD.yml
python eval_flow.py <model_name> --config configs/eval_HQF.yml

# for example:
python eval_flow.py LIFFireNet --config configs/eval_ECD.yml

Note that the ECD and HQF datasets lack ground truth optical flow data. Therefore, we evaluate the quality of the estimated event-based optical flow via the self-supervised FWL (Stoffregen and Scheerlinck, ECCV'20) and RSAT (ours, Appendix C) metrics.
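
For intuition, FWL compares the contrast (variance) of the image of warped events (IWE) obtained with the estimated flow against that of the unwarped event image; values above 1 mean the flow sharpens the event image. The sketch below is a rough, illustrative nearest-pixel version (per-pixel flow in pixels per second), not the repository's implementation, and RSAT is not shown.

import numpy as np

def fwl(xs, ys, ts, flow, resolution):
    """xs, ys: integer pixel coordinates; ts: timestamps (s); flow: (H, W, 2) in px/s."""
    H, W = resolution
    t_ref = ts[-1]  # warp all events to the timestamp of the last event
    u, v = flow[ys, xs, 0], flow[ys, xs, 1]
    xw = np.clip(np.round(xs + (t_ref - ts) * u).astype(int), 0, W - 1)
    yw = np.clip(np.round(ys + (t_ref - ts) * v).astype(int), 0, H - 1)
    iwe = np.zeros((H, W))  # image of warped events
    np.add.at(iwe, (yw, xw), 1.0)
    ie = np.zeros((H, W))   # image of (unwarped) events
    np.add.at(ie, (ys, xs), 1.0)
    return iwe.var() / ie.var()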

Results from these evaluations are stored as MLflow artifacts.

In configs/, you can find the configuration files associated with these scripts and vary the inference settings (e.g., the number of input events, or whether visualization is activated).

Training

Run:

python train_flow.py --config configs/train_ANN.yml
python train_flow.py --config configs/train_SNN.yml

to train a traditional artificial neural network (ANN, default: FireNet) or a spiking neural network (SNN, default: LIF-FireNet), respectively. In configs/, you can find the aforementioned configuration files and vary the training settings (e.g., the model, the number of input events, or whether visualization is activated). For the other models available, see models/model.py.

Note that we used a batch size of 8 in our experiments. Depending on your computational resources, you may need to lower this number.

During and after the training, information about your run can be visualized through MLflow.

Uninstalling pyenv

Once you finish using our code, you can uninstall pyenv from your system by:

  1. Removing the pyenv configuration lines from your ~/.bashrc.
  2. Removing its root directory. This will delete all Python versions that were installed under the $HOME/.pyenv/versions/ directory:

rm -rf $HOME/.pyenv/
Comments
  • Confused about the implementation of the gradient of arctan(x)

    https://github.com/tudelft/event_flow/blob/7d90c9dcbd989ded49c34d32b50796ecdd5fbb88/models/spiking_util.py#L42 The derivative of arctan(x) is 1/(1 + x**2), but in your code it is 1 / (1 + width * x.abs()) ** 2. Is there anything wrong? I appreciate your quick response. (See the note after the comments below.)

    opened by lbh666 1
  • "SpikingRecEVFlowNet" encoder and decoder framework for image reconstruction using event camera

    @Huizerd ,

    I was using your module "SpikingRecEVFlowNet" as a a network to reconstruct images from event camera rather than optical flow and according modified the input and output channels compatible to image reconstruction. After making the environmental setup. I landed with bad image reconstruction results Custom_SNN

    Left is the target image and right is the predicted image. Iam using supervised technique rather than self-supervised framework using temporal consistency and LIPS loss function.

    Can you please suggest in your module "SpikingRecEVFlowNet" encoder and decoder framework what changes needs to be incorporated apart from input and output channels. Requesting you kindly to help on this regard

    opened by ChidanandKumarKS 1
  • Confused about the evaluation result of my trained model

    I trained a spiking neural network with the command python train_flow.py --config configs/train_SNN.yml. Then, to test my trained model, I ran the command python eval_flow.py <model_name> --config configs/eval_MVSEC.yml, where for <model_name> I used my own trained model's run ID. However, the estimation results of my trained model are not as good as those of the pre-trained model. How do I reproduce the results of the pre-trained model? (my trained model: image; pretrained model: image)

    I appreciate your quick response. ^.^

    opened by dedewoov 0
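
A note on the surrogate gradient raised in the first comment above: 1 / (1 + width * x.abs()) ** 2 is not the derivative of arctan(x), but it is the exact derivative of the fast-sigmoid-style surrogate x / (1 + width * |x|) (cf. the SuperSpike surrogate of Zenke and Ganguli). For x > 0, d/dx [x / (1 + w x)] = ((1 + w x) - w x) / (1 + w x)^2 = 1 / (1 + w x)^2, and the case x < 0 is symmetric.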