A Differentiable Recurrent Surface for Asynchronous Event-Based Data

Overview

Code for the ECCV2020 paper "A Differentiable Recurrent Surface for Asynchronous Event-Based Data".
Authors: Marco Cannici, Marco Ciccone, Andrea Romanoni, Matteo Matteucci

Citing

If you use Matrix-LSTM for research, please cite our accompanying ECCV2020 paper:

@InProceedings{Cannici_2020_ECCV,
    author = {Cannici, Marco and Ciccone, Marco and Romanoni, Andrea and Matteucci, Matteo},
    title = {A Differentiable Recurrent Surface for Asynchronous Event-Based Data},
    booktitle = {The European Conference on Computer Vision (ECCV)},
    month = {August},
    year = {2020}
}

Project Structure

The code is organized in two folders:

  • classification/ containing PyTorch code for N-Cars and N-Caltech101 experiments
  • opticalflow/ containing TensorFlow code for MVSEC experiments (code based on EV-FlowNet repository)

Note: the naming convention used within the code does not exactly match the one used in the paper. In particular, the groupByPixel operation is named group_rf_bounded in the code (i.e., group by receptive field, since it also supports receptive fields larger than 1x1), while the groupByTime operation is named intervals_to_batch. A small sketch illustrating the two operations follows.
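
The snippet below is a minimal, purely illustrative re-implementation of the two groupings in plain Python (the repository implements them as optimized CUDA kernels; the toy (x, y, t, p) event layout and the fixed-duration time bins are simplifying assumptions for the example):

import numpy as np

# Toy events, one row per event: (x, y, t, p).
events = np.array([(0, 0, 10, 1),
                   (1, 0, 20, 0),
                   (0, 0, 30, 1),
                   (1, 0, 40, 1)], dtype=np.int64)

# groupByPixel / group_rf_bounded with a 1x1 receptive field:
# collect, for every pixel, the time-ordered list of its events.
by_pixel = {}
for x, y, t, p in events:
    by_pixel.setdefault((x, y), []).append((t, p))
# {(0, 0): [(10, 1), (30, 1)], (1, 0): [(20, 0), (40, 1)]}

# groupByTime / intervals_to_batch, sketched with fixed-duration bins:
# split the event stream into temporal intervals that can be batched.
interval = 25
by_time = {}
for x, y, t, p in events:
    by_time.setdefault(int(t) // interval, []).append((x, y, t, p))
# {0: events with t < 25, 1: events with t in [25, 50)}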

Requirements

We provide a Dockerfile for each of the two codebases so that you can replicate the environments we used to run the paper experiments. To build and run the containers, the following packages are required:

  • Docker CE - version 18.09.0 (build 4d60db4)
  • NVIDIA Docker - version 2.0

If you have installed a more recent version of Docker, you may need to modify the .sh files, substituting:

  • nvidia-docker run with docker run
  • --runtime=nvidia with --gpus=all

You can verify which command works for you by running:

  • (scripts default) nvidia-docker run -ti --rm --runtime=nvidia -t nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04 nvidia-smi
  • docker run -ti --rm --gpus=all -t nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04 nvidia-smi

In both cases you should see the output of nvidia-smi.

Run Experiments

Details on how to run experiments are provided in separate README files contained in the classification/ and opticalflow/ sub-folders.

Note: using Docker is not mandatory, but it automates the process of installing dependencies and building the CUDA kernels, all within an isolated environment that won't modify any of your existing installations. Please read the Dockerfile and requirements.yml files contained inside the <classification or opticalflow>/docker/ subfolders if you prefer a standard conda/pip installation (you just need to run all the RUN commands manually).

Comments
  • TypeError: unsupported operand type(s) for -: 'tuple' and 'int'

    I am getting an error when launching this small script:

    import torch
    from layers.MatrixConvLSTM import MatrixConvLSTM
    from layers.MatrixLSTM import MatrixLSTM

    def test_dummy_events(batch_size=4, height=32, width=32, tmax=100000, num_event_max=512):

        # Dummy batch: each event is an (x, y, t, p) tuple
        events = torch.zeros((batch_size, num_event_max, 4), dtype=torch.int32)
        lengths = torch.randint(32, num_event_max, (batch_size,))
        events[..., 0] = torch.randint(0, width, (batch_size, num_event_max))   # x coordinate
        events[..., 1] = torch.randint(0, height, (batch_size, num_event_max))  # y coordinate
        events[..., 2] = torch.randint(0, tmax, (batch_size, num_event_max))    # timestamp
        events[..., 3] = torch.randint(0, 2, (batch_size, num_event_max))       # polarity

        embedding_size = 0
        use_embedding = False
        matrix_lstm_type = "ConvLSTM"
        matrix_input_size = embedding_size + 1 if use_embedding else 1
        lstm_num_layers = 1
        input_shape = (None, None)
        matrix_hidden_size = 8
        matrix_region_stride = (1, 1)
        matrix_region_shape = (1, 1)  # receptive field
        matrix_add_coords_feature = False
        matrix_add_time_feature_mode = "delay_norm"
        matrix_normalize_relative = True
        matrix_keep_most_recent = False
        matrix_frame_intervals = 1
        matrix_frame_intervals_mode = None

        MatrixLSTMClass = MatrixConvLSTM if matrix_lstm_type == "ConvLSTM" else MatrixLSTM
        layer = MatrixLSTMClass(input_shape,
                                matrix_region_shape,
                                matrix_region_stride, matrix_input_size,
                                matrix_hidden_size, lstm_num_layers,
                                bias=True, lstm_type=matrix_lstm_type,
                                add_coords_feature=matrix_add_coords_feature,
                                add_time_feature_mode=matrix_add_time_feature_mode,
                                normalize_relative=matrix_normalize_relative,
                                keep_most_recent=matrix_keep_most_recent,
                                frame_intervals=matrix_frame_intervals,
                                frame_intervals_mode=matrix_frame_intervals_mode)

        output_dense = layer(events, lengths)
        print(output_dense.shape)

    if __name__ == '__main__':
        import fire
        fire.Fire(test_dummy_events)

    Error:

    Traceback (most recent call last):
      File "scripts/test_matrixlstm.py", line 73, in <module>
        fire.Fire(test_dummy_events)
      File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 138, in Fire
        component_trace = _Fire(component, args, parsed_flag_args, context, name)
      File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 471, in _Fire
        target=component.__name__)
      File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 675, in _CallAndUpdateTrace
        component = fn(*varargs, **kwargs)
      File "scripts/test_matrixlstm.py", line 60, in test_dummy_events
        frame_intervals_mode=matrix_frame_intervals_mode)
      File "/home/etienneperot/workspace/matrixlstm/classification/layers/MatrixConvLSTM.py", line 54, in __init__
        num_layers=self.num_layers, batch_first=False)
      File "/home/etienneperot/workspace/matrixlstm/classification/libs/lstms/convlstm.py", line 409, in __init__
        forget_bias=self.forget_bias[i]))
      File "/home/etienneperot/workspace/matrixlstm/classification/libs/lstms/convlstm.py", line 117, in __init__
        self.Wi_padding = (self.Wi_kernel - 1) // 2, (self.Wi_kernel - 1) // 2
    TypeError: unsupported operand type(s) for -: 'tuple' and 'int'

    Probably in "libs/lstms/convlstm.py" (the file in the traceback) you want to change lines 116 & 118 to:

    116 self.Wi_padding = (self.Wi_kernel[0] - 1) // 2, (self.Wi_kernel[1] - 1) // 2
    118 self.Wh_padding = (self.Wh_kernel[0] - 1) // 2, (self.Wh_kernel[1] - 1) // 2
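
    For reference, a slightly more defensive variant of the same fix normalizes the kernel size once, the way torch.nn.Conv2d accepts both an int and a tuple. The helpers below are a hypothetical sketch, not functions from this repository:

    def to_pair(kernel):
        # Accept both an int and a (kh, kw) tuple, mirroring torch.nn.Conv2d.
        return (kernel, kernel) if isinstance(kernel, int) else tuple(kernel)

    def same_padding(kernel):
        # "Same" padding for odd-sized kernels, computed per spatial dimension.
        kh, kw = to_pair(kernel)
        return (kh - 1) // 2, (kw - 1) // 2

    assert same_padding(3) == (1, 1)
    assert same_padding((3, 5)) == (1, 2)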
    
    opened by etienne87
  • How to set up multiple GPU training without using docker

    @marcocannici Thanks for your outstanding work.

    I used conda to create a new environment on my computer. The code runs normally, but I found that it is slow, so I set multiple GPUs (0,1) in the parameters; in the end, however, only the last GPU is used.

    I run the following command under the classification folder:

    python scripts/train_matrixlstm_resnet_decay.py 0,1 -c configs/xxx.yaml

    Maybe there is a problem with how I pass the parameters? Or is it not possible to train on multiple GPUs without Docker?

    Due to my limited knowledge, I take the liberty of asking you how to train with multiple GPUs.

    Looking forward to your reply.

    opened by midofalasol
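
    For reference, multi-GPU training in PyTorch is commonly enabled by wrapping the model in torch.nn.DataParallel. The sketch below is generic PyTorch usage under that assumption, not necessarily how this repository's training scripts consume their GPU argument:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in model; the repo's MatrixLSTM + backbone would go here.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    if torch.cuda.device_count() > 1:
        # Replicates the module on GPUs 0 and 1 and splits each batch across them.
        model = nn.DataParallel(model, device_ids=[0, 1])
    model = model.to("cuda" if torch.cuda.is_available() else "cpu")
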
  • Pretrained model for classification and optical flow

    @marcocannici

    Thanks for open-sourcing your work. Could you kindly release the pretrained models for classification/optical flow so that we can at least see the outputs?

    opened by chowkamlee81
  • about HOTs

    Hi, thanks for your great work! In the original paper (HOTS), the time surface is computed over a fixed (2R+1) x (2R+1) neighborhood. What is the neighborhood in your work? Is it the whole image?

    opened by Jee-King
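
    For context, a HOTS-style time surface assigns each pixel in a (2R+1) x (2R+1) neighborhood an exponentially decayed value based on the timestamp of its most recent event. A minimal NumPy sketch of that local definition (an illustration of HOTS, not code from this repository):

    import numpy as np

    def local_time_surface(last_ts, x, y, t, R=2, tau=50e3):
        # last_ts[y, x] holds the timestamp of the most recent event at each
        # pixel (-inf where no event has occurred yet, which decays to 0).
        patch = last_ts[y - R:y + R + 1, x - R:x + R + 1]
        # Exponential decay: pixels with recent events are close to 1.
        return np.exp(-(t - patch) / tau)

    last_ts = np.full((32, 32), -np.inf)
    last_ts[10, 10] = 90e3
    last_ts[11, 12] = 70e3
    print(local_time_surface(last_ts, x=11, y=11, t=100e3).shape)  # (5, 5)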