Code accompanying "Dynamic Neural Relational Inference" from CVPR 2020

Overview

This codebase accompanies the paper "Dynamic Neural Relational Inference" from CVPR 2020.

This code was written using the following packages:

  • PyTorch 1.2.0
  • numpy 1.16.4
  • transforms3d 0.3.1 (for motion capture data processing)
  • pandas (for InD data processing)

To run this code, install the package in editable mode using the following command:

pip install -e ./

Scripts to train models can be found in the run_scripts directory.

Datasets:

Attribution: Some portions of this code are based on the code for the paper "Neural Relational Inference for Interacting Systems." This code can be found at https://github.com/ethanfetaya/NRI

If you use this code or this model in your work, please cite us:

@inproceedings{dNRI,
  title={Dynamic Neural Relational Inference},
  author={Graber, Colin and Schwing, Alexander},
  booktitle={The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
}
Comments
  • Basketball dataset preprocessing

    Hello Colin,

    I downloaded the Basketball dataset from AWS exchange as documented here: https://github.com/ezhan94/multiagent-programmatic-supervision. However, the dataset seems to only contain a train.npz and a test.npz file, which are not the files that you use in your experiments. Is there an extra preprocessing step involved that I missed?

    Thank you in advance.

    opened by mkofinas 5
  • InD dataset results

    Hello Colin,

    Thank you for releasing the source code of this excellent work. I have some trouble replicating your results for the InD traffic trajectory dataset. I am running the default script you provide to train and evaluate the model and using the results from eval_results_test_last_driver5burnin.txt. The error that I get from these files is ~0.23 at 40 steps (averaged across 5 seeds). However, in your figures, the reported number is ~0.025. Do the reported numbers come from a different experimental setting and if so, how can I access them?

    Thank you in advance.

    opened by mkofinas 4
  • negative loss

    Hi Colin,

    Thanks for sharing the code of the excellent work.

    When I run the code on motion-118, I notice that the output validation loss is negative (-0.56...). This is caused by the reconstruction loss (nll_gaussian in dnri.py): normalize_nll defaults to True, so the negative normalization constant is added to the reconstruction loss.

    A negative loss seems unreasonable to me. I thought it might be caused by a different version of PyTorch. Could you confirm whether you also got a negative loss in your results?

    Thanks.

    opened by TianyuanYu 3
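For context on why the loss can go negative: a Gaussian negative log-likelihood includes the constant term 0.5 * log(2*pi*variance), which is itself negative when the fixed variance is small. A minimal sketch, assuming the NRI-style convention of a fixed variance of 5e-5 (the helper below is illustrative, not the repository's exact code):

```python
import math

def nll_gaussian(pred, target, variance=5e-5, add_const=True):
    # Illustrative per-element Gaussian NLL in the style of NRI's
    # nll_gaussian; not the repository's exact implementation.
    nll = (pred - target) ** 2 / (2 * variance)
    if add_const:
        # 0.5 * log(2*pi*variance) is about -4.03 for variance=5e-5,
        # so an accurate prediction yields a negative total loss.
        nll += 0.5 * math.log(2 * math.pi * variance)
    return nll

# A near-perfect prediction gives a negative "loss":
loss = nll_gaussian(1.0001, 1.0)
```

Under this formula, a negative validation loss is mathematically expected whenever predictions are accurate relative to the fixed variance.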
  • Request Basketball Dataset

    Hi, Colin. This is Yandong. I am really impressed with your work. Would it be possible to get the same Basketball dataset from you? It's only for academic purposes. I found that the original dataset from STATS is no longer available.

    opened by xiaoyandong08 2
  • reverse_rnn in dnri_dynamicvars.py

    Hi Colin,

    Thanks for releasing the code of this wonderful work. I was going through dnri_dynamicvars.py and have trouble finding where the reverse LSTM in equation 14 of the paper is implemented. I noticed that DNRI_DynamicVars_Encoder.reverse_rnn is a recurrent layer, but it seems like it isn't used anywhere. Thanks in advance for your help!

    opened by tiffanyyk 1
  • Sparsity of interaction network inferred by dNRI

    Dear author,

    Thanks for your wonderful work.

    After running dNRI via 'run_motion_118.sh', I found that the predicted edges contain no zero elements. How can I obtain sparsity (i.e., no interaction) in the inferred interaction networks?

    Also, what do '2 edge types' and '4 edge types' mean for the CMU motion data?

    opened by xiaoyandong08 1
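For readers with the same question: in NRI-style models, one of the discrete edge types is conventionally treated as "no interaction", and sparsity is read off by hard-assigning each edge to its most probable type. A hypothetical post-processing sketch (the function name, argument shapes, and the choice of type 0 as the no-edge type are assumptions, not this repository's API):

```python
import numpy as np

def sparsify_edges(edge_probs, no_edge_type=0):
    # Hard-assign each edge to its most probable type and mask out the
    # conventional "no interaction" type. edge_probs is assumed to have
    # shape (num_edges, num_edge_types).
    hard_types = edge_probs.argmax(axis=-1)
    interacting = hard_types != no_edge_type
    return hard_types, interacting

probs = np.array([[0.9, 0.1],   # most likely "no edge"
                  [0.2, 0.8]])  # most likely an interaction
types, mask = sparsify_edges(probs)
```

"2 edge types" vs. "4 edge types" then simply controls how many discrete relation categories the encoder can choose among per edge.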
  • symmetry in edges and their order

    Hi,

    I have a few trivial questions:

    • are the edges in the graph directed, i.e. could the edge type from node $i$ to node $j$ differ from the edge type from node $j$ to node $i$? If so, how can we impose symmetry on the connectivity matrix (for example, the same edge type from node $i$ to node $j$ and vice versa)?
    • how are the edges ordered? To be precise, after this line the output is a 2D tensor whose rows are the one-hot encodings of edge types, with $(N^2 - N)$ rows, since there is no edge from a node onto itself (please correct me if I'm wrong about any of this). But how can one tell whether, for example, the $n$-th row represents the connection from node 1 to node 0 or from node 0 to node 1? In other words, if one were to create an adjacency matrix whose elements represent the edge type, e.g.

    $$\begin{bmatrix} 0 & 0 & 1 \\ 2 & 0 & 0 \\ 3 & 1 & 0 \end{bmatrix},$$

    where $\{0, 1, 2, 3\}$ indicate the edge types, how could that be done using the edges from self.single_step_forward?

    Thank you very much in advance.

    opened by pi-a 0
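On the ordering question: NRI-style code typically enumerates edges in row-major order over all ordered (sender, receiver) pairs while skipping the diagonal, so edge $k$ corresponds to the $k$-th off-diagonal entry of the $N \times N$ matrix read row by row. Assuming that convention holds here too (worth verifying against the encoder's edge construction), a sketch that rebuilds the adjacency matrix:

```python
import numpy as np

def edges_to_adjacency(edge_types, num_nodes):
    # Assumes row-major ordering over ordered (sender, receiver) pairs
    # with the diagonal skipped, as in NRI-style encoders.
    adj = np.zeros((num_nodes, num_nodes), dtype=int)
    mask = ~np.eye(num_nodes, dtype=bool)  # off-diagonal positions
    adj[mask] = edge_types                 # filled in row-major order
    return adj

# Six directed edges among 3 nodes, in row-major off-diagonal order:
adj = edges_to_adjacency([1, 2, 3, 4, 5, 6], 3)
```

Under this convention edges are directed; symmetry would have to be imposed explicitly, e.g. by tying or averaging the predictions for the $(i, j)$ and $(j, i)$ entries.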
  • Edge Formats

    Hello,

    I've noticed that the edges generated by the NRI paper are in a symmetric matrix format, whereas the edges expected by dNRI are in a vector format. It also seems that, for n particles, the vector has length n * (n-1), which makes sense since each particle's relation to itself is a given. That being the case, when I flatten the matrix (excluding the diagonal values) and pass that in as the vector, the edge predictions are always about a 50-50 guess. Any help would be greatly appreciated.

    Thanks.

    opened by yishaiSilver 5
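For the matrix-to-vector direction, the flattening has to follow the same row-major, diagonal-skipping order that the model expects. A sketch under that assumed convention:

```python
import numpy as np

def adjacency_to_edges(adj):
    # Flatten an adjacency matrix into the length n*(n-1) edge vector,
    # dropping the diagonal, in row-major order (assumed convention).
    n = adj.shape[0]
    mask = ~np.eye(n, dtype=bool)
    return adj[mask]

adj = np.array([[0, 1, 2],
                [3, 0, 4],
                [5, 6, 0]])
edges = adjacency_to_edges(adj)
```

If predictions still look like chance after flattening this way, the other common convention worth checking is the transposed ordering, i.e. grouping edges by receiver instead of by sender.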
Owner
Colin Graber