Personalized Trajectory Prediction via Distribution Discrimination (DisDis)

The official PyTorch code implementation of "Personalized Trajectory Prediction via Distribution Discrimination" in ICCV 2021, arXiv.

Introduction

The motivation of DisDis is to learn a latent distribution that represents different motion patterns, since each person's motion pattern is personalized by his/her habits. We learn the distribution discriminator in a self-supervised manner, encouraging the latent variable distributions of the same motion pattern to be similar while pushing those of different motion patterns apart. DisDis is a plug-and-play module that can be integrated with existing multi-modal stochastic predictive models to enhance the discriminative ability of the latent distribution.

Besides, we propose a new evaluation metric for stochastic trajectory prediction methods. We calculate the probability cumulative minimum distance (PCMD) curve to comprehensively and stably evaluate the learned model and latent distribution: it cumulatively selects the minimum distance between sampled trajectories and ground-truth trajectories, from high probability to low probability. PCMD considers the predictions together with their corresponding probabilities and evaluates the prediction model under the whole latent distribution.

Figure 1. Training process of the DisDis method. DisDis regards the latent distribution as the motion pattern and optimizes trajectories with the same motion pattern to be close while pushing apart those with different patterns, where the same latent distributions are shown in the same color. For a given history trajectory, DisDis predicts a latent distribution as the motion pattern and takes this latent distribution as the discrimination to jointly optimize the embeddings of trajectories and latent distributions.
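
To make the training idea concrete, below is a minimal, illustrative sketch of a distribution-discrimination loss; it is not the exact objective used in this repository. It assumes each history trajectory is encoded as a diagonal-Gaussian latent (mu, logvar) and that pseudo motion-pattern labels (pattern_ids) are available to form positive/negative pairs; the symmetric-KL distance and the InfoNCE-style formulation are hypothetical choices for illustration.

```python
import torch
import torch.nn.functional as F


def symmetric_kl(mu1, logvar1, mu2, logvar2):
    """Symmetric KL divergence between diagonal Gaussians (illustrative distance)."""
    var1, var2 = logvar1.exp(), logvar2.exp()
    kl_12 = 0.5 * ((var1 + (mu1 - mu2) ** 2) / var2 + logvar2 - logvar1 - 1).sum(-1)
    kl_21 = 0.5 * ((var2 + (mu2 - mu1) ** 2) / var1 + logvar1 - logvar2 - 1).sum(-1)
    return kl_12 + kl_21


def distribution_discrimination_loss(mu, logvar, pattern_ids, temperature=0.1):
    """Contrastive loss over latent distributions: latents of the same motion
    pattern are pulled together, those of different patterns are pushed apart.

    mu, logvar: (N, d) Gaussian parameters for N trajectories in a batch.
    pattern_ids: (N,) hypothetical pseudo-labels indicating the motion pattern.
    """
    n = mu.size(0)
    # Pairwise distances between all latent distributions in the batch.
    dist = symmetric_kl(mu.unsqueeze(1), logvar.unsqueeze(1),
                        mu.unsqueeze(0), logvar.unsqueeze(0))        # (N, N)
    eye = torch.eye(n, dtype=torch.bool, device=mu.device)
    logits = (-dist / temperature).masked_fill(eye, float('-inf'))   # exclude self-pairs
    positives = (pattern_ids.unsqueeze(0) == pattern_ids.unsqueeze(1)) & ~eye
    log_prob = F.log_softmax(logits, dim=1)
    # Average log-likelihood of the positives for each anchor that has any positive.
    pos_counts = positives.sum(1).clamp(min=1)
    per_anchor = -(log_prob.masked_fill(~positives, 0.0).sum(1) / pos_counts)
    return per_anchor[positives.any(1)].mean()
```

In training, such a term would be optimized jointly with the usual trajectory prediction losses, so that latent distributions of the same motion pattern stay close while those of different patterns are pushed apart.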

Requirements

  • Python 3.6+
  • PyTorch 1.4

To install all the dependencies, run the command below.

pip install -r requirements.txt

Our code is based on Trajectron++. Please cite it as well if you find it useful.

Dataset

The preprocessed data splits for the ETH and UCY datasets are in experiments/pedestrians/raw/. Before training and evaluation, execute the following to process the data. This will generate .pkl files in experiments/processed.

cd experiments/pedestrians
python process_data.py

The train/validation/test splits are the same as those found in Social GAN.

Model training

You can train the model on the zara1 dataset as follows:

python train.py --eval_every 10 --vis_every 1 --train_data_dict zara1_train.pkl --eval_data_dict zara1_val.pkl --offline_scene_graph yes --preprocess_workers 2 --log_dir ../experiments/pedestrians/models --log_tag _zara1_disdis --train_epochs 100 --augment --conf ../experiments/pedestrians/models/config/config_zara1.json --device cuda:0

The pre-trained models can be found in experiments/pedestrians/models/, and the model configurations are in experiments/pedestrians/models/config/.

Model evaluation

To reproduce the PCMD results in Table 1, run:

python evaluate_pcmd.py --node_type PEDESTRIAN --data ../processed/zara1_test.pkl --model models/zara1_pretrain --checkpoint 100

To evaluate with the most-likely strategy, run:

python evaluate_mostlikely_z.py --node_type PEDESTRIAN --data ../processed/zara1_test.pkl --model models/zara1_pretrain --checkpoint 100

You are welcome to use our PCMD evaluation metric in your experiments; it is a more comprehensive and stable evaluation metric for stochastic trajectory prediction methods.
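
As a reference for the metric itself, here is a minimal NumPy sketch of how a PCMD curve could be computed for one test case from sampled trajectories and their predicted probabilities; the variable names and numbers are purely illustrative, and the actual evaluation is implemented in evaluate_pcmd.py.

```python
import numpy as np


def pcmd_curve(sample_errors, sample_probs):
    """Probability Cumulative Minimum Distance (PCMD) curve for one test case.

    sample_errors: (K,) ADE or FDE of each sampled trajectory w.r.t. the ground truth.
    sample_probs:  (K,) predicted probability of each sampled trajectory.
    Returns a (K,) array whose k-th entry is the minimum error among the k+1 most
    probable samples, i.e. the cumulative minimum from high to low probability.
    """
    order = np.argsort(-np.asarray(sample_probs))     # most probable samples first
    sorted_errors = np.asarray(sample_errors)[order]
    return np.minimum.accumulate(sorted_errors)


# Illustrative usage with made-up numbers; in practice the curves are averaged
# over the whole test set before reading off PCMD at the chosen sample counts.
errors = [0.9, 0.4, 1.3, 0.7]     # ADE of four sampled trajectories
probs = [0.4, 0.1, 0.3, 0.2]      # their predicted probabilities
print(pcmd_curve(errors, probs))  # -> [0.9 0.9 0.7 0.4]
```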

Citation

The BibTeX entry of our paper 'Personalized Trajectory Prediction via Distribution Discrimination' is provided below:

@inproceedings{Disdis,
  title={Personalized Trajectory Prediction via Distribution Discrimination},
  author={Chen, Guangyi and Li, Junlong and Zhou, Nuoxing and Ren, Liangliang and Lu, Jiwen},
  booktitle={ICCV},
  year={2021}
}
Comments
  • Issues in dataset preprocessing and ADE/FDE results

    Hi @CHENGY12

    Thank you for your great work. I was able to set up and run the code quite quickly!

    While reproducing results with your evaluation code using pre-processed dataset pickle file from Trajectron++, I noticed that the results were somewhat different.

    I tried to analyze the cause of this, and I found that data_utils.py in your code is based on the eccv2020 branch of Trajectron++. In that branch, there was a critical issue in data pre-processing: two future GT coordinates were additionally used when calculating the acceleration of the last observation point. Many issues have been raised in Trajectron++ regarding this problem (issues #26, #40, and #53).

    https://github.com/CHENGY12/DisDis/blob/55188a5c26230491bdeb3782a571ca45388861fa/trajectron/environment/data_utils.py#L27

    Fortunately, this issue was solved in the Trajectron++ code. Below are my PCMD results after fixing it and re-training the DisDis model:

    | DisDis   | ETH            | HOTEL          | ZARA1          | ZARA2          | UNIV           | AVG            |
    |----------|----------------|----------------|----------------|----------------|----------------|----------------|
    | PCMD_ADE | 1.02/0.85/0.61 | 0.42/0.25/0.15 | 0.43/0.31/0.19 | 0.34/0.25/0.15 | 0.54/0.40/0.26 | 0.55/0.41/0.27 |
    | PCMD_FDE | 2.13/1.72/1.16 | 0.80/0.46/0.23 | 0.96/0.64/0.35 | 0.77/0.55/0.29 | 1.20/0.86/0.51 | 1.17/0.85/0.51 |

    Note that DisDis still performs much better than the other comparable models (including the reproduced Trajectron++). Also, I did not edit the config file, so there may be room for better results by modifying the hyper-parameters.

    As your paper compares DisDis with other SOTA models that use the Social-GAN data loader (Social-GAN, STGAT, Social-STGCNN), I think the authors should fix this issue and update the numbers in the arXiv paper for a fair comparison.

    Thank you.

    opened by InhwanBae 4
  • Question about visualization in your paper?

    Hi,

    Thanks for your nice work. I want to ask how to plot the trajectories in the scenes as in Fig. 5 of your paper, and how to find which frame corresponds to the scene.

    I am very confused about the visualization in Fig. 5. Can you tell me how to draw this kind of figure? I hope you can help me with these questions.

    Best, Jincan

    opened by Xiejc97 1