[arXiv] What-If Motion Prediction for Autonomous Driving ❓🚗💨

Overview

WIMP - What If Motion Predictor

Reference PyTorch Implementation for What If Motion Prediction [PDF] [Dynamic Visualizations]

Setup

Requirements

The WIMP reference implementation and setup procedure have been tested on Ubuntu 16.04+ and have the following requirements (an example environment setup follows the list):

  1. python >= 3.7
  2. pytorch >= 1.5.0
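
For example, a fresh virtual environment meeting these requirements could be created as follows (a sketch only; choose the torch build matching your CUDA version):

    python3 -m venv wimp-env
    source wimp-env/bin/activate
    pip install "torch>=1.5.0"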

Installing Dependencies

  1. Install remaining required Python dependencies using pip.

    pip install -r requirements.txt
  2. Install the Argoverse API module into the local Python environment by following steps 1, 2, and 4 in the README.
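
In case the linked steps change, the usual pattern for the Argoverse API is an editable install from a local clone (a sketch only; the argoverse-api README remains authoritative, including its map data download step):

    git clone https://github.com/argoai/argoverse-api.git
    pip install -e ./argoverse-api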

Argoverse Data

In order to set up the Argoverse dataset for training and evaluation, follow the steps below:

  1. Download the Argoverse Motion Forecasting v1.1 dataset and extract the compressed data subsets so that the raw CSV files are stored in the following directory structure:

    β”œβ”€β”€ WIMP
    β”‚   β”œβ”€β”€ src
    β”‚   β”œβ”€β”€ scripts
    β”‚   β”œβ”€β”€ data
    β”‚   β”‚   β”œβ”€β”€ argoverse_raw
    β”‚   β”‚   β”‚   β”œβ”€β”€ train
    β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ *.csv
    β”‚   β”‚   β”‚   β”œβ”€β”€ val
    β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ *.csv
    β”‚   β”‚   β”‚   β”œβ”€β”€ test
    β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ *.csv
    
  2. Pre-process the raw Argoverse data into a WIMP-compatible format by running the following script (a sketch covering the remaining data splits follows the command). Note that the Argoverse dataset is quite large, so preprocessing may take a few hours even on a multi-threaded machine.

    python scripts/run_preprocess.py --dataroot ./data/argoverse_raw/ \
    --mode val --save-dir ./data/argoverse_processed --social-features \
    --map-features --xy-features --normalize --extra-map-features \
    --compute-all --generate-candidate-centerlines 6
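
The command above processes only the val split. Assuming the --mode flag also accepts the other subset names (not verified here), the train and test splits can be processed with the same settings:

    for split in train test; do
        python scripts/run_preprocess.py --dataroot ./data/argoverse_raw/ \
            --mode $split --save-dir ./data/argoverse_processed --social-features \
            --map-features --xy-features --normalize --extra-map-features \
            --compute-all --generate-candidate-centerlines 6
    done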

Usage

For a detailed description of all possible configuration arguments, please run scripts with the -h flag.
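
For example:

    python scripts/run_preprocess.py -h
    python src/main.py -h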

Training

To train WIMP from scratch using a configuration similar to that reported in the paper, run a variant of the following command:

python src/main.py --mode train --dataroot ./data/argoverse_processed --IFC \
--lr 0.0001 --weight-decay 0.0 --non-linearity relu  --use-centerline-features \
--segment-CL-Encoder-Prob --num-mixtures 6 --output-conv --output-prediction \
--gradient-clipping --hidden-key-generator --k-value-threshold 10 \
--scheduler-step-size 60 90 120 150 180  --distributed-backend ddp \
--experiment-name example --gpus 4 --batch-size 25
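
If fewer GPUs are available, a scaled-down variant presumably works. The following single-GPU sketch is an untested guess: it keeps all flags above but drops --distributed-backend ddp, which is only needed for multi-GPU runs. Note that the documented command has an effective batch size of 4 GPUs x 25 = 100, so a single-GPU run may need a different batch size or learning rate to match reported results:

python src/main.py --mode train --dataroot ./data/argoverse_processed --IFC \
--lr 0.0001 --weight-decay 0.0 --non-linearity relu --use-centerline-features \
--segment-CL-Encoder-Prob --num-mixtures 6 --output-conv --output-prediction \
--gradient-clipping --hidden-key-generator --k-value-threshold 10 \
--scheduler-step-size 60 90 120 150 180 \
--experiment-name example --gpus 1 --batch-size 25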

Citing

If you've found this code to be useful, please consider citing our paper!

@article{khandelwal2020if,
  title={What-If Motion Prediction for Autonomous Driving},
  author={Khandelwal, Siddhesh and Qi, William and Singh, Jagjeet and Hartnett, Andrew and Ramanan, Deva},
  journal={arXiv preprint arXiv:2008.10587},
  year={2020}
}

Questions

This repo is maintained by William Qi and Siddhesh Khandelwal - please feel free to reach out or open an issue if you have additional questions/concerns.

We plan to clean up the codebase and add some additional utilities (possibly NuScenes data loaders and inference/visualization tools) in the near future, but don't expect to make significant breaking changes.

Comments
  • Pandas Error in run_preprocess.py

    Hello! First of all, thank you for making your code available to the readers of your great paper. I am having an issue while running run_preprocess.py. I think something goes wrong while reading the CSV, since the error comes from pandas. When I run the script, it gives me: KeyError: 'CITY_NAME'. When I go into the script and hard-code "MIA" as the CITY_NAME, just to see what happens, I receive a similar error: KeyError: 'OBJECT_TYPE'. I checked the paths for the data and they seem fine. What could be the reason? Thank you!
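
    For reference, a quick way to check which columns pandas actually sees in one of the raw CSVs (a debugging sketch; the file name is just an example):

    import pandas as pd

    # Inspect one raw Argoverse forecasting CSV and verify that the
    # expected columns ('CITY_NAME', 'OBJECT_TYPE', ...) are present.
    df = pd.read_csv("./data/argoverse_raw/val/2645.csv")
    print(df.columns.tolist())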

    opened by ahmetgurhan 0
  • Loss dimensions

    Hi, thank you so much for your fantastic work.

    What are the expected argument order and tensor dimensions for this function?

    import torch
    from torch import nn

    def l1_ewta_loss(prediction, target, k=6, eps=1e-7, mr=2.0):
        # prediction: [batch, num_mixtures, pred_len, data_dim]
        # target:     [batch, pred_len, data_dim]
        num_mixtures = prediction.shape[1]

        # Broadcast the ground truth across all mixture modes
        target = target.unsqueeze(1).expand(-1, num_mixtures, -1, -1)
        l1_loss = nn.functional.l1_loss(prediction, target, reduction='none').sum(dim=[2, 3])

        # Get loss from top-k mixtures for each timestep
        mixture_loss_sorted, mixture_ranks = torch.sort(l1_loss, descending=False)
        mixture_loss_topk = mixture_loss_sorted.narrow(1, 0, k)

        # Aggregate loss across timesteps and batch
        loss = mixture_loss_topk.sum()
        loss = loss / target.size(0)  # batch
        loss = loss / target.size(2)  # timesteps
        loss = loss / k               # selected mixtures
        return loss
    

    I am not able to obtain good results compared to NLL. I have as inputs:

    predictions: batch_size x num_modes x pred_len x data_dim (e.g. 1024 x 6 x 30 x 2)
    gt: batch_size x pred_len x data_dim (e.g. 1024 x 30 x 2)

    Is this correct?
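
    For what it's worth, here is a minimal shape check with those dimensions, using the function as posted above (random tensors, so only the shapes are meaningful):

    import torch

    prediction = torch.randn(1024, 6, 30, 2)  # batch x num_modes x pred_len x data_dim
    gt = torch.randn(1024, 30, 2)             # batch x pred_len x data_dim
    loss = l1_ewta_loss(prediction, gt, k=6)  # returns a scalar tensor
    print(loss.item())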

    opened by Cram3r95 0
  • Reproducing the Map-Free and Social-Context-Only Results from the Ablation Study

    Hey there,

    I want to reproduce the results of your ablation study, where you only used Social-Context with EWTA-Loss.

    However, I have problems training the model with only social context. What are the correct flags I need to set for preprocessing (run_preprocess.py) and for training (main.py)? My current guess is sketched below.
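
    Concretely, this is my unverified attempt at the preprocessing call; all flag names are taken from the README's documented command, with the map-related ones dropped:

    python scripts/run_preprocess.py --dataroot ./data/argoverse_raw/ \
    --mode val --save-dir ./data/argoverse_processed_social \
    --social-features --xy-features --normalize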

    Looking forward to hearing from you soon!

    Best regards

    SchDevel

    opened by SchDevel 2
  • Can I get your inference/visualization code?

    Hi, first of all, thanks for your awesome work and sharing that to us.

    I tried to write inference/visualization code myself, but unfortunately I ran into some problems.

    Maybe it's a library version mismatch, my insufficient coding skills, or something else.

    So, can I get your inference/visualization code, or even a skeleton base code?

    opened by raspbe34 3
  • What is the method for incomplete trajectories?

    Hi, thanks for sharing your great work~ I am wondering how you deal with the incomplete-trajectory problem (agents that have less than 2 seconds of history).

    1. I notice that for the neighboring agents of the focal agent, you discard all agents whose trajectories are not complete (code).
    2. How do you deal with incomplete trajectories for the focal agent? Did you use interpolation (e.g., something like the sketch below) or another technique?
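
    To clarify what I mean by interpolation, here is the kind of gap filling I had in mind (purely illustrative; not code from this repo):

    import numpy as np

    def fill_gaps(obs_t, obs_xy, full_t):
        """Linearly interpolate an agent's missing (x, y) positions at the
        timestamps in full_t where it was not observed. Assumes obs_t is
        sorted in increasing order."""
        x = np.interp(full_t, obs_t, obs_xy[:, 0])
        y = np.interp(full_t, obs_t, obs_xy[:, 1])
        return np.stack([x, y], axis=1)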

    Thanks!

    opened by XHwind 0