DG-TrajGen
The official repository for the paper "Domain Generalization for Vision-based Driving Trajectory Generation", submitted to ICRA 2022.
Our Method
- Trajectory representation:
  - Model: class `Generator` in `./learning/model.py`
- Latent Action Space Learning (a training sketch follows this list):
  - Generator model: class `Generator` in `./learning/model.py`
  - Discriminator model: class `Discriminator` in `./learning/model.py`
  - Training: `./scripts/Ours/stage1_train_GAN.py`
- Encoder Pre-training (see the stage 2/3 sketch below):
  - Training: `./scripts/Ours/stage2_pretrain_encoder.py`
- End-to-End Training of the Encoder:
  - Training: `./scripts/Ours/stage3_train_e2e.py`
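Stage 1 learns a latent action space adversarially: the `Generator` decodes a latent code into a trajectory, and the `Discriminator` tries to tell generated trajectories from expert ones. The sketch below is a minimal PyTorch rendering of that loop; the latent dimension, layer sizes, trajectory length, and BCE loss are illustrative assumptions, not the exact settings of `./scripts/Ours/stage1_train_GAN.py`.

```python
# Minimal sketch of stage-1 latent action space learning (GAN training).
# The real models live in ./learning/model.py; all sizes here are assumed.
import torch
import torch.nn as nn

LATENT_DIM = 64    # assumed size of the latent action space
TRAJ_POINTS = 16   # assumed number of (x, y) waypoints per trajectory

class Generator(nn.Module):
    """Decodes a latent action code into a driving trajectory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, TRAJ_POINTS * 2),
        )

    def forward(self, z):
        return self.net(z).view(-1, TRAJ_POINTS, 2)

class Discriminator(nn.Module):
    """Scores whether a trajectory looks like expert driving."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TRAJ_POINTS * 2, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, traj):
        return self.net(traj.flatten(1))

def gan_step(G, D, opt_g, opt_d, real_traj):
    """One adversarial update on a batch of expert trajectories."""
    bce = nn.BCEWithLogitsLoss()
    n = real_traj.size(0)
    z = torch.randn(n, LATENT_DIM)

    # Discriminator: expert trajectories are real, generated ones are fake.
    opt_d.zero_grad()
    loss_d = bce(D(real_traj), torch.ones(n, 1)) \
           + bce(D(G(z).detach()), torch.zeros(n, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: produce trajectories the discriminator accepts as real.
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), torch.ones(n, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```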
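Stages 2 and 3 then attach a vision encoder to this latent space: the encoder is first pre-trained against the generator, then trained end-to-end. A minimal sketch under the same assumptions, with a hypothetical `train_step` and an MSE waypoint loss standing in for the actual objectives of the stage 2/3 scripts:

```python
# Minimal sketch of stages 2-3: pre-train an image encoder against the
# generator, then fine-tune end-to-end. Architecture and loss are assumed;
# the actual recipes are ./scripts/Ours/stage2_pretrain_encoder.py and
# ./scripts/Ours/stage3_train_e2e.py.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a front-view camera image to a latent action code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, latent_dim)

    def forward(self, img):
        return self.head(self.features(img))

def train_step(encoder, generator, opt, img, expert_traj, freeze_generator):
    """Stage 2 (freeze_generator=True) pre-trains the encoder through a
    frozen generator; stage 3 (False) trains end-to-end."""
    generator.requires_grad_(not freeze_generator)
    opt.zero_grad()
    pred_traj = generator(encoder(img))   # image -> latent code -> trajectory
    loss = nn.functional.mse_loss(pred_traj, expert_traj)
    loss.backward()
    opt.step()
    return loss.item()

# Usage on (image, expert trajectory) batches:
#   stage 2: train_step(enc, gen, opt_enc, img, traj, freeze_generator=True)
#   stage 3: train_step(enc, gen, opt_all, img, traj, freeze_generator=False)
#   (opt_all must also hold the generator's parameters.)
```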
Comparative Study
- RIP:
- MixStyle:
- DIVA:
- DAL:
  - Training: `./scripts/DAL/train.py`
- E2E NT:
Closed-loop Experiments
We train the model on the Oxford RobotCar dataset and transfer it directly to the CARLA simulator to evaluate generalization.
- Run: `./scripts/CARLA/run_ours.py` (a minimal sketch of the loop follows)
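A minimal sketch of that closed loop, assuming a CARLA server on `localhost:2000`, saved encoder/generator checkpoints, and a deliberately naive trajectory-following controller; the full evaluation logic is in `./scripts/CARLA/run_ours.py`:

```python
# Drive a CARLA vehicle from camera images with the trained encoder +
# generator. Checkpoint names, preprocessing, and the waypoint-to-control
# conversion are illustrative assumptions.
import queue
import carla
import numpy as np
import torch

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn an ego vehicle with a front-facing RGB camera.
bp_lib = world.get_blueprint_library()
vehicle = world.spawn_actor(bp_lib.filter('vehicle.*')[0],
                            world.get_map().get_spawn_points()[0])
camera = world.spawn_actor(bp_lib.find('sensor.camera.rgb'),
                           carla.Transform(carla.Location(x=1.5, z=2.0)),
                           attach_to=vehicle)
images = queue.Queue()
camera.listen(images.put)

encoder = torch.load('encoder.pth').eval()      # assumed checkpoint files
generator = torch.load('generator.pth').eval()

def to_tensor(image):
    """Convert a carla.Image (BGRA bytes) to a 1x3xHxW float tensor."""
    arr = np.frombuffer(image.raw_data, dtype=np.uint8)
    arr = arr.reshape(image.height, image.width, 4)[:, :, :3].copy()
    return torch.from_numpy(arr).permute(2, 0, 1).float().div(255).unsqueeze(0)

try:
    while True:
        with torch.no_grad():
            traj = generator(encoder(to_tensor(images.get())))[0]  # (N, 2)
        # Naive tracking: steer toward the first predicted waypoint
        # (x forward, y right, in the ego frame -- an assumed convention).
        x, y = traj[0].tolist()
        steer = float(np.clip(np.arctan2(y, max(x, 1e-3)), -1.0, 1.0))
        vehicle.apply_control(carla.VehicleControl(throttle=0.4, steer=steer))
finally:
    camera.destroy()
    vehicle.destroy()
```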
Citation
If you use our source code, please consider citing the following:
```bibtex
@article{wang2021domain,
  title={Domain Generalization for Vision-based Driving Trajectory Generation},
  author={Wang, Yunkai and Zhang, Dongkun and Cui, Yuxiang and Chen, Zexi and Jing, Wei and Chen, Junbo and Xiong, Rong and Wang, Yue},
  journal={arXiv preprint arXiv:2109.13858},
  year={2021}
}
```