# LaneAF: Robust Multi-Lane Detection with Affinity Fields
This repository contains PyTorch code for training and testing the LaneAF lane detection models introduced in [this paper](https://arxiv.org/abs/2103.12040).
## Installation
- Clone this repository
- Install Anaconda
- Create a virtual environment and install all dependencies:
```shell
conda create -n laneaf pip python=3.6
source activate laneaf
pip install numpy scipy matplotlib pillow scikit-learn
pip install opencv-python
pip install https://download.pytorch.org/whl/cu101/torch-1.7.0%2Bcu101-cp36-cp36m-linux_x86_64.whl
pip install https://download.pytorch.org/whl/cu101/torchvision-0.8.1%2Bcu101-cp36-cp36m-linux_x86_64.whl
source deactivate
```
You can alternatively find your desired torch/torchvision wheel here.
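To confirm the install worked, you can run a quick optional check (any matching torch/torchvision pair behaves the same way):
```shell
# Should print the installed versions, and True on a CUDA-capable machine
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"
```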
- Clone and build DCNv2:
```shell
cd models/dla
git clone https://github.com/lbin/DCNv2.git
cd DCNv2
./make.sh
```
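If the build succeeds, the deformable convolution op should be importable from Python. A minimal sanity check, assuming the module and class names (`dcn_v2`, `DCN`) exposed by the linked DCNv2 fork:
```shell
# Run from inside models/dla/DCNv2; prints a confirmation if the compiled extension loads
python -c "from dcn_v2 import DCN; print('DCNv2 OK')"
```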
## TuSimple
The entire TuSimple dataset should be downloaded and organized as follows:
```
└── TuSimple/
    ├── clips/
    |   └── .
    |   └── .
    ├── label_data_0313.json
    ├── label_data_0531.json
    ├── label_data_0601.json
    ├── test_tasks_0627.json
    ├── test_baseline.json
    └── test_label.json
```
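For reference, each line of the `label_data_*.json` files is a standalone JSON object in the standard TuSimple format: `lanes` holds one list of x-coordinates per lane (with `-2` marking rows where that lane is absent), sampled at the y-coordinates listed in `h_samples`. The entry below is truncated for readability; real annotations typically sample y from 160 to 710 in steps of 10:
```json
{"lanes": [[-2, -2, 632, 625], [719, 734, 748, 761]], "h_samples": [240, 250, 260, 270], "raw_file": "clips/0313-1/6040/20.jpg"}
```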
The model requires ground truth segmentation labels during training. You can generate these for the entire dataset as follows:
```shell
source activate laneaf # activate virtual environment
python datasets/tusimple.py --dataset-dir=/path/to/TuSimple/
source deactivate # exit virtual environment
```
### Training
LaneAF models can be trained on the TuSimple dataset as follows:
```shell
source activate laneaf # activate virtual environment
python train_tusimple.py --dataset-dir=/path/to/TuSimple/ --random-transforms
source deactivate # exit virtual environment
```
Config files, logs, results, and snapshots from running the above script will be stored in the `LaneAF/experiments/tusimple` folder by default.
### Inference
Trained LaneAF models can be run on the TuSimple test set as follows:
```shell
source activate laneaf # activate virtual environment
python infer_tusimple.py --dataset-dir=/path/to/TuSimple/ --snapshot=/path/to/trained/model/snapshot --save-viz
source deactivate # exit virtual environment
```
This will generate outputs in the TuSimple format and also compute benchmark metrics using the official TuSimple implementation.
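The predictions use the same JSON-lines format as the ground truth, plus the per-image `run_time` field (in milliseconds) that the TuSimple benchmark expects; the values below are illustrative:
```json
{"lanes": [[-2, -2, 632, 625], [719, 734, 748, 761]], "h_samples": [240, 250, 260, 270], "raw_file": "clips/0530/1492626047222176976_0/20.jpg", "run_time": 20}
```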
## CULane
The entire CULane dataset should be downloaded and organized as follows:
```
└── CULane/
    ├── driver_*_*frame/
    ├── laneseg_label_w16/
    ├── laneseg_label_w16_test/
    └── list/
```
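The files in `list/` index the dataset splits. In the standard CULane layout, each line of the training list pairs an image with its segmentation label and four per-lane existence flags; the exact file name below is illustrative:
```
/driver_23_30frame/05151649_0422.MP4/00000.jpg /laneseg_label_w16/driver_23_30frame/05151649_0422.MP4/00000.png 1 1 1 1
```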
### Training
LaneAF models can be trained on the CULane dataset as follows:
```shell
source activate laneaf # activate virtual environment
python train_culane.py --dataset-dir=/path/to/CULane/ --random-transforms
source deactivate # exit virtual environment
```
Config files, logs, results, and snapshots from running the above script will be stored in the `LaneAF/experiments/culane` folder by default.
### Inference
Trained LaneAF models can be run on the CULane test set as follows:
```shell
source activate laneaf # activate virtual environment
python infer_culane.py --dataset-dir=/path/to/CULane/ --snapshot=/path/to/trained/model/snapshot --save-viz
source deactivate # exit virtual environment
```
This will generate outputs in the CULane format. You can then evaluate the model on the CULane benchmark using the official CULane evaluation code.
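In the CULane format, each test image gets a matching `*.lines.txt` file in which every row is one detected lane, written as space-separated `x y` pairs in image coordinates (truncated, illustrative values):
```
532.9 590 541.3 580 549.8 570 558.2 560
215.4 590 228.7 580 241.9 570 255.2 560
```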
## Unsupervised Llamas
The Unsupervised Llamas dataset should be downloaded and organized as follows:
```
└── Llamas/
    ├── color_images/
    |   ├── train/
    |   ├── valid/
    |   └── test/
    └── labels/
        ├── train/
        └── valid/
```
### Training
LaneAF models can be trained on the Llamas dataset as follows:
```shell
source activate laneaf # activate virtual environment
python train_llamas.py --dataset-dir=/path/to/Llamas/ --random-transforms
source deactivate # exit virtual environment
```
Config files, logs, results, and snapshots from running the above script will be stored in the `LaneAF/experiments/llamas` folder by default.
### Inference
Trained LaneAF models can be run on the Llamas test set as follows:
```shell
source activate laneaf # activate virtual environment
python infer_llamas.py --dataset-dir=/path/to/Llamas/ --snapshot=/path/to/trained/model/snapshot --save-viz
source deactivate # exit virtual environment
```
This will generate outputs in both the CULane format and the Llamas format for the Lane Approximations benchmark. Note that the results produced in the Llamas format could be inaccurate, because we have to guess the IDs of the individual lanes.
## Pre-trained Weights
You can download our pre-trained model weights using this link.
## Citation
If you find our code and/or models useful in your research, please consider citing the following paper:
```bibtex
@article{abualsaud2021laneaf,
  title={LaneAF: Robust Multi-Lane Detection with Affinity Fields},
  author={Abualsaud, Hala and Liu, Sean and Lu, David and Situ, Kenny and Rangesh, Akshay and Trivedi, Mohan M},
  journal={arXiv preprint arXiv:2103.12040},
  year={2021}
}
```