LaneAF: Robust Multi-Lane Detection with Affinity Fields

Overview
This repository contains PyTorch code for training and testing the LaneAF lane detection models introduced in this paper.

Installation

  1. Clone this repository
  2. Install Anaconda
  3. Create a virtual environment and install all dependencies:
conda create -n laneaf pip python=3.6
source activate laneaf
pip install numpy scipy matplotlib pillow scikit-learn
pip install opencv-python
pip install https://download.pytorch.org/whl/cu101/torch-1.7.0%2Bcu101-cp36-cp36m-linux_x86_64.whl
pip install https://download.pytorch.org/whl/cu101/torchvision-0.8.1%2Bcu101-cp36-cp36m-linux_x86_64.whl
source deactivate

You can alternatively find your desired torch/torchvision wheel here.
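
To verify the installation, a quick sanity check (this should print the torch version and True if the CUDA build is active):

source activate laneaf
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
source deactivate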

  4. Clone and build DCNv2:
cd models/dla
git clone https://github.com/lbin/DCNv2.git
cd DCNv2
./make.sh
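
If the build succeeds, the extension should be importable (a minimal check, assuming this fork keeps the usual dcn_v2 module name):

python -c "from dcn_v2 import DCN; print('DCNv2 OK')"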

TuSimple

The entire TuSimple dataset should be downloaded and organized as follows:

└── TuSimple/
    ├── clips/
    |   └── .
    |   └── .
    ├── label_data_0313.json
    ├── label_data_0531.json
    ├── label_data_0601.json
    ├── test_tasks_0627.json
    ├── test_baseline.json
    └── test_label.json
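
For reference, each line of the label_data_*.json files is a standalone JSON object; a minimal reading sketch, with field names following the public TuSimple annotation format:

import json

# Read the first annotation record (one JSON object per line)
with open('label_data_0313.json') as f:
    record = json.loads(f.readline())

print(record['raw_file'])    # relative path to the annotated frame
print(record['h_samples'])   # fixed list of row (y) coordinates
print(record['lanes'])       # per lane: one x per h_sample, -2 where absent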

The model requires ground truth segmentation labels during training. You can generate these for the entire dataset as follows:

source activate laneaf # activate virtual environment
python datasets/tusimple.py --dataset-dir=/path/to/TuSimple/
source deactivate # exit virtual environment

Training

LaneAF models can be trained on the TuSimple dataset as follows:

source activate laneaf # activate virtual environment
python train_tusimple.py --dataset-dir=/path/to/TuSimple/ --random-transforms
source deactivate # exit virtual environment

Config files, logs, results and snapshots from running the above scripts will be stored in the LaneAF/experiments/tusimple folder by default.

Inference

Trained LaneAF models can be run on the TuSimple test set as follows:

source activate laneaf # activate virtual environment
python infer_tusimple.py --dataset-dir=/path/to/TuSimple/ --snapshot=/path/to/trained/model/snapshot --save-viz
source deactivate # exit virtual environment

This will generate outputs in the TuSimple format and also produce benchmark metrics using their official implementation.
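
For reference, the generated predictions mirror the annotation format, with one JSON object per test image plus a run_time field; a sketch of a single record with made-up values:

import json

record = {
    "raw_file": "clips/0530/1492626047222176976_0/20.jpg",  # test image path
    "lanes": [[-2, -2, 612, 605, 598]],   # x per h_sample; -2 where the lane is absent
    "h_samples": [160, 170, 180, 190, 200],
    "run_time": 20,                       # per-image inference time in milliseconds
}
print(json.dumps(record))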

CULane

The entire CULane dataset should be downloaded and organized as follows:

└── CULane/
    ├── driver_*_*frame/
    ├── laneseg_label_w16/
    ├── laneseg_label_w16_test/
    └── list/

Training

LaneAF models can be trained on the CULane dataset as follows:

source activate laneaf # activate virtual environment
python train_culane.py --dataset-dir=/path/to/CULane/ --random-transforms
source deactivate # exit virtual environment

Config files, logs, results and snapshots from running the above scripts will be stored in the LaneAF/experiments/culane folder by default.

Inference

Trained LaneAF models can be run on the CULane test set as follows:

source activate laneaf # activate virtual environment
python infer_culane.py --dataset-dir=/path/to/CULane/ --snapshot=/path/to/trained/model/snapshot --save-viz
source deactivate # exit virtual environment

This will generate outputs in the CULane format. You can then use their official code to evaluate the model on the CULane benchmark.
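
For reference, the CULane format stores one .lines.txt file per image, with one lane per line as space-separated x y pairs; a minimal parsing sketch (coordinates made up):

# One lane, encoded as x1 y1 x2 y2 ... in image coordinates
line = "532.4 590 538.1 580 543.9 570"
coords = [float(v) for v in line.split()]
lane = list(zip(coords[0::2], coords[1::2]))  # [(x, y), ...]
print(lane)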

Unsupervised Llamas

The Unsupervised Llamas dataset should be downloaded and organized as follows:

└── Llamas/
    ├── color_images/
    |   ├── train/
    |   ├── valid/
    |   └── test/
    └── labels/
        ├── train/
        └── valid/

Training

LaneAF models can be trained on the Llamas dataset as follows:

source activate laneaf # activate virtual environment
python train_llamas.py --dataset-dir=/path/to/Llamas/ --random-transforms
source deactivate # exit virtual environment

Config files, logs, results and snapshots from running the above scripts will be stored in the LaneAF/experiments/llamas folder by default.

Inference

Trained LaneAF models can be run on the Llamas test set as follows:

source activate laneaf # activate virtual environment
python infer_llamas.py --dataset-dir=/path/to/Llamas/ --snapshot=/path/to/trained/model/snapshot --save-viz
source deactivate # exit virtual environment

This will generate outputs in the CULane format and the Llamas format for the Lane Approximations benchmark. Note that the results produced in the Llamas format could be inaccurate because we guess the IDs of the individual lanes.
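
As a rough illustration of that guess, one plausible heuristic is to order the detected lanes left to right by mean x-position and map them onto the benchmark's marker names (the l1/l0/r0/r1 names and the mapping below are assumptions, not the repo's exact logic):

import numpy as np

lanes = [np.array([[700.0, 400.0], [720.0, 500.0]]),   # dummy (x, y) points
         np.array([[300.0, 400.0], [280.0, 500.0]])]   # for two detected lanes
order = np.argsort([lane[:, 0].mean() for lane in lanes])  # leftmost first
names = ['l1', 'l0', 'r0', 'r1']   # assumed left-to-right marker IDs
assignment = {names[i]: lanes[idx] for i, idx in enumerate(order)}
print(list(assignment.keys()))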

Pre-trained Weights

You can download our pre-trained model weights using this link.

Citation

If you find our code and/or models useful in your research, please consider citing the following papers:

@article{abualsaud2021laneaf,
  title={LaneAF: Robust Multi-Lane Detection with Affinity Fields},
  author={Abualsaud, Hala and Liu, Sean and Lu, David and Situ, Kenny and Rangesh, Akshay and Trivedi, Mohan M},
  journal={arXiv preprint arXiv:2103.12040},
  year={2021}
}
Comments
  • how to test on images?

    Hi! Thanks for your great work! I'm having trouble testing on images. Here is the code I use for testing on images [screenshots omitted]. When I run it, it first downloads a .pth file and then returns an error [screenshots omitted]. How can I solve this problem? Please help, thanks.

    opened by lucky-xu-1994 7
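
    A minimal single-image preprocessing sketch for context (the input size and tensor layout below are assumptions; infer_tusimple.py is the authoritative reference for the full model-loading and decoding pipeline):

    import cv2
    import torch

    img = cv2.imread('test.jpg')            # any road image on disk
    img = cv2.resize(img, (1664, 576))      # assumed network input size
    x = torch.from_numpy(img[:, :, ::-1].copy())         # BGR -> RGB
    x = x.permute(2, 0, 1).float().unsqueeze(0) / 255.0  # NCHW, [0, 1]
    print(x.shape)  # torch.Size([1, 3, 576, 1664])
    # Build and load the model exactly as in infer_tusimple.py before
    # feeding x forward; normalization details also live there.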
  • How to get the lane ID from output?

    Hi, thank you for your great work! It did well on my own datasets! But I have trouble getting the lane ID directly from seg_out. It seems to need a label.png to get the lane ID, like in CULane. So, how can I get the lane ID directly? Please give me some advice, thank you very much!

    opened by lucky-xu-1994 6
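
    For context, the decoded output of the affinity-field step in the infer scripts is an instance mask in which each lane's pixels share one integer ID, so the IDs can be read off directly; a minimal sketch with a dummy mask:

    import numpy as np

    seg_ids = np.zeros((8, 8), dtype=np.int32)   # dummy decoded instance mask
    seg_ids[2:6, 1] = 1                          # pixels belonging to lane 1
    seg_ids[2:6, 6] = 2                          # pixels belonging to lane 2
    for lane_id in np.unique(seg_ids):
        if lane_id == 0:                         # 0 is background
            continue
        ys, xs = np.where(seg_ids == lane_id)    # all pixels of this lane
        print(lane_id, list(zip(xs, ys)))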
  • Can laneaf detect the horizontal lane?

    Thanks for your great work! LaneAF clusters lane pixels row by row, from bottom to top. Can LaneAF detect vertical and horizontal lanes at the same time, like in the picture below [screenshot omitted]? Thanks.

    opened by guoguangchao 5
  • Training error

    When I try to train, I get an error like this: [screenshot omitted]

    I run this code in Google Colab with torch-1.7.0+cu101, torchvision-0.8.1+cu101, and CUDA 10.1. Could you help me solve this problem? Thank you!

    opened by Haithienbao 5
  • make DCNv2 error

    Environment: Colab
    Input: !sh make.sh
    Output:

    running build
    running build_ext
    /usr/local/lib/python3.7/dist-packages/torch/utils/cpp_extension.py:369: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
      warnings.warn(msg.format('we could not find ninja.'))
    building '_ext' extension
    g++ -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src -I/usr/local/lib/python3.7/dist-packages/torch/include -I/usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.7/dist-packages/torch/include/TH -I/usr/local/lib/python3.7/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.7m -c /content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/vision.cpp -o build/temp.linux-x86_64-3.7/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/vision.o -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
    g++ -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src -I/usr/local/lib/python3.7/dist-packages/torch/include -I/usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.7/dist-packages/torch/include/TH -I/usr/local/lib/python3.7/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.7m -c /content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cpu/dcn_v2_cpu.cpp -o build/temp.linux-x86_64-3.7/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cpu/dcn_v2_cpu.o -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
    g++ -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src -I/usr/local/lib/python3.7/dist-packages/torch/include -I/usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.7/dist-packages/torch/include/TH -I/usr/local/lib/python3.7/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.7m -c /content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cpu/dcn_v2_im2col_cpu.cpp -o build/temp.linux-x86_64-3.7/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cpu/dcn_v2_im2col_cpu.o -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
    g++ -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -g -fdebug-prefix-map=/build/python3.7-OGiuun/python3.7-3.7.10=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DWITH_CUDA -I/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src -I/usr/local/lib/python3.7/dist-packages/torch/include -I/usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.7/dist-packages/torch/include/TH -I/usr/local/lib/python3.7/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.7m -c /content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cpu/dcn_v2_psroi_pooling_cpu.cpp -o build/temp.linux-x86_64-3.7/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cpu/dcn_v2_psroi_pooling_cpu.o -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
    /usr/local/cuda/bin/nvcc -DWITH_CUDA -I/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src -I/usr/local/lib/python3.7/dist-packages/torch/include -I/usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.7/dist-packages/torch/include/TH -I/usr/local/lib/python3.7/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.7m -c /content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cuda/dcn_v2_cuda.cu -o build/temp.linux-x86_64-3.7/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cuda/dcn_v2_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_70,code=compute_70 -gencode=arch=compute_70,code=sm_70 -ccbin g++ -std=c++14
    /content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cuda/dcn_v2_cuda.cu(126): error: identifier "THCudaBlas_SgemmBatched" is undefined
    
    /content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cuda/dcn_v2_cuda.cu(273): error: identifier "THCudaBlas_Sgemm" is undefined
    
    2 errors detected in the compilation of "/content/gdrive/My Drive/LaneAF/models/dla/DCNv2/src/cuda/dcn_v2_cuda.cu".
    error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
    
    opened by wyxogo 4
  • OpenCV resize will block while using multi-process.

    I've located the problem.

    https://github.com/sel118/LaneAF/blob/6f020686166660d63e4abf939901394c75bf12db/datasets/culane.py#L127

    Resize here will block the whole program.

    How to avoid this?

    opened by qinjian623 4
  • CULane: Half the input size

    Hi!

    To reduce training time, as I only have access to a 1080 Ti, I have reduced the input image size using the resize function you provided in the Dataloader, from (1664, 576) to (832, 256). During inference, it always returns "Lane too small" because the fitted spline has fewer than 10 values. Now I wonder what h_samples = (589, 240) in get_lanes_culane really represents and how you arrived at this value?

    I have set h_samples = (589 / 2, 240 / 2) to see what happens, and now I get splines with around 15-25 values, so I'll probably set the threshold to 20 and see how the accuracy turns out.

    Or would you just scale the input back to its original size, i.e. (832 * 2, 256 * 2), during inference and keep everything else as it is?

    Thanks in advance!

    opened by andy-96 3
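
    A worked sketch of the scaling in question: if the network input is halved, the row range sampled when extracting lanes should be halved as well (the 589/240 range comes from the question above; the step size is assumed):

    h_samples = list(range(589, 240, -10))        # full-resolution rows, assumed step
    h_samples_half = [y // 2 for y in h_samples]  # rows for the (832, 256) input
    print(h_samples[:3], h_samples_half[:3])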
  • bias=-2.19

    Hey @arangesh, thanks for the great method and the implementation. The results look very good. I was going through your code and came across a manually set bias here. I wonder where this constant came from and what it means.

    Thanks in advance, Sergey

    opened by mishakin 2
  • parameter 'down_ratio' ablation study

    Hello, have you ever tried different values of the parameter 'down_ratio'? I found a line of code in class DLASeg listing its acceptable values: assert down_ratio in [2, 4, 8, 16]. Can you tell me how performance compares across values of this parameter? Thank you very much.

    opened by Jiayi719 2
  • How to reduce the FLOPs of the model, such as using a lightweight backbone?

    Thanks for sharing your idea and this wonderful work! The idea of using the HAF and VAF for clustering lanes is impressive and enlightening. I have used your model on my own datasets, but the results were not good. Meanwhile, I am thinking about how to reduce the FLOPs of the entire model. Do you have any detailed suggestions, such as a specific modification to the backbone or another method? Thanks again.

    opened by pandamax 2
  • Can't create video visualization in infer_tusimple.py

    I get similar results on the TuSimple dataset (for infer_tusimple.py).

    But... I can see the original img using cv2.imshow('img', img). When I try to show img_out (in line 130) with cv2.imshow('img_img_out', img_out), I get a black screen.

    I'm sure I passed --save-viz in the command, the img has already been read, and OpenCV is installed...

    As a result, I can't create the final 'out.mkv'.

    opened by cphsuan 1
  • Multi-gpu training

    Adds a new file, train_culane_mp.py, which is a single-node, multi-GPU training script.

    The original train_culane.py was deleted while developing the new one; maybe we should restore it.

    The default batch_size is still 2, but that is too small for multi-GPU training; 64 is acceptable on a 4-V100 node.

    A new variable, metric_skips=10, is introduced. This lowers the frequency of the F1 metric computation, which uses the CPU only.

    Start training with:

    CUDA_VISIBLE_DEVICES=4,5,6,7 python train_culane_mp.py ...
    

    to control how many GPUs involved in training.

    opened by qinjian623 1
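
    A minimal sketch of the metric_skips idea (names illustrative, not the PR's exact code): run the CPU-bound F1 computation only on every Nth batch so it doesn't stall the GPUs:

    metric_skips = 10
    for b_idx in range(100):                    # stand-in for the training loop
        # ... forward/backward pass would go here ...
        if b_idx % metric_skips == 0:
            print(f"batch {b_idx}: compute the CPU-only F1 metric")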
  • Training time of CULane is too long

    Hello there,

    The training time of train_culane.py is too long. I have been training for five days and it has only reached epoch 30 (NVIDIA 1080 Ti). Can we enlarge the batch size or add multi-GPU training? Will that affect performance?

    Thanks.

    opened by ztjsw 9