[ICCV'21] NEAT: Neural Attention Fields for End-to-End Autonomous Driving

Overview

NEAT: Neural Attention Fields for End-to-End Autonomous Driving

Paper | Supplementary | Video | Poster | Blog

This repository is for the ICCV 2021 paper NEAT: Neural Attention Fields for End-to-End Autonomous Driving. The code and pre-trained models will be released here soon!

@inproceedings{Chitta2021ICCV,
  author = {Chitta, Kashyap and Prakash, Aditya and Geiger, Andreas},
  title = {NEAT: Neural Attention Fields for End-to-End Autonomous Driving},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2021}
}
Comments
  • The dataset format


    Hello, thank you for your great work. If I generate the dataset correctly, what will its folder structure look like? Could you give me an example? Just a screenshot is okay.

    opened by exiawsh 11
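    Whatever layout the generation run produces, a small directory-tree printer is enough to inspect it and compare against an expected structure. This is a generic sketch (the actual NEAT dataset layout is not documented here), and the example path in the comment is an assumption:

```python
import os

def print_tree(root, max_depth=3, _depth=0):
    """Print the directory tree under `root`, up to `max_depth` levels deep."""
    if _depth > max_depth or not os.path.isdir(root):
        return
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        print("  " * _depth + name)
        if os.path.isdir(path):
            print_tree(path, max_depth, _depth + 1)

# Example: inspect a generated dataset root (the path is a hypothetical placeholder)
# print_tree("carla_data/Town01")
```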
  • Data Generation via `run_evaluation.sh`


    Hi, thanks for the amazing work! I followed the Data Generation section in README.md (I did not modify any configuration; I just enabled the export of SAVE_PATH in the script). The following video shows the generated front images:


    As shown in the video above, the weather changes over time, and this example (one RouteScenario) consists of 94 frames (different scenarios have different frame counts).

    Some scenarios failed with errors like the following (see attached screenshot):

    Is this behavior expected, or is something wrong with my setup?

    opened by lzhnb 7
  • How to visualize the semantic segmentation results on Bev?


    Hello, it seems that you sampled 64 points for each of the 5 categories during dataset preprocessing, but semantic segmentation in BEV is actually a dense prediction task: over a 200 x 200 pixel range, for example, 40,000 points need to be sampled. How do you process and visualize the results at inference time?

    opened by exiawsh 5
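    Since the occupancy decoder is a function of continuous (x, y) coordinates, dense BEV visualization just means evaluating it on a full grid at inference time. A minimal sketch, where `predict_fn` is a hypothetical stand-in for the trained per-point decoder (the extent and class count are assumed values, not taken from the repository):

```python
import numpy as np

def dense_bev(predict_fn, size=200, extent=25.0):
    """Query a per-point class predictor on a dense size x size BEV grid.

    predict_fn maps an (N, 2) array of (x, y) coordinates in metres to
    (N, n_classes) class scores; here it stands in for the trained decoder.
    Returns a (size, size) array of argmax class indices.
    """
    xs = np.linspace(-extent, extent, size)
    ys = np.linspace(-extent, extent, size)
    grid = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)  # (size*size, 2)
    scores = predict_fn(grid)                                     # (size*size, n_classes)
    return scores.argmax(axis=-1).reshape(size, size)

# Toy predictor: class 1 inside a 5 m radius of the ego, class 0 elsewhere
def toy_predict(pts):
    inside = (np.linalg.norm(pts, axis=1) < 5.0).astype(float)
    return np.stack([1.0 - inside, inside] + [np.zeros(len(pts))] * 3, axis=1)

bev = dense_bev(toy_predict)  # (200, 200) class map, ready for a colormap
```

    The resulting class map can be colored per class and displayed as an image; the sampling density is free to choose since the decoder accepts arbitrary query coordinates.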
  • Error when I run "run_evaluation.sh"

    Hi, thanks for the interesting work and the good documentation!

    I followed the instructions in the README file to run the Autopilot, but I ran into the following errors:

    1. I got an error (see attached screenshot) when I tried to run "run_evaluation.sh". I worked around it by creating a directory named "carla_results". Is this the right solution?
    2. Now I'm getting another error (see attached screenshot). What could be the problem?

    Any tips would be helpful.

    Thanks in advance!

    opened by mehdi-maleki 4
  • Where can I download dataset for training?


    Thank you for providing good code. Where can I download the dataset for training? I referred to config.py, but it seems to differ from the TransFuser dataset: TransFuser provides Town0x_long, Town0x_short, and Town0x_tiny, while this code requires Town0x, Town0x_long, and Town0x_small.

    opened by jun-ja 2
  • Irregular shape of BEV


    Hi @kashyap7x, thanks for your wonderful work on NEAT. I would like to ask about the blobby appearance of the BEV output compared to other BEV methods such as LSS or FIERY, where the vehicle BEV is quite sharp. Could you give some pointers on how to improve the irregular vehicle BEV produced by NEAT?


    Thanks

    opened by UditSinghParihar 1
  • Is the dataset in neat the same as transfuser?


    Hello, thanks for your wonderful work! I have a question about data generation: is the dataset in this work the same as in TransFuser? Can I download the data at https://s3.eu-central-1.amazonaws.com/avg-projects/transfuser/data/14_weathers_minimal_data.zip and use it directly? Also, what is the data format for query_points? Is it simply (x, y, t) -> (2, 5, 1) as described in the paper? And if the semantic map covers a large range, did you normalize the input data?

    opened by exiawsh 1
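    On the normalization part of the question, a common convention for coordinate-conditioned decoders (an assumption here, not confirmed from the paper or code) is to map metric (x, y) coordinates and the timestep t into [-1, 1] before feeding them to the network. A sketch, where the extent and horizon values are illustrative:

```python
import numpy as np

def make_query_points(n, extent=25.0, horizon=4):
    """Sample n random (x, y, t) query points and normalize each axis to [-1, 1].

    extent: half-width of the BEV area in metres (assumed value).
    horizon: number of future timesteps (assumed value).
    """
    xy = np.random.uniform(-extent, extent, size=(n, 2))
    t = np.random.randint(0, horizon + 1, size=(n, 1)).astype(float)
    xy_norm = xy / extent             # metres to [-1, 1]
    t_norm = 2.0 * t / horizon - 1.0  # timestep to [-1, 1]
    return np.concatenate([xy_norm, t_norm], axis=1)  # (n, 3)

pts = make_query_points(64)  # 64 normalized (x, y, t) queries
```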
  • Effects of semantic


    Thanks for the work and the open-source code. I understand that the longitudinal controller uses red-light detection. But what if raw RGB input were used for that instead of the semantics? Could you please tell me what effect the semantics have? My impression is that the semantics are only used to identify traffic lights.

    opened by Watson52 1
  • 4 waypoints used instead of 5?


    Hi, I am taking a look at your code and noticed that you predict 5 waypoints in the decoder network but only use 4 of them to actually control the car. You do not use the first predicted waypoint, and I was wondering why.

    Thanks in advance!

    opened by cozeybozey 1
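    A common pattern in CARLA agents (a sketch of the general idea, not the repository's exact controller) is to steer toward an aim point derived from the later waypoints, since the first predicted waypoint sits almost on top of the ego vehicle and carries little steering signal; that would explain dropping it. The function below is hypothetical:

```python
import numpy as np

def control_from_waypoints(waypoints):
    """Derive a steering angle and target-speed proxy from ego-frame waypoints.

    waypoints: (5, 2) array of predicted (x, y) positions; the first point
    is skipped because it lies nearly at the ego position (assumed rationale).
    """
    used = waypoints[1:]                       # 4 of the 5 predictions
    aim = used[:2].mean(axis=0)                # aim point from the next two
    steer = np.arctan2(aim[1], aim[0])         # heading error in radians
    speed = np.linalg.norm(used[1] - used[0])  # waypoint spacing as speed proxy
    return steer, speed

wps = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
steer, speed = control_from_waypoints(wps)  # straight line: steer == 0.0
```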
  • How to get the visualization results as shown in README


    I am not able to replicate the GIF shown in the README, in which waypoint offsets are visualized alongside the dashcam view and the BEV semantic segmentation.

    Could you please share the scripts used to produce it?

    opened by chhanganivarun 1
  • How to generate training data for different model


    Hi, thanks for the interesting work!

    I followed the Data Generation steps in the README file and ran "./leaderboard/scripts/run_evaluation.sh" for the Autopilot, but I have some questions: (1) this process only evaluates the performance of different agents on the defined routes, and no data was generated; (2) different models (agents) need different types of data, and I want to know how to generate training data for each of them.

    Any tips would be helpful.

    Thanks in advance!

    opened by hu-jerry 1
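    As noted in an earlier issue, the evaluation script only writes data when SAVE_PATH is exported before launching it; otherwise it just scores the agent on the routes. A minimal sketch of the setup (the output directory name is an assumption; only SAVE_PATH itself appears in the README):

```shell
# run_evaluation.sh saves sensor data only when SAVE_PATH is set;
# without it the script just evaluates the agent on the routes.
export SAVE_PATH=data/expert   # output directory (name is an assumption)
mkdir -p "$SAVE_PATH"
```

    Which sensors get saved is determined by the agent that is running, so collecting data for a different model generally means running the expert with that model's sensor configuration.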
  • Porting NEAT to BeamNG.tech


    Dear All,

    I am a student working on a project involving ADAS/AV testing and scenario synthesis with BeamNG.tech, and I would love to run (test) your driving agent in that simulator. I know CARLA is the de facto standard, but IMHO BeamNG.tech is superior when it comes to physics simulation, content, and flexibility. Furthermore, BeamNG.tech is free for research, offers a Python API just like CARLA, and implements a wide range of sensors.

    So I wonder how technically difficult it would be to port NEAT to BeamNG.tech, and whether any of you could support me (and my colleagues) in doing so. Hope to hear from you soon,

    Thanks!

    -- Benedikt Steininger

    opened by Stoneymon 0
  • FileNotFoundError: [Errno 2] No such file or directory: '../carla_results/auto_pilot.json'


    When I ran the Autopilot via

    ./leaderboard/scripts/run_evaluation.sh

    I got this error:

    Traceback (most recent call last):
      File "leaderboard/leaderboard/leaderboard_evaluator.py", line 477, in main
        leaderboard_evaluator.run(arguments)
      File "leaderboard/leaderboard/leaderboard_evaluator.py", line 406, in run
        self.statistics_manager.clear_record(args.checkpoint)
      File "/home/ev/Net/carla_prj/neat/leaderboard/leaderboard/utils/statistics_manager.py", line 340, in clear_record
        with open(endpoint, 'w') as fd:
    FileNotFoundError: [Errno 2] No such file or directory: '../carla_results/auto_pilot.json'
    Exception ignored in: <function LeaderboardEvaluator.__del__ at 0x7fe8e9a8a9e0>

    Can you help? Thank you very much.

    opened by ZHICHENG12 1
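    The traceback is Python failing to open the results file for writing because its parent directory does not exist; `open(path, 'w')` does not create missing directories. Creating the directory first (the path is taken from the error message) resolves it, which matches the workaround reported in an earlier issue:

```shell
# The evaluator writes ../carla_results/auto_pilot.json; open(..., 'w')
# cannot create missing parent directories, so make the directory first.
mkdir -p ../carla_results
```

    After that, re-running ./leaderboard/scripts/run_evaluation.sh should get past this error.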