Argoverse 2 API

The official GitHub repository for the Argoverse 2 family of datasets.

If you have any questions or run into any problems with either the data or API, please feel free to open a GitHub issue!

TL;DR

  • Install the API: pip install av2
  • Read the instructions to download the data.

Getting Started

Setup

The easiest way to install the API is via pip by running the following command:

pip install av2
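
To confirm the installation succeeded, you can query the installed package version (a minimal sketch; it assumes nothing beyond a standard Python environment):

```python
from importlib.metadata import version

# Print the installed av2 package version to confirm the install worked.
print(version("av2"))
```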

Datasets

The Argoverse 2 family consists of four distinct datasets:

| Dataset Name                   | Scenarios | Camera Imagery | Lidar | Maps | Additional Information             |
| ------------------------------ | --------- | -------------- | ----- | ---- | ---------------------------------- |
| Sensor                         | 1,000     | ✓              | ✓     | ✓    | Sensor Dataset README              |
| Lidar                          | 20,000    |                | ✓     | ✓    | Lidar Dataset README               |
| Motion Forecasting             | 250,000   |                |       | ✓    | Motion Forecasting Dataset README  |
| Map Change (Trust, but Verify) | 1,045     | ✓              | ✓     | ✓    | Map Change Dataset README          |

Please see DOWNLOAD.md for detailed instructions on how to download each dataset.

Map API

Please refer to the map README for additional details about the common format for vector and raster maps that we employ across all AV2 datasets.
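
As a quick illustration, the per-log vector map can be loaded and queried through ArgoverseStaticMap (a sketch; it assumes the per-log map directory layout described in the map README):

```python
from pathlib import Path

from av2.map.map_api import ArgoverseStaticMap

# Load the vector map for a single log from its map directory.
# Setting build_raster=True would additionally load the raster layers.
log_map_dirpath = Path("path/to/<log_id>/map")
avm = ArgoverseStaticMap.from_map_dir(log_map_dirpath, build_raster=False)

print(len(avm.get_scenario_lane_segments()))  # lane segments in this log's map
print(len(avm.get_scenario_ped_crossings()))  # pedestrian crossings
```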

Compatibility Matrix

| Python Version | linux | macOS | windows |
| -------------- | ----- | ----- | ------- |
| 3.8            | ✓     | ✓     | ✓       |
| 3.9            | ✓     | ✓     | ✓       |
| 3.10           | ✓     | ✓     | ✓       |

Testing

All incoming pull requests are tested using nox as part of the CI process. This ensures that the latest version of the API is always stable on all supported platforms. You can run the full suite of automated checks and tests locally using the following command:

nox -r
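
Note that nox is not installed with av2 itself; assuming a standard Python environment, it can be installed first with pip install nox.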

Contributing

Have a cool feature you'd like to add? Found an unhandled corner case? The Argoverse team welcomes contributions from the open source community. Please open a PR using the following template!

Citing

Please use the following citation when referencing the Argoverse 2 Sensor, Lidar, or Motion Forecasting Datasets:

@INPROCEEDINGS { Argoverse2,
  author = {Benjamin Wilson and William Qi and Tanmay Agarwal and John Lambert and Jagjeet Singh and Siddhesh Khandelwal and Bowen Pan and Ratnesh Kumar and Andrew Hartnett and Jhony Kaesemodel Pontes and Deva Ramanan and Peter Carr and James Hays},
  title = {Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

Use the following citation when referencing the Argoverse 2 Map Change Dataset:

@INPROCEEDINGS { TrustButVerify,
  author = {John Lambert and James Hays},
  title = {Trust, but Verify: Cross-Modality Fusion for HD Map Change Detection},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

License

All code provided within this repository is released under the MIT license and bound by the Argoverse terms of use; please see LICENSE and NOTICE for additional details.

Comments
  • Downloading the tbv dataset.

    I'm trying to download the tbv dataset and it seems there are two instructions to do so. Do these two methods produce the same result?

    One here:

    1. From https://github.com/argoai/argoverse2-api/blob/main/DOWNLOAD.md:

       s5cmd --no-sign-request cp s3://argoai-argoverse/av2/tbv/* target-directory

    And another here:

    2. From https://github.com/argoai/argoverse2-api/blob/main/src/av2/datasets/tbv/README.md:

       SHARD_DIR={DESIRED PATH FOR TAR.GZ files}
       s5cmd cp s3://argoai-argoverse/av2/tars/tbv/*.tar.gz ${SHARD_DIR}

    When I try 1, I get the error: "s5cmd is hitting the max open file limit allowed by your OS. Either increase the open file limit or try to decrease the number of workers with the '-numworkers' parameter."

    When I try 2, I get the error: "Error session: fetching region failed: NoCredentialProviders: no valid providers in chain. Deprecated."

    Method 1 seems to download about half of the dataset before failing, while method 2 doesn't initiate the download at all. I will probably continue with 1, but 2 is probably faster. I'm using Ubuntu 18.04.
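
    A workaround consistent with both error messages (a sketch, not verified here: raise the shell's open-file limit, cap s5cmd's worker count with its --numworkers flag, and add --no-sign-request so no AWS credentials are needed):

    ulimit -n 4096
    s5cmd --no-sign-request --numworkers 16 cp "s3://argoai-argoverse/av2/tbv/*" target-directory
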
    opened by tom-bu 13
  • What is the format of the submission for the 3D object detection competition?

    The Submission Guidelines say nothing about the submission format; could you give more details? Or could you provide a sample submission? Thank you very much!

    opened by fangjin-cool 7
  • questions for visualization

    Dear all:

    When I run the 'generate_sensor_dataset_visualizations.py' file, it always reports the error: No such file or directory. I checked the difference and found that the erroneous path is '/.../argv2/SensorDataset/sensor/SensorDataset_val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather' while the true path is '/.../argv2/SensorDataset/sensor/val/5589de60-1727-3e3f-9423-33437fc5da4b/sensors/lidar/315967919259399000.feather'. Is there any parameter in the program that needs to be adjusted, or is it something else? Hoping for your reply, and thanks so much.

    opened by tommygojerry 5
  • Argoverse 2.0 vs Argoverse 1.1 API

    Hi folks,

    I am trying to run my model on Argoverse 2.0; it was previously trained using 1.1 and its corresponding API. However, after installing and cloning the API in order to check the tutorials, dataloaders, etc., this API looks quite a bit smaller than Argoverse 1.1's, and the organization also seems to be different (e.g., where are the CSVs with the trajectories?). Where could I find all the required documentation?

    opened by Cram3r95 5
  • Lane label annotation method inquiry

    Hi, since there is no information about how the lane markings are labeled in the Argoverse 2 dataset, I wonder whether these lane-marking labels are annotated in the originally collected point cloud (labeling in 3D space), or annotated on the image by projecting the point cloud onto the corresponding image.

    Hope you can help me figure this out; thanks in advance :)

    question 
    opened by Mollylulu 4
  • Similarity argoverse 1 / argoverse 2

    Hey, the Argoverse 2 dataset comes with new and richer scenes. Comparing the scenes of AV1 to AV2 in the respective cities: how similar would you consider them? In short: would you say training on Argoverse 2 covers all the relevant data needed to perform well on Argoverse 1? I would be particularly interested in the motion forecasting dataset. Looking forward to your answer! Thanks a lot!

    question 
    opened by odunkel 4
  • Motion forecasting: Focal agent not always observed over the full scenario length

    Hey everyone,

    I had a look into the motion forecasting dataset and there seems to be an issue with the trajectories of the focal agent. According to the paper, the focal agent should always be observed over the full 11 seconds, which corresponds to 110 observations: “Within each scenario, we mark a single track as the ‘focal agent’. Focal tracks are guaranteed to be fully observed throughout the duration of the scenario and have been specifically selected to maximize interesting interactions with map features and other nearby actors (see Section 3.3.2).”

    However, this is not the case for some scenarios (~3% of the scenarios). One example: Scenario '0215552f-6951-47e5-8cf6-3d1351d28957' of the validation set has a trajectory with only 104 observations.
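
    A minimal way to reproduce this check (a sketch; it assumes the av2 scenario_serialization module and the standard per-scenario parquet layout):

    ```python
    from pathlib import Path

    from av2.datasets.motion_forecasting import scenario_serialization

    # Load one scenario and count how many timesteps the focal track was observed.
    scenario_id = "0215552f-6951-47e5-8cf6-3d1351d28957"
    scenario_path = Path("val") / scenario_id / f"scenario_{scenario_id}.parquet"
    scenario = scenario_serialization.load_argoverse_scenario_parquet(scenario_path)

    focal_track = next(t for t in scenario.tracks if t.track_id == scenario.focal_track_id)
    print(len(focal_track.object_states))  # expected: 110 for a fully observed track
    ```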

    Can you reproduce my problem? Is this intended or can we expect this to be fixed in the near future?

    Looking forward to hearing from you!

    Best regards

    SchDevel

    bug 
    opened by SchDevel 4
  • How to evaluate 3D object detection on the validation split?

    Thanks for your excellent work! I would like to know how to evaluate 3D object detection on the validation split. I also notice there is a PR about this. When will the stable version be released? I am looking forward to it!

    opened by Abyssaledge 4
  • Is it possible to extract the route information?

    Hi, thank you for providing the outstanding dataset.

    I am particularly interested in the motion dataset, and I have a question: is it possible to extract the route of the self-driving vehicle in each scenario?

    opened by panda2020-sky 4
  • Error with generate_sensor_dataset_visualizations.py

    Hi, when I run python tutorials/generate_sensor_dataset_visualizations.py -d /xxx/av2, I get the error: FileNotFoundError: [Errno 2] Failed to open local file '/xxx/av2/test/0c6e62d7-bdfa-3061-8d3d-03b13aa21f68/annotations.feather'. Detail: [errno 2] No such file or directory. The test set has no labels, so why is it not filtered out in the code? What is the correct command to run this script? Thanks.

    question 
    opened by DuZzzs 3
  • Follow up for https://github.com/argoai/av2-api/issues/77

    Hi,

    Sorry for the delay. Thank you for your help! I went through the dataset API and was able to isolate individual point clouds.

    [Image: Joint (L), Top (R)]

    [Image: Top (L), Bottom (R)]

    Does this look sensible? Here is the code snippet:

    ```python
    from pathlib import Path

    import numpy as np

    from av2.datasets.sensor.sensor_dataloader import SensorDataloader

    # `settings` is my own config object holding the dataset root path.
    dataset = SensorDataloader(
        Path(settings.argoverse_dataset_root), with_annotations=True, with_cache=True
    )
    for index, data_frame in enumerate(dataset):
        sweep = data_frame.sweep  # has lidar info
        annotations = data_frame.annotations  # has boxes
        pose = data_frame.timestamp_city_SE3_ego_dict

        # Get the lidar - both sensors combined into a single point cloud.
        pcl_joint = sweep.xyz

        # Append reflectances and laser numbers as extra columns.
        pcl_joint = np.hstack(
            [
                pcl_joint,
                np.expand_dims(sweep.intensity, -1),
                np.expand_dims(sweep.laser_number, -1),
            ]
        )

        # Laser number [0, 31] -> top lidar, [32, 63] -> bottom lidar.
        r_up = np.where(pcl_joint[:, -1] < 32)
        pcl_up = pcl_joint[r_up]  # top lidar point cloud

        r_down = np.where(pcl_joint[:, -1] >= 32)
        pcl_down = pcl_joint[r_down]  # bottom lidar point cloud
    ```

    Please let me know if this is the correct way, just to be sure.

    Best regards, Sambit

    opened by SM1991CODES 2
  • centerline of static map

    I noticed that we have two methods to get the centerline of a lane_segment. First, we can just get the data from the raw map file. Second, we can use the ArgoverseStaticMap method get_lane_segment_centerline. I want to know the difference between these two methods.
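
    For reference, the API call in question (a sketch; it assumes a local per-log map directory):

    ```python
    from pathlib import Path

    from av2.map.map_api import ArgoverseStaticMap

    # Query the interpolated centerline of one lane segment via the map API.
    avm = ArgoverseStaticMap.from_map_dir(Path("path/to/<log_id>/map"), build_raster=False)
    lane_segment_id = avm.get_scenario_lane_segment_ids()[0]
    centerline = avm.get_lane_segment_centerline(lane_segment_id)  # Nx3 array of waypoints
    ```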

    opened by ChevinB 0
  • Interestingness score

    Hey,

    you were roughly explaining the interestingness score in your paper and in the supplementary material. Are you planning to share more details about the process of selecting interesting scenarios, or is this functionality confidential?

    I am looking forward to your answer.

    Best regards

    opened by odunkel 0
  • Path issue in from_map_dir function of map_api

    The vector_data_json_path variable seems to resolve to the wrong path when a relative path is passed in the Map_Tutorial notebook.

    Setting it to just vector_data_fname, instead of log_map_dirpath / vector_data_fname, seems to work for me.

    Check it out please?

    Thanks!

    opened by Shivanshu17 1
  • Pytorch Dataloader.

    PR Summary

    Testing

    In order to ensure this PR works as intended, it is:

    • [ ] unit tested.
    • [ ] other or not applicable (additional detail/rationale required)

    Compliance with Standards

    As the author, I certify that this PR conforms to the following standards:

    • [ ] Code changes conform to PEP8 and docstrings conform to the Google Python style guide.
    • [ ] A well-written summary explains what was done and why it was done.
    • [ ] The PR is adequately tested and the testing details and links to external results are included.
    opened by benjaminrwilson 0
  • timestamps_ns in motion forecast dataset

    I tried to convert timestamps_ns assuming epoch format, and all scenarios seem to refer to dates and times in the year 1980. Has there been any deliberate anonymization of the timestamps, or am I doing the conversion wrong?
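
    For reference, the conversion in question presumably looked something like this (a sketch; the example value matches the magnitude of the lidar timestamps seen elsewhere in this thread, and a 1980 date indeed falls out when it is read as nanoseconds since the Unix epoch):

    ```python
    from datetime import datetime, timezone

    # ~3.16e17 ns is roughly 10 years, so interpreting this value as
    # nanoseconds since the Unix epoch (1970) lands in early 1980.
    timestamp_ns = 315967919259399000
    print(datetime.fromtimestamp(timestamp_ns / 1e9, tz=timezone.utc))
    ```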

    Thanks in advance!

    opened by sun1612 0
Releases (v0.2.1)
  • v0.2.1 (Jun 2, 2022)

    What's Changed

    • Add UNKNOWN lane mark type to map schema by @wqi in https://github.com/argoai/av2-api/pull/58
    • Competition announcements by @benjaminrwilson in https://github.com/argoai/av2-api/pull/57
    • Add additional 3D object detection submission details. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/63

    Full Changelog: https://github.com/argoai/av2-api/compare/v0.2.0...v0.2.1

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0 (May 5, 2022)

    • Evaluation code is now available for 3D object detection and motion forecasting.

    What's Changed

    • Update README.md by @benjaminrwilson in https://github.com/argoai/av2-api/pull/6
    • Add gifs to TbV readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/10
    • Fix broken link to Argoverse website in motion forecasting readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/13
    • add support for rendering LaneMarkType.SOLID_DASH_WHITE in EgoViewMapRenderer by @senselessdev1 in https://github.com/argoai/av2-api/pull/9
    • Replace TbV gifs to illustrate map changes more clearly by @senselessdev1 in https://github.com/argoai/av2-api/pull/15
    • Update README.md by @benjaminrwilson in https://github.com/argoai/av2-api/pull/16
    • Fix typo in Sensor Dataset readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/19
    • Improve TbV Download Instructions by @senselessdev1 in https://github.com/argoai/av2-api/pull/14
    • Add city distribution for logs to Sensor Dataset Readme by @senselessdev1 in https://github.com/argoai/av2-api/pull/22
    • Clarify which datasets certain tutorials apply to by @senselessdev1 in https://github.com/argoai/av2-api/pull/24
    • Add get_city_name() method to dataloader, to fetch name of city where a log was captured. by @senselessdev1 in https://github.com/argoai/av2-api/pull/27
    • Small formatting fixes. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/33
    • Fix map tutorial issues. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/35
    • Update ci.yml by @benjaminrwilson in https://github.com/argoai/av2-api/pull/5
    • 3D Object Detection Evaluation by @benjaminrwilson in https://github.com/argoai/av2-api/pull/31
    • Add converter between AV2 city coordinate systems, and WGS84 and UTM by @senselessdev1 in https://github.com/argoai/av2-api/pull/28
    • Add get_ordered_log_lidar_timestamps() method to Sensor / TbV dataloa… by @senselessdev1 in https://github.com/argoai/av2-api/pull/29
    • Add TbV log clustering by scene (i.e. spatial location). by @senselessdev1 in https://github.com/argoai/av2-api/pull/26
    • 3D Detection Eval docstrings + typing fixes. by @benjaminrwilson in https://github.com/argoai/av2-api/pull/40
    • Add integration test to verify that TbV download was successful by @senselessdev1 in https://github.com/argoai/av2-api/pull/23
    • Sensor Dataset Visualization by @benjaminrwilson in https://github.com/argoai/av2-api/pull/39
    • Add dataclass for AV2 MF challenge submissions by @wqi in https://github.com/argoai/av2-api/pull/41
    • Add Brier metrics to motion forecasting evaluation module by @wqi in https://github.com/argoai/av2-api/pull/44
    • Detection evaluation tweaks by @benjaminrwilson in https://github.com/argoai/av2-api/pull/48
    • v0.1.0 -> v0.1.1 by @benjaminrwilson in https://github.com/argoai/av2-api/pull/49
    • Update setup.cfg to add pypi metadata by @wqi in https://github.com/argoai/av2-api/pull/51
    • Update init.py by @benjaminrwilson in https://github.com/argoai/av2-api/pull/52

    New Contributors

    • @benjaminrwilson made their first contribution in https://github.com/argoai/av2-api/pull/6
    • @senselessdev1 made their first contribution in https://github.com/argoai/av2-api/pull/10
    • @wqi made their first contribution in https://github.com/argoai/av2-api/pull/41

    Full Changelog: https://github.com/argoai/av2-api/compare/v0.1.0...v0.2.0

    Source code(tar.gz)
    Source code(zip)
  • v0.1.0 (Mar 17, 2022)
