Model Zoo of BDD100K Dataset

Overview

BDD100K Model Zoo

In this repository, we provide popular models for each task in the BDD100K dataset.

For each task in the BDD100K dataset, we make publicly available the model weights, evaluation results, predictions, and visualizations, as well as scripts for evaluation and visualization. The goal is to provide a set of competitive baselines that facilitate research and serve as a common benchmark for comparison.

The number of pre-trained models in this zoo is 115. You can include your models in this repo as well! See the contribution instructions.

This repository currently supports the tasks listed below. For more information about each task, click on the task name. We plan to support all tasks in the BDD100K dataset eventually; see the roadmap for our plan and progress.

If you have any questions, please go to the BDD100K discussions.

Roadmap

  • Lane marking
  • Panoptic segmentation
  • Pose estimation

Dataset

Please refer to the dataset preparation instructions for how to prepare and use the BDD100K dataset with the models.

Maintainers

Citation

To cite the BDD100K dataset in your paper, please use:

@InProceedings{bdd100k,
    author = {Yu, Fisher and Chen, Haofeng and Wang, Xin and Xian, Wenqi and Chen,
              Yingying and Liu, Fangchen and Madhavan, Vashisht and Darrell, Trevor},
    title = {BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}
Comments
  • Using the models to predict on other images

    Hi,

    Can I use the models under "bdd100k-models/det/" to make predictions on other images?

    When I followed the "Usage" section, it seemed that the models could only be used to evaluate the test/val images.
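
    A minimal sketch of how such inference might look, assuming the detection configs follow the standard MMDetection layout (the config and checkpoint paths below are placeholders, not files documented by this repo):

    # Hedged sketch: run a BDD100K detection model on an arbitrary image
    # via MMDetection's high-level API. Point the placeholder paths at a
    # config from bdd100k-models/det/ and its downloaded checkpoint.
    from mmdet.apis import inference_detector, init_detector

    config_file = "configs/det/faster_rcnn_r50_fpn_1x_det_bdd100k.py"  # placeholder
    checkpoint_file = "faster_rcnn_r50_fpn_1x_det_bdd100k.pth"  # placeholder

    model = init_detector(config_file, checkpoint_file, device="cuda:0")
    result = inference_detector(model, "my_image.jpg")  # any image on disk
    model.show_result("my_image.jpg", result, out_file="my_image_det.jpg")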

    opened by askppp 5
  • Drivable Segmentation Model inference stuck

    When I am running the Deeplabv3+ model using: python ./test.py configs/drivable/deeplabv3plus_r50-d8_512x1024_40k_drivable_bdd100k.py --format-only --format-dir output, it just gets stuck at around step 1490 (screenshot omitted). I have tried several different configs; they all have the same issue.

    opened by danielzhangau 4
  • Generate semantic segmentation output as PNG

    Hello,

    I'm generating semantic segmentation using the following command.

    python ./test.py ~/config.py --show-dir ~/Documents/bdd100k-models/data/bdd100k/labels/seg_track_20/val --opacity 1
    

    This generates the colormaps for the images; however, the output is in .jpg format, which introduces blur within the labels (screenshot omitted). How can I update the script so that it generates the labels in .png format? My input images are from the MOTS 2020 images dataset, which are in .jpg format.
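
    One possible approach, sketched under the assumption that you post-process the predicted label maps yourself rather than patching test.py: write them as paletted PNGs, which are lossless (the palette argument is a placeholder for the dataset's color map).

    # Hedged sketch: save a per-pixel label map as a lossless paletted PNG.
    import numpy as np
    from PIL import Image

    def save_label_png(label_map: np.ndarray, palette, out_path: str) -> None:
        # label_map: H x W integer class ids; palette: list of RGB triples
        img = Image.fromarray(label_map.astype(np.uint8), mode="P")
        img.putpalette(np.asarray(palette, dtype=np.uint8).flatten().tolist())
        img.save(out_path)  # PNG stores the labels exactly, with no JPEG blur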

    opened by digvijayad 2
  • Sem_Seg Inference Error - RuntimeError: DataLoader worker is killed by signal: Segmentation fault.

    Error when running sem_seg model inference. Command run: python ./test.py ./configs/sem_seg/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.py --format-only --format-dir ./outputs

    ERROR:

    workers per gpu=2
    /home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/mmseg/models/losses/cross_entropy_loss.py:235: UserWarning: Default ``avg_non_ignore`` is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
      warnings.warn(
    load checkpoint from http path: https://dl.cv.ethz.ch/bdd100k/sem_seg/models/deeplabv3+_r50-d8_512x1024_40k_sem_seg_bdd100k.pth
    'CLASSES' not found in meta, use dataset.CLASSES instead
    'PALETTE' not found in meta, use dataset.PALETTE instead
    [                                                  ] 0/1000, elapsed: 0s, ETA:ERROR: Unexpected segmentation fault encountered in worker.
    ERROR: Unexpected segmentation fault encountered in worker.
    ERROR: Unexpected segmentation fault encountered in worker.
    Traceback (most recent call last):
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1011, in _try_get_data
        data = self._data_queue.get(timeout=timeout)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/queue.py", line 179, in get
        self.not_empty.wait(remaining)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/threading.py", line 306, in wait
        gotit = waiter.acquire(True, timeout)
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
        _error_if_any_worker_fails()
    RuntimeError: DataLoader worker (pid 15796) is killed by signal: Segmentation fault. 
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "./test.py", line 174, in <module>
        main()
      File "./test.py", line 150, in main
        outputs = single_gpu_test(
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/mmseg/apis/test.py", line 89, in single_gpu_test
        for batch_indices, data in zip(loader_indices, data_loader):
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
        data = self._next_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1207, in _next_data
        idx, data = self._get_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1163, in _get_data
        success, data = self._try_get_data()
      File "/home/lunet/codsn/.conda/envs/bdd100k-mmseg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1024, in _try_get_data
        raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
    RuntimeError: DataLoader worker (pid(s) 15796) exited unexpectedly
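
    A common workaround for worker segfaults like this, offered as an assumption rather than a confirmed fix, is to disable DataLoader multiprocessing in the config (and to check shared-memory limits, e.g. /dev/shm, if workers are needed):

    # Hedged sketch: in the MMSegmentation config, keep data loading in the
    # main process; num_workers=0 is always valid for a PyTorch DataLoader.
    data = dict(
        samples_per_gpu=1,
        workers_per_gpu=0,  # 0 disables worker processes
        # ...dataset definitions unchanged...
    )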
    
    opened by digvijayad 2
  • Red traffic lights

    Hello, thanks for your marvelous contribution. The category of red traffic lights is not available in BDD100K; have you re-labeled it on the BDD dataset?

    opened by liluxing153 1
  • Tagging: fine-tuning possibilities

    Hi, thanks for your marvelous contribution; I am very impressed. I want to apply the pre-trained tagging models (road type and weather) to my own dataset. Do you have any codebase for fine-tuning?
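
    In the absence of a dedicated fine-tuning script, here is a plain-PyTorch sketch of what fine-tuning a tagging checkpoint could look like; the checkpoint path, its key layout, and the class counts are assumptions to verify against the actual .pth file:

    # Hedged sketch: fine-tune a classification (tagging) checkpoint.
    import torch
    import torchvision

    model = torchvision.models.resnet50(num_classes=6)  # assumed 6 weather classes
    state = torch.load("tagging_resnet50_bdd100k.pth", map_location="cpu")  # placeholder
    model.load_state_dict(state.get("state_dict", state), strict=False)  # keys may differ

    model.fc = torch.nn.Linear(model.fc.in_features, 4)  # your own label set
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    # ...standard training loop over your own DataLoader follows...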

    opened by anran1231 1
  • Semantic segmentation: common-settings MMSegmentation link not working

    The link https://github.com/open-mmlab/mmsegmentation/blob/master/docs/model_zoo.md#common-settings is not working.

    I would like to know the settings under which the segmentation models were trained, so that I can replicate the results. Thank you.

    opened by 100daggers 1
  • Issue in converting the instance segmentation mask encoding from bdd100k to coco

    Hello,

    I am trying to convert the BDD100K instance segmentation labels using this command: python3 -m bdd100k.label.to_coco -m ins_seg --only-mask -i ./bdd100k/labels/ins_seg/bitmasks/val -o ./ins_seg_val_cocofmt_v2.json

    I also tried this: python3 -m bdd100k.label.to_coco -m ins_seg -i ./bdd100k/labels/ins_seg/polygons/ins_seg_val.json -o ./ins_seg_val_cocofmt_v3.json -mb ./bdd100k/labels/ins_seg/bitmasks/val

    The conversion is successful in both cases, and in the resulting annotation the segmentation field holds a string encoding of the masks (screenshot omitted); that's not how COCO annotations usually look, and I am unsure whether that's expected or not.

    Further, assuming it's correct, I tried to load the annotations using the loader from DETR: https://github.com/facebookresearch/detr/blob/091a817eca74b8b97e35e4531c1c39f89fbe38eb/datasets/coco.py#L36

    The line mentioned above is supposed to do the conversion, but pycocotools raises an error saying it does not expect a string in the mask (screenshot omitted).

    So I am unsure where the problem is. If the conversion to COCO is correct, shouldn't the loader work? Note: I converted the detections and they worked fine.

    Thank you for any help you can provide.
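
    One possible explanation, offered as an assumption rather than a confirmed diagnosis: pycocotools represents RLE counts as bytes, while JSON can only store strings, so RLE segmentations loaded from a converted JSON may need re-encoding before decoding:

    # Hedged sketch: decode an RLE segmentation whose "counts" came from JSON.
    from pycocotools import mask as mask_utils

    def decode_segmentation(segm):
        if isinstance(segm, dict) and isinstance(segm.get("counts"), str):
            segm = dict(segm, counts=segm["counts"].encode("utf-8"))
        return mask_utils.decode(segm)  # H x W uint8 binary mask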

    opened by sfarkya04 1
  • How to train on my own GPU?

    Hello! Thank you for your work. I wonder whether I can train these models on my own GPU; are there any instructions for that? Thank you!
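
    Since the configs in this repo follow the MMDetection/MMSegmentation format, training on your own GPU should be possible through the upstream tooling; a hedged example, assuming a compatible mmsegmentation checkout and prepared BDD100K data (the work-dir path is a placeholder):

    python tools/train.py configs/drivable/deeplabv3plus_r50-d8_512x1024_40k_drivable_bdd100k.py --work-dir ./work_dirs/drivable_deeplabv3plus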

    opened by StefanYz 1
Releases (v1.1.0)
  • v1.1.0 (Dec 2, 2021)

    BDD100K Models 1.1.0 Release

    • Highlights
    • New Task: Pose Estimation
    • New Models

    Highlights

    In this release, we provide over 20 pre-trained models for the new pose estimation task in BDD100K, along with evaluation and visualization tools. We also provide over 30 new models for object detection, instance segmentation, semantic segmentation, and drivable area.

    New Task: Pose Estimation

    With the release of 2D human pose estimation data in BDD100K, we provide pre-trained models in this repo.

    • Pose estimation
      • ResNet, MobileNetV2, HRNet, and more.

    New Models

    We provide additional models for the previous tasks:

    • Object detection
      • Libra R-CNN, HRNet.
    • Instance segmentation
      • GCNet, HRNet.
    • Semantic segmentation / drivable area
      • NLNet, PointRend.
  • v1.0.0 (Oct 29, 2021)

    BDD100K Models 1.0.0 Release

    • Highlights
    • Tasks
    • Models
    • Contribution

    Highlights

    The model zoo for BDD100K, the largest driving video dataset, is open for business! It contains more than 100 pre-trained models for 7 tasks. Each model comes with results and visualizations on the val and test sets. We also provide documentation for community contribution so that everyone can include their models in this repo.

    Tasks

    We currently support 7 tasks:

    • Image Tagging
    • Object Detection
    • Instance Segmentation
    • Semantic Segmentation
    • Drivable Area
    • Multiple Object Tracking (MOT)
    • Multiple Object Tracking and Segmentation (MOTS)

    Each task includes:

    • Official evaluation results, model weights, predictions, and visualizations.
    • Detailed instructions for evaluation and visualization.

    Models

    We include popular network models for each task:

    • Image tagging
      • VGG, ResNet, and DLA.
    • Object detection
      • Cascade R-CNN, Sparse R-CNN, Deformable ConvNets v2, and more.
    • Instance segmentation
      • Mask R-CNN, Cascade Mask R-CNN, HRNet, and more.
    • Semantic segmentation / drivable area
      • Deeplabv3+, CCNet, DNLNet, and more.
    • Multiple object tracking (MOT)
      • QDTrack.
    • Multiple object tracking and segmentation (MOTS)
      • PCAN.

    Contribution

    We encourage BDD100K dataset users to contribute their models to this repo so that all the information can be used for further result reproduction and analysis. Detailed instructions and the model submission template are on the contribution page.

Owner
ETH VIS Group
Visual Intelligence and Systems Group at ETH Zürich