Code repository for Semantic Terrain Classification for Off-Road Autonomous Driving

Overview

BEVNet

Datasets

Datasets should be put inside data/. For example, data/semantic_kitti_4class_100x100.
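For example, assuming the preprocessed SemanticKITTI 4-class dataset has been downloaded to the home directory (the download location here is hypothetical):

mkdir -p data
mv ~/semantic_kitti_4class_100x100 data/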

Training

BEVNet-S

Example:

cd experiments
bash train_kitti4-unknown_single.sh kitti4_100/single/include_unknown/default.yaml <tag> arg1 arg2 ...

Logs and model weights will be stored in a subdirectory named after the config file, like this: experiments/kitti4_100/single/include_unknown/default-<tag>-logs/

  • <tag> is useful when you want to use the same config file with different hyperparameters. For example, if you are doing some debugging you can set <tag> to debug.
  • arg1 arg2 ... are command-line arguments supported by train_single.py. For example, you can pass --batch_size=4 --log_interval=100, etc., as in the invocation below.
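Putting these together, a full invocation might look like this (a sketch using debug as <tag> and the example arguments above):

cd experiments
bash train_kitti4-unknown_single.sh kitti4_100/single/include_unknown/default.yaml debug --batch_size=4 --log_interval=100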

BEVNet-R

The command-line format is the same as for BEVNet-S. Example:

cd experiments
bash train_kitti4-unknown_recurrent.sh kitti4_100/recurrent/include_unknown/default.yaml <tag> \
    --n_frame=6 --seq_len=20 --frame_strides 1 10 20 \
    --resume kitti4_100/single/include_unknown/default-logs/model.pth.4 \
    --resume_epoch 0

Logs and model weights will be stored in a subdirectory named after the config file: experiments/kitti4_100/recurrent/include_unknown/default-<tag>-logs/.

Comments
  • Size mismatch when running pre-trained model

    Hi guys,

    Very interesting paper. I was looking to duplicate your results and test your pre-trained model with some of my own point cloud data. I am having a pretty difficult time getting the example running. The main issues were with spconv versions (I assumed you guys used V1, since V2 has changed VoxelGenerator to PointToVoxel), but I solved those.

    When loading the pre-trained weights using the information under 'Running the pretrained models', I am getting the following error.

    Traceback (most recent call last):
      File "test_recurrent.py", line 28, in <module>
        model = BEVNetRecurrent(MODEL_FILE)
      File "/home/franc/Semantic_BEVNet/semantic_bevnet/bevnet/inference.py", line 97, in __init__
        super(BEVNetRecurrent, self).__init__(*args, **kwargs)
      File "/home/franc/Semantic_BEVNet/semantic_bevnet/bevnet/inference.py", line 31, in __init__
        self._load(weights_file, device)
      File "/home/franc/Semantic_BEVNet/semantic_bevnet/bevnet/inference.py", line 53, in _load
        net.load_state_dict(state_dict['nets'][name])
      File "/home/franc/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for SpMiddleNoDownsampleXYMultiStep:
        size mismatch for middle_conv.0.weight: copying a param with shape torch.Size([3, 3, 3, 4, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 4]).
        size mismatch for middle_conv.3.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
        size mismatch for middle_conv.6.weight: copying a param with shape torch.Size([3, 3, 3, 32, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 32]).
        size mismatch for middle_conv.9.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
        size mismatch for middle_conv.12.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
        size mismatch for middle_conv.15.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
        size mismatch for middle_conv.18.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
        size mismatch for middle_conv.21.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
        size mismatch for middle_conv.24.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
        size mismatch for middle_conv.27.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
        size mismatch for middle_conv.30.weight: copying a param with shape torch.Size([3, 1, 1, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 1, 1, 64]).

    It looks like the model is all there but the weights are in the wrong shape. Wondering if there is a quick fix for this? I haven't modified the cloned repo at all, so I am trying to find out why the example won't run as outlined.

    Thanks in advance.

    opened by TankyFranky 0
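    A note on the error above: the reported shapes are consistent with a checkpoint saved in the spconv 1.x weight layout (kD, kH, kW, C_in, C_out) being loaded into a model built against spconv 2.x, which stores weights as (C_out, kD, kH, kW, C_in). A minimal sketch of one possible workaround, assuming that is the cause (the helper name is hypothetical and untested against this repo):

    import torch

    def spconv1_to_spconv2(state_dict):
        # Permute 5-D sparse-conv weights from the spconv 1.x layout
        # (kD, kH, kW, C_in, C_out) to the spconv 2.x layout (C_out, kD, kH, kW, C_in).
        converted = {}
        for name, tensor in state_dict.items():
            if name.endswith('.weight') and tensor.dim() == 5:
                # e.g. [3, 3, 3, 4, 32] -> [32, 3, 3, 3, 4]
                tensor = tensor.permute(4, 0, 1, 2, 3).contiguous()
            converted[name] = tensor
        return converted

    One would apply this to each entry of state_dict['nets'] before calling load_state_dict; other spconv 1.x/2.x API differences (e.g. VoxelGenerator vs. PointToVoxel, as noted above) may still need handling.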
  • inference

    Dear author,

    Hello, could you upload the model weights and give instructions on inference, please? I want to test it on my campus dataset. Thank you very much!

    Best Regards.

    opened by rock19970106 1
Owner
(Brian) JoonHo Lee
5th year MS student at University of Washington. Interested in Robotics, Deep Learning, Kaggle.