Model-free Vehicle Tracking and State Estimation in Point Cloud Sequences

1. Introduction

This project accompanies the paper Model-free Vehicle Tracking and State Estimation in Point Cloud Sequences and concerns single object tracking (SOT) in point cloud sequences.

The input to the algorithm is the starting location of an object (in the form of a 3D bounding box) and the point cloud sequence of the scene. Our tracker then (1) provides a bounding box on each subsequent point cloud frame, and (2) recovers dense object shapes by aggregating the point clouds along with tracking. We also explore its usage in other applications, such as simulating LiDAR scans for data augmentation.

Please check our YouTube video for a 1-minute demonstration; a Bilibili version is available via the corresponding link.

This README file describes the most basic usages of our code base. For more details, please refer to:

  • Data Preprocessing: describes how to convert the raw data of the Waymo Open Dataset into more convenient forms that our algorithms can use.
  • Benchmark: explains the selection of tracklets and the construction of our benchmark. The benchmark information is already provided in ./benchmark/, so you may use it directly; the code in this part is only for verification.
  • Design: explains the design of our implementation. Reading it is useful for understanding our tracker and for modifying it for your own purposes.
  • Model Configs: we use config.yaml to specify the behaviour of the tracker. Please refer to this documentation for a detailed explanation.
  • Toolkit: along with this project, we also provide several code snippets for visualizing the tracking results. This file discusses the toolkits we have created.

2. SOT API and Inference

2.1 Installation

Our code has been thoroughly tested with python=3.6. For detailed dependencies, please refer to the Environment section below.

We wrap our code into the library sot_3d, which users may install via the following command. Installing in editable mode keeps the behaviour of sot_3d synchronized with your local modifications.

pip install -e ./

2.2 Tracking API

The main API, tracker_api, is in main.py. By default it takes the model configuration, the beginning bounding box, and a data loader as input, and outputs the tracking result as specified below. Some additional guidelines on this API are:

  • data_loader is an iterator over the data. On each iteration it returns a dictionary whose compulsory keys are pc (the point cloud) and ego (the transformation matrix to the world coordinate). An example is example_loader; a minimal sketch is also given after the signature below.
  • To compare the tracking results with the ground truth during tracking, provide the input argument gts (a list of sot_3d.data_protos.BBox) and import the function compare_to_gt.
  • We also provide a handy visualization tool. Import it via from sot_3d.visualization import Visualizer2D and use frame_result_visualization for a frame-level BEV visualization.
import sot_3d
from sot_3d.data_protos import BBox
from sot_3d.visualization import Visualizer2D


def tracker_api(configs, id, start_bbox, start_frame, data_loader, track_len, gts=None, visualize=False):
    """
    Args:
        configs: model configuration read from config.yaml
        id (str): each tracklet has an id
        start_bbox ([x, y, z, yaw, l, w, h]): the beginning location of this id
        data_loader (an iterator): iterator returning data of each incoming frame
        track_len: number of frames in the tracklet
    Return:
        {
            frame_number0: {'bbox0': previous frame result, 'bbox1': current frame result, 'motion': estimated motion}
            frame_number1: ...
            ...
            frame_numberN: ...
        }
    """

2.3 Evaluation API

The API for evaluation is in evaluation/evaluation.py. tracklet_acc and tracklet_rob compute the accuracy and robustness given the ious of a tracklet, and metrics_from_bboxes handles the case where the inputs are raw bounding boxes. Note that the bounding boxes are in the format of sot_3d.data_protos.BBox.

def tracklet_acc(ious):
    """ the accuracy for a tracklet
    """
    ...

def tracklet_rob(ious, thresholds):
    """ compute the robustness of a tracklet
    """
    ...

def metrics_from_bboxes(pred_bboxes, gts):
    """ Compute the accuracy and robustness of a tracklet
    Args:
        pred_bboxes (list of BBox)
        gts (list of BBox)
    Return:
        accuracy, robustness, length of tracklet
    """
    ...

3. Building Up the Benchmark

Our LiDAR-SOT benchmark selects 1172 tracklets from the validation set of Waymo Open Dataset. These tracklets satisfy the requirements of mobility, length, and meaningful initialization.

The information on the selected tracklets is stored in ./benchmark/. Each json file records the ids, segment names, and frame intervals of the selected tracklets; a small snippet for inspecting the list follows. To replicate the construction of this benchmark, please refer to this documentation.
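
The snippet below is only a convenience sketch that loads the default tracklet list; the exact schema of each entry (field names for the id, segment name, and frame interval) should be checked against the json files themselves.

import json

with open('./benchmark/bench_list.json', 'r') as f:
    bench_list = json.load(f)

print('number of entries in the benchmark list:', len(bench_list))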

4. Steps for Inference/Evaluation on the Benchmark

4.1 Data Preparation

Please follow the guidelines in Data Preprocessing. Suppose your root directory is DATA_ROOT.

4.2 Running on the benchmark

The command for running inference is as follows. Note that there are also some other arguments; please refer to main.py for more details.

python main.py \
    --name NAME \                         # The NAME for your experiment.
    --bench_list your_tracklet_list \     # The path for your benchmark tracklets. By default at ./benchmark/bench_list.json.
    --data_folder DATA_ROOT \             # The location to store your datasets.
    --result_folder result_folder \       # Where you store the results of each tracklet.
    --process process_number              # Use multiple processes to split the dataset and accelerate inference.

After this, you may access the result for the tracklet ID as demonstrated below. Inside the json files, bbox0 and bbox1 indicate the estimated bounding boxes in frames frame_index - 1 and frame_index.

-- result_folder
   -- NAME
       -- summary
           -- ID.json
               {
                   frame_index0: {'bbox0': ..., 'bbox1': ..., 'motion': ..., 
                                  'gt_bbox0': ..., 'gt_bbox1': ..., 'gt_motion': ..., 
                                  'iou2d': ..., 'iou3d': ...}
                   frame_index1: ...
                   frame_indexN: ...
               }
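
A minimal sketch for reading one summary file is given below; NAME and ID stand for your experiment name and a tracklet id, and the exact serialization of each bounding box should be checked against sot_3d.data_protos.BBox.

import json

# NAME and ID are placeholders for the experiment name and tracklet id
with open('result_folder/NAME/summary/ID.json', 'r') as f:
    tracklet_result = json.load(f)

for frame_index in sorted(tracklet_result.keys(), key=int):
    frame_data = tracklet_result[frame_index]
    bbox1 = frame_data['bbox1']               # estimated box of the current frame
    iou3d = frame_data.get('iou3d', None)     # present when the iou is computed during inference
    print(frame_index, iou3d)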

4.3 Evaluation

For computing the accuracy and robustness of tracklets, use the following code:

cd evaluation
python evaluation.py \
    --name NAME \                                 # the name of the experiment
    --result_folder result_folder \               # result folder
    --data_folder DATA_ROOT \                     # root directory storing the dataset
    --bench_list_folder benchmark_list_folder \   # directory for benchmark tracklet information, by default the ./benchmark/
    --iou                                         # use this flag if the iou has already been computed during inference
    --process process_number                      # use multiprocessing to accelerate the evaluation, especially when computing the iou

For the evaluation of shapes, use the following code:

cd evaluation
python evaluation.py \
    --name NAME \                                 # the name of the experiment
    --result_folder result_folder \               # result folder
    --data_folder DATA_ROOT \                     # root directory storing the dataset
    --bench_list_folder benchmark_list_folder \   # directory for benchmark tracklet information, by default the ./benchmark/
    --process process_number                      # Use multiple processes to split the dataset and accelerate evaluation.

5. Environment

This repository has been tested and run using python=3.6.

For inference on the dataset using our tracker, the following libraries are compulsory:

numpy, scikit-learn, numba, scipy

If evaluation against the ground truth is involved, please also install the shapely library for the iou computation.

shapely (for iou computation)
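
Assuming a plain pip setup (exact version pins are not specified by this repository), the libraries above can be installed in one go:

pip install numpy scikit-learn numba scipy shapely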

The data preprocessing on Waymo requires:

waymo_open_dataset

Our visualization toolkit requires:

matplotlib, open3d, pangolin

6. Citation

If you find our paper or repository useful, please consider citing

@article{pang2021model,
    title={Model-free Vehicle Tracking and State Estimation in Point Cloud Sequences},
    author={Pang, Ziqi and Li, Zhichao and Wang, Naiyan},
    journal={arXiv preprint arXiv:2103.06028},
    year={2021}
}