Overview

Pedestrian Trajectory Prediction

Predicting future trajectories of pedestrians in cameras of novel scenarios and views.

This repository contains the code and models for the following ECCV'20 paper:

SimAug: Learning Robust Representations from Simulation for Trajectory Prediction
Junwei Liang, Lu Jiang, Alexander Hauptmann

Our Pipeline

Input: could be from a streaming camera or saved videos.

Detection: we use a pre-trained YOLO (You Only Look Once) model to detect objects. YOLO is a convolutional neural network that performs real-time object detection in a single forward pass, and it is popular for its speed and accuracy.
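For illustration, a single frame can be run through the off-the-shelf YOLOv5 weights from PyTorch Hub; the model variant, confidence threshold, and file name below are assumptions for the sketch, not the exact settings used by the pipeline's detect() function.

import torch

# Minimal sketch: load off-the-shelf YOLOv5 weights from PyTorch Hub.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.4  # confidence threshold (assumed value)

# Run detection on one frame (the file name is a placeholder).
results = model('frame_000001.jpg')

# Each row of results.xyxy[0] is [x1, y1, x2, y2, confidence, class].
boxes = results.xyxy[0]
# Keep only the 'person' class (COCO class 0).
people = boxes[boxes[:, 5] == 0]
print(people)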

Tracking: we use a pre-trained Deep SORT (Simple Online and Realtime Tracking) model to track objects across video frames. Deep SORT computes a deep appearance feature for every bounding box and factors the similarity between these features into the association logic. It is known to pair well with YOLO and is likewise popular for its speed and accuracy.
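The appearance-matching idea at the core of Deep SORT can be sketched as follows. This is a simplified illustration only, not the actual implementation, which additionally uses Kalman-filter motion gating and solves the assignment with the Hungarian algorithm.

import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two appearance embeddings."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def associate(track_features, detection_features, max_dist=0.2):
    """Greedy association of detections to tracks by appearance similarity
    (simplified stand-in for Deep SORT's matching cascade)."""
    matches, used = [], set()
    for t_id, t_feat in track_features.items():
        dists = {d_id: cosine_distance(t_feat, d_feat)
                 for d_id, d_feat in detection_features.items() if d_id not in used}
        if not dists:
            continue
        best = min(dists, key=dists.get)
        if dists[best] < max_dist:
            matches.append((t_id, best))
            used.add(best)
    return matches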

Resizing: at this step we take the frames and resize them to the required shape of 1920 × 1080.
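For example, with OpenCV (the file name is a placeholder):

import cv2

frame = cv2.imread('frame_000001.jpg')
# cv2.resize takes (width, height); the pipeline expects 1920 x 1080.
resized = cv2.resize(frame, (1920, 1080), interpolation=cv2.INTER_LINEAR)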

Semantic Segmentation: we use DeepLab (Deep Labeling), a pre-trained model from Google, for the semantic segmentation task. With the help of a deep neural network, the model performs pixel-wise classification: each pixel of an image or video frame is labeled with a predicted value encoding its semantic class.
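A minimal sketch of running the frozen DeepLab graph downloaded below, using TensorFlow 1.15 as listed under Dependencies; the input/output tensor names ('ImageTensor:0' and 'SemanticPredictions:0') follow the official DeepLab demo and are an assumption here.

import numpy as np
import tensorflow as tf  # TensorFlow 1.15, per the Dependencies section

graph_def = tf.GraphDef()
with tf.gfile.GFile('deeplabv3_xception_ade20k_train/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # placeholder uint8 RGB frame
    # Tensor names follow the official DeepLab demo (assumed here).
    seg_map = sess.run('SemanticPredictions:0',
                       feed_dict={'ImageTensor:0': frame[np.newaxis, ...]})
    # seg_map[0] holds one integer class label per pixel.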

SimAug Model: SimAug (Simulation as Augmentation) is a novel simulation-based data augmentation method for trajectory prediction. It augments the learned representation so that it is robust to variations in semantic scene and camera view, enabling trajectory prediction in unseen cameras.

Predicted Trajectory: The output of the proposed pipeline.

Code

First you need to install the packages listed in the configuration file:

$ pip install -r requirements.txt

Running on video

Then download the DeepLab ADE20k model (used for semantic segmentation):

$ wget http://download.tensorflow.org/models/deeplabv3_xception_ade20k_train_2018_05_29.tar.gz
$ tar -zxvf deeplabv3_xception_ade20k_train_2018_05_29.tar.gz

Then download the SimAug-trained model:

$ wget https://next.cs.cmu.edu/data/packed_models_eccv2020.tgz
$ tar -zxvf packed_models_eccv2020.tgz

Run the pretrained YOLOv5 & Deep SORT

Get the annotations for the sample video many_people.mp4 from YOLO and Deep SORT, plus the frames resized to 1920 × 1080:

dataset_resize, changelst, annotation = detect('many_people.mp4')

Prepare the annotation

  • traj_data : box centre (x, y) for each person (computed as sketched below)
  • person_box_data : box coordinates for all persons
  • other_box_data : boxes of other objects in the same frame as each targeted person

traj_data, person_box_data, other_box_data = prepared_data_sdd(annotation, changelst)
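For intuition, the box centre used as a trajectory point can be computed like this; the [x1, y1, x2, y2] box layout is an assumption about the annotation format.

def box_centre(box):
    """Centre (x, y) of a bounding box given as [x1, y1, x2, y2]
    (the exact annotation layout is an assumption here)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)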

Run the segmentation model

model_path = 'deeplabv3_xception_ade20k_train/frozen_inference_graph.pb'
seg_output = extract_scene_seg(dataset_resize, model_path, every=100)

Prepare all data for the SimAug model

Build an .npz file that contains arrays with the segmentation details, the annotations, and the person ids (8 observed frames, 12 predicted frames):

data = To_npz(8, 12, traj_data, seg_output)
np.savez("prepro_fold1/data_test.npz", **data)
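To sanity-check the packed file, the stored arrays can be listed back with NumPy; the key names depend on what To_npz returns.

import numpy as np

packed = np.load("prepro_fold1/data_test.npz", allow_pickle=True)
print(packed.files)  # names of the stored arrays
for name in packed.files:
    print(name, packed[name].shape)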

Test SimAug-Trained Model

!python Code/test.py prepro_fold1/ packed_models/ best_simaug_model \
--wd 0.001 --runId 0 --obs_len 8 --pred_len 12 --emb_size 32 --enc_hidden_size 256 \
--dec_hidden_size 256 --activation_func tanh --keep_prob 1.0 --num_epochs 30 \
--batch_size 12 --init_lr 0.3 --use_gnn --learning_rate_decay 0.95 --num_epoch_per_decay 5.0 \
--grid_loss_weight 1.0 --grid_reg_loss_weight 0.5 --save_period 3000 \
--scene_h 36 --scene_w 64 --scene_conv_kernel 3 --scene_conv_dim 64 \
--scene_grid_strides 2,4 --use_grids 1,0 --val_grid_num 0 --gpuid 0 --load_best \
--save_output sdd_out.p

To run the whole pipeline, start from here.

Demo

ITI.Moving.vehicle.mp4

Results

We captured a streaming video containing 1628 frames; the processing speed for each stage is:

• YOLO & Deep SORT: 20.7 fps

• DeepLabv3: 4.66 fps

• SimAug: 12.8 fps

Video_Name   Grid_acc   minADE   minFDE
Moving-ITI   0.6098     22.132   39.271

Dependencies

• Python 3.6; TensorFlow 1.15.0; PyTorch 1.7; CUDA 10

Code Contributors

References

@inproceedings{liang2020simaug,
  title={SimAug: Learning Robust Representations from Simulation for Trajectory Prediction},
  author={Liang, Junwei and Jiang, Lu and Hauptmann, Alexander},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  month={August},
  year={2020}
}