Colar: Effective and Efficient Online Action Detection by Consulting Exemplars, CVPR 2022.

Overview


This repository is the official implementation of Colar. In this work, we study online action detection and develop an effective and efficient exemplar-consultation mechanism. The paper is available on arXiv.

Figure: the architecture of the proposed Colar framework.
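The exemplar-consultation idea can be sketched as follows. This is a minimal illustration, not the official implementation: the feature dimensions, the cosine-similarity measure, and the mean aggregation over exemplars are all assumptions made for the example.

```python
import numpy as np

def consult_exemplars(frame_feat, exemplars):
    """Score each action class by comparing the current frame feature
    against that class's exemplar features (mean cosine similarity).

    frame_feat: (D,) array, feature of the current frame
    exemplars:  dict mapping class name -> (K, D) array of K exemplars
    """
    f = frame_feat / (np.linalg.norm(frame_feat) + 1e-8)
    scores = {}
    for cls, ex in exemplars.items():
        ex_n = ex / (np.linalg.norm(ex, axis=1, keepdims=True) + 1e-8)
        scores[cls] = float((ex_n @ f).mean())
    return scores

# Toy usage: a frame matching class "A"'s exemplars scores higher for "A".
ex_a = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]])
ex_b = np.array([[0.0, 1.0, 0.0], [0.0, 0.9, 0.1]])
scores = consult_exemplars(np.array([1.0, 0.0, 0.0]), {"A": ex_a, "B": ex_b})
print(scores)
```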

Requirements

To install requirements:

conda env create -n env_name -f environment.yaml

Before running the code, please activate this conda environment.

Data Preparation

a. Download pre-extracted features from baiduyun (code:cola)

Please ensure the data is organized as follows:

├── data
│   └── thumos14
│       ├── Exemplar_Kinetics
│       ├── thumos_all_feature_test_Kinetics.pickle
│       ├── thumos_all_feature_val_Kinetics.pickle
│       ├── thumos_test_anno.pickle
│       ├── thumos_val_anno.pickle
│       └── data_info.json
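A quick way to sanity-check the downloaded pickles is to load them and print what they contain. The layout below is a hypothetical assumption (video names mapped to per-frame feature arrays); the real keys and feature dimensions in the released files may differ, but the same inspection loop applies.

```python
import pickle
import numpy as np

# Hypothetical layout: a feature pickle is assumed to map video names
# to per-frame feature arrays of shape (T, D). We build a toy file in
# that layout and read it back the same way you would read the real one.
toy = {"video_validation_0000051": np.zeros((120, 2048), dtype=np.float32)}
with open("toy_features.pickle", "wb") as fh:
    pickle.dump(toy, fh)

with open("toy_features.pickle", "rb") as fh:
    data = pickle.load(fh)
for name, feat in data.items():
    print(name, feat.shape)  # video name and (num_frames, feature_dim)
```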

Train

a. Config

Adjust the configurations in ./misc/init.py according to your machine.

b. Train

python main.py

Inference

a. You can download the pre-trained model from baiduyun (code: cola) and place the weight file in the checkpoint folder.

  • The pre-trained model achieves 66.9% mAP.

b. Test

python inference.py
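The reported mAP is the mean of per-class average precisions computed from frame-level confidence scores. A minimal sketch of that metric (not the official evaluation code; the exact interpolation and handling of ties may differ):

```python
import numpy as np

def average_precision(scores, labels):
    """Per-class AP from frame-level confidence scores and binary labels:
    the mean of the precision values measured at each positive frame."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    return float(precision[labels == 1].mean())

# mAP is the unweighted mean of the per-class APs (toy numbers here).
ap_pitch = average_precision([0.9, 0.8, 0.1], [1, 0, 1])
ap_jump = average_precision([0.7, 0.6], [1, 1])
print((ap_pitch + ap_jump) / 2)
```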

Citation

@inproceedings{yang2022colar,
  title={Colar: Effective and Efficient Online Action Detection by Consulting Exemplars},
  author={Yang, Le and Han, Junwei and Zhang, Dingwen},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

Related Projects

  • BackTAL: Background-Click Supervision for Temporal Action Localization.

Contact

For any discussions, please contact [email protected].

Comments
  • About exemplars of the static branch

    Thank you for your excellent work! I would like to ask how we can obtain the exemplars for all categories; this does not seem to be implemented in the code. Thank you!

    opened by ohheysherry66 4
  • HDD dataset visual features

    Hi @VividLe,

    I have the Honda driving dataset, which contains images (3 fps), and I would like to use only its RGB (visual) features. How would that be possible? I couldn't find the optical-flow features and am not sure how to obtain them. My task is to reconstruct one modality from the other using a shared multimodal representation (sensors and video frames).

    Thanks

    opened by pgupta119 2
  • the corresponding paper

    Hi, I am looking for the code of the paper "Structured Attention Composition for Temporal Action Localization", and the URL provided on arXiv links to this project. However, there seems to be no relation between this code and that paper.

    opened by WalterWangRevo 1
  • Some Details of The Paper

    Hi, thank you for sharing the code. While reading your paper, I ran into some questions. In Table 4, you report that extracting RGB features for a one-minute video takes 2.3 seconds; how did you measure this time? Is it the feature-extraction time using Res-200?

    opened by Echo0125 1
  • The question about model precision

    I trained your model on my device with the THUMOS dataset, without any modification, but the mAP is only 65.39. Since there is a gap between this result and the validation result (66.91) of the model you posted, I wonder if you used any tricks. If not, can the gap be attributed solely to hardware differences?

    [Epoch-7] [IDU-kinetics] mAP: 0.6539 BaseballPitch: 0.4485 BasketballDunk: 0.8277 Billiards: 0.2734 CleanAndJerk: 0.7331 CliffDiving: 0.8972 CricketBowling: 0.4617 CricketShot: 0.3120 Diving: 0.8743 FrisbeeCatch: 0.4104 GolfSwing: 0.7853 HammerThrow: 0.8585 HighJump: 0.7666 JavelinThrow: 0.7920 LongJump: 0.8093 PoleVault: 0.9041 Shotput: 0.6848 SoccerPenalty: 0.4923 TennisSwing: 0.6288 ThrowDiscus: 0.6688 VolleyballSpiking: 0.4487

    opened by wss-xjtu 5