Alleviating Over-segmentation Errors by Detecting Action Boundaries

Overview

Forked from the official ASRF code. This repo is an implementation that replaces the original MS-TCN backbone with ASFormer.

Dataset

GTEA, 50Salads, Breakfast

You can download the features and ground truth of these datasets from this repository.
Alternatively, you can extract the features yourself using this repository.

Requirements

  • Python >= 3.7
  • pytorch >= 1.0
  • torchvision
  • pandas
  • numpy
  • Pillow
  • PyYAML

You can install the required packages using requirements.txt.

pip install -r requirements.txt

Directory Structure

root ── csv/
      ├─ libs/
      ├─ imgs/
      ├─ result/
      ├─ utils/
      ├─ dataset ─── 50salads/...
      │           ├─ breakfast/...
      │           └─ gtea ─── features/
      │                    ├─ groundTruth/
      │                    ├─ splits/
      │                    └─ mapping.txt
      ├─ .gitignore
      ├─ README.md
      ├─ requirements.txt
      ├─ save_pred.py
      ├─ train.py
      └─ evaluate.py
  • The csv directory contains the csv files necessary for training and testing.
  • The image in imgs comes from PascalVOC and is used as a color palette for visualizing outputs.
  • Experimental results are stored in the result directory.
  • Scripts in utils are not directly used by train.py and evaluate.py, but are needed for converting labels, generating configurations, visualization, and so on.
  • Scripts in libs are needed for training and evaluation (e.g., models, loss functions, the dataset class, and so on).
  • The datasets downloaded from this repository are stored in dataset. You can put them in another directory, but you then need to specify the path in the configuration files.
  • train.py is a script for training networks.
  • evaluate.py is a script for evaluation.
  • save_pred.py is for saving model predictions.

How to use

Please also check scripts/experiment.sh, which runs all of the following experimental steps.

  1. First of all, please download the features and ground truth of these datasets from this repository.

  2. Features and ground-truth labels need to be converted to NumPy arrays. This repository does not provide boundary ground-truth labels, so you have to generate them as well. Please run the following commands, where [DATASET_DIR] is the path to your dataset directory.

    python utils/generate_gt_array.py --dataset_dir [DATASET_DIR]
    python utils/generate_boundary_array.py --dataset_dir [DATASET_DIR]
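    The boundary ground truth is simply the set of frames where the action label changes. A minimal sketch of that conversion, assuming the framewise labels are stored as a 1-D NumPy array (the actual script additionally handles file paths and formats):

    import numpy as np

    def labels_to_boundary(framewise_labels: np.ndarray) -> np.ndarray:
        """Mark frames where the action label changes as boundaries (1), else 0."""
        boundary = np.zeros_like(framewise_labels, dtype=np.float32)
        # a frame is a boundary if its label differs from the previous frame's
        boundary[1:][framewise_labels[1:] != framewise_labels[:-1]] = 1.0
        return boundary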
  3. In this implementation, csv files are used to keep track of the training and test data. You can run the command below to generate them, but we suggest using the csv files provided in the repo.

    python utils/make_csv_files.py --dataset_dir [DATASET_DIR]
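    These csv files are plain bookkeeping. A hedged sketch of how such a file might be assembled, assuming one row per video with paths to its feature and label arrays (the real column names and contents are defined by utils/make_csv_files.py; the "gt_arr" directory name below is a hypothetical placeholder):

    import glob
    import os

    import pandas as pd

    dataset_dir = "./dataset/gtea"
    rows = []
    for feat_path in sorted(glob.glob(os.path.join(dataset_dir, "features/*.npy"))):
        video = os.path.splitext(os.path.basename(feat_path))[0]
        rows.append({
            "feature": feat_path,  # path to the precomputed feature array
            "label": os.path.join(dataset_dir, "gt_arr", video + ".npy"),
        })
    pd.DataFrame(rows).to_csv("train.csv", index=False)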
  4. You can automatically generate experiment configuration files by running the following commands. These commands generate directories and configuration files under root_dir. However, we suggest using the config files provided in the repo.

    python utils/make_config.py --root_dir ./result/50salads --dataset 50salads --split 1 2 3 4 5
    python utils/make_config.py --root_dir ./result/gtea --dataset gtea --split 1 2 3 4
    python utils/make_config.py --root_dir ./result/breakfast --dataset breakfast --split 1 2 3 4

    If you want to add other configurations, please add command-line options like:

    python utils/make_config.py --root_dir ./result/50salads --dataset 50salads --split 1 2 3 4 5 --learning_rate 0.1 0.01 0.001 0.0001

    Please see libs/config.py about configurations.
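    Each generated config.yaml is a flat mapping of these options. A minimal sketch of reading one with PyYAML (the field names follow the command-line options above; the authoritative list is in libs/config.py):

    import yaml

    with open("./result/50salads/dataset-50salads_split-1/config.yaml") as f:
        config = yaml.safe_load(f)

    # e.g. config["dataset"] == "50salads", config["split"] == 1, and
    # config["learning_rate"] if it was passed to make_config.py
    print(config)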

  5. You can train and evaluate models by specifying a configuration file generated in the above process. For example, with the provided config.yaml we train for 80 epochs on the 50salads dataset.

    python train.py ./result/50salads/dataset-50salads_split-1/config.yaml
    python evaluate.py ./result/50salads/dataset-50salads_split-1/config.yaml test
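    For background, ASRF refines the framewise classification branch with the boundary regression branch: detected boundaries split the video into segments, and each segment is relabeled by a vote over the framewise predictions, which suppresses over-segmentation. A minimal sketch of that refinement idea (the actual implementation in libs/ also smooths and thresholds the boundary probabilities differently):

    import numpy as np

    def refine_with_boundaries(framewise_pred: np.ndarray,
                               boundary_prob: np.ndarray,
                               threshold: float = 0.5) -> np.ndarray:
        """Relabel each boundary-delimited segment with its majority class."""
        # frames whose boundary probability exceeds the threshold start a new segment
        boundaries = np.where(boundary_prob > threshold)[0]
        starts = np.concatenate(([0], boundaries))
        ends = np.concatenate((boundaries, [len(framewise_pred)]))
        refined = framewise_pred.copy()
        for s, e in zip(starts, ends):
            if e > s:
                # majority vote over the predicted class indices inside the segment
                refined[s:e] = np.bincount(framewise_pred[s:e]).argmax()
        return refined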
  6. You can also save model predictions as NumPy arrays by running:

    python save_pred.py ./result/50salads/dataset-50salads_split-1/config.yaml test
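    The saved arrays can then be inspected directly with NumPy; for example, a quick framewise-accuracy check against the generated ground-truth arrays (the file names below are assumptions for illustration):

    import numpy as np

    pred = np.load("./result/50salads/dataset-50salads_split-1/predictions/rgb-01-1.npy")
    gt = np.load("./dataset/50salads/gt_arr/rgb-01-1.npy")
    print("framewise accuracy:", float((pred == gt).mean()))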
  7. If you want to visualize the saved model predictions, please run:

    python utils/convert_arr2img.py ./result/50salads/dataset-50salads_split-1/predictions
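    The conversion maps each class index to a color. A minimal sketch of the palette trick mentioned above, using Pillow with a PascalVOC-style palette image (a hedged example with a hypothetical palette file name, not the exact script):

    import numpy as np
    from PIL import Image

    pred = np.load("./result/50salads/dataset-50salads_split-1/predictions/rgb-01-1.npy")
    palette_img = Image.open("./imgs/palette.png").convert("P")

    # tile the prediction into a ribbon so each frame becomes a colored column;
    # attaching a palette to the grayscale image converts it to "P" mode
    ribbon = Image.fromarray(np.tile(pred.astype(np.uint8), (50, 1)))
    ribbon.putpalette(palette_img.getpalette())
    ribbon.save("prediction_ribbon.png")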

License

This repository is released under the MIT License.

Citation

@inproceedings{chinayi_ASformer,
  author    = {Fangqiu Yi and Hongyu Wen and Tingting Jiang},
  title     = {ASFormer: Transformer for Action Segmentation},
  booktitle = {The British Machine Vision Conference (BMVC)},
  year      = {2021},
}

Reference

  • Yuchi Ishikawa, Seito Kasai, Yoshimitsu Aoki, Hirokatsu Kataoka, "Alleviating Over-segmentation Errors by Detecting Action Boundaries", in WACV 2021.
  • Colin Lea et al., "Temporal Convolutional Networks for Action Segmentation and Detection", in CVPR 2017 (paper).
  • Yazan Abu Farha et al., "MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation", in CVPR 2019 (paper, code).