Generic Event Boundary Detection: A Benchmark for Event Segmentation

We release our data annotations & baseline code for detecting generic event boundaries in video.

Links: [arXiv] [LOVEU Challenge]

Contributors: Mike Zheng Shou, Stan Lei, Deepti Ghadiyaram, Weiyao Wang, Matt Feiszli.

Overview

This repo has the following structure:

./
│   LICENSE
│   README.md
│   INSTRUCTIONS.md
│
├───BdyDet
│   ├───k400
│   │       detect_event_boundary.py
│   │       run_multiprocess_detect_event_boundary.py
│   │
│   └───TAPOS
│           detect_event_boundary.py
│           run_multiprocess_detect_event_boundary.py
│
├───Challenge_eval_Code
│       eval.py
│       README.md
│
├───data
│   ├───export
│   │       prepare_hmdb_release.ipynb
│   │       prepare_k400_release.ipynb
│   │
│   ├───exp_k400
│   │   │   classInd.txt
│   │   │   val_set.csv
│   │   │
│   │   ├───detect_seg
│   │   └───pred_err
│   └───exp_TAPOS
│       │   train_set.csv
│       │   val_set.csv
│       │
│       ├───detect_seg
│       └───pred_err
├───eval
│       eval_GEBD_k400.ipynb
│       eval_GEBD_TAPOS.ipynb
│
├───PA_DPC
│   │   LICENSE
│   │   README.md
│   │
│   ├───asset
│   │       arch.png
│   │
│   ├───backbone
│   │       convrnn.py
│   │       resnet_2d.py
│   │       resnet_2d3d.py
│   │       select_backbone.py
│   │
│   ├───dpc
│   │       dataset_3d_infer_pred_error.py
│   │       main_infer_pred_error.py
│   │       model_3d.py
│   │
│   └───utils
│           augmentation.py
│           utils.py
│
└───PC
    │   PC_test.py
    │   PC_train.py
    │   README.md
    │
    ├───DataAssets
    ├───datasets
    │       augmentation.py
    │       MultiFDataset.py
    │
    ├───modeling
    │       resnetGEBD.py
    │
    ├───run
    │       pc_k400_dist.sh
    │       pc_tapos_dist.sh
    │
    └───utils
            augmentation.py
            checkpoint_saver.py
            clip_grad.py
            cuda.py
            getter.py
            helper.py
            log.py
            metric.py
            model_ema.py
            optim_factory.py
            sampler.py
            scheduler.py

Note that we release the code on GitHub; annotations are available on Google Drive. Run the code yourself to generate the output files. Refer to INSTRUCTIONS.md for preparing the data and generating submission files.

  • data/:

    • export/ folder stores temporal boundary annotations of our Kinetics-GEBD and HMDB-GEBD datasets; download our raw annotations and put them under this folder.
    • exp_k400/ and exp_TAPOS/ store intermediate experimental data and final results.
  • Challenge_eval_Code/: code for evaluation in LOVEU Challenge Track 1.

  • BdyDet/: code for detecting boundary positions based on the predictability sequence.

  • eval/: code for evaluating the performance of boundary detection.

  • PA_DPC/: code for computing the predictability sequence using various methods.

  • PC/: code for the supervised baseline on GEBD.

data/

  • In data/export/:

    • *_raw_annotation.pkl stores the raw annotations; download the raw annotations here.
    • We further filter out videos that receive fewer than 3 annotations and conduct pre-processing, e.g. merging very close boundaries. We use the notebook prepare_*_release.ipynb for this; the output is stored in *_mr345_min_change_duration0.3.pkl.
  • Some fields in *_raw_annotation.pkl (a loading sketch follows this list):

    • fps: frames per second.
    • video_duration: video duration in seconds.
    • f1_consis: a list of consistency scores; each score measures one annotator's agreement with the other annotators.
    • substages_timestamps: a list of annotations, one per annotator; each annotator's annotation is in turn a list of boundaries, with time in seconds. Each 'label' has the format A: B.
      • If A is “ShotChangeGradualRange”, B could be “Change due to Pan”, “Change due to Zoom”, “Change due to Fade/Dissolve/Gradual”, “Multiple”, or “N/A”.
      • If A is “ShotChangeImmediateTimestamp”, B could be “Change due to Cut”, “Change from/to slow motion”, “Change from/to fast motion”, or “N/A”.
      • If A is “EventChange (Timestamp/Range)”, B could be “Change of Color”, “Change of Actor/Subject”, “Change of Object Being Interacted”, “Change of Action”, “Multiple”, or “N/A”.
  • Some fields in *_mr345_min_change_duration0.3.pkl:

    • substages_myframeidx: the term substage follows the convention in TAPOS to refer to a boundary position; here we store each boundary's frame index, which starts at 0.
  • In data/exp_k400/ and data/exp_TAPOS/:

    • pred_err/ stores the output of PA_DPC/, i.e. the predictability sequence;
    • detect_seg/ stores the output of BdyDet/, i.e. the detected boundary positions.
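
To make the annotation layout concrete, here is a minimal loading sketch. It assumes each pkl is a plain Python dict keyed by video id, with the fields described above; the file name shown is the Kinetics variant.

    import pickle

    # Minimal loading sketch. Assumption: each annotation pkl is a plain dict
    # keyed by video id, with the fields described above.
    with open('k400_raw_annotation.pkl', 'rb') as f:
        raw_anno = pickle.load(f)

    vid, entry = next(iter(raw_anno.items()))
    print(vid, entry['fps'], entry['video_duration'])
    print(entry['f1_consis'])              # one consistency score per annotator
    for boundaries in entry['substages_timestamps']:
        print(boundaries)                  # one list of boundaries per annotator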

Challenge_eval_Code/

  • We use eval.py for evaluation in our competition.

  • Although one could use frame_index and the total number of frames to measure Rel.Dis., as we implemented in eval/, for the challenge you should represent the detected boundaries with timestamps (in seconds). For example:

    {
      '6Tz5xfnFl4c': [5.9, 9.4],      # boundaries detected at 5.9s, 9.4s of this video
      'zJki61RMxcg': [0.6, 1.5, 2.7], # boundaries detected at 0.6s, 1.5s, 2.7s of this video
      ...
    }
  • Refer to this file to generate GT files from raw annotations.
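
As a hedged sketch of producing a submission file in the format above (the detections, fps values, and output file name are illustrative assumptions, not fixed by the challenge):

    import pickle

    # Sketch: convert detected boundary frame indices to timestamps (seconds)
    # and save them in the dict format shown above.
    detections = {'6Tz5xfnFl4c': [177, 282]}   # detected boundary frame indices
    fps = {'6Tz5xfnFl4c': 30.0}                # frames per second, per video

    submission = {vid: [idx / fps[vid] for idx in idxs]   # frame index -> seconds
                  for vid, idxs in detections.items()}

    with open('submission.pkl', 'wb') as f:
        pickle.dump(submission, f)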

BdyDet/

Change to the directory ./BdyDet/k400 or ./BdyDet/TAPOS and run the following command, which launches multiple processes of detect_event_boundary.py.

(Note: set the number of processes according to your server in detect_event_boundary.py.)

python run_multiprocess_detect_event_boundary.py
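
For intuition only, boundary positions can be read off a predictability (prediction-error) sequence as its local maxima. The sketch below illustrates the idea with scipy; it is not the exact logic of detect_event_boundary.py, and the input file name is a placeholder.

    import numpy as np
    from scipy.signal import find_peaks

    # Illustrative sketch only: treat local maxima of the prediction-error
    # sequence as candidate boundaries. The actual detector may smooth,
    # normalize, and threshold differently.
    pred_err = np.load('pred_err_example.npy')   # hypothetical (num_windows,) array
    peaks, _ = find_peaks(pred_err, distance=5)  # require a minimal gap between peaks
    print('candidate boundary positions:', peaks)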

PA_DPC/

Our implementation is based on the [DPC] framework. Please refer to their README or website for installation and usage. Below, we only explain how to run our scripts and what our modifications are.

  • Modifications at a glance

    • main_infer_pred_error.py runs in inference mode only, to assess predictability over time.
    • dataset_3d_infer_pred_error.py contains loaders for two datasets, i.e. videos from Kinetics and truncated instances from TAPOS.
    • model_3d.py adds several classes for feature extraction, e.g. the ResNet feature before the pooling layer, to enable computing feature differences directly from an ImageNet-pretrained model, in contrast to the predictive model in DPC.
  • How to run?

    Change to the directory ./PA_DPC/dpc/.

    • Kinetics-GEBD:
    python main_infer_pred_error.py --gpu 0 --model resnet50-beforepool --num_seq 2 --pred_step 1 --seq_len 5 --dataset k400 --batch_size 160 --img_dim 224 --mode val --ds 3 --pred_task featdiff_rgb
    • TAPOS:
    python main_infer_pred_error.py --gpu 0 --model resnet50-beforepool --num_seq 2 --pred_step 1 --seq_len 5 --dataset tapos_instances --batch_size 160 --img_dim 224 --mode val --ds 3 --pred_task featdiff_rgb
    • Notes:

      • ds=3 means we downsample the video frames by 3 to reduce computation.

      • seq_len=5 means 5 frames for each step (a concept used in DPC).

      • num_seq=2 and pred_step=1, so that at a given time position t, we use 1 step before and 1 step after to assess the predictability at t; thus the predictability at time t is the feature difference between the average feature of the 5 sampled frames before t and the average feature of the 5 sampled frames after t (sketched after this list).

      • We slide this model over time to obtain predictability at different temporal positions. More details can be found in dataset_3d_infer_pred_error.py. Note that window_lists.pkl stores the indices of the frames to be sampled for every window; since it takes a long time to compute (this could be optimized in the future), we store its value once and simply load the pre-computed file in later runs on the same dataset.
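
The featdiff variant described in the notes above can be summarized in a few lines. This is a hedged sketch of the idea, assuming per-frame features are already extracted; it is not the repo's exact code path.

    import numpy as np

    def predictability(feats, t, seq_len=5):
        """Sketch of featdiff predictability at time t: the distance between
        the average feature of seq_len frames before t and seq_len frames
        after t. feats: (num_frames, feat_dim) array of per-frame features;
        assumes seq_len <= t <= num_frames - seq_len."""
        before = feats[t - seq_len:t].mean(axis=0)
        after = feats[t:t + seq_len].mean(axis=0)
        return np.linalg.norm(before - after)   # larger difference => less predictable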

PC/

PC is a supervised baseline for the GEBD task. In the PC framework, for each frame f in a video, we take T frames preceding f and T frames succeeding f as inputs, and then build a binary classifier to predict whether f is a boundary or background; a sketch follows.
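
A rough sketch of this setup, standing in for the real model (PC/modeling/resnetGEBD.py); it assumes pre-computed per-frame features rather than raw frames, so the shapes are illustrative only.

    import torch
    import torch.nn as nn

    # Hedged sketch of the PC idea: classify frame f from the features of its
    # T preceding and T succeeding frames as boundary vs. background.
    T, batch, feat_dim = 5, 8, 128
    context = torch.randn(batch, 2 * T, feat_dim)     # 2T context frames per sample
    clf = nn.Sequential(nn.Flatten(), nn.Linear(2 * T * feat_dim, 2))
    logits = clf(context)                             # (batch, 2) boundary logits
    labels = torch.randint(0, 2, (batch,))            # dummy GT labels
    loss = nn.CrossEntropyLoss()(logits, labels)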

  • Get Started

    • Check PC/datasets/MultiFDataset.py to generate GT files for training. Note that you should prepare k400_mr345_*SPLIT*_min_change_duration0.3.pkl for Kinetics-GEBD, and TAPOS_*SPLIT*_anno.pkl for TAPOS (organized in the same way as k400_mr345_*SPLIT*_min_change_duration0.3.pkl for convenience), before running our code.
    • Change PATH_TO in our code to your data/frames path as needed.
  • How to run?

    Change directory to ./PC.

    • Train on Kinetics-GEBD:

      CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 PC_train.py \
      --dataset kinetics_multiframes \
      --train-split train \
      --val-split val \
      --num-classes 2 \
      --batch-size 32 \
      --n-sample-classes 2 \
      --n-samples 16 \
      --lr 0.01 \
      --warmup-epochs 0 \
      --epochs 30 \
      --decay-epochs 10 \
      --model multiframes_resnet \
      --pin-memory \
      --balance-batch \
      --sync-bn \
      --amp \
      --native-amp \
      --eval-metric loss \
      --log-interval 50
    • Train on TAPOS:

      CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 PC_train.py \
      --dataset tapos_multiframes \
      --train-split train \
      --val-split val \
      --num-classes 2 \
      --batch-size 32 \
      --n-sample-classes 2 \
      --n-samples 16 \
      --lr 0.01 \
      --warmup-epochs 0 \
      --epochs 30 \
      --decay-epochs 10 \
      --model multiframes_resnet \
      --pin-memory \
      --balance-batch \
      --sync-bn \
      --amp \
      --native-amp \
      --eval-metric loss \
      --log-interval 50 
    • Generate score sequences on the Kinetics-GEBD Validation Set (a post-processing sketch follows this list):

      CUDA_VISIBLE_DEVICES=0 python PC_test.py \
      --dataset kinetics_multiframes \
      --val-split val \
      --resume path_to/checkpoint
    • Generate score sequences on TAPOS:

      CUDA_VISIBLE_DEVICES=0 python PC_test.py \
      --dataset tapos_multiframes \
      --val-split val \
      --resume path_to/checkpoint
  • Models
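
After PC_test.py produces a per-frame score sequence, detected boundaries can be obtained by thresholding and peak-picking. The sketch below is one plausible post-processing; the threshold, fps, and input file name are assumptions, not values fixed by the paper.

    import numpy as np

    # Hedged post-processing sketch: turn a per-frame boundary-score sequence
    # into timestamps (seconds).
    scores = np.load('scores_example.npy')       # hypothetical (num_frames,) array
    fps, thresh = 30.0, 0.5

    is_peak = ((scores > thresh)
               & (scores >= np.roll(scores, 1))
               & (scores >= np.roll(scores, -1)))  # local maxima above the threshold
    # Note: np.roll wraps at the array edges, which is acceptable for a sketch.
    boundaries = np.flatnonzero(is_peak) / fps     # frame index -> seconds
    print(boundaries)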

Misc

  • Download the datasets from Kinetics-400 and TAPOS. (Note that some videos can no longer be downloaded from YouTube; you can go ahead with those that are available.)

  • Note that for TAPOS, you need to cut out each action instance yourself first; you can then use our code to process each instance's video separately.

  • Extract frames of videos in the dataset.

  • To reproduce PA_DPC, generate your own data/exp_*/val_set.csv, which should contain the path to each video's frame folder and the number of frames in that video; a generation sketch follows this list.
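
A hedged sketch of generating such a csv; the frames root and the exact column layout are assumptions, so double-check them against the PA_DPC loaders.

    import csv
    import os

    # Sketch: one row per video, with the path to its frame folder and the
    # number of extracted frames.
    frames_root = '/PATH_TO/k400_frames'   # hypothetical root of extracted frames
    with open('val_set.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        for name in sorted(os.listdir(frames_root)):
            folder = os.path.join(frames_root, name)
            if os.path.isdir(folder):
                writer.writerow([folder, len(os.listdir(folder))])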

Q&A

For any questions, feel free to create an issue or email Mike ([email protected]) and Stan ([email protected]). Thank you for helping us improve our data & code.

Comments
  • [Is it a bug?] Bug in evaluation notebook may undervalue PA

    Hello. We found the following critical bug in https://github.com/StanLei52/GEBD/blob/main/eval/eval_GEBD_k400.ipynb:

    bdy_idx_list_smt = bdy_idx_save['bdy_idx_list_smt']*downsample # already offset, index starts from 0
    

    Since bdy_idx_save['bdy_idx_list_smt'] is a list, the * operation just repeats its elements instead of multiplying them. Note that this dictionary may be produced by the following: https://github.com/StanLei52/GEBD/blob/fde03f8f3e8db1e487a7f37b3f1be600bbe72a4f/BdyDet/k400/detect_event_boundary.py#L208

    After we changed the notebook code to np.array(...) * downsample, the F1 score on Kinetics-GEBD reached 0.398 (vs. 0.242 in your original paper)!

    Just in case, could you fix this bug and recheck your evaluation score?

    Thanks a lot.

    opened by KosukeMizufune
  • How to process the predicted scores of PC baseline

    Hi, thanks for your code and detailed instructions! I was trying to evaluate the PC baseline on the GEBD val set, and I wonder how to process the output scores to get the detected boundaries as timestamps (as mentioned in the Challenge eval code).

    Another detailed question is about the score threshold used to select which frames are boundaries. I might have missed this part in the paper, because I didn't find the threshold value there. I've tried different thresholds and they give me different avg F1: for example, a threshold of 0.5 gives 80.04, 0.53 gives 78.48, and 0.45 gives 81.79. Thanks!!

    opened by SJTUwxz
  • [Bug] Is it a bug in data/export/prepare_k400_release.ipynb?

    Hi, when I use data/export/prepare_k400_release.ipynb to generate GT files, I noticed that the code doesn't sort the ranges before merging overlapping ones. Your code is as follows:

    # if two shot range overlap, merge
    i = 0
    while i < len(change_shot_range_start)-1:
        while change_shot_range_end[i]>=change_shot_range_start[i+1]:
            change_shot_range_start.remove(change_shot_range_start[i+1])
            if change_shot_range_end[i]<=change_shot_range_end[i+1]:
                change_shot_range_end.remove(change_shot_range_end[i])
            else:
                change_shot_range_end.remove(change_shot_range_end[i+1])
            if i==len(change_shot_range_start)-1:
                break
        i+=1 
    

    When the ranges are not in order, some of them may be discarded, as in the following example. The first annotation of video '1-XcXraVU84_000004_000014.mp4' is:

    [{'start_time': 0.02397, 'end_time': 1.11045, 'label': 'ShotChangeGradualRange: Change due to Fade/Dissolve/Gradual'},
     {'start_time': 1.12643, 'end_time': 2.33204, 'label': 'ShotChangeGradualRange: Change due to Pan'},
     {'start_time': 2.74816, 'end_time': 4.0267, 'label': 'ShotChangeGradualRange: Change due to Fade/Dissolve/Gradual'},
     {'start_time': 7.5175, 'end_time': 9.28918, 'label': 'ShotChangeGradualRange: Change due to Pan'},
     {'start_time': 5.37253, 'end_time': 6.4219, 'label': 'ShotChangeGradualRange: Change due to Pan'},
     {'start_time': 6.52839, 'end_time': 7.42305, 'label': 'ShotChangeGradualRange: Change due to Pan'}]

    The GT file for this video is [0.56721, 1.729235, 3.38743, 8.40334]; the last two ranges were discarded by mistake.

    opened by ScorpioRHe
  • about your data and code

    Hello, is there any difference between your code and the BMN series? As I understand it, this task uses the same label format as ActivityNet-1.3. In addition, since your dataset is 32x the size of ActivityNet, could you provide the feature files extracted for GEBD for us to use? Thank you.

    opened by NEUdeep
  • Has the bug of the preprocessing code been fixed and reflected in the challenge?

    There was a bug in the code for preprocessing the annotations (https://github.com/StanLei52/GEBD/issues/3). I wonder if it has been fixed and reflected in the challenge.

    opened by kylemin
  • Annotations for TAPOS Dataset

    Hi @StanLei52 ,

    I am trying to evaluate the code on the TAPOS dataset.

    Could you please provide the raw annotations for TAPOS and the Jupyter notebook to process them, as provided for HMDB and Kinetics-GEBD? https://github.com/StanLei52/GEBD/tree/main/data/export

    opened by rayush7