f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation

Overview

f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation [Paper] [PyTorch] [MXNet] [Video]

This repository provides code for training and testing state-of-the-art models for interactive segmentation with the official PyTorch implementation of the following paper:

f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation
Konstantin Sofiiuk, Ilia Petrov, Olga Barinova, Anton Konushin
Samsung AI Center Moscow
https://arxiv.org/abs/2001.10331

Please see the video (linked above) explaining how our algorithm works.


We also provide a full MXNet implementation of our algorithm; see the mxnet branch.

Setting up an environment

This framework is built using Python 3.6 and relies on PyTorch 1.4.0+. The following command installs all necessary packages:

pip3 install -r requirements.txt

You can also use our Dockerfile to build a container with a configured environment.

If you want to run training or testing, you must configure the paths to the datasets in config.yml (SBD for training and testing; GrabCut, Berkeley, DAVIS and COCO_MVal for testing only).
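
A quick way to sanity-check the configured paths is a small script like the sketch below. The key names are assumptions for illustration; match them to the keys that actually appear in your config.yml:

from pathlib import Path
import yaml

with open('config.yml') as f:
    cfg = yaml.safe_load(f)

# assumed key names -- adjust to your config.yml
for key in ('SBD_PATH', 'GRABCUT_PATH', 'BERKELEY_PATH', 'DAVIS_PATH', 'COCO_MVAL_PATH'):
    path = cfg.get(key)
    status = 'OK' if path and Path(path).exists() else 'MISSING'
    print(f'{key}: {path} [{status}]')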

Interactive Segmentation Demo


The GUI is based on the Tkinter library (Python's standard Tk bindings). You can try the interactive demo with any of the provided models (see the section below). Our scripts automatically detect the architecture of the loaded model; just specify the path to the corresponding checkpoint.

Examples of the script usage:

# This command runs the interactive demo with the ResNet-34 model from cfg.INTERACTIVE_MODELS_PATH on the GPU with id=0
# --checkpoint can be relative to cfg.INTERACTIVE_MODELS_PATH or an absolute path to the checkpoint
python3 demo.py --checkpoint=resnet34_dh128_sbd --gpu=0

# This command runs the interactive demo with the ResNet-34 model from /home/demo/fBRS/weights/
# If you do not have enough GPU memory, you can reduce --limit-longest-size (default=800)
python3 demo.py --checkpoint=/home/demo/fBRS/weights/resnet34_dh128_sbd --limit-longest-size=400

# You can also try the demo in CPU-only mode
python3 demo.py --checkpoint=resnet34_dh128_sbd --cpu

You can also run the demo from the docker image. To do this, you need to activate the X-host connection and then run the container with some additional flags:

# activate xhost
xhost +

docker run -v "$PWD":/tmp/ \
           -v /tmp/.X11-unix:/tmp/.X11-unix \
           -e DISPLAY=$DISPLAY <id-or-tag-docker-built-image> \
           python3 demo.py --checkpoint resnet34_dh128_sbd --cpu


Controls:

  • press left and right mouse buttons for positive and negative clicks, respectively;
  • scroll wheel to zoom in and out;
  • hold right mouse button and drag to move around an image (you can also use arrows and WASD);
  • press space to finish the current object;
  • when multiple files are open, use the left and right arrow keys to display the previous and next image;
  • use Ctrl+S to save the annotation you're currently editing (saved as "original file name".png).

Interactive segmentation options:

  • ZoomIn (can be turned on/off using the checkbox)
    • Skip clicks - the number of clicks to skip before using ZoomIn.
    • Target size - ZoomIn crop is resized so its longer side matches this value (increase for large objects).
    • Expand ratio - the object's bounding box is expanded by this ratio before cropping.
  • BRS parameters (BRS type can be changed using the dropdown menu)
    • Network clicks - the number of initial clicks passed to the network's input. Subsequent clicks are processed only with BRS (NoBRS ignores this option).
    • L-BFGS-B max iterations - the maximum number of function evaluations per BRS optimization step (increasing it improves accuracy at the cost of longer computation per click); see the sketch after this list.
  • Visualisation parameters
    • Prediction threshold slider adjusts the threshold for binarizing the probability map of the current object.
    • Alpha blending coefficient slider adjusts the intensity of all predicted masks.
    • Visualisation click radius slider adjusts the size of red and green dots depicting clicks.
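
For intuition: after each click, BRS solves a small optimization problem with L-BFGS-B, and the "max iterations" option caps the number of energy-function evaluations per step. The toy SciPy sketch below illustrates that mechanism; it is not the repository's code, and the energy function is a stand-in:

import numpy as np
from scipy.optimize import minimize

def energy(x):
    # Toy stand-in for the BRS energy: a data term plus a regularizer.
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(x ** 2)

# options={'maxfun': ...} plays the role of the GUI's "max iterations" cap.
result = minimize(energy, x0=np.zeros(8), method='L-BFGS-B',
                  options={'maxfun': 20})
print(result.nfev, result.fun)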


Datasets

We train all our models on the SBD dataset and evaluate them on the GrabCut, Berkeley, DAVIS, SBD and COCO_MVal datasets. We additionally provide results for models trained on a combination of the COCO and LVIS datasets.

The Berkeley dataset consists of 100 instances (96 unique images) provided by [K. McGuinness, 2010]. For testing on DAVIS we use the same 345 images as [WD Jang, 2019]; the ground-truth mask for each image is the union of all objects' masks. For testing on SBD we evaluate our algorithm on every instance in the test set separately, following the protocol of [WD Jang, 2019].
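
The union-of-masks ground truth for DAVIS amounts to a logical OR over the per-object masks, e.g.:

import numpy as np

# Three toy per-object masks; the evaluation target is their union.
object_masks = np.zeros((3, 4, 4), dtype=bool)
object_masks[0, 0, 0] = object_masks[1, 1, 1] = object_masks[2, 2, 2] = True
gt_mask = object_masks.any(axis=0)  # single binary ground-truth mask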

To construct the COCO_MVal dataset we sample 800 object instances from the validation set of COCO 2017: 10 unique instances from each of the 80 categories. The only exception is the toaster class, which has just 9 unique instances in the instances_val2017 annotation, so one other class contributes 11 objects to reach 800 masks. We provide this dataset for download so that everyone can reproduce our results.
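
The sampling scheme can be sketched with pycocotools as below. This is an illustration of the procedure described above, not the script we used; the seed and the iscrowd filtering are assumptions:

import random
from pycocotools.coco import COCO

coco = COCO('annotations/instances_val2017.json')
random.seed(0)  # assumption: any fixed seed, for reproducibility

selected_ann_ids = []
for cat_id in coco.getCatIds():
    ann_ids = coco.getAnnIds(catIds=[cat_id], iscrowd=False)
    # 10 instances per category; the toaster class has only 9, so one
    # other class would need 11 to reach 800 masks in total.
    selected_ann_ids += random.sample(ann_ids, min(10, len(ann_ids)))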

| Dataset   | Description                                                                                    | Download Link          |
|-----------|------------------------------------------------------------------------------------------------|------------------------|
| SBD       | 8498 images with 20172 instances for training and 2857 images with 6671 instances for testing   | official site          |
| GrabCut   | 50 images with one object each                                                                   | GrabCut.zip (11 MB)    |
| Berkeley  | 96 images with 100 instances                                                                     | Berkeley.zip (7 MB)    |
| DAVIS     | 345 images with one object each                                                                  | DAVIS.zip (43 MB)      |
| COCO_MVal | 800 images with 800 instances                                                                    | COCO_MVal.zip (127 MB) |

Don't forget to change the paths to the datasets in config.yml after downloading and unpacking.

Testing

Pretrained models

We provide pretrained models with different backbones for interactive segmentation. The evaluation results differ from the ones presented in our paper because we retrained all models on the new codebase presented in this repository. We also greatly accelerated inference of the RGB-BRS algorithm: it now runs 2.5 to 4 times faster on the SBD dataset compared to the timings given in the paper. Nevertheless, the new results are sometimes even better.

Note that all ResNet models were trained using the MXNet branch and then converted to PyTorch (they produce equivalent results). We provide the script that was used to convert the models. The HRNet models were trained using PyTorch.

You can find model weights and test results in the tables below:

| Backbone        | Train Dataset | Link                                      |
|-----------------|---------------|-------------------------------------------|
| ResNet-34       | SBD           | resnet34_dh128_sbd.pth (GitHub, 89 MB)    |
| ResNet-50       | SBD           | resnet50_dh128_sbd.pth (GitHub, 120 MB)   |
| ResNet-101      | SBD           | resnet101_dh256_sbd.pth (GitHub, 223 MB)  |
| HRNetV2-W18+OCR | SBD           | hrnet18_ocr64_sbd.pth (GitHub, 39 MB)     |
| HRNetV2-W32+OCR | SBD           | hrnet32_ocr128_sbd.pth (GitHub, 119 MB)   |
| ResNet-50       | COCO+LVIS     | resnet50_dh128_lvis.pth (GitHub, 120 MB)  |
| HRNetV2-W32+OCR | COCO+LVIS     | hrnet32_ocr128_lvis.pth (GitHub, 119 MB)  |
Each cell below reports NoC@85% / NoC@90%, i.e. the mean number of clicks needed to reach 85% / 90% IoU:

| Model                     | BRS Type | GrabCut     | Berkeley    | DAVIS       | SBD         | COCO_MVal   |
|---------------------------|----------|-------------|-------------|-------------|-------------|-------------|
| ResNet-34 (SBD)           | RGB-BRS  | 2.04 / 2.50 | 2.22 / 4.49 | 5.34 / 7.91 | 4.19 / 6.83 | 4.16 / 5.52 |
| ResNet-34 (SBD)           | f-BRS-B  | 2.06 / 2.48 | 2.40 / 4.17 | 5.34 / 7.73 | 4.47 / 7.28 | 4.31 / 5.79 |
| ResNet-50 (SBD)           | RGB-BRS  | 2.16 / 2.56 | 2.17 / 4.27 | 5.27 / 7.51 | 4.00 / 6.59 | 4.12 / 5.61 |
| ResNet-50 (SBD)           | f-BRS-B  | 2.20 / 2.64 | 2.17 / 4.22 | 5.44 / 7.81 | 4.55 / 7.45 | 4.31 / 6.26 |
| ResNet-101 (SBD)          | RGB-BRS  | 2.10 / 2.46 | 2.34 / 3.91 | 5.19 / 7.23 | 3.78 / 6.28 | 3.98 / 5.45 |
| ResNet-101 (SBD)          | f-BRS-B  | 2.30 / 2.68 | 2.61 / 4.22 | 5.32 / 7.35 | 4.20 / 7.10 | 4.11 / 5.91 |
| HRNet-W18+OCR (SBD)       | RGB-BRS  | 1.68 / 1.94 | 1.99 / 3.81 | 5.49 / 7.98 | 4.19 / 6.84 | 3.62 / 5.04 |
| HRNet-W18+OCR (SBD)       | f-BRS-B  | 1.86 / 2.18 | 2.07 / 3.96 | 5.62 / 8.08 | 4.70 / 7.65 | 3.87 / 5.57 |
| HRNet-W32+OCR (SBD)       | RGB-BRS  | 1.80 / 2.16 | 2.00 / 3.58 | 5.40 / 7.59 | 3.87 / 6.33 | 3.61 / 5.12 |
| HRNet-W32+OCR (SBD)       | f-BRS-B  | 1.78 / 2.16 | 2.13 / 3.69 | 5.54 / 7.62 | 4.31 / 7.08 | 3.82 / 5.44 |
| ResNet-50 (COCO+LVIS)     | RGB-BRS  | 1.54 / 1.76 | 1.56 / 2.70 | 4.93 / 6.22 | 4.04 / 6.85 | 2.41 / 3.47 |
| ResNet-50 (COCO+LVIS)     | f-BRS-B  | 1.52 / 1.74 | 1.56 / 2.61 | 4.94 / 6.36 | 4.29 / 7.20 | 2.34 / 3.43 |
| HRNet-W32+OCR (COCO+LVIS) | RGB-BRS  | 1.54 / 1.60 | 1.63 / 2.59 | 5.06 / 6.34 | 4.18 / 6.96 | 2.38 / 3.55 |
| HRNet-W32+OCR (COCO+LVIS) | f-BRS-B  | 1.54 / 1.69 | 1.64 / 2.44 | 5.17 / 6.50 | 4.37 / 7.26 | 2.35 / 3.44 |
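
A minimal sketch of the NoC@k protocol behind these numbers: clicks are simulated one at a time until the prediction reaches the target IoU, capped at 20 clicks. model_predict and next_click below are hypothetical stand-ins for the repository's components:

import numpy as np

def iou(pred, gt):
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def noc(model_predict, next_click, gt_mask, target_iou=0.90, max_clicks=20):
    clicks = []
    for n in range(1, max_clicks + 1):
        clicks.append(next_click(gt_mask, clicks))   # hypothetical helper
        if iou(model_predict(clicks), gt_mask) >= target_iou:
            return n
    return max_clicks  # the instance is capped at the click limit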

Evaluation

We provide a script to test all the presented models in all possible configurations on the GrabCut, Berkeley, DAVIS, COCO_MVal and SBD datasets. To test a model, download its weights and put them in the ./weights folder (you can change this path in config.yml; see the INTERACTIVE_MODELS_PATH variable). To test any of our models, just specify the path to the corresponding checkpoint; our scripts automatically detect the architecture of the loaded model.

The following command runs the model evaluation (other options are displayed using '-h'):

python3 scripts/evaluate_model.py <brs-mode> --checkpoint=<checkpoint-name>

Examples of the script usage:

# This command evaluates the ResNet-34 model in f-BRS-B mode on all datasets.
python3 scripts/evaluate_model.py f-BRS-B --checkpoint=resnet34_dh128_sbd

# This command evaluates the HRNetV2-W32+OCR model in f-BRS-B mode on all datasets.
python3 scripts/evaluate_model.py f-BRS-B --checkpoint=hrnet32_ocr128_sbd

# This command evaluates the ResNet-50 model in RGB-BRS mode on the GrabCut and Berkeley datasets.
python3 scripts/evaluate_model.py RGB-BRS --checkpoint=resnet50_dh128_sbd --datasets=GrabCut,Berkeley

# This command evaluates the ResNet-101 model in DistMap-BRS mode on the DAVIS dataset.
python3 scripts/evaluate_model.py DistMap-BRS --checkpoint=resnet101_dh256_sbd --datasets=DAVIS

Jupyter notebook

Open In Colab

You can also interactively experiment with our models using the test_any_model.ipynb Jupyter notebook.

Training

We provide scripts for training our models on the SBD dataset. You can start training with the following commands:

# ResNet-34 model
python3 train.py models/sbd/r34_dh128.py --gpus=0,1 --workers=4 --exp-name=first-try

# ResNet-50 model
python3 train.py models/sbd/r50_dh128.py --gpus=0,1 --workers=4 --exp-name=first-try

# ResNet-101 model
python3 train.py models/sbd/r101_dh256.py --gpus=0,1,2,3 --workers=6 --exp-name=first-try

# HRNetV2-W32+OCR model
python3 train.py models/sbd/hrnet32_ocr128.py --gpus=0,1 --workers=4 --exp-name=first-try

For each experiment, a separate folder is created in ./experiments containing TensorBoard logs, text logs, visualizations and model checkpoints. You can specify another path in config.yml (see the EXPS_PATH variable).

Please note that we trained the ResNet-34 and ResNet-50 models on 2 GPUs and the ResNet-101 model on 4 GPUs (we used Nvidia Tesla P40 GPUs for training). If you are going to train models with a different GPU configuration, you will need to set a different batch size. You can specify the batch size with the command line argument --batch-size or change the default value in the model script.
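
For example, a single-GPU run might roughly halve the 2-GPU default batch size (the value below is illustrative; check the model script for the actual default):

# Illustrative single-GPU run with a reduced batch size
python3 train.py models/sbd/r34_dh128.py --gpus=0 --workers=4 --batch-size=14 --exp-name=single-gpu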

We used the pre-trained HRNetV2 models from the official repository. If you want to train interactive segmentation models on top of them, you need to download the weights and specify the paths to them in config.yml.

License

The code is released under the MPL 2.0 License. MPL is a copyleft license that is easy to comply with: you must make the source code of any changes to MPL-covered files available under the MPL, but you can combine the MPL software with proprietary code, as long as you keep the MPL code in separate files.

Citation

If you find this work useful for your research, please cite our paper:

@inproceedings{fbrs2020,
   title={f-brs: Rethinking backpropagating refinement for interactive segmentation},
   author={Sofiiuk, Konstantin and Petrov, Ilia and Barinova, Olga and Konushin, Anton},
   booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
   pages={8623--8632},
   year={2020}
}