Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CVPR 2021)

Overview

Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CAC)

Xin Lai*, Zhuotao Tian*, Li Jiang, Shu Liu, Hengshuang Zhao, Liwei Wang, Jiaya Jia

This is the official PyTorch implementation of our paper Semi-supervised Semantic Segmentation with Directional Context-aware Consistency, which has been accepted to the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021).

Highlight

Our method achieves state-of-the-art performance on semi-supervised semantic segmentation. Built on CCT, this repository also supports efficient distributed training with multiple GPUs.

Get Started

Environment

The repository is tested on Ubuntu 18.04.3 LTS, Python 3.6.9, PyTorch 1.6.0 and CUDA 10.2

pip install -r requirements.txt

Datasets Preparation

  1. Firstly, download the PASCAL VOC dataset and the extra annotations from SegmentationClassAug.
  2. Extract the compressed files into your desired path, and arrange them to match the directory tree below.
-VOCtrainval_11-May-2012
    -VOCdevkit
        -VOC2012
            -Annotations
            -ImageSets
            -JPEGImages
            -SegmentationClass
            -SegmentationClassAug
            -SegmentationObject
  3. Set 'data_dir' in the config file to '[YOUR_PATH]/VOCtrainval_11-May-2012'.
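
Optionally, you can sanity-check the layout before editing the config. The snippet below is only a convenience sketch (it is not part of the repository) and assumes the directory tree shown above:

import os

data_dir = '[YOUR_PATH]/VOCtrainval_11-May-2012'  # same value as 'data_dir' in the config
expected = [
    'VOCdevkit/VOC2012/JPEGImages',
    'VOCdevkit/VOC2012/SegmentationClass',
    'VOCdevkit/VOC2012/SegmentationClassAug',
]
for rel in expected:
    path = os.path.join(data_dir, rel)
    print(path, 'ok' if os.path.isdir(path) else 'MISSING')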

Training

First, download the PyTorch ImageNet-pretrained weights for ResNet101 or ResNet50, and put them into the 'pretrained/' directory using the following commands.

cd Context-Aware-Consistency
mkdir pretrained
cd pretrained
wget https://download.pytorch.org/models/resnet50-19c8e357.pth # ResNet50
wget https://download.pytorch.org/models/resnet101-5d3b4d8f.pth # ResNet101

Run the following commands for training.

  • Train the model on the 1/8 labeled data (the 0-th data list) of PASCAL VOC, with DeepLabv3+ as the segmentation network and ResNet50 as the backbone.
python3 train.py --config configs/voc_cac_deeplabv3+_resnet50_1over8_datalist0.json
  • Train the model on the 1/8 labeled data (the 0-th data list) of PASCAL VOC, with DeepLabv3+ as the segmentation network and ResNet101 as the backbone.
python3 train.py --config configs/voc_cac_deeplabv3+_resnet101_1over8_datalist0.json
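
Training launches one process per GPU via torch.multiprocessing.spawn, driven by the 'n_gpu' field of the config (see the tracebacks quoted in the Comments section below). The following is only a simplified sketch of such an entry point, not the repository's actual train.py; the body of main is a placeholder.

import argparse
import json

import torch.multiprocessing as mp


def main(rank, world_size, config, resume, test):
    # In the real trainer, each process would build its dataloaders, model and
    # trainer here, then run training (or only validation when test is True).
    print('rank %d/%d running experiment %s' % (rank, world_size, config.get('experim_name')))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--config', type=str, required=True)
    parser.add_argument('--resume', type=str, default=None)
    parser.add_argument('--test', type=bool, default=False)  # matches the '--test True' usage below
    args = parser.parse_args()

    with open(args.config) as f:
        config = json.load(f)

    # One worker process per GPU, mirroring mp.spawn(main, nprocs=config['n_gpu'], ...)
    # as seen in the multiprocessing traceback quoted in the Comments section.
    mp.spawn(main, nprocs=config['n_gpu'],
             args=(config['n_gpu'], config, args.resume, args.test))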

Testing

For testing, run the following command.

python3 train.py --config [CONFIG_PATH] --resume [CHECKPOINT_PATH] --test True
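
For example, to evaluate the ResNet50 model trained with the config above (the checkpoint path below is hypothetical; use your own checkpoint saved under the configured 'save_dir'):

python3 train.py --config configs/voc_cac_deeplabv3+_resnet50_1over8_datalist0.json --resume saved/CAC/best_model.pth --test True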

Related Repositories

This repository is heavily based on the CCT repository at https://github.com/yassouali/CCT. We thank the authors of CCT for their great work and clean code.

Besides, we also borrow some code from the following repositories.

Thanks a lot for their great work.

Citation

If you find this project useful, please consider citing:

@inproceedings{lai2021cac,
  title     = {Semi-supervised Semantic Segmentation with Directional Context-aware Consistency},
  author    = {Xin Lai and Zhuotao Tian and Li Jiang and Shu Liu and Hengshuang Zhao and Liwei Wang and Jiaya Jia},
  booktitle = {CVPR},
  year      = {2021}
}
Comments
  • Runtime Error when val

    Runtime Error when val

    Thanks for your work, but I found an error when I try to test the code on VOC. `Checkpoint <E:\Context-Aware-Consistency-master\pretrained\voc_1over8_datalist0_deeplabv3+_resnet101.pth> (epoch 63) was loaded`

    EVALUATION

    0%| | 0/724 [00:17<?, ?it/s]
    Traceback (most recent call last):
      File "D:\ProgramData\Anaconda3\envs\cv\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "D:\ProgramData\Anaconda3\envs\cv\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2020.7.96456\pythonFiles\lib\python\debugpy\__main__.py", line 45, in <module>
        cli.main()
      File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2020.7.96456\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 430, in main
        run()
      File "c:\Users\Administrator\.vscode\extensions\ms-python.python-2020.7.96456\pythonFiles\lib\python\debugpy/..\debugpy\server\cli.py", line 267, in run_file
        runpy.run_path(options.target, run_name=compat.force_str("__main__"))
      File "D:\ProgramData\Anaconda3\envs\cv\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "D:\ProgramData\Anaconda3\envs\cv\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "D:\ProgramData\Anaconda3\envs\cv\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "e:\Context-Aware-Consistency-master\train.py", line 128, in <module>
        main(config['n_gpu'], config['n_gpu'], config, args.resume, args.test)
      File "e:\Context-Aware-Consistency-master\train.py", line 99, in main
        trainer.train()
      File "e:\Context-Aware-Consistency-master\base\base_trainer.py", line 105, in train
        results = self._valid_epoch(0)
      File "e:\Context-Aware-Consistency-master\trainer.py", line 145, in _valid_epoch
        for batch_idx, (data, target) in enumerate(tbar):
      File "D:\ProgramData\Anaconda3\envs\cv\lib\site-packages\tqdm\std.py", line 1185, in __iter__
        for obj in iterable:
      File "D:\ProgramData\Anaconda3\envs\cv\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__
        data = self._next_data()
      File "D:\ProgramData\Anaconda3\envs\cv\lib\site-packages\torch\utils\data\dataloader.py", line 475, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "D:\ProgramData\Anaconda3\envs\cv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
        return self.collate_fn(data)
      File "D:\ProgramData\Anaconda3\envs\cv\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in default_collate
        return [default_collate(samples) for samples in transposed]
      File "D:\ProgramData\Anaconda3\envs\cv\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in <listcomp>
        return [default_collate(samples) for samples in transposed]
      File "D:\ProgramData\Anaconda3\envs\cv\lib\site-packages\torch\utils\data\_utils\collate.py", line 55, in default_collate
        return torch.stack(batch, 0, out=out)
    RuntimeError: stack expects each tensor to be equal size, but got [3, 375, 500] at entry 0 and [3, 396, 500] at entry 1

    opened by Chic-J 10
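
    For context on the error above: PyTorch's default_collate stacks the samples of a batch with torch.stack, which requires every tensor to have the same shape, so validation images of different sizes cannot be batched as-is. A minimal reproduction (not repository code):

    import torch

    a = torch.zeros(3, 375, 500)
    b = torch.zeros(3, 396, 500)
    try:
        torch.stack([a, b], 0)
    except RuntimeError as e:
        print(e)  # stack expects each tensor to be equal size, ...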
  • Question about training

    Question about training

    I try to train the model with only one GPU, but the process ends with the following error:

    Traceback (most recent call last): File "train.py", line 127, in mp.spawn(main, nprocs=config['n_gpu'], args=(config['n_gpu'], config, args.resume, args.test)) File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes while not context.join(): File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 150, in join raise ProcessRaisedException(msg, error_index, failed_process.pid) torch.multiprocessing.spawn.ProcessRaisedException:

    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/media/ders/sundingpeng/paper_code/Context-Aware-Consistency-master-2/train.py", line 98, in main
        trainer.train()
      File "/media/ders/sundingpeng/paper_code/Context-Aware-Consistency-master-2/base/base_trainer.py", line 115, in train
        results = self._valid_epoch(epoch)
      File "/media/ders/sundingpeng/paper_code/Context-Aware-Consistency-master-2/trainer.py", line 145, in _valid_epoch
        for batch_idx, (data, target) in enumerate(tbar):
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/tqdm/std.py", line 1178, in __iter__
        for obj in iterable:
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
        data = self._next_data()
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
        return self._process_data(data)
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
        data.reraise()
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/_utils.py", line 429, in reraise
        raise self.exc_type(msg)
    RuntimeError: Caught RuntimeError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
        return self.collate_fn(data)
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 83, in default_collate
        return [default_collate(samples) for samples in transposed]
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 83, in <listcomp>
        return [default_collate(samples) for samples in transposed]
      File "/home/ders/anaconda3/envs/sdp/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
        return torch.stack(batch, 0, out=out)

    RuntimeError: stack expects each tensor to be equal size, but got [3, 366, 500] at entry 0 and [3, 335, 500] at entry 1

    opened by sdp369 5
  • Directional Contrastive Loss calculation

    Directional Contrastive Loss calculation

    Hi X-Lai,

    Thank you very much for sharing your well-organized code; I'm trying to use your Directional Contrastive Loss in my work, and I have a question about its calculation. Equation 1 in your conference paper has two terms in the denominator, while the calculation in your code (model.py, line 231) is: logits1 = torch.exp(pos1 - neg_max1).squeeze(-1) / (logits1_down + eps). Should it be 'torch.exp(pos1 - neg_max1).squeeze(-1) / (torch.exp(pos1 - neg_max1).squeeze(-1) + logits1_down + eps)'? Or is 'torch.exp(pos1 - neg_max1).squeeze(-1)' already included in logits1_down?

    P.S.: Is 'neg_max1' used for normalization?

    I'm looking forward to your reply!

    opened by yaping222 4
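
    For context on the question above: subtracting neg_max1 before exponentiation is the standard max-subtraction trick for numerical stability, and it leaves the ratio unchanged as long as the same constant is subtracted in both numerator and denominator; whether logits1_down already contains the positive term is a separate question about the repository's code that only the authors can confirm. A small numerical check (not repository code):

    import torch

    pos = torch.tensor([5.0])
    negs = torch.tensor([3.0, 4.5, 2.0])

    neg_max = negs.max()
    stable = torch.exp(pos - neg_max) / (torch.exp(pos - neg_max) + torch.exp(negs - neg_max).sum())
    naive = torch.exp(pos) / (torch.exp(pos) + torch.exp(negs).sum())
    print(torch.allclose(stable, naive))  # True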
  • Questions about _train_epoch in trainer.py

    Questions about _train_epoch in trainer.py

    if self.mode == 'supervised':
        # dataloader = iter(self.supervised_loader)
        # tbar = tqdm(range(len(self.supervised_loader)), ncols=135)
        dataloader = iter(cycle(self.supervised_loader))
        tbar = tqdm(range(self.iter_per_epoch), ncols=135)
    else:
        dataloader = iter(zip(cycle(self.supervised_loader), cycle(self.unsupervised_loader)))
        tbar = tqdm(range(self.iter_per_epoch), ncols=135)

    The commented-out part is your original code. In the semi-supervised mode, 'cycle' is used to extend the number of iterations over the labeled images, so the fully-supervised baseline obviously runs far fewer iterations. I think this comparison may be unfair. What is your opinion, or do you have a plan to modify it? Looking forward to your answer, thanks!

    opened by ciuzaak 4
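
    For context on the snippet above: zip() stops at the shorter iterable, so wrapping both loaders in itertools.cycle lets every epoch run exactly iter_per_epoch steps even when the labeled loader is much shorter than the unlabeled one. A toy illustration (not repository code):

    from itertools import cycle

    supervised_loader = ['L0', 'L1']                       # stand-ins for labeled batches
    unsupervised_loader = ['U0', 'U1', 'U2', 'U3', 'U4']   # stand-ins for unlabeled batches
    iter_per_epoch = 5

    dataloader = iter(zip(cycle(supervised_loader), cycle(unsupervised_loader)))
    for _ in range(iter_per_epoch):
        sup_batch, unsup_batch = next(dataloader)
        print(sup_batch, unsup_batch)
    # L0 U0 / L1 U1 / L0 U2 / L1 U3 / L0 U4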
  • CPU RAM explodes after ~60 iterations.

    CPU RAM explodes after ~60 iterations.

    Hello, I was training your model using:

    python3 train.py --config configs/voc_cac_deeplabv3+_resnet50_1over8_datalist0.json

    The training starts, but within about 60 iterations of the first epoch the RAM usage explodes and the system crashes.

    GPU: P100 16 GB; CPU RAM: 25 GB; batch size: 2

    opened by nikhilbyte 4
  • My reproduced results are slightly lower

    My reproduced results are slightly lower

    Hello, I used the config file you provide to reproduce the results on the PASCAL VOC dataset, but I got slightly lower results across multiple dataset split settings. My reproduced results are shown in the attached screenshot (taken 2021-12-17), and the config file used in my experiment is as follows.

    {
        "name": "CAC",
        "experim_name": "cac_datalist0_1of8_3",
        "dataset": "voc",
        "data_dir": ###,
        "datalist": 3,
        "n_gpu": 4,
        "n_labeled_examples": 10582,
        "diff_lrs": true,
        "ramp_up": 0.1,
        "unsupervised_w": 30,
        "ignore_index": 255,
        "lr_scheduler": "Poly",
        "use_weak_lables":false,
        "weakly_loss_w": 0.4,
        "pretrained": true,
        "random_seed": 42,
    
        "model":{
            "supervised": false,
            "semi": true,
            "supervised_w": 1,
    
            "sup_loss": "CE",
    
            "layers": 101,
            "downsample": true,
            "proj_final_dim": 128,
            "out_dim": 256,
            "backbone": "deeplab_v3+",
            "pos_thresh_value": 0.75,
            "weight_unsup": 0.1,
            "epoch_start_unsup": 5,
            "selected_num": 3200,
            "temp": 0.1,
            "step_save": 2,
            "stride": 8
        },
    
    
        "optimizer": {
            "type": "SGD",
            "args":{
                "lr": 0.01,
                "weight_decay": 1e-4,
                "momentum": 0.9
            }
        },
    
        "train_supervised": {
            "batch_size": 8,
            "crop_size": 320,
            "shuffle": true,
            "base_size": 400,
            "scale": true,
            "augment": true,
            "flip": true,
            "rotate": false,
            "blur": false,
            "split": "train_supervised",
            "num_workers": 8
        },
    
        "train_unsupervised": {
            "batch_size": 8,
            "crop_size": 320,
            "shuffle": true,
            "base_size": 400,
            "scale": true,
            "augment": true,
            "flip": true,
            "rotate": false,
            "blur": false,
            "split": "train_unsupervised",
            "num_workers": 8,
            "iou_bound": [0.1, 1.0],
            "stride": 8
        },
    
        "val_loader": {
            "batch_size": 4,
            "val": true,
            "split": "val",
            "shuffle": false,
            "num_workers": 4
        },
    
        "trainer": {
            "epochs": 80,
            "save_dir": "saved/",
            "save_period": 1,
      
            "monitor": "max Mean_IoU",
            "early_stop": 100,
            
            "tensorboardX": true,
            "log_dir": "saved/",
            "log_per_iter": 20,
    
            "val": true,
            "val_per_epochs": 1
        }
    }
    

    Could you give me some advice about how to correctly reproduce your results? Thanks a lot.

    opened by YanFangCS 3
  • Question about parameter "selected_num"

    Question about parameter "selected_num"

    I get a ValueError when I try to reproduce your paper's results on the PASCAL VOC dataset. Concretely, it raises "ValueError: Cannot take a larger sample than population when 'replace=False'" when using the default selected_num setting with a value of 6400, and the error occurs at the following line: https://github.com/dvlab-research/Context-Aware-Consistency/blob/4fdec7af8ad22eaabbf852727f2d824b66069999/models/model.py#L174 I don't know how to solve this bug. Hoping for your solution.

    opened by YanFangCS 3
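
    For context on the error above: numpy.random.choice raises exactly this ValueError whenever a sample larger than the population is requested with replace=False; it would happen, for instance, if fewer than selected_num candidate positions survive the filtering at the linked line (that reading is an assumption). A minimal reproduction (not repository code):

    import numpy as np

    population = 3000      # hypothetical number of eligible positions
    selected_num = 6400    # default value discussed above
    try:
        np.random.choice(population, selected_num, replace=False)
    except ValueError as e:
        print(e)  # Cannot take a larger sample than population when 'replace=False'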
  • About the ablation study

    About the ablation study

    Thank you for your amazing job.

    I have one question about your ablation results. As shown in Table 3 (see the attached screenshot), you applied Proj+Context+L2 regularization in Exp. 1, yet the performance is lower than SupOnly.

    Could you please give me some more explanation about this phenomenon?

    opened by JoyHuYY1412 3
  • Understanding the UnSupervised Dataloader

    Understanding the UnSupervised Dataloader

    Hey @X-Lai ,

    Thank you for sharing your work!

    I was able to set up the repository and run the experiments following the steps provided. However, I am finding it difficult to understand some parts of the code related to the unsupervised data loader. Please find my queries below:

    1. Why have we chosen 320x320 for the VOC dataset and 720x720 for Cityscapes? I just wanted to understand the rationale behind it.
    2. In the code, in the file dataloaders/voc.py:
            overlap1_ul = [max(0, y2-y1), max(0, x2-x1)]
            overlap1_br = [min(self.crop_size, self.crop_size+y2-y1, h//self.stride * self.stride), min(self.crop_size, self.crop_size+x2-x1, w//self.stride * self.stride)]
            overlap2_ul = [max(0, y1-y2), max(0, x1-x2)]
            overlap2_br = [min(self.crop_size, self.crop_size+y1-y2, h//self.stride * self.stride), min(self.crop_size, self.crop_size+x1-x2, w//self.stride * self.stride)]
    

    I am not quite able to understand the utility of self.stride: why is it necessary, and what exactly do overlap1_ul and overlap1_br represent?

    Regards Nitin

    opened by nbansal90 2
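
    One possible reading of the snippet quoted above (an interpretation, not the authors' explanation): overlap1_ul and overlap1_br look like the upper-left and bottom-right corners of the region where the two random crops intersect, expressed in the first crop's local coordinates, with the h//self.stride * self.stride terms clipping to the largest extent divisible by the network's output stride. A 1-D sketch of the corner arithmetic (not repository code):

    crop_size = 320
    y1, y2 = 40, 150      # hypothetical crop origins along one axis

    overlap1_ul = max(0, y2 - y1)                      # start of the shared region in crop 1
    overlap1_br = min(crop_size, crop_size + y2 - y1)  # end of the shared region in crop 1

    # Cross-check against an explicit intersection in global coordinates.
    crop1 = set(range(y1, y1 + crop_size))
    crop2 = set(range(y2, y2 + crop_size))
    shared = sorted(crop1 & crop2)
    assert shared[0] - y1 == overlap1_ul
    assert shared[-1] - y1 == overlap1_br - 1
    print(overlap1_ul, overlap1_br)   # 110 320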
  • Bugs in Learning Rate Scheduler?

    Bugs in Learning Rate Scheduler?

    (attached image)

    Could it be that the learning rate schedule actually used is not 'poly'?

    def get_lr(self):
        T = self.last_epoch * self.iters_per_epoch + self.cur_iter
        factor = pow((1 - 1.0 * T / self.N), 0.9)
        if self.warmup_iters > 0 and T < self.warmup_iters:
            factor = 1.0 * T / self.warmup_iters
        self.cur_iter %= self.iters_per_epoch
        self.cur_iter += 1
        assert factor >= 0, 'error in lr_scheduler'
        return [base_lr * factor for base_lr in self.base_lrs]

    However, the learning rate is updated as follows: self.lr_scheduler.step(epoch=epoch-1)

    Is that a bug or a specific design? https://github.com/pytorch/pytorch/blob/41054f2ab5bb39d28a3eb8497f1a65b42385a996/torch/optim/lr_scheduler.py#L155

    opened by TiankaiHang 2
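
    For reference, the factor computed by the quoted get_lr() is the usual polynomial decay (1 - T/N)**0.9 over global iterations; a quick numeric check (not repository code, iteration counts are illustrative):

    iters_per_epoch = 662          # illustrative value, not taken from the repository
    epochs = 80
    N = epochs * iters_per_epoch
    for T in (0, N // 4, N // 2, 3 * N // 4, N - 1):
        factor = (1 - 1.0 * T / N) ** 0.9
        print('T=%6d  factor=%.4f' % (T, factor))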
  • Baselines compared ...

    Baselines compared ...

    Hi, thanks for your nice work!

    You have listed some results of other methods (such as CCT); did you re-implement those methods under your own settings?

    As we know, the dataset splits differ across those semi-supervised papers.

    Thanks.

    opened by TiankaiHang 2
  • Training problems

    Training problems

    (attached image) This error is reported at about 20% of the first training epoch. Can you tell me the reason and how to solve it? My configuration is two 2080 Ti GPUs and 64 GB of memory.

    opened by ChrisLiang2020 0
  • Add hubconf.py and automatic downloading of some pre-trained parameters

    Add hubconf.py and automatic downloading of some pre-trained parameters

    This adds hubconf.py so that, for example, the following can work when "Ivan1248" is replaced with "dvlab-research":

    model = torch.hub.load('Ivan1248/Context-Aware-Consistency', 'DeepLabV3Plus', backbone='resnet50',
                           pretrained=True, num_classes=21)
    
    opened by Ivan1248 0
  • The problem of image shape

    The problem of image shape

    when use "python3 train.py --config configs/voc_cac_deeplabv3+_resnet50_1over8_datalist0.json" with two GPUS to run , i find that one GPUS imput image size is (1,3,335,500) while another is (1,3,366,500) . in this case ,i can run to end. but ,when i run with only one GPU,the problem of diffrent size is occr. how is go ???

    opened by xiewende 4