Official Implementation of HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation

HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation

by Lukas Hoyer, Dengxin Dai, and Luc Van Gool

[Arxiv] [Paper]

Overview

Unsupervised domain adaptation (UDA) aims to adapt a model trained on synthetic data to real-world data without requiring expensive annotations of real-world images. As UDA methods for semantic segmentation are usually GPU memory intensive, most previous methods operate only on downscaled images. We question this design as low-resolution predictions often fail to preserve fine details. The alternative of training with random crops of high-resolution images alleviates this problem but falls short in capturing long-range, domain-robust context information.

Therefore, we propose HRDA, a multi-resolution training approach for UDA that combines the strengths of small high-resolution crops to preserve fine segmentation details and large low-resolution crops to capture long-range context dependencies with a learned scale attention, while maintaining a manageable GPU memory footprint.
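To make the fusion idea concrete, the following minimal PyTorch sketch shows how a low-resolution context prediction and a high-resolution detail-crop prediction could be combined with a learned scale attention. It is only an illustration under simplifying assumptions (a single detail crop, hypothetical function and tensor names such as fuse_context_and_detail); the full implementation is in the repository's HRDA head (mmseg/models/decode_heads/hrda_head.py).

import torch
import torch.nn.functional as F

def fuse_context_and_detail(lr_logits, hr_logits, scale_attention, crop_box):
    # lr_logits:       (B, C, H, W) prediction from the large low-resolution context crop,
    #                  already upsampled to the full output resolution
    # hr_logits:       (B, C, h, w) prediction from the small high-resolution detail crop
    # scale_attention: (B, 1, H, W) sigmoid output of the learned scale attention
    # crop_box:        (y1, y2, x1, x2) position of the detail crop in the full output
    y1, y2, x1, x2 = crop_box
    # Outside the detail crop, only the context prediction exists, so mask the attention there.
    attn = torch.zeros_like(scale_attention)
    attn[:, :, y1:y2, x1:x2] = scale_attention[:, :, y1:y2, x1:x2]
    # Paste the detail prediction into a full-resolution canvas.
    hr_full = torch.zeros_like(lr_logits)
    hr_full[:, :, y1:y2, x1:x2] = F.interpolate(
        hr_logits, size=(y2 - y1, x2 - x1), mode='bilinear', align_corners=False)
    # Attention-weighted combination: context prediction where attn is low,
    # detail prediction where attn is high.
    return (1 - attn) * lr_logits + attn * hr_full

# Toy usage with random tensors (19 Cityscapes classes):
lr = torch.randn(1, 19, 512, 512)
hr = torch.randn(1, 19, 256, 256)
attn = torch.sigmoid(torch.randn(1, 1, 512, 512))
fused = fuse_context_and_detail(lr, hr, attn, (128, 384, 128, 384))
print(fused.shape)  # torch.Size([1, 19, 512, 512])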

HRDA Overview

HRDA enables adapting small objects and preserving fine segmentation details. It significantly improves the state-of-the-art performance by 5.5 mIoU for GTA→Cityscapes and by 4.9 mIoU for Synthia→Cityscapes, resulting in an unprecedented performance of 73.8 and 65.8 mIoU, respectively.

UDA over time

The more detailed domain-adaptive semantic segmentation of HRDA, compared to the previous state-of-the-art UDA method DAFormer, can also be observed in example predictions from the Cityscapes validation set.

Demo Color Palette

For more information on HRDA, please check our [Paper].

If you find HRDA useful in your research, please consider citing:

@Article{hoyer2022hrda,
  title={{HRDA}: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation},
  author={Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc},
  journal={arXiv preprint arXiv:2204.13132},
  year={2022}
}

Setup Environment

For this project, we used Python 3.8.5. We recommend setting up a new virtual environment:

python -m venv ~/venv/hrda
source ~/venv/hrda/bin/activate

In that environment, the requirements can be installed with:

pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.3.7  # requires the other packages to be installed first
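If you want to quickly verify the setup, the following optional snippet (a hypothetical check, not part of the repository) prints the installed versions. Run it from the repository root so the local mmseg package is found:

import torch
import mmcv
import mmseg

print('PyTorch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('MMCV:', mmcv.__version__)              # expected: 1.3.7
print('MMSegmentation:', mmseg.__version__)   # expected: 0.16.0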

Further, please download the MiT weights from SegFormer using the following script. If problems occur with the automatic download, please follow the instructions for a manual download within the script.

sh tools/download_checkpoints.sh

Setup Datasets

Cityscapes: Please download leftImg8bit_trainvaltest.zip and gt_trainvaltest.zip from here and extract them to data/cityscapes.

GTA: Please download all image and label packages from here and extract them to data/gta.

Synthia: Please download SYNTHIA-RAND-CITYSCAPES from here and extract it to data/synthia.

The final folder structure should look like this:

DAFormer
├── ...
├── data
│   ├── cityscapes
│   │   ├── leftImg8bit
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── gtFine
│   │   │   ├── train
│   │   │   ├── val
│   ├── gta
│   │   ├── images
│   │   ├── labels
│   ├── synthia
│   │   ├── RGB
│   │   ├── GT
│   │   │   ├── LABELS
├── ...

Data Preprocessing: Finally, please run the following scripts to convert the label IDs to the train IDs and to generate the class index for RCS (Rare Class Sampling):

python tools/convert_datasets/gta.py data/gta --nproc 8
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
python tools/convert_datasets/synthia.py data/synthia/ --nproc 8
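For intuition, converting label IDs to train IDs is essentially a per-pixel lookup from the dataset's raw class IDs to the 19 Cityscapes training classes, with everything else mapped to the ignore index 255. The scripts above handle this for all datasets; the following is only a simplified sketch with an excerpt of the Cityscapes mapping and a hypothetical convert_to_train_ids helper, not the repository code.

import numpy as np
from PIL import Image

# Excerpt of the Cityscapes labelId -> trainId mapping
# (7: road, 8: sidewalk, 11: building, 26: car); unlisted IDs are ignored.
ID_TO_TRAINID = {7: 0, 8: 1, 11: 2, 26: 13}

def convert_to_train_ids(label_path, out_path, ignore_index=255):
    label = np.array(Image.open(label_path))                  # (H, W) label IDs
    train_ids = np.full(label.shape, ignore_index, dtype=np.uint8)
    for label_id, train_id in ID_TO_TRAINID.items():
        train_ids[label == label_id] = train_id                # per-pixel lookup
    Image.fromarray(train_ids).save(out_path)                  # e.g. *_labelTrainIds.png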

Testing & Predictions

The provided HRDA checkpoint trained on GTA->Cityscapes (already downloaded by tools/download_checkpoints.sh) can be tested on the Cityscapes validation set using:

sh test.sh work_dirs/gtaHR2csHR_hrda_246ef

The predictions are saved for inspection to work_dirs/gtaHR2csHR_hrda_246ef/preds and the mIoU of the model is printed to the console. The provided checkpoint should achieve 73.79 mIoU. Refer to the end of work_dirs/gtaHR2csHR_hrda_246ef/20220215_002056.log for more information such as the class-wise IoU.

If you want to visualize the LR predictions, HR predictions, or scale attentions of HRDA on the validation set, please refer to test.sh for further instructions.

Training

For convenience, we provide an annotated config file of the final HRDA. A training job can be launched using:

python run_experiments.py --config configs/hrda/gtaHR2csHR_hrda.py

The logs and checkpoints are stored in work_dirs/.

For the other experiments in our paper, we use a script to automatically generate and train the configs:

python run_experiments.py --exp <ID>

More information about the available experiments and their assigned IDs can be found in experiments.py. The generated configs will be stored in configs/generated/.

When training a model on Synthia->Cityscapes, please note that the evaluation script calculates the mIoU for all 19 Cityscapes classes. However, Synthia contains labels for only 16 of these classes. Therefore, it is common practice in UDA to report the mIoU for Synthia->Cityscapes only on these 16 classes. As the IoU for the 3 missing classes is 0, you can do the conversion mIoU16 = mIoU19 * 19 / 16.
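As a quick sanity check of this conversion (the numbers below are only a hypothetical example):

# mIoU19 averages over 19 classes, but for Synthia 3 of them always have IoU 0,
# so rescaling the mean to the 16 valid classes recovers mIoU16.
miou19 = 55.41                      # example value as printed by the evaluation
miou16 = miou19 * 19 / 16           # = 65.8
print(f'mIoU16 = {miou16:.1f}')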

Framework Structure

This project is based on mmsegmentation version 0.16.0. For more information about the framework structure and the config system, please refer to the mmsegmentation documentation and the mmcv documentation.

The most relevant files for HRDA are:

• configs/hrda/gtaHR2csHR_hrda.py: annotated config file of the final HRDA
• mmseg/models/segmentors/hrda_encoder_decoder.py: multi-resolution encoding with context and detail crops
• mmseg/models/decode_heads/hrda_head.py: multi-resolution decoding and fusion with the learned scale attention
• mmseg/models/uda/dacs.py: UDA self-training, including the feature distance

Acknowledgements

HRDA is based on the following open-source projects. We thank their authors for making the source code publicly available.

• MMSegmentation
• SegFormer
• DAFormer
• DACS

Comments
  • Configs for AdvSeg and MinEnt

    Dear Lukas: Thank you for your wonderful work and excellent code. Can you provide a configuration file that can be run directly using minent.py and advseg.py? Thank you very much.

    opened by Renp1ngs 11
  • Question about the inference phase

    I have calculated the necessary parameters in the inference phase. Is my calculation correct?

    Case 1: The sliding-window LR context crop is disabled via [1] test_cfg=dict(mode='whole'), and the sliding-window HR detail crop is disabled via [2] hr_slide_inference=False.

    1. The encoder is forwarded only once per image.
    2. According to [1], the LR context crop (the whole image) is forwarded through the decoder once.
    3. According to [2], the HR detail crop (the whole image) is forwarded through the decoder once.

    Q1. If the model has 80M parameters (encoder: 60M, decoder: 20M), are the parameters required to forward one image 100M (60M + 20M for the LR crop + 20M for the HR crop)? Is that right?

    Case 2: The sliding-window LR context crop is disabled via [1] test_cfg=dict(mode='whole'), and the sliding-window HR detail crop is enabled via [2] hr_slide_inference=True.

    1. The encoder is forwarded only once per image.
    2. According to [1], the LR context crop (the whole image) is forwarded through the decoder once.
    3. According to [2], the HR detail crop is forwarded through the decoder once per sliding-window crop (N times).

    Q2. If the model has 80M parameters (encoder: 60M, decoder: 20M), are the parameters required to forward one image 60M + 20M (LR crop) + 20M x N (HR crop)? Is that right?

    opened by JoinWorldMC 9
  • About Reproducing the results shown in the Paper

    Dear Lukas,

    I am interested in your recent great work HRDA and thanks for sharing your code. I would like to run the code you provided and reproduce the results.

    I followed the settings in the "experiments.py" file, but the results I got do not match those reported in the paper. Should I change some of the default settings to reach the reported results?

    The attached image shows the experiment data I recorded for Table 1 in the paper. I ran it on an RTX 6000 GPU.

    (The mIoUs for GTA5→Cityscapes are 61, 66.92, and 63.31 for 3 random seeds, and for Synthia→Cityscapes they are 55.76, 55.17, and 56.14 for 3 random seeds.)

    I also copied my environment information below:

    sys.platform: linux
    Python: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
    CUDA available: True
    GPU 0: Quadro RTX 6000
    CUDA_HOME: /usr/local/cuda
    NVCC: Build cuda_11.2.r11.2/compiler.29618528_0
    GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
    PyTorch: 1.7.1+cu110
    PyTorch compiling details: PyTorch built with:

    • GCC 7.3
    • C++ Version: 201402
    • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
    • Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
    • OpenMP 201511 (a.k.a. OpenMP 4.5)
    • NNPACK is enabled
    • CPU capability usage: AVX2
    • CUDA Runtime 11.0
    • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80
    • CuDNN 8.0.5
    • Magma 2.5.2
    • Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

    TorchVision: 0.8.2+cu110
    OpenCV: 4.4.0
    MMCV: 1.3.7
    MMCV Compiler: GCC 9.4
    MMCV CUDA Compiler: 11.2
    MMSegmentation: 0.16.0+a57d967

    Thank you so much!

    opened by kagawa588 5
  • something about the scale attention

    Hi, I have some questions about scale attention.

    1. About the scale attention decoder: there seems to be a difference between the paper and the released code? The paper describes a SegFormer decoder, while the code uses the DAFormer decoder. Will there be any difference in performance?
    2. In addition, can scale attention be understood as adding an additional segmentation head that processes the context crop and yields the result of the detail crop corresponding to the context crop? Regarding the second paragraph on page 8 ("The scale attention decode..."), is there something wrong with the scale attention formula? Should it be f^A(f^E(x_c))?
    opened by Renp1ngs 5
  • Impressive work, but still some issues

    Hi, Dr. Hoyer. Thanks for your contribution to the community. This is indeed nice work, which inspired me a lot. After reading the paper, may I summarize the core idea as combining multiple resolutions to adapt both context and fine-grained features? However, did you ever try to train directly on HR inputs with DAFormer? Moreover, the selection of HR regions is random, so have you ever considered selecting them according to some criterion, since certain feature distributions are correlated with spatial location?

    opened by BoltenWang-Meta 5
  • Some issue

    XIO: fatal IO error 25 (Inappropriate ioctl for device) on X server "localhost:12.0" after 387 requests (387 known processed) with 4 events remaining.

    opened by LuPaoPao 2
  • Last question. I look forward to your response.

    @lhoyer

    In the paper, Figure 2(b) includes "Reassemble".

    What I mean by a single forward pass is whether Reassemble is performed only once, i.e., parts of the image should not be inferred multiple times.

    In Case 1, is Reassemble performed only once?

    opened by JoinWorldMC 2
  • Questions on Feature Distance

    Dear Lucas,

    I am interested in your recent great work HRDA and thanks for sharing your code. While reading it, I had some questions about the feature distance module. [HRDA/mmseg/models/uda/dacs.py]

    image

    From the figure it can be seen that features from multiple input scales are used only when feature_scale is in feature_scale_all_strs. However, according to the provided config file, feature_scale = 0.5 while feature_scale_all_strs = ['all'], so this module will never be executed.

    So are the features from multiple input scales not used during the training process?

    opened by HuayuWong 2
  • The performance of DAFormer in this repo.

    I tried training the DAFormer configuration and got 66.1 mIoU, slightly lower than the reported DAFormer result. Is the DAFormer in this repository consistent with the original author's code? Or could it be due to a different version of CUDA? I used CUDA 10.2 due to my graphics driver version.

    opened by luyvlei 2
  • how long to train the model. If this code can run on 3090

    I am interested in this paper and I know the transformer framework needs powerful hardware. My device is not good, so I want to ask how long it takes to train the model.

    opened by yuheyuan 2
  • Unfair comparison

    In Figure 1 (c), you compare your method to ProDA and SAC, but your method is not based on DeepLabV2. Is this really a meaningful, fair comparison?

    In Table 2, we can see that the mIoU based on DeepLabV2 only reaches 59.4, which is ordinary performance.

    opened by wangyunnan 2
  • Using mixed precision during the training process.

    I used a single RTX 3090 to run the code, but I got a CUDA out-of-memory error, so I want to run the code with mixed precision. Where should I modify the code to use mixed precision during training? Thank you very much.

    opened by Caillen-W 2
  • CUDA out of memory upon the start of validation

    Anyone has the issue of CUDA OOM when the validation starts?

    [                                                  ] 0/500, elapsed: 0s, ETA:Traceback (most recent call last):
      File "run_experiments.py", line 116, in <module>
        train.main([config_files[i]])
      File "/home/ubuntu/Zheng/Softwares/HRDA/tools/train.py", line 168, in main
        train_segmentor(
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/apis/train.py", line 131, in train_segmentor
        runner.run(data_loaders, cfg.workflow)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/mmcv/runner/iter_based_runner.py", line 131, in run
        iter_runner(iter_loaders[i], **kwargs)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/mmcv/runner/iter_based_runner.py", line 66, in train
        self.call_hook('after_train_iter')
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
        getattr(hook, fn_name)(self)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/mmcv/runner/hooks/evaluation.py", line 172, in after_train_iter
        self._do_evaluate(runner)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/core/evaluation/eval_hooks.py", line 36, in _do_evaluate
        results = single_gpu_test(
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/apis/test.py", line 67, in single_gpu_test
        result = model(return_loss=False, **data)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 42, in forward
        return super().forward(*inputs, **kwargs)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 97, in new_func
        return old_func(*args, **kwargs)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/segmentors/base.py", line 112, in forward
        return self.forward_test(img, img_metas, **kwargs)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/segmentors/base.py", line 94, in forward_test
        return self.simple_test(imgs[0], img_metas[0], **kwargs)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/uda/uda_decorator.py", line 95, in simple_test
        return self.get_model().simple_test(img, img_meta, rescale)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/segmentors/encoder_decoder.py", line 385, in simple_test
        seg_logit = self.inference(img, img_meta, rescale)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/segmentors/encoder_decoder.py", line 362, in inference
        seg_logit = self.slide_inference(img, img_meta, rescale)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/segmentors/encoder_decoder.py", line 280, in slide_inference
        crop_seg_logits = self.encode_decode(crop_imgs, img_meta)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/segmentors/hrda_encoder_decoder.py", line 190, in encode_decode
        out = self._decode_head_forward_test(mres_feats, img_metas)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/segmentors/encoder_decoder.py", line 173, in _decode_head_forward_test
        seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/decode_heads/hrda_head.py", line 361, in forward_test
        test_results = self.forward(inputs)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/decode_heads/hrda_head.py", line 277, in forward
        hr_seg = self.decode_hr(hr_inp, batch_size)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/decode_heads/hrda_head.py", line 150, in decode_hr
        crop_seg_logits = self.head(features)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/decode_heads/daformer_head.py", line 227, in forward
        x = self.fuse_layer(x)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/decode_heads/daformer_head.py", line 76, in forward
        aspp_outs.extend(self.aspp_modules(x))
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/Zheng/Softwares/HRDA/mmseg/models/decode_heads/aspp_head.py", line 49, in forward
        aspp_outs.append(aspp_module(x))
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/mmcv/cnn/bricks/depthwise_separable_conv_module.py", line 93, in forward
        x = self.depthwise_conv(x)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/mmcv/cnn/bricks/conv_module.py", line 200, in forward
        x = self.norm(x)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 131, in forward
        return F.batch_norm(
      File "/home/ubuntu/.conda/envs/new_da/lib/python3.8/site-packages/torch/nn/functional.py", line 2056, in batch_norm
        return torch.batch_norm(
    RuntimeError: CUDA out of memory. Tried to allocate 1.69 GiB (GPU 0; 14.56 GiB total capacity; 7.89 GiB already allocated; 1.01 GiB free; 12.49 GiB reserved in total by PyTorch)
    opened by ArlenCHEN 1
  • CUDA out of memory. how to change GPU, I want to specify a GPU device

    RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 23.70 GiB total capacity; 1.33 GiB already allocated; 5.00 MiB free; 1.40 GiB reserved in total by PyTorch)
    

    When I run DAFormer, it's OK. But when I run HRDA, it runs out of CUDA memory. I want to change from GPU 0 to GPU 1, but I don't know how. Usually, I specify the GPU in code:

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = '1'
    

    But it doesn't work in this project. I found this code in your gtaHR2csHR_hrda.py:

    n_gpus = 1
    gpu_model = 'NVIDIATITANRTX'
    

    Should this gpu_model be changed? My GPUs are two 3090s, so I want to know how to select a GPU in this code (the default is GPU 0), or how to change the configs to make the code run successfully.

    Maybe GPU 1 is actually used: if I specify GPU 1, PyTorch re-indexes it as GPU 0, and then this problem occurs.

    So, I want to know whether a 3090 can run this code, or how to change the configs to make it run.

    opened by yuheyuan 2