
Perception-Aware Multi-Sensor Fusion for 3D LiDAR Semantic Segmentation (ICCV 2021)

[中文|EN]

Overview

This work explores an efficient multi-sensor (LiDAR and camera) fusion method for point cloud semantic segmentation. Existing multi-sensor fusion methods mostly project the point cloud onto the image to obtain the corresponding pixel locations, and then project the image information at those locations back into the point cloud space for feature fusion. However, this approach cannot fully exploit the rich visual perceptual features of images (e.g., shape and texture). We therefore explore fusing features directly in the RGB image space and propose a perception-aware multi-sensor fusion method (PMF). See our published paper for details.
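For context, the projection step maps each LiDAR point into the camera image using the calibration matrices. A minimal sketch, assuming the KITTI convention of a 3x4 camera projection matrix P2 and a 4x4 LiDAR-to-camera extrinsic Tr (the function name is illustrative, not the repo's API):

import numpy as np

def project_lidar_to_image(points_xyz, P2, Tr, img_h, img_w):
    # homogeneous LiDAR coordinates: (N, 3) -> (N, 4)
    n = points_xyz.shape[0]
    pts_h = np.concatenate([points_xyz, np.ones((n, 1))], axis=1)
    cam = (P2 @ (Tr @ pts_h.T)).T        # (N, 3) homogeneous image coordinates
    depth = cam[:, 2]
    uv = cam[:, :2] / depth[:, None]     # perspective division -> pixel (u, v)
    # keep points in front of the camera and inside the image bounds
    valid = (depth > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < img_w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return uv, depth, valid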


Main Results


Leaderboard of SensatUrban@ICCV2021


More Results

We are continuing to explore the potential of the PMF framework, including larger models, better ImageNet-pretrained models, and other datasets. Our results show that the PMF framework is easy to extend, and that its performance improves with stronger backbone networks. See the documentation for details.

| Method | Dataset | mIoU (%) |
| --- | --- | --- |
| PMF-ResNet34 | SemanticKITTI Validation Set | 63.9 |
| PMF-ResNet34 | nuScenes Validation Set | 76.9 |
| PMF-ResNet50 | nuScenes Validation Set | 79.4 |
| PMF48-ResNet101 | SensatUrban Test Set (ICCV2021 Competition) | 66.2 (Rank 5) |

Usage

Note: the code contains various path settings (including dataset paths); please modify them to match your actual paths.

Code Structure

|--- pc_processor/ Python package for point cloud processing
	|--- checkpoint/ directory for generated experiment results
	|--- dataset/ dataset handling
	|--- layers/ common network layers
	|--- loss/ loss functions
	|--- metrices/ model performance metrics
	|--- models/ network models
	|--- postproc/ post-processing, mainly KNN
	|--- utils/ other utilities
|--- tasks/ experiment tasks
	|--- pmf/ PMF training source code
	|--- pmf_eval_nuscenes/ PMF evaluation code for nuScenes
		|--- testset_eval/ merge PMF and SalsaNext results and evaluate on the nuScenes test set
		|--- xxx.py PMF evaluation code for nuScenes
	|--- pmf_eval_semantickitti/ PMF evaluation code for the SemanticKITTI validation set
	|--- salsanext/ SalsaNext training code, modified from the official release
	|--- salsanext_eval_nuscenes/ SalsaNext evaluation code for nuScenes
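The KNN post-processing in postproc/ follows the common practice for projection-based segmentation (popularized by RangeNet++): labels predicted on the 2D projection are refined by a nearest-neighbor vote among the 3D points. A minimal sketch of the idea, assuming SciPy is available; this is not the repo's implementation:

import numpy as np
from scipy.spatial import cKDTree

def knn_label_vote(points_xyz, labels, k=5):
    # replace each point's label with the majority label of its k nearest neighbors
    tree = cKDTree(points_xyz)
    _, idx = tree.query(points_xyz, k=k)   # (N, k) neighbor indices
    neighbor_labels = labels[idx]          # (N, k) integer class labels
    return np.array([np.bincount(row).argmax() for row in neighbor_labels])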

Training

Directory structure of the training task:

|--- pmf/
	|--- config_server_kitti.yaml training configuration for SemanticKITTI
	|--- config_server_nus.yaml training configuration for nuScenes
	|--- main.py entry point
	|--- trainer.py training code
	|--- option.py configuration parsing code
	|--- run.sh launch script; make it executable with chmod +x
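For reference, a hypothetical sketch of run.sh (the actual script ships with the repo and its arguments may differ); the issue logs further below confirm that training is launched through torch.distributed.launch, and nproc_per_node must equal the number of GPUs set in the yaml:

#!/bin/bash
# hypothetical sketch; set nproc_per_node to the gpu count in the yaml config
python -m torch.distributed.launch --nproc_per_node=4 main.py config_server_kitti.yaml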

Steps

  1. Enter the tasks/pmf directory and set the dataset path data_root in config_server_kitti.yaml to your actual dataset path. If needed, adjust parameters such as gpu and batch_size.
  2. Edit run.sh (see the sketch above) so that nproc_per_node matches the number of GPUs configured in the yaml file.
  3. Launch training with:
./run.sh
# or: bash run.sh
  4. On success, experiment log files are generated under PMF/experiments/PMF-SemanticKitti, with the following structure:
|--- log_dataset_network_xxxx/
	|--- checkpoint/ training checkpoints and best model weights
	|--- code/ code backup
	|--- log/ console logs and a copy of the configuration
	|--- events.out.tfevents.xxx TensorBoard file

In the console output, the time printed at the end is the estimated total time of the experiment.

Inference

Directory structure of the inference code:

|--- pmf_eval_semantickitti/ SemanticKITTI evaluation code
	|--- config_server_kitti.yaml configuration file
	|--- infer.py inference script
	|--- option.py configuration parsing script

Steps

  1. Enter the tasks/pmf_eval_semantickitti directory and set the dataset path data_root in config_server_kitti.yaml to your actual dataset path. Set pretrained_path to the log directory generated during training.
  2. Run the script with:
python infer.py config_server_kitti.yaml
  3. On success, evaluation log files are generated in the directory of the trained model, with the following structure:
|--- PMF/experiments/PMF-SemanticKitti/log_xxxx/ training output directory
	|--- Eval_xxxxx/ evaluation output directory
		|--- code/ code backup
		|--- log/ console log files
		|--- pred/ prediction files for submission to the evaluation server
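The reported metric, mIoU, is the per-class intersection over union averaged over the evaluated classes (class 0 is ignored, as the [IOU EVAL] IGNORE lines in the issue logs below show). A minimal sketch of the computation from a confusion matrix, illustrative rather than the repo's metrics code:

import numpy as np

def mean_iou(conf, ignore=(0,)):
    # conf[i, j] counts points with ground-truth class i predicted as class j
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1.0)
    keep = [c for c in range(conf.shape[0]) if c not in ignore]
    return float(iou[keep].mean())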

Citation

@InProceedings{Zhuang_2021_ICCV,
    author    = {Zhuang, Zhuangwei and Li, Rong and Jia, Kui and Wang, Qicheng and Li, Yuanqing and Tan, Mingkui},
    title     = {Perception-Aware Multi-Sensor Fusion for 3D LiDAR Semantic Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {16280-16290}
}
Comments
  • about experiment environment

    Hello, I'm sorry to disturb you. From the paper I only know that you ran the experiments on a GeForce RTX 3090, but some errors occurred when I tried to reproduce them. Could you please tell me your exact experiment environment (something like a requirements.txt)?

    opened by Towiko 9
  • training time

    Thanks for your amazing work; I care about the training time. From config_server_kitti.yaml, it looks like you use 4x 3090 GPUs with batch size 8. Would you share your training time? Also, limited by hardware, do you think comparable performance to the paper's report can be reached with a single 3090? In addition, is it possible to use AMP in torch?

    opened by huixiancheng 8
  • questions about the model performance

    Hello, thank you for sharing the code.

    I have trained the PMF model on the SemanticKITTI training set using the default settings. However, the evaluation result on the validation set is lower than the 63.9 reported in the paper.

    Meanwhile, I also trained the SalsaNext model provided in the codebase on the SemanticKITTI training set using the default settings. The evaluation result on the validation set is higher than the 59.4 reported in the paper and higher than the PMF model's.

    Could you give some suggestions?

    opened by ideasplus 7
  • UnboundLocalError: local variable 'mean_acc' referenced before assignment

    When I run the PMF code on the SensatUrban dataset, it reports "UnboundLocalError: local variable 'mean_acc' referenced before assignment" and "UnboundLocalError: local variable 'lr' referenced before assignment". How can I solve this problem? The complete error report is as follows:

    /home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
    and will be removed in future. Use torchrun.
    Note that --use_env is set by default in torchrun.
    If your script expects `--local_rank` argument to be set, please
    change it to read from `os.environ['LOCAL_RANK']` instead. See 
    https://pytorch.org/docs/stable/distributed.html#launch-utility for 
    further instructions
    
      FutureWarning,
    WARNING:torch.distributed.run:
    *****************************************
    Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
    *****************************************
    | distributed init (rank 1): env://
    | distributed init (rank 2): env://
    | distributed init (rank 0): env://
    >> Init a recoder at  ../../../experiments/PMF-sensat/log_SensatUrban_PMFNet-resnet101_bs12-lr0.001_baseline-timestamp
    loading data frame...
    0it [00:00, ?it/s]
    Using 0 data frame from train split
    loading data frame...
    0it [00:00, ?it/s]
    Using 0 data frame from val split
    Generate 0 samples from train split
    Generate 0 samples from val split
    loading data frame...
    0it [00:00, ?it/s]
    Using 0 data frame from train split
    loading data frame...
    0it [00:00, ?it/s]
    Using 0 data frame from val split
    Generate 0 samples from train split
    Generate 0 samples from val split
    focal_loss alpha: [ 0.   1.   1.   1.   2.   2.5  1.   3.   1.   1.   1.   1.  10.   2.5]
    loading data frame...
    0it [00:00, ?it/s]
    Using 0 data frame from train split
    loading data frame...
    0it [00:00, ?it/s]
    Using 0 data frame from val split
    Generate 0 samples from train split
    Generate 0 samples from val split
    [IOU EVAL] IGNORE:  tensor([0])
    [IOU EVAL] INCLUDE:  tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
    [IOU EVAL] IGNORE:  tensor([0])
    [IOU EVAL] INCLUDE:  tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
    /home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
      "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
    ===init env success===
    [IOU EVAL] IGNORE:  tensor([0])
    [IOU EVAL] INCLUDE:  tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
    [IOU EVAL] IGNORE:  tensor([0])
    [IOU EVAL] INCLUDE:  tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
    /home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
      "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
    [IOU EVAL] IGNORE:  ===init env success===
    tensor([0])
    [IOU EVAL] INCLUDE:  tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
    [IOU EVAL] IGNORE:  tensor([0])
    [IOU EVAL] INCLUDE:  tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
    /home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
      "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
    ===init env success===
    Traceback (most recent call last):
      File "main.py", line 148, in <module>
        exp.run()
      File "main.py", line 99, in run
        self.trainer.run(epoch, mode="Train")
      File "/home/yczhou/PMF-master/tasks/sensat_urban/pmf/trainer.py", line 528, in run
        "Acc": mean_acc.item(),
    UnboundLocalError: local variable 'mean_acc' referenced before assignment
    Traceback (most recent call last):
      File "main.py", line 148, in <module>
        exp.run()
      File "main.py", line 99, in run
        self.trainer.run(epoch, mode="Train")
      File "/home/yczhou/PMF-master/tasks/sensat_urban/pmf/trainer.py", line 528, in run
        "Acc": mean_acc.item(),
    UnboundLocalError: local variable 'mean_acc' referenced before assignment
    Traceback (most recent call last):
      File "main.py", line 148, in <module>
        exp.run()
      File "main.py", line 99, in run
        self.trainer.run(epoch, mode="Train")
      File "/home/yczhou/PMF-master/tasks/sensat_urban/pmf/trainer.py", line 448, in run
        tag="{}_lr".format(mode), scalar_value=lr, global_step=epoch)
    UnboundLocalError: local variable 'lr' referenced before assignment
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 48867) of binary: /home/yczhou/anaconda3/envs/buct-bishe/bin/python
    Traceback (most recent call last):
      File "/home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
        main()
      File "/home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
        launch(args)
      File "/home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
        run(args)
      File "/home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/distributed/run.py", line 718, in run
        )(*cmd_args)
      File "/home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/home/yczhou/anaconda3/envs/buct-bishe/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 247, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
    ============================================================
    main.py FAILED
    ------------------------------------------------------------
    Failures:
    [1]:
      time      : 2022-04-15_18:58:42
      host      : zkti
      rank      : 1 (local_rank: 1)
      exitcode  : 1 (pid: 48868)
      error_file: <N/A>
      traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
    [2]:
      time      : 2022-04-15_18:58:42
      host      : zkti
      rank      : 2 (local_rank: 2)
      exitcode  : 1 (pid: 48869)
      error_file: <N/A>
      traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
    ------------------------------------------------------------
    Root Cause (first observed failure):
    [0]:
      time      : 2022-04-15_18:58:42
      host      : zkti
      rank      : 0 (local_rank: 0)
      exitcode  : 1 (pid: 48867)
      error_file: <N/A>
      traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
    ============================================================
    
    opened by YellowPuppy 4
  • About calib.txt in SemanticKITTI

    Hello author, regarding the read_calib(calib_path) function in pc_processor->dataset->semantic_kitti->parser in your code,

    import numpy as np

    def read_calib(calib_path):
        """
        :param calib_path: Path to a calibration text file.
        :return: dict with calibration matrices.
        """
        calib_all = {}
        with open(calib_path, 'r') as f:
            for line in f.readlines():
                if line == '\n':
                    break
                key, value = line.split(':', 1)
                calib_all[key] = np.array([float(x) for x in value.split()])
        calib_out = {}
        # 3x4 projection matrix for left camera
        calib_out['P2'] = calib_all['P2'].reshape(3, 4)
        calib_out['Tr'] = np.identity(4)  # 4x4 matrix
        calib_out['Tr'][:3, :4] = calib_all['Tr'].reshape(3, 4)
        return calib_out
    

    Does your calib.txt file already contain the `Tr` value? Why does the calib.txt I downloaded from the official website only contain `P0`, `P1`, `P2`, and `P3`?

    opened by 2311762665 3
  • When shall we use pcd_aug?

    Thanks in advance for sharing your code for public research. The code is very standardized and easy to understand.

    As I understand it, ./pc_processor/dataset/preprocess/augmentor.py performs point cloud augmentation operations such as flips, translations, etc. My concern is whether this breaks the image matching (the LiDAR-to-camera2 matrix). I also noticed that you set the value of pcd_aug to false in ./tasks/pmf/trainer.py.

    My questions:

    1. Do you adopt pcd_aug in training?
    2. In what situations should we adopt pcd_aug?

    Looking forward to your reply, thanks.

    opened by hadonga 3
  • corresponding images dataset download link

    Hello dear author, I only found the LiDAR data of the SemanticKITTI dataset. Could you share a download link for the corresponding image data (or for nuScenes)? Sorry, I really can't find it.

    Thanks!

    opened by emilyemliyM 3
  • PMF-ResNet50 on SemanticKITTI Validation Set?

    Hi~

    Thanks for the open-source repo of your excellent work!

    I notice that PMF-ResNet50 significantly outperforms PMF-ResNet34 on nuScenes Validation Set, and you even adopt ResNet101 on SensatUrban Test Set.

    However, the result of PMF-ResNet50 (or a deeper backbone) on the SemanticKITTI validation set is unavailable. Did you try it? Intuitively, it should also bring gains. Or did I miss something important?

    opened by haibo-qiu 3
  • Semantic Kitti Dataset

    Can someone help me with the dataset structure of SemanticKITTI, where to find the datasets, and how many images should be used per sequence for left and right?

    I am very confused

    opened by kishorsabarishg 1
  • Request source code or results of image-only adversarial analysis

    Hello author, thank you for your contribution to the community! I would like to ask how you implemented the adversarial analysis of the camera-only FCN-based method in the ablation experiments; I am strongly curious. I want to implement it in my own work. If possible, I hope you can share the source code of the relevant part, or provide the prediction visualization results for 002777.png of validation sequence 08.

    opened by 2311762665 1
  • only one element tensors can be converted to Python scalars

    Thank you very much for your work. I am getting an error like this during training: "only one element tensors can be converted to Python scalars".

    Training runs normally at the beginning, but this error appears after a few hundred iterations. Have you ever encountered it? HELP

    opened by Chang-007 1
  • How to assign semantic prediction values to points projected beyond the images

    Hi!

    One question really haunts me about the experimental results on nuScenes: how do you assign semantic prediction values to points that project outside the images?

    Looking forward to your reply.

    opened by jialeli1 0
  • Is there any code to calculate the speed of model inference in this project?

    Hello, is there any code to calculate the speed of model inference in this project? I want to evaluate the FPS metric of the model. Thank you very much!

    opened by huangwan-jiayi 0