[CVPR2021] DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets

Overview

This repo holds the PyTorch implementation of DoDNet:

DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets. (https://arxiv.org/pdf/2011.10217.pdf)
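For orientation, below is a minimal sketch of the paper's core idea: a controller generates the kernels of a small task-specific segmentation head from a global image descriptor and a one-hot task encoding. The layer sizes and tensor shapes here are illustrative assumptions, not the exact code in unet3D_DynConv882.py.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicHead(nn.Module):
    # Illustrative sketch of a task-conditioned dynamic head (assumed
    # sizes; see unet3D_DynConv882.py for the repo's actual layers).
    def __init__(self, feat_ch=256, num_tasks=7, head_ch=8, num_classes=2):
        super().__init__()
        self.num_tasks = num_tasks
        # Three dynamic 1x1x1 conv layers: head_ch -> head_ch -> num_classes.
        self.shapes = [(head_ch, head_ch), (head_ch, head_ch), (num_classes, head_ch)]
        n_params = sum(o * i + o for o, i in self.shapes)
        # Controller maps (pooled features || task one-hot) to kernel parameters.
        self.controller = nn.Linear(feat_ch + num_tasks, n_params)

    def forward(self, bottleneck, feat, task_id):
        # bottleneck: (B, feat_ch, D, H, W); feat: (B, head_ch, D, H, W);
        # task_id: (B,) long tensor of dataset/task indices.
        desc = bottleneck.mean(dim=(2, 3, 4))  # global average pooling
        onehot = F.one_hot(task_id, self.num_tasks).float()
        params = self.controller(torch.cat([desc, onehot], dim=1))
        B = feat.shape[0]
        out, offset = feat, 0
        for i, (o, c) in enumerate(self.shapes):
            w = params[:, offset:offset + o * c].reshape(B * o, c, 1, 1, 1)
            offset += o * c
            b = params[:, offset:offset + o].reshape(B * o)
            offset += o
            # Grouped conv applies each sample's own generated kernels.
            out = F.conv3d(out.reshape(1, B * c, *out.shape[2:]), w, b, groups=B)
            out = out.reshape(B, o, *feat.shape[2:])
            if i < len(self.shapes) - 1:
                out = F.relu(out)
        return out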

Requirements

Python 3.7
PyTorch==1.4.0
Apex==0.1
batchgenerators
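One possible way to set up this environment (the version pins come from the list above; the Apex build command follows NVIDIA's usual from-source install and may need adjustment for your CUDA setup):

pip install torch==1.4.0 batchgenerators
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./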

Usage

0. Installation

  • Clone this repo
git clone https://github.com/jianpengz/DoDNet.git
cd DoDNet

1. MOTS Dataset Preparation

Before starting, MOTS should be rebuilt from the following medical organ and tumor segmentation datasets:

Partial-label task    Data source
Liver                 LiTS
Kidney                KiTS19
Hepatic Vessel        MSD (Medical Segmentation Decathlon)
Pancreas              MSD
Colon                 MSD
Lung                  MSD
Spleen                MSD
  • Download these datasets and put them in dataset/0123456/.
  • Re-space the data by running python re_spacing.py (a sketch of this step follows below); the re-spaced data will be saved in 0123456_spacing_same/.
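For reference, re-spacing resamples every volume to a common voxel spacing. A minimal sketch of this step using SimpleITK; the spacing values and interpolator choices are assumptions, and re_spacing.py may differ:

import SimpleITK as sitk

def respace(path_in, path_out, new_spacing=(1.0, 1.0, 1.0), is_label=False):
    # Resample a NIfTI volume to a fixed voxel spacing (illustrative values).
    img = sitk.ReadImage(path_in)
    old_size, old_spacing = img.GetSize(), img.GetSpacing()
    new_size = [int(round(sz * sp / ns))
                for sz, sp, ns in zip(old_size, old_spacing, new_spacing)]
    # Nearest-neighbour keeps label masks integer-valued.
    interp = sitk.sitkNearestNeighbor if is_label else sitk.sitkBSpline
    out = sitk.Resample(img, new_size, sitk.Transform(), interp,
                        img.GetOrigin(), new_spacing, img.GetDirection(),
                        0, img.GetPixelID())
    sitk.WriteImage(out, path_out)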

The folder structure of the dataset should look like:

dataset/0123456_spacing_same/
├── 0Liver
│   ├── imagesTr
│   │   ├── liver_0.nii.gz
│   │   ├── liver_1.nii.gz
│   │   └── ...
│   └── labelsTr
│       ├── liver_0.nii.gz
│       ├── liver_1.nii.gz
│       └── ...
├── 1Kidney
└── ...

2. Model

A pretrained model is available in checkpoint.
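The checkpoint was saved from a distributed (DataParallel-wrapped) model, so its keys carry a module. prefix; loading it into an unwrapped model raises the state_dict error reported in the comments below. A hedged snippet for stripping the prefix (a common fix, not code from this repo):

import torch

def load_checkpoint(model, ckpt_path):
    # Strip the "module." prefix added by (Distributed)DataParallel so the
    # weights load into an unwrapped model.
    state = torch.load(ckpt_path, map_location='cpu')
    state = {k[len('module.'):] if k.startswith('module.') else k: v
             for k, v in state.items()}
    model.load_state_dict(state)
    return model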

3. Training

  • cd a_DynConv/ and run:
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=$RANDOM train.py \
--train_list='list/MOTS/MOTS_train.txt' \
--snapshot_dir='snapshots/dodnet' \
--input_size='64,192,192' \
--batch_size=2 \
--num_gpus=2 \
--num_epochs=1000 \
--start_epoch=0 \
--learning_rate=1e-2 \
--num_classes=2 \
--num_workers=8 \
--weight_std=True \
--random_mirror=True \
--random_scale=True \
--FP16=False
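For a single GPU, the same launcher should work with the process and GPU counts reduced (an untested variant; all flags are those documented above):

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port=$RANDOM train.py \
--train_list='list/MOTS/MOTS_train.txt' \
--snapshot_dir='snapshots/dodnet' \
--input_size='64,192,192' \
--batch_size=2 \
--num_gpus=1
# (remaining flags as in the two-GPU command above)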

4. Evaluation

CUDA_VISIBLE_DEVICES=0 python evaluate.py \
--val_list='list/MOTS/MOTS_test.txt' \
--reload_from_checkpoint=True \
--reload_path='snapshots/dodnet/MOTS_DynConv_checkpoint.pth' \
--save_path='outputs/' \
--input_size='64,192,192' \
--batch_size=1 \
--num_gpus=1 \
--num_workers=2
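Whole CT volumes are larger than the 64×192×192 patch size, so evaluation presumably tiles each volume with overlapping windows and averages the logits. A rough sketch of that idea (an assumption, not the repo's exact evaluate.py logic):

import torch

def sliding_window_predict(model, volume, task_id,
                           patch=(64, 192, 192), stride=(32, 96, 96)):
    # volume: (1, 1, D, H, W) tensor; averages logits over overlapping
    # windows. For brevity, assumes the strides tile the volume exactly.
    _, _, D, H, W = volume.shape
    out, count = None, torch.zeros(1, 1, D, H, W)
    for z in range(0, D - patch[0] + 1, stride[0]):
        for y in range(0, H - patch[1] + 1, stride[1]):
            for x in range(0, W - patch[2] + 1, stride[2]):
                crop = volume[..., z:z + patch[0], y:y + patch[1], x:x + patch[2]]
                with torch.no_grad():
                    logits = model(crop, task_id)
                if out is None:
                    out = torch.zeros(1, logits.shape[1], D, H, W)
                out[..., z:z + patch[0], y:y + patch[1], x:x + patch[2]] += logits
                count[..., z:z + patch[0], y:y + patch[1], x:x + patch[2]] += 1
    return out / count.clamp(min=1)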

5. Post-processing

python postp.py --img_folder_path='outputs/dodnet/'
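Post-processing for organ segmentation commonly keeps only the largest connected component of each predicted organ mask; a sketch of that idea using scipy (an assumption about what postp.py does):

import numpy as np
from scipy import ndimage

def largest_component(mask):
    # Keep only the largest connected component of a binary mask.
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(np.ones_like(mask), labeled, index=range(1, n + 1))
    return (labeled == (np.argmax(sizes) + 1)).astype(mask.dtype)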

6. Citation

If this code is helpful for your study, please cite:

@inproceedings{zhang2021dodnet,
  title={DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets},
  author={Zhang, Jianpeng and Xie, Yutong and Xia, Yong and Shen, Chunhua},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Contact

Jianpeng Zhang ([email protected])

Comments
  •  KiTS19

    Hi, you really did an excellent job, and thanks for your sharing. I have already prepared all the datasets; except for KiTS19, they all work well with your code and give correct results. I followed the steps for KiTS19, but the result produced by evaluate.py and postp.py is wrong. Could you please share more details on how to prepare KiTS19 correctly? Thanks!

    opened by ChiYe622 10
  • Data download

    Hi @jianpengz, thanks very much for your work. I just tried this repo and ran into some problems when downloading the datasets per your instructions. 1) I cannot find any data related to the Kidney dataset; 2) I downloaded LiTS and found that its layout differs from the folder format used in the list/MOTS/MOTS_train/test.txt files.

    opened by wanglixilinx 10
  • Question About Training Process

    Hello, thank you very much for the code you provided. I ran into a problem during training: at a certain epoch, a dimension-mismatch error occurs. What is the reason for this?

    torch.Size([1, 64, 32, 96, 96]) skip1 : shape torch.Size([1, 64, 32, 96, 95])
    16 22:01:46 WRN A exception occurred during Engine initialization, give up running process
    Traceback (most recent call last):
      File "train.py", line 253, in <module>
        main()
      File "train.py", line 188, in main
        preds = model(images, task_ids)
      File "/home/test/anaconda3/envs/qy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/test/anaconda3/envs/qy/lib/python3.6/site-packages/apex-0.1-py3.6.egg/apex/parallel/distributed.py", line 560, in forward
        result = self.module(*inputs, **kwargs)
      File "/home/test/anaconda3/envs/qy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/data/qy/code/DoDNet/a_DynConv/unet3D_DynConv882.py", line 230, in forward
        x = x + skip1
    RuntimeError: The size of tensor a (96) must match the size of tensor b (95) at non-singleton dimension 4

    opened by QianSiWang1 6
  • cannot import name 'parse_devices' from 'utils.pyt_utils'

    It's a great honor for me to read your paper. When I run your code it shows "cannot import name 'parse_devices' from 'utils.pyt_utils'". I checked pyt_utils.py but didn't find a definition of parse_devices. I don't know how to solve it; thanks for your help.

    opened by peterlv666 4
  • inconsistent data numbers on 2HepaticVessel

    Hi, thank you a lot for your contribution. I am trying to reproduce the results but find that for the 2Hippocampus dataset you have 242 + 61 = 303 images, whereas in the imagesTr folder under Task04_Hippocampus downloaded from MSD I can only find 260 in total. How did you get labeled data for the cases in imagesTs?

    opened by Huiimin5 2
  • Questions about running the code

    Hi Prof. Zhang, the a_DynConv folder in the code you provided contains no train_Dynconv.py file; should I run train.py instead? After running python train.py I get:

    Traceback (most recent call last):
      File "train.py", line 30, in <module>
        from engine import Engine
      File "../engine.py", line 9, in <module>
        from utils.logger import get_logger
    ModuleNotFoundError: No module named 'utils.logger'

    (the same traceback is printed by each worker process)

    But pip install utils.logger cannot find such a module:

    ERROR: Could not find a version that satisfies the requirement utils.logger
    ERROR: No matching distribution found for utils.logger
    

    I wonder whether utils.logger might be a file you haven't yet published?

    Looking forward to your reply. Thanks!

    opened by kaibeat 2
  • Liver and Kidney Preprocessing

    The link given for the 0Liver data does not have the imagesTr/labelsTr/imagesTs structure that the rest of the datasets have, yet in the train.txt file 0Liver is given that structure. Where can I find the 0Liver data with this structure?

    Also, for the 1Kidney preprocessing, should line 28 of respacing.py be dirs1 rather than i_dirs1, so that the preprocessing for 1Kidney goes through this loop (i_dirs1 is each specific case)? Also, when going through this loop the data is not saved in the spacing folder with origin as a subfolder. Is this an issue, or can I just delete the origin folder from the train.txt file?

    opened by ShafinH 1
  • How to directly run train.py ?

    @jianpengz I can run your code using your command in the PyCharm terminal, but when I run train.py directly it shows a 'RuntimeError:' as the picture shows. Can you tell me how to solve it? Thanks for your time and kindness.

    opened by peterlv666 1
  • Broken Pipe Training Error

    Error while training on Hepatic Vessels and Pancreas:

    Traceback (most recent call last): File "train.py", line 250, in <module> main() File "train.py", line 177, in main for iter, batch in enumerate(trainloader): File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__ return self._get_iterator() File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__ w.start() File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start self._popen = self._Popen(self) File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__ reduction.dump(process_obj, to_child) File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) BrokenPipeError: [Errno 32] Broken pipe

    opened by ShafinH 0
  • The training error

    The MSD Liver dataset was used for training:

    DoDNet/a_DynConv/MOTSDataset.py", line 471, in my_collate
        data_dict = tr_transforms(**data_dict)
    TypeError: __call__() got an unexpected keyword argument 'image'

    opened by WSake 0
  • Question About re_spacing.py

    Hello, thank you very much for the code. I have a question about re_spacing.py: the code does not process the 0Liver dataset. Does that mean no additional processing of the downloaded data is needed, and the file names can just be modified directly?

    opened by QianSiWang1 0
  • Help, please!

    Traceback (most recent call last): File "/home/shiya.xu/papers/DoDNet/a_DynConv/train.py", line 250, in main() File "/home/shiya.xu/papers/DoDNet/a_DynConv/train.py", line 185, in main preds = model(images, task_ids) File "/home/shiya.xu/anaconda3/envs/pyy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/home/shiya.xu/anaconda3/envs/pyy/lib/python3.10/site-packages/apex/parallel/distributed.py", line 564, in forward result = self.module(*inputs, **kwargs) File "/home/shiya.xu/anaconda3/envs/pyy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/home/shiya.xu/papers/DoDNet/a_DynConv/unet3D_DynConv882.py", line 224, in forward x = x + skip1 RuntimeError: The size of tensor a (96) must match the size of tensor b (95) at non-singleton dimension 3 WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 26535 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 26536) of binary: /home/shiya.xu/anaconda3/envs/pyy/bin/python

    opened by smallkaka 0
  • Results seem different

    Hi,

    I downloaded the checkpoint and ran inference, but found that the Dice of Kidney (and kidney tumor) is close to 0, as the attached figure shows. Did anyone meet the same issue? Can anyone help explain the reason? Thanks a lot.

    opened by Jingnan-Jia 0
  • Error while training with checkpoint

    File "train.py", line 266, in <module> main() File "train.py", line 151, in main args.reload_path, map_location=torch.device('cpu'))) File "C:\Users\shafi\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 1407, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for unet3D: Missing key(s) in state_dict: "conv1.weight", "layer0.0.gn1.weight", "layer0.0.gn1.bias", "layer0.0.conv1.weight", "layer0.0.gn2.weight", "layer0.0.gn2.bias", "layer0.0.conv2.weight", "layer1.0.gn1.weight", "layer1.0.gn1.bias", "layer1.0.conv1.weight", "layer1.0.gn2.weight", "layer1.0.gn2.bias", "layer1.0.conv2.weight", "layer1.0.downsample.0.weight", "layer1.0.downsample.0.bias", "layer1.0.downsample.2.weight", "layer1.1.gn1.weight", "layer1.1.gn1.bias", "layer1.1.conv1.weight", "layer1.1.gn2.weight", "layer1.1.gn2.bias", "layer1.1.conv2.weight", "layer2.0.gn1.weight", "layer2.0.gn1.bias", "layer2.0.conv1.weight", "layer2.0.gn2.weight", "layer2.0.gn2.bias", "layer2.0.conv2.weight", "layer2.0.downsample.0.weight", "layer2.0.downsample.0.bias", "layer2.0.downsample.2.weight", "layer2.1.gn1.weight", "layer2.1.gn1.bias", "layer2.1.conv1.weight", "layer2.1.gn2.weight", "layer2.1.gn2.bias", "layer2.1.conv2.weight", "layer3.0.gn1.weight", "layer3.0.gn1.bias", "layer3.0.conv1.weight", "layer3.0.gn2.weight", "layer3.0.gn2.bias", "layer3.0.conv2.weight", "layer3.0.downsample.0.weight", "layer3.0.downsample.0.bias", "layer3.0.downsample.2.weight", "layer3.1.gn1.weight", "layer3.1.gn1.bias", "layer3.1.conv1.weight", "layer3.1.gn2.weight", "layer3.1.gn2.bias", "layer3.1.conv2.weight", "layer4.0.gn1.weight", "layer4.0.gn1.bias", "layer4.0.conv1.weight", "layer4.0.gn2.weight", "layer4.0.gn2.bias", "layer4.0.conv2.weight", "layer4.0.downsample.0.weight", "layer4.0.downsample.0.bias", "layer4.0.downsample.2.weight", "layer4.1.gn1.weight", "layer4.1.gn1.bias", "layer4.1.conv1.weight", "layer4.1.gn2.weight", "layer4.1.gn2.bias", "layer4.1.conv2.weight", "fusionConv.0.weight", "fusionConv.0.bias", "fusionConv.2.weight", "x8_resb.0.gn1.weight", "x8_resb.0.gn1.bias", "x8_resb.0.conv1.weight", "x8_resb.0.gn2.weight", "x8_resb.0.gn2.bias", "x8_resb.0.conv2.weight", "x8_resb.0.downsample.0.weight", "x8_resb.0.downsample.0.bias", "x8_resb.0.downsample.2.weight", "x4_resb.0.gn1.weight", "x4_resb.0.gn1.bias", "x4_resb.0.conv1.weight", "x4_resb.0.gn2.weight", "x4_resb.0.gn2.bias", "x4_resb.0.conv2.weight", "x4_resb.0.downsample.0.weight", "x4_resb.0.downsample.0.bias", "x4_resb.0.downsample.2.weight", "x2_resb.0.gn1.weight", "x2_resb.0.gn1.bias", "x2_resb.0.conv1.weight", "x2_resb.0.gn2.weight", "x2_resb.0.gn2.bias", "x2_resb.0.conv2.weight", "x2_resb.0.downsample.0.weight", "x2_resb.0.downsample.0.bias", "x2_resb.0.downsample.2.weight", "x1_resb.0.gn1.weight", "x1_resb.0.gn1.bias", "x1_resb.0.conv1.weight", "x1_resb.0.gn2.weight", "x1_resb.0.gn2.bias", "x1_resb.0.conv2.weight", "precls_conv.0.weight", "precls_conv.0.bias", "precls_conv.2.weight", "precls_conv.2.bias", "GAP.0.weight", "GAP.0.bias", "controller.weight", "controller.bias". 
Unexpected key(s) in state_dict: "module.conv1.weight", "module.layer0.0.gn1.weight", "module.layer0.0.gn1.bias", "module.layer0.0.conv1.weight", "module.layer0.0.gn2.weight", "module.layer0.0.gn2.bias", "module.layer0.0.conv2.weight", "module.layer1.0.gn1.weight", "module.layer1.0.gn1.bias", "module.layer1.0.conv1.weight", "module.layer1.0.gn2.weight", "module.layer1.0.gn2.bias", "module.layer1.0.conv2.weight", "module.layer1.0.downsample.0.weight", "module.layer1.0.downsample.0.bias", "module.layer1.0.downsample.2.weight", "module.layer1.1.gn1.weight", "module.layer1.1.gn1.bias", "module.layer1.1.conv1.weight", "module.layer1.1.gn2.weight", "module.layer1.1.gn2.bias", "module.layer1.1.conv2.weight", "module.layer2.0.gn1.weight", "module.layer2.0.gn1.bias", "module.layer2.0.conv1.weight", "module.layer2.0.gn2.weight", "module.layer2.0.gn2.bias", "module.layer2.0.conv2.weight", "module.layer2.0.downsample.0.weight", "module.layer2.0.downsample.0.bias", "module.layer2.0.downsample.2.weight", "module.layer2.1.gn1.weight", "module.layer2.1.gn1.bias", "module.layer2.1.conv1.weight", "module.layer2.1.gn2.weight", "module.layer2.1.gn2.bias", "module.layer2.1.conv2.weight", "module.layer3.0.gn1.weight", "module.layer3.0.gn1.bias", "module.layer3.0.conv1.weight", "module.layer3.0.gn2.weight", "module.layer3.0.gn2.bias", "module.layer3.0.conv2.weight", "module.layer3.0.downsample.0.weight", "module.layer3.0.downsample.0.bias", "module.layer3.0.downsample.2.weight", "module.layer3.1.gn1.weight", "module.layer3.1.gn1.bias", "module.layer3.1.conv1.weight", "module.layer3.1.gn2.weight", "module.layer3.1.gn2.bias", "module.layer3.1.conv2.weight", "module.layer4.0.gn1.weight", "module.layer4.0.gn1.bias", "module.layer4.0.conv1.weight", "module.layer4.0.gn2.weight", "module.layer4.0.gn2.bias", "module.layer4.0.conv2.weight", "module.layer4.0.downsample.0.weight", "module.layer4.0.downsample.0.bias", "module.layer4.0.downsample.2.weight", "module.layer4.1.gn1.weight", "module.layer4.1.gn1.bias", "module.layer4.1.conv1.weight", "module.layer4.1.gn2.weight", "module.layer4.1.gn2.bias", "module.layer4.1.conv2.weight", "module.fusionConv.0.weight", "module.fusionConv.0.bias", "module.fusionConv.2.weight", "module.x8_resb.0.gn1.weight", "module.x8_resb.0.gn1.bias", "module.x8_resb.0.conv1.weight", "module.x8_resb.0.gn2.weight", "module.x8_resb.0.gn2.bias", "module.x8_resb.0.conv2.weight", "module.x8_resb.0.downsample.0.weight", "module.x8_resb.0.downsample.0.bias", "module.x8_resb.0.downsample.2.weight", "module.x4_resb.0.gn1.weight", "module.x4_resb.0.gn1.bias", "module.x4_resb.0.conv1.weight", "module.x4_resb.0.gn2.weight", "module.x4_resb.0.gn2.bias", "module.x4_resb.0.conv2.weight", "module.x4_resb.0.downsample.0.weight", "module.x4_resb.0.downsample.0.bias", "module.x4_resb.0.downsample.2.weight", "module.x2_resb.0.gn1.weight", "module.x2_resb.0.gn1.bias", "module.x2_resb.0.conv1.weight", "module.x2_resb.0.gn2.weight", "module.x2_resb.0.gn2.bias", "module.x2_resb.0.conv2.weight", "module.x2_resb.0.downsample.0.weight", "module.x2_resb.0.downsample.0.bias", "module.x2_resb.0.downsample.2.weight", "module.x1_resb.0.gn1.weight", "module.x1_resb.0.gn1.bias", "module.x1_resb.0.conv1.weight", "module.x1_resb.0.gn2.weight", "module.x1_resb.0.gn2.bias", "module.x1_resb.0.conv2.weight", "module.precls_conv.0.weight", "module.precls_conv.0.bias", "module.precls_conv.2.weight", "module.precls_conv.2.bias", "module.GAP.0.weight", "module.GAP.0.bias", "module.controller.weight", "module.controller.bias".

    opened by ShafinH 2
  • executable for KiTS and MSD downloads

    A shell script to download all of the MOTS data. gdown is a requirement (it can be installed with pip install gdown); the script takes target_path as an argument. To download the data, do the following:

    chmod +x load_mots.sh
    ./load_mots.sh target_path

    opened by arinaruck 0