Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation (CVPR 2020)

Introduction

We propose direction-based super-BPD, an alternative to superpixels, for fast generic image segmentation, achieving state-of-the-art real-time results.

Citation

Please cite the following work in your publications if it helps your research:


@InProceedings{Wan_2020_CVPR,
author = {Wan, Jianqiang and Liu, Yang and Wei, Donglai and Bai, Xiang and Xu, Yongchao},
title = {Super-BPD: Super Boundary-to-Pixel Direction for Fast Image Segmentation},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Prerequisite

  • pytorch >= 1.3.0
  • g++ 7

Dataset

Testing

  • Compile the CUDA code for post-processing (a quick import check is sketched after this list).
cd post_process
python setup.py install
  • Download the pre-trained PascalContext model and put it in the saved folder.

  • Test the model; results will be saved in the test_pred_flux/PascalContext folder.

  • SEISM is used for the evaluation of image segmentation.
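
A quick sanity check that the compiled extension can be imported, referenced from the compile step above. This is a minimal sketch, not part of the repository; it only assumes the install in post_process succeeded.

import torch      # import torch first so its shared libraries are loaded
import bpd_cuda   # the extension built by post_process/setup.py

print(bpd_cuda.forward)  # should resolve without an ImportError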

Training

python train.py --dataset PascalContext
Comments
  • clarification on norm loss calculation; possible bug?

    clarification on norm loss calculation; possible bug?

    When I look at the image at https://github.com/JianqiangWan/Super-BPD/blob/master/post_process/2009_004607.png (shown in the issue but not reproduced here),

    the norm_pred seems to decrease to blue (< 0.5) in the center of the cat's face (farther from the boundary). The same also happens at all points midway between the cat's boundaries. This is very different from the norm_gt.

    When I look at the code in

    https://github.com/JianqiangWan/Super-BPD/blob/master/vis_flux.py#L45

    that seems like the correct calculation for the norm.

    I've run this on a few other examples (images not reproduced here), and a similar thing seems to happen.

    This led me to investigate the implementation of the loss.

    If I'm understanding the loss as defined in the paper (equation screenshot not reproduced here),

    then norm_loss should be based on pred_flux - gt_flux, as in https://github.com/JianqiangWan/Super-BPD/blob/master/train.py#L42:

    norm_loss = weight_matrix * (pred_flux - gt_flux)**2
    

    However, this happens after https://github.com/JianqiangWan/Super-BPD/blob/master/train.py#L39, which I believe is incorrect.

    I believe that L39 needs to happen after L42; otherwise, the norm_loss as written actually trains the norm values to be angle values.

    This would also explain why the norm_pred outputs look more similar to the angle outputs than they should.

    HOWEVER, I could be completely misunderstanding the norm_loss term, so please let me know if I am! 🤞
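
    For concreteness, here is a minimal sketch (not the repository's code) of the two orderings I'm contrasting, assuming pred_flux and gt_flux are (2, H, W) flux fields and weight_matrix is an (H, W) weight map:

    import torch

    def norm_loss(pred_flux, gt_flux, weight_matrix, normalize_pred_first):
        if normalize_pred_first:
            # If the prediction is normalized to unit length before the loss is computed,
            # the magnitude information is divided out and the loss can only constrain
            # the direction (angle) of the flux, not its norm.
            pred_flux = pred_flux / (pred_flux.norm(dim=0, keepdim=True) + 1e-9)
        return (weight_matrix * (pred_flux - gt_flux) ** 2).sum()

    # Same random inputs, different losses depending on the ordering:
    pred, gt, w = torch.randn(2, 8, 8), torch.randn(2, 8, 8), torch.ones(8, 8)
    print(norm_loss(pred, gt, w, True), norm_loss(pred, gt, w, False))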

    opened by rllin 6
  • How to test a single RGB image?

    How to test a single RGB image?

    Thank you very much for your work. If I want to test an RGB image, can I feed the picture into the model and obtain a semantic segmentation map? I would like clear visualization results; can I get a more precise semantic segmentation map?

    opened by carfei 1
  • Failed to compile the BPD module with errors under both Pytorch1.2 & Pytorch1.5

    Failed to compile the BPD module with errors under both Pytorch1.2 & Pytorch1.5

    Hi, Really nice work!

    We have tried to compile the BPD module in the post_process folder and got different errors under PyTorch 1.2 and PyTorch 1.5, as shown below.

    • PyTorch 1.2 errors: (error screenshot, not reproduced here)

    • PyTorch 1.5 errors: (error screenshot, not reproduced here)

    Thanks!

    opened by PkuRainBow 1
  • bpd_cuda module not found

    bpd_cuda module not found

    I followed the steps to install bpd_cuda, but it cannot be loaded: at runtime Python reports that the module is missing. The installation output is below (Windows PowerShell; the session is translated from Chinese).

    The first few commands were mistyped: PowerShell rejected "E" and "cdE:\Documents\Jnotebook专用\Super-BPD-master\post_process" with CommandNotFoundException ("The term is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.").
    After changing into E:\Documents\Jnotebook专用\Super-BPD-master\post_process and running python setup.py install, the build started (running install / bdist_egg / egg_info), warned that ninja could not be found and fell back to the slow distutils backend, and warned that the compiler version for cl could not be checked ([WinError 2] "the system cannot find the file specified"). cl.exe then compiled bpd_cuda.cpp against the CUDA v11.3 and MSVC 14.28.29333 include paths, emitting a long stream of warnings from the PyTorch headers: C4067 ("unexpected tokens following preprocessor directive") in c10/macros/Macros.h, and many repeated C4624 warnings ("destructor was implicitly defined as deleted") for c10::constexpr_storage_t and trivially_copyable_optimization_optional_base instantiations in c10/util/Optional.h with T = at::Tensor, at::Generator, std::string, and other types. The pasted log is truncated at this point.

    opened by zhouzinimg 1
  • Could you explain the output data meaning?

    Could you explain the output data meaning?

    The point is that the model runs fast, but the bpd_cuda.forward call takes a large amount of time. So the question is: what is the output data, and can I get the segment data without calling bpd_cuda.forward?

    Also, is the network oriented toward a specific domain of image content? For my test images it produces bad results.

    Attached images (not reproduced here): the original image (716c971s-960), the visualization grid result, root, super_BPDs, super_BPDs_before_dilation, and super_BPDs_after_dilation.

    My code to run inference on a single image:

    # Missing imports added for completeness; VGG16, vis_flux, label2color and IMAGE_MEAN
    # are assumed to come from the Super-BPD repository code, bpd_cuda from the compiled
    # post_process extension.
    import math
    from time import time

    import cv2
    import numpy as np
    import torch
    import bpd_cuda

    # Load the pre-trained PascalContext model.
    model = VGG16()
    model.load_state_dict(torch.load('/home/algernone/git_projects/Super-BPD/saved/PascalContext_400000.pth'))

    model.eval()
    model.cuda()

    # Read the image and convert it to a (1, 3, H, W) float tensor with the mean subtracted.
    image_path = '/home/algernone/test_imgs/716c971s-960.jpg'
    image = cv2.imread(image_path, 1)
    src_img = image.copy()
    height, width = image.shape[:2]
    image = image.astype(np.float32)
    image -= IMAGE_MEAN
    image = image.transpose(2, 0, 1)
    image = image[np.newaxis]
    image = torch.from_numpy(image)

    # Predict the boundary-to-pixel direction (flux) field.
    tik = time()
    pred_flux = model(image.cuda())
    flux = pred_flux.data[0, ...]

    vis_flux(src_img, flux)

    # Convert the flux field to angles in [0, 2*pi).
    angles = torch.atan2(flux[1, ...], flux[0, ...])
    angles[angles < 0] += 2 * math.pi

    height, width = angles.shape

    # Post-process angles into super-BPD segments.
    # unit: degree
    # theta_a, theta_l, theta_s, S_o = 45, 116, 68, 5
    results = bpd_cuda.forward(angles, height, width, 45, 116, 68, 5)
    root_points, super_BPDs_before_dilation, super_BPDs_after_dilation, super_BPDs = results

    root_points = root_points.cpu().numpy()
    super_BPDs_before_dilation = super_BPDs_before_dilation.cpu().numpy()
    super_BPDs_after_dilation = super_BPDs_after_dilation.cpu().numpy()
    super_BPDs = super_BPDs.cpu().numpy()

    # Save visualizations (cv2.imwrite needs uint8 for the binary root mask).
    cv2.imwrite('root.png', (255 * (root_points > 0)).astype(np.uint8))
    cv2.imwrite('super_BPDs.png', label2color(super_BPDs))
    cv2.imwrite('super_BPDs_before_dilation.png', label2color(super_BPDs_before_dilation))
    cv2.imwrite('super_BPDs_after_dilation.png', label2color(super_BPDs_after_dilation))
    
    opened by chamecall 1
  • Why batchsize is 1 by default

    Why batchsize is 1 by default

    Thank you for your code! We know that a larger batch size is generally more beneficial to the results. I have seen that the default batch size in many open-source codebases for segmentation tasks is set to 1, but why? Looking forward to your reply!

    opened by darknli 1
  • RuntimeError: CUDA error: invalid device function

    RuntimeError: CUDA error: invalid device function

    Thank you very much for your work.

    1. I compiled and installed the post_process program (success).

    2. I wanted to run the demo to reproduce the results but got an error:

      results = bpd_cuda.forward(angles, height, width, 45, 116, 68, 5)
      RuntimeError: CUDA error: invalid device function

    Here is my torch version: torch 1.6.0, torchvision 0.7.0.

    I run the program on a V100 with 4 GPUs. Thank you very much if you can reply.
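
    A common cause of "invalid device function" is a mismatch between the GPU architecture and the architectures the extension was compiled for. The snippet below is a hedged sketch, not a confirmed diagnosis: it prints the device's compute capability, e.g. (7, 0) for a V100, so bpd_cuda can be rebuilt in post_process with a matching TORCH_CUDA_ARCH_LIST (e.g. TORCH_CUDA_ARCH_LIST="7.0" python setup.py install) if the architectures differ.

    import torch

    # Report the visible GPU and its compute capability before rebuilding the extension.
    print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))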

    opened by zyybutter 2
Owner
Master's student, VLR Group, HUST
The implementation of ICASSP 2020 paper "Pixel-level self-paced learning for super-resolution"

Pixel-level Self-Paced Learning for Super-Resolution This is an official implementation of the paper Pixel-level Self-Paced Learning for Super-Resoluti

Elon Lin 41 Dec 15, 2022
Learning Pixel-level Semantic Affinity with Image-level Supervision for Weakly Supervised Semantic Segmentation, CVPR 2018

Learning Pixel-level Semantic Affinity with Image-level Supervision This code is deprecated. Please see https://github.com/jiwoon-ahn/irn instead. Int

Jiwoon Ahn 337 Dec 15, 2022
Implementation of CVPR 2020 Dual Super-Resolution Learning for Semantic Segmentation

Dual super-resolution learning for semantic segmentation 2021-01-02 Subpixel Update Happy new year! The 2020-12-29 update of SISR with subpixel conv p

Sam 79 Nov 24, 2022
Super-Fast-Adversarial-Training - A PyTorch Implementation code for developing super fast adversarial training

Super-Fast-Adversarial-Training This is a PyTorch Implementation code for develo

LBK 26 Dec 2, 2022
[CVPR'22] Weakly Supervised Semantic Segmentation by Pixel-to-Prototype Contrast

wseg Overview The Pytorch implementation of Weakly Supervised Semantic Segmentation by Pixel-to-Prototype Contrast. [arXiv] Though image-level weakly

Ye Du 96 Dec 30, 2022
Boundary-preserving Mask R-CNN (ECCV 2020)

BMaskR-CNN This code is developed on Detectron2 Boundary-preserving Mask R-CNN ECCV 2020 Tianheng Cheng, Xinggang Wang, Lichao Huang, Wenyu Liu Video

Hust Visual Learning Team 178 Nov 28, 2022
Exploring Cross-Image Pixel Contrast for Semantic Segmentation

Exploring Cross-Image Pixel Contrast for Semantic Segmentation Exploring Cross-Image Pixel Contrast for Semantic Segmentation, Wenguan Wang, Tianfei Z

Tianfei Zhou 510 Jan 2, 2023
Official repository for "Modeling Defocus-Disparity in Dual-Pixel Sensors", ICCP 2020

Official repository for "Modeling Defocus-Disparity in Dual-Pixel Sensors", ICCP 2020 BibTeX @INPROCEEDINGS{punnappurath2020modeling, author={Abhi

Abhijith Punnappurath 22 Oct 1, 2022
code for `Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation`

Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation (CVPR 2021) Introduction PBR is a conceptually simple yet effective

H.Chen 143 Jan 5, 2023
Generic Event Boundary Detection: A Benchmark for Event Segmentation

Generic Event Boundary Detection: A Benchmark for Event Segmentation We release our data annotation & baseline codes for detecting generic event bound

null 47 Nov 22, 2022
Code for Boundary-Aware Segmentation Network for Mobile and Web Applications

BASNet Boundary-Aware Segmentation Network for Mobile and Web Applications This repository contain implementation of BASNet in tensorflow/keras. comme

Hamid Ali 8 Nov 24, 2022
[AAAI-2021] Visual Boundary Knowledge Translation for Foreground Segmentation

Trans-Net Code for (Visual Boundary Knowledge Translation for Foreground Segmentation, AAAI2021). [https://ojs.aaai.org/index.php/AAAI/article/view/16

ZJU-VIPA 2 Mar 4, 2022
An official PyTorch Implementation of Boundary-aware Self-supervised Learning for Video Scene Segmentation (BaSSL)

An official PyTorch Implementation of Boundary-aware Self-supervised Learning for Video Scene Segmentation (BaSSL)

Kakao Brain 72 Dec 28, 2022
Investigating Loss Functions for Extreme Super-Resolution (CVPR 2020)

Investigating Loss Functions for Extreme Super-Resolution NTIRE 2020 Perceptual Extreme Super-Resolution Submission. Our method ranked first and secon

Sejong Yang 0 Oct 17, 2022
Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning, CVPR 2021

Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning By Zhenda Xie*, Yutong Lin*, Zheng Zhang, Yue Ca

Zhenda Xie 293 Dec 20, 2022
Code for "PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation" CVPR 2019 oral

Good news! We release a clean version of PVNet: clean-pvnet, including how to train the PVNet on the custom dataset. Use PVNet with a detector. The tr

ZJU3DV 722 Dec 27, 2022
Per-Pixel Classification is Not All You Need for Semantic Segmentation

MaskFormer: Per-Pixel Classification is Not All You Need for Semantic Segmentation Bowen Cheng, Alexander G. Schwing, Alexander Kirillov [arXiv] [Proj

Facebook Research 1k Jan 8, 2023
Pytorch Implementation for NeurIPS (oral) paper: Pixel Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation

Pixel-Level Cycle Association This is the Pytorch implementation of our NeurIPS 2020 Oral paper Pixel-Level Cycle Association: A New Perspective for D

null 87 Oct 19, 2022
Pixel-wise segmentation on VOC2012 dataset using pytorch.

PiWiSe Pixel-wise segmentation on the VOC2012 dataset using pytorch. FCN SegNet PSPNet UNet RefineNet For a more complete implementation of segmentati

Bodo Kaiser 378 Dec 30, 2022