PointRend

A PyTorch implementation of PointRend: Image Segmentation as Rendering

[arXiv] [Official Implementation: Detectron2]

Overview

This repo covers only semantic segmentation, on the PascalVOC dataset.

Many details differ from the paper; they were simplified as a feasibility check.


Reproducing Fig. 5

Sampled points from the different sampling strategies, shown on a dog image.

See test_point_sampling.ipynb
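
For reference, below is a minimal, self-contained sketch of the kind of biased point sampling the paper's Fig. 5 compares: oversample random candidates, keep the most uncertain ones, and pad with uniform points. The function names, the k/beta defaults, and the top-2-score uncertainty measure are illustrative assumptions, not necessarily this repo's exact implementation.

import torch
import torch.nn.functional as F

def point_uncertainty(coarse, points):
    # Sample the coarse prediction at normalized points and score uncertainty
    # as the (negated) gap between the two highest class scores per point.
    grid = 2.0 * points.unsqueeze(2) - 1.0                                 # (B, P, 1, 2) in [-1, 1]
    scores = F.grid_sample(coarse, grid, align_corners=False).squeeze(3)   # (B, C, P)
    top2 = scores.topk(2, dim=1).values
    return -(top2[:, 0] - top2[:, 1])                                      # (B, P), higher = more uncertain

def biased_point_sample(coarse, N, k=3, beta=0.75):
    # Oversample k*N random candidates, keep the beta*N most uncertain,
    # and fill the remaining (1 - beta)*N with uniformly random points.
    B, device = coarse.shape[0], coarse.device
    cand = torch.rand(B, int(k * N), 2, device=device)                     # candidate points in [0, 1]
    n_unc = int(beta * N)
    idx = point_uncertainty(coarse, cand).topk(n_unc, dim=1).indices       # most uncertain candidates
    hard = torch.gather(cand, 1, idx.unsqueeze(-1).expand(-1, -1, 2))
    rand = torch.rand(B, N - n_unc, 2, device=device)                      # uniform-coverage points
    return torch.cat([hard, rand], dim=1)                                  # (B, N, 2)

# e.g. 48 points from a dummy 2-class coarse map, mildly biased toward uncertain regions
pts = biased_point_sample(torch.randn(1, 2, 64, 64), N=48, k=3, beta=0.75)

Setting beta=0 reduces to plain uniform sampling and beta=1 to fully uncertainty-biased sampling, which is roughly the axis the original figure varies.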

Original figure: Fig. 5 of the PointRend paper.

Reference: PyTorch DeepLab Tutorial


How to use:

First, set the data path in config/default.yaml.

Multi GPU Training (see Single GPU Training below for the arguments)

➜ python3 -m torch.distributed.launch --nproc_per_node={your_gpus} main.py -h

Single GPU Training

➜ python3 main.py -h
usage: main.py [-h] config save

PyTorch Object Detection Training

positional arguments:
  config      It must be config/*.yaml
  save        Save path in out directory

optional arguments:
  -h, --help  show this help message and exit

e.g.)

python3 main.py config/default.yaml test_codes
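
The distributed launcher presumably takes the same positional arguments, so a multi-GPU run would look like the following (two GPUs assumed here as an example):

➜ python3 -m torch.distributed.launch --nproc_per_node=2 main.py config/default.yaml test_codes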


Comments
  • Suggestion: use scatter_ rather than a for loop during inference

    Hello, here is my suggestion: use Tensor.scatter_() rather than a "for loop" during inference.

    Example:

    B, C, H, W = pred.shape
    rend_pred = pred.view(B, C, -1).scatter_(
        2,
        rend_result['points'].unsqueeze(1).expand(-1, C, -1),
        rend_result['rend'].float())
    rend_pred = rend_pred.reshape((B, C, H, W))

    opened by lihaoming45 2
  • Question: loss=0, seg=0, point=0

    Thank you for your contribution. I am using the network to train on my own dataset, formatted the same way as Cityscapes. With the command python3 main.py configs/default.yaml output the code runs, but the reported loss, seg, and point values are all 0 from the first epoch (I have five classes, semantic segmentation). What could be the problem?

    opened by gp458742574 0
  • What does "topk uncertainty points" mean?

    When running test_point_sampling.ipynb in the test folder, I don't understand what "topk uncertainty points" means in that file. Can you explain? Thank you!

    opened by kangyisheng123456 0
  • In class PointHead: "During inference, subdivision uses N=8096". Why 8096?

    Thanks for your contribution. The code comment in pointrend.py, class PointHead (inference), reads: """During inference, subdivision uses N=8096 (i.e., the number of points in the stride 16 map of a 1024×2048 image)""". I found N=8096 in the paper ("5. Experiments: Semantic Segmentation"), and it matches the variable N in your code. I don't understand how this N is obtained. Is it related to the size of the input image, or to something else?

    opened by JintuZheng 1
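
For context on the comments above, here is a rough sketch of the adaptive subdivision inference described in the paper: repeatedly upsample the prediction, re-predict only the N most uncertain points with the point head, and write them back with a vectorized scatter_ (as the first comment also suggests). The point_head call signature, the point_sample helper, and the shapes are placeholder assumptions for illustration, not this repo's exact API.

import torch
import torch.nn.functional as F

def point_sample(feats, points):
    # Bilinearly sample per-point features; points are (B, N, 2) in [0, 1].
    grid = 2.0 * points.unsqueeze(2) - 1.0                               # (B, N, 1, 2) in [-1, 1]
    return F.grid_sample(feats, grid, align_corners=False).squeeze(3)    # (B, C, N)

@torch.no_grad()
def subdivision_inference(coarse, fine_feats, point_head, steps=2, N=8096):
    out = coarse                                                         # (B, C, h, w) coarse prediction
    for _ in range(steps):
        # 1) upsample the current prediction 2x
        out = F.interpolate(out, scale_factor=2, mode="bilinear", align_corners=False)
        B, C, H, W = out.shape
        # 2) pick the N most uncertain points (smallest gap between the two top class scores)
        top2 = out.view(B, C, H * W).topk(2, dim=1).values
        n = min(N, H * W)
        idx = (-(top2[:, 0] - top2[:, 1])).topk(n, dim=1).indices        # (B, n) flat pixel indices
        ys = (idx // W).float() / (H - 1)
        xs = (idx % W).float() / (W - 1)
        pts = torch.stack([xs, ys], dim=-1)                              # (B, n, 2) normalized coords
        # 3) re-predict only those points with the point head ...
        rend = point_head(point_sample(fine_feats, pts), point_sample(coarse, pts))
        # 4) ... and scatter them back into the upsampled map
        out = out.view(B, C, H * W).scatter_(
            2, idx.unsqueeze(1).expand(-1, C, -1), rend.float()).view(B, C, H, W)
    return out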