PyTorch Implementation of Dilated Continuous Random Field

Overview

DilatedCRF

PyTorch implementation of the fully-learnable DilatedCRF.


If you find my work helpful, please consider citing our paper:

@inproceedings{Mo2022dilatedcrf,
    title={Dilated Continuous Random Field for Semantic Segmentation},
    author={Mo, Xi and Chen, Xiangyu and Zhong, Cuncong and Li, Rui and Li, Kaidong and Usman, Sajid},
    booktitle={IEEE International Conference on Robotics and Automation},
    year={2022}
}

Easy Setup

Please install the following required packages, following their official installation guides:

python >= 3.6
pytorch >= 1.0.0
torchvision
pillow
numpy
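
To confirm the environment is ready, a quick version check (a minimal sketch; the version thresholds are the ones listed above):

    import sys
    import torch, torchvision, PIL, numpy

    # Required minimums taken from the list above.
    print("python     :", sys.version.split()[0], "(needs >= 3.6)")
    print("torch      :", torch.__version__, "(needs >= 1.0.0)")
    print("torchvision:", torchvision.__version__)
    print("pillow     :", PIL.__version__)
    print("numpy      :", numpy.__version__)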

How to Use

1. Prepare dataset

  • Download suction-based-grasping-dataset.zip (1.6GB) [link]. Please cite the relevant paper:
@inproceedings{zeng2018robotic,
    title={Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching},
    author={Zeng, Andy and Song, Shuran and Yu, Kuan-Ting and Donlon, Elliott and Hogan, Francois Robert and Bauza, Maria and Ma, Daolin and Taylor, Orion and Liu, Melody and Romo, Eudald and Fazeli, Nima and Alet, Ferran and Dafle, Nikhil Chavan and Holladay, Rachel and Morona, Isabella and Nair, Prem Qu and Green, Druck and Taylor, Ian and Liu, Weber and Funkhouser, Thomas and Rodriguez, Alberto},
    booktitle={Proceedings of the IEEE International Conference on Robotics and Automation},
    year={2018}
}
  • Train your own semantic segmentation classifiers on the suction dataset, then generate training and test samples for DilatedCRF. Alternatively, you can download my training and test sets (872MB) [link] and extract the default folder dataset into the main directory.
    NOTE: Customized training and test samples must be organized in the same format as the default dataset.
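
    As a quick sanity check that the extracted folder is in place (the folder name dataset comes from the step above; everything else in this snippet is an illustrative assumption, not part of the repository):

    from pathlib import Path

    # Hypothetical check: the default folder `dataset` should sit in the main directory.
    root = Path("dataset")
    assert root.is_dir(), "extract the downloaded archive so that ./dataset exists"
    print(sum(1 for p in root.rglob("*") if p.is_file()), "files found under", root)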

2. Train network

  • If you want to customize the training process, modify the parameters in utils/configuration.py according to the instructions in that file.

  • Train DilatedCRF using the default dataset folder, or point to a customized dataset path with the -d argument.
    NOTE: checkpoints will be written to the default folder checkpoint.

    python DialatedCRF.py -train
    

    or resume training from the latest .pt file stored in the default folder checkpoint:

    python DialatedCRF.py -train -r
    

    or you may want to use a specific checkpoint:

    python DialatedCRF.py -train -r -c path/to/your/ckpt
    

    Note that the checkpoint file must match the parameter "SCALE" specified in utils/configuration.py (a hedged sanity check for this is sketched after the command below). To specify a customized dataset folder, use:

    python DialatedCRF.py -train -d your/dataset/path
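
    As noted above, a checkpoint only works with the "SCALE" it was trained under. A hypothetical way to confirm this before resuming (the latest.pt filename, the key stored inside the .pt file, and the module-level SCALE constant are assumptions, not documented parts of this repository):

    import torch
    from utils import configuration as cfg  # assumes SCALE is defined at module level

    # Hypothetical check: compare the configured SCALE with the value saved in a checkpoint.
    ckpt = torch.load("checkpoint/latest.pt", map_location="cpu")
    saved_scale = ckpt.get("scale") if isinstance(ckpt, dict) else None
    if saved_scale is not None and saved_scale != cfg.SCALE:
        raise ValueError(f"checkpoint SCALE {saved_scale} != configured SCALE {cfg.SCALE}")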
    

3. Validation

  • The complete dataset folder mentioned above and a valid checkpoint are required. You can download my checkpoint for "SCALE" = 0.25 (42.4MB) [link]; be sure to adjust the corresponding configuration beforehand. Then run:

    python DialatedCRF.py -v
    

    or you may specify the dataset folder with -d:

    python DialatedCRF.py -v -d your/path/to/dataset/folder
    
  • Final results will be written to the folder results. Metrics including the Jaccard index, F1-score, accuracy, etc., will be gathered in evaluation.txt under the folder results/evaluation.
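
    For reference, these metrics follow the standard definitions for binary segmentation masks; a minimal sketch of how they are commonly computed (not this repository's own evaluation code):

    import numpy as np

    def segmentation_metrics(pred, target):
        # pred and target are same-shape arrays of {0, 1}; standard binary-segmentation metrics.
        pred, target = pred.astype(bool), target.astype(bool)
        tp = np.logical_and(pred, target).sum()
        fp = np.logical_and(pred, ~target).sum()
        fn = np.logical_and(~pred, target).sum()
        tn = np.logical_and(~pred, ~target).sum()
        jaccard = tp / max(tp + fp + fn, 1)      # intersection over union
        f1 = 2 * tp / max(2 * tp + fp + fn, 1)   # equal to the Dice coefficient
        accuracy = (tp + tn) / pred.size
        return {"Jaccard": jaccard, "F1": f1, "accuracy": accuracy}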


Contributed by Xi Mo.
License: Apache 2.0
