UMPNet: Universal Manipulation Policy Network for Articulated Objects

Zhenjia Xu, Zhanpeng He, Shuran Song
Columbia University
Robotics and Automation Letters (RA-L) / ICRA 2022

Project Page | Video | arXiv

Overview

This repo contains the PyTorch implementation of the paper "UMPNet: Universal Manipulation Policy Network for Articulated Objects".

[teaser figure]

Content

  • Prerequisites
  • Data Preparation
  • Testing
  • Training

Prerequisites

The code is built with Python 3.6. Libraries are listed in requirements.txt and can be installed via pip:

pip install -r requirements.txt
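
If you prefer an isolated environment, here is a minimal sketch using conda (the environment name umpnet is an arbitrary choice, not something the repo requires):

# Create and activate a Python 3.6 environment, then install the dependencies
conda create -n umpnet python=3.6
conda activate umpnet
pip install -r requirements.txt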

Data Preparation

Prepare the object URDFs and the pretrained model.

Download, unzip, and organize as follows:

/umpnet
    /mobility_dataset
    /pretrained
    ...
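
Assuming the two downloads arrive as zip archives named mobility_dataset.zip and pretrained.zip (hypothetical file names; use whatever the download links actually provide), they can be unpacked in the repository root to produce the layout above:

# Unzip the dataset and the pretrained model inside /umpnet
cd umpnet
unzip mobility_dataset.zip
unzip pretrained.zip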

Testing

Test with GUI

There are two modes of testing: exploration and manipulation.

# Open-ended state exploration
python test_gui.py --mode exploration --category CATEGORY

# Goal conditioned manipulation
python test_gui.py --mode manipulation --category CATEGORY

Here CATEGORY can be chosen from:

  • [Training categories]: Refrigerator, FoldingChair, Laptop, Stapler, TrashCan, Microwave, Toilet, Window, StorageFurniture, Switch, Kettle, Toy
  • [Testing categories]: Box, Phone, Dishwasher, Safe, Oven, WashingMachine, Table, KitchenPot, Bucket, Door
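
For example, to run open-ended exploration on a refrigerator (Refrigerator is one of the training categories listed above):

# Concrete instance of the GUI test command
python test_gui.py --mode exploration --category Refrigerator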

[teaser figure]

Quantitative Evaluation

There are also two modes of testing: exploration and manipulation.

# Open-ended state exploration
python test_quantitative.py --mode exploration

# Goal conditioned manipulation
python test_quantitative.py --mode manipulation

By default, it will run the quantitative evaluation for every category. You can modify pool_list (L91 in test_quantitative.py) to run the evaluation for a specific category, as sketched below.
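
A hypothetical sketch of that edit (the exact contents of pool_list in test_quantitative.py may differ, so check the file before changing it):

# In test_quantitative.py, restrict the evaluation pool to a single category
pool_list = ['Refrigerator']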

Training

Hyper-parameters mentioned in the paper are provided as default arguments.

python train.py --exp EXP_NAME

A directory will then be created at exp/EXP_NAME, in which checkpoints, visualizations, and the replay buffer will be stored.
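
For example, a training run whose outputs are written to exp/umpnet_default (the experiment name here is arbitrary):

# Train with the default hyper-parameters from the paper
python train.py --exp umpnet_default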

BibTeX

@article{xu2022umpnet,
  title={UMPNet: Universal manipulation policy network for articulated objects},
  author={Xu, Zhenjia and He, Zhanpeng and Song, Shuran},
  journal={IEEE Robotics and Automation Letters},
  year={2022},
  publisher={IEEE}
}

License

This repository is released under the MIT license. See LICENSE for additional details.

Acknowledgement


Comments
  • Replicating the results in papers -- Evaluation Details?

    Dear authors, thanks for releasing the code and congrats on the RA-L acceptance! We are trying to replicate the results listed in Table II (goal-conditioned manipulation) because we want to test our model using the exact same pipeline you used. May I ask the details of evaluations you ran? For example, we are running the test_quantitative.py as-is with the default parameters (100 trials for each category). Does this suffice to replicate the results reported?

    Another thing is, how were the test pickle files in test_data generated? How are the sampled object pose and joint states distributed?

    Thanks.

    opened by harryzhangOG 4
  • partnet with other simulators

    Wondering: in UMPNet, while using PartNet, did you find it more effective in Bullet than using it directly in SAPIEN? Also, did you try using PartNet in robosuite / MuJoCo?

    Thanks for making this available, it is great work!

    opened by deevanshimall 2
  • Intuition behind "position_start_epoch" and "direction_start_epoch"

    Hi authors, while studying the training code, I observed that the "position_start_epoch" and "direction_start_epoch" arguments are set to 500 and 800 by default. Is there any reason behind this?

    opened by ankitatiisc 1
  • codes for inferring object's articulation structure

    Hi authors, thanks for sharing the great codebase! I'm especially interested in inferring an object's articulation structure from interactions, but it seems you haven't uploaded the codes related to that. Would it be possible to share those or could you tell me the corresponding codes if you have already uploaded them? Thanks!

    opened by keiohta 0
Owner
Columbia Artificial Intelligence and Robotics Lab
We develop algorithms that enable intelligent systems to learn from their interactions with the physical world to execute complex tasks and assist people