PyTorch code accompanying the paper "Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning" (NeurIPS 2021).

Overview

HIGL

This is a PyTorch implementation for our paper: Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning (NeurIPS 2021).

Our code is based on the official implementations of HRAC (NeurIPS 2020) and Map-planner (NeurIPS 2019).

Installation

conda create -n higl python=3.6
conda activate higl
./install_all.sh

Also, a MuJoCo license is required to run the MuJoCo experiments.
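
To check that the environment was set up correctly before launching training, a quick sanity check can help. The snippet below is only a sketch: it assumes install_all.sh provides PyTorch, Gym, and mujoco-py (the usual stack for these MuJoCo benchmarks) and uses a standard Gym task, not an environment specific to this repository.

# sanity_check.py -- hypothetical helper, not part of the repository.
# Assumes install_all.sh installed torch, gym, and mujoco-py.
import torch
import gym
import mujoco_py

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("gym:", gym.__version__)
print("mujoco-py:", mujoco_py.__version__)

# Smoke test: step a standard MuJoCo task for a few frames.
env = gym.make("Ant-v2")
obs = env.reset()
for _ in range(5):
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
print("MuJoCo rollout OK")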

Usage

Training & Evaluation

  • Point Maze
./scripts/point_maze_sparse.sh ${reward_shaping} ${timesteps} ${gpu} ${seed}
./scripts/point_maze_sparse.sh dense 5e5 0 2
./scripts/point_maze_sparse.sh sparse 5e5 0 2
  • Ant Maze (U-shape)
./scripts/higl_ant_maze_u.sh ${reward_shaping} ${timesteps} ${gpu} ${seed}
./scripts/higl_ant_maze_u.sh dense 10e5 0 2
./scripts/higl_ant_maze_u.sh sparse 10e5 0 2
  • Ant Maze (W-shape)
./scripts/higl_ant_maze_w.sh ${reward_shaping} ${timesteps} ${gpu} ${seed}
./scripts/higl_ant_maze_w.sh dense 10e5 0 2
./scripts/higl_ant_maze_w.sh sparse 10e5 0 2
  • Reacher & Pusher
./scripts/higl_fetch.sh ${env} ${timesteps} ${gpu} ${seed}
./scripts/higl_fetch.sh Reacher3D-v0 5e5 0 2
./scripts/higl_fetch.sh Pusher-v0 10e5 0 2
  • Stochastic Ant Maze (U-shape)
./scripts/higl_ant_maze_u_stoch.sh ${reward_shaping} ${timesteps} ${gpu} ${seed}
./scripts/higl_ant_maze_u_stoch.sh dense 10e5 0 2
./scripts/higl_ant_maze_u_stoch.sh sparse 10e5 0 2
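
All scripts above share the same positional arguments (reward shaping, total timesteps, GPU id, random seed), so results over multiple seeds can be collected with a small launcher. The snippet below is a minimal sketch assuming only that interface; the file name and seed list are illustrative, not part of the repository.

# run_seeds.py -- hypothetical launcher that repeats one training script over several seeds.
# It relies only on the documented interface: <script> <reward_shaping> <timesteps> <gpu> <seed>.
import subprocess

SCRIPT = "./scripts/higl_ant_maze_u.sh"
REWARD_SHAPING = "dense"   # or "sparse"
TIMESTEPS = "10e5"
GPU = "0"

for seed in [0, 1, 2]:
    print(f"Launching {SCRIPT} with seed {seed}")
    subprocess.run([SCRIPT, REWARD_SHAPING, TIMESTEPS, GPU, str(seed)], check=True)
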
Comments
  • unexpected keyword argument 'stochastic_xy'

    Hello,

    Thanks for sharing the code.

    I've got the following error when running the script:

    ./scripts/higl_ant_maze_u.sh dense 10e5 0 2
    Logging in ./logs/higl
    Traceback (most recent call last):
      File "main.py", line 114, in <module>
        run_higl(args)
      File "./HIGL/higl/train.py", line 130, in run_higl
        stochastic_sigma=args.stochastic_sigma),
    TypeError: make() got an unexpected keyword argument 'stochastic_xy'

    I would greatly appreciate it if you could look into it. Thank you!

    opened by thwanguk 1
Owner
Junsu Kim
Ph.D. student @ ALINLAB, KAIST AI