Lipschitz-constrained Unsupervised Skill Discovery


This repository is the official implementation of Lipschitz-constrained Unsupervised Skill Discovery (LSD).

The implementation is based on Unsupervised Skill Discovery with Bottleneck Option Learning (IBOL) and garage.

Visit our project page for more results, including videos.

Requirements

Install requirements:

pip install -r requirements.txt
pip install -e .
pip install -e garaged

Examples

Ant with 2-D continuous skills:

python tests/main.py --run_group EXP --env ant --max_path_length 200 --dim_option 2 --common_lr 0.0001 --seed 0 --normalizer_type ant_preset --use_gpu 1 --traj_batch_size 20 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 0 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.01 --sac_lr_a -1 --lr_te 3e-05 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4 --eval_plot_axis -50 50 -50 50

Ant with 16 discrete skills:

python tests/main.py --run_group EXP --env ant --max_path_length 200 --dim_option 16 --common_lr 0.0001 --seed 0 --normalizer_type ant_preset --use_gpu 1 --traj_batch_size 20 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 1 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.003 --sac_lr_a -1 --lr_te 3e-05 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4 --eval_plot_axis -50 50 -50 50

Humanoid with 2-D continuous skills:

python tests/main.py --run_group EXP --env humanoid --max_path_length 1000 --dim_option 2 --common_lr 0.0003 --seed 0 --normalizer_type humanoid_preset --use_gpu 1 --traj_batch_size 5 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 0 --video_skip_frames 3 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.03 --sac_lr_a -1 --lr_te 0.0001 --lsd_alive_reward 0.03 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4 --sac_replay_buffer 1 --te_max_optimization_epochs 1 --te_trans_optimization_epochs 2

Humanoid with 16 discrete skills:

python tests/main.py --run_group EXP --env humanoid --max_path_length 1000 --dim_option 16 --common_lr 0.0003 --seed 0 --normalizer_type humanoid_preset --use_gpu 1 --traj_batch_size 5 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 1 --video_skip_frames 3 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.03 --sac_lr_a -1 --lr_te 0.0001 --lsd_alive_reward 0.03 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4 --sac_replay_buffer 1 --te_max_optimization_epochs 1 --te_trans_optimization_epochs 2

HalfCheetah with 8 discrete skills:

python tests/main.py --run_group EXP --env half_cheetah --max_path_length 200 --dim_option 8 --common_lr 0.0001 --seed 0 --normalizer_type half_cheetah_preset --use_gpu 1 --traj_batch_size 20 --n_parallel 8 --n_epochs_per_eval 5000 --n_thread 1 --model_master_dim 1024 --record_metric_difference 0 --n_epochs_per_tb 100 --n_epochs_per_save 50000 --n_epochs_per_pt_save 5000 --n_epochs_per_pkl_update 1000 --eval_record_video 1 --n_epochs 200001 --spectral_normalization 1 --n_epochs_per_log 50 --discrete 1 --num_random_trajectories 200 --sac_discount 0.99 --alpha 0.01 --sac_lr_a -1 --lr_te 3e-05 --sac_scale_reward 0 --max_optimization_epochs 1 --trans_minibatch_size 2048 --trans_optimization_epochs 4
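
For reference, the sketch below illustrates, in simplified form and not as the repository's actual code, how LSD's continuous-skill intrinsic reward can be computed: a state-representation network phi, kept approximately 1-Lipschitz via spectral normalization (the --spectral_normalization flag above), is rewarded for aligning the change phi(s') - phi(s) with the sampled skill vector z of dimension --dim_option. The function names and network sizes here are illustrative assumptions.

import torch
import torch.nn as nn

def build_phi(obs_dim, dim_option, hidden=1024):
    # Spectral normalization on each linear layer bounds its Lipschitz constant,
    # which is how the Lipschitz constraint is (approximately) enforced.
    return nn.Sequential(
        nn.utils.spectral_norm(nn.Linear(obs_dim, hidden)), nn.ReLU(),
        nn.utils.spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(),
        nn.utils.spectral_norm(nn.Linear(hidden, dim_option)),
    )

def lsd_intrinsic_reward(phi, obs, next_obs, skill):
    # Continuous-skill LSD reward: inner product between the representation
    # change phi(s') - phi(s) and the skill vector z.
    with torch.no_grad():
        return ((phi(next_obs) - phi(obs)) * skill).sum(dim=-1)

For the discrete-skill runs (--discrete 1), z is instead drawn from a set of --dim_option one-hot-style skill vectors; see the paper for the exact discrete objective.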