A mini library for Policy Gradients with Parameter-based Exploration, with a reference implementation of the ClipUp optimizer from NNAISENSE.

Overview

PGPElib

A mini library for Policy Gradients with Parameter-based Exploration [1] and friends.

This library serves as a clean re-implementation of the algorithms used in our paper on ClipUp [2].

Introduction

PGPE is an algorithm for computing approximate policy gradients for Reinforcement Learning (RL) problems. pgpelib provides a clean, scalable and easily extensible implementation of PGPE, and also serves as a reference (re)implementation of ClipUp [2], an optimizer designed to work specially well with PGPE-style gradient estimation. Although they were developed in the context of RL, both PGPE and ClipUp are general purpose tools for solving optimization problems.

Here are some interesting RL agents trained in simulation with the PGPE+ClipUp implementation in pgpelib.

HumanoidBulletEnv-v0 (score: 4853)
Humanoid-v2 (score: 10184)
Walker2d-v2 (score: 5232)
Contents

What is PGPE?

PGPE is a derivative-free policy gradient estimation algorithm. More generally, it can be seen as a distribution-based evolutionary algorithm suitable for optimization in the domain of real numbers. With simple modifications to PGPE, one can also obtain similar algorithms like OpenAI-ES [3] and Augmented Random Search [7].

Please see the following animation for a visual explanation of how PGPE works.

(Animation: the working principles of PGPE.)
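
To make the idea more concrete, below is a minimal numpy sketch of one PGPE generation with symmetric sampling: perturbation pairs are drawn around the current center, both mirrored directions are evaluated, and the center and standard deviations are nudged along fitness-weighted gradient estimates. This is an illustrative sketch only, not pgpelib's actual implementation, and the learning rates shown are placeholder values.

# Illustrative sketch of one PGPE generation (not pgpelib's implementation).
import numpy as np

def pgpe_generation(center, stdev, fitness_fn, popsize=20,
                    center_lr=0.1, stdev_lr=0.05):
    # Sample perturbation pairs around the current center.
    deltas = np.random.randn(popsize // 2, len(center)) * stdev

    # Evaluate both mirrored directions of each perturbation.
    f_plus = np.array([fitness_fn(center + d) for d in deltas])
    f_minus = np.array([fitness_fn(center - d) for d in deltas])

    # Gradient estimate for the center: fitness-weighted average direction.
    center_grad = np.mean(((f_plus - f_minus) / 2.0)[:, None] * deltas, axis=0)

    # Gradient estimate for the standard deviations, relative to a baseline.
    baseline = np.mean(np.concatenate([f_plus, f_minus]))
    stdev_grad = np.mean(
        (((f_plus + f_minus) / 2.0) - baseline)[:, None]
        * ((deltas ** 2 - stdev ** 2) / stdev),
        axis=0,
    )

    # Ascend the estimated gradients (fitness is maximized).
    return center + center_lr * center_grad, stdev + stdev_lr * stdev_grad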

Back to Contents


What is ClipUp?

ClipUp is a new optimizer (a gradient following algorithm) that we propose in [2] for use within distribution-based evolutionary algorithms such as PGPE. In [3, 4], it was shown that distribution-based evolutionary algorithms work well with adaptive optimizers. In those studies, the authors used the well-known Adam optimizer [5]. We argue that ClipUp is simpler and more intuitive, yet competitive with Adam. Please see our blog post and paper [2] for more details.
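
As a rough illustration, here is a minimal sketch of the ClipUp update rule described in the paper: the gradient estimate is normalized so that only its direction matters, accumulated into a velocity with momentum, and the velocity's norm is clipped to a maximum speed before the parameters are updated. The hyperparameter values shown are placeholders; see optimizers.py for the reference implementation.

# Minimal sketch of the ClipUp update rule (see optimizers.py for the
# reference implementation; hyperparameter values here are placeholders).
import numpy as np

def clipup_step(params, grad, velocity,
                step_size=0.15, momentum=0.9, max_speed=0.3):
    # Only the direction of the gradient estimate is used, not its magnitude.
    direction = grad / np.linalg.norm(grad)

    # Accumulate the step into a velocity with momentum.
    velocity = momentum * velocity + step_size * direction

    # Clip the velocity's norm so that a single update never exceeds max_speed.
    speed = np.linalg.norm(velocity)
    if speed > max_speed:
        velocity = velocity * (max_speed / speed)

    # Gradient ascent: move the parameters along the clipped velocity.
    return params + velocity, velocity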

Back to Contents

Installation

Prerequisites: swig is required by Box2D, a simple physics engine used for some RL examples. It can be installed either system-wide (using a package manager like apt) or via conda. You can then install pgpelib using the following commands:

# Install directly from GitHub
pip install git+https://github.com/nnaisense/pgpelib.git#egg=pgpelib

# Or install from source in editable mode (to run examples or to modify code)
git clone https://github.com/nnaisense/pgpelib.git
cd pgpelib
pip install -e .

If you wish to run experiments based on MuJoCo, you will need some additional setup. See this link for setup instructions.

Back to Contents

Usage

To dive into executable code examples, please see the examples directory. Below we give a very quick tutorial on how to use pgpelib for optimization.

Basic usage

pgpelib provides an ask-and-tell interface for optimization, similar to [4, 6]. The general principle is to repeatedly ask the optimizer for candidate solutions to evaluate, and then tell it the corresponding fitness values so it can update the current solution or population. Using this interface, a typical communication with the solver is as follows:

from pgpelib import PGPE
import numpy as np

pgpe = PGPE(
    solution_length=5,   # A solution vector has the length of 5
    popsize=20,          # Our population size is 20

    #optimizer='clipup',          # Uncomment these lines if you
    #optimizer_config = dict(     # would like to use the ClipUp
    #    max_speed=...,           # optimizer.
    #    momentum=0.9
    #),

    #optimizer='adam',            # Uncomment these lines if you
    #optimizer_config = dict(     # would like to use the Adam
    #    beta1=0.9,               # optimizer.
    #    beta2=0.999,
    #    epsilon=1e-8
    #),

    ...
)

# Let us run the evolutionary computation for 1000 generations
for generation in range(1000):

    # Ask for solutions, which are to be given as a list of numpy arrays.
    # In the case of this example, solutions is a list which contains
    # 20 numpy arrays, the length of each numpy array being 5.
    solutions = pgpe.ask()

    # This is the phase where we evaluate the solutions
    # and prepare a list of fitnesses.
    # Make sure that fitnesses[i] stores the fitness of solutions[i].
    fitnesses = [...]  # compute the fitnesses here

    # Now we tell the result of our evaluations, fitnesses,
    # to our solver, so that it updates the center solution
    # and the spread of the search distribution.
    pgpe.tell(fitnesses)

# After 1000 generations, we print the center solution.
print(pgpe.center)
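
As a concrete example of the evaluation phase, the fitnesses list in the loop above could be filled in as follows for a toy problem. Since PGPE maximizes fitness, we use the negated sphere function, whose optimum is the all-zeros vector (this fitness function is purely illustrative):

def sphere_fitness(x: np.ndarray) -> float:
    # Negated squared norm: maximized at the all-zeros vector.
    return -float(np.sum(x ** 2))

fitnesses = [sphere_fitness(x) for x in solutions]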

pgpelib also supports adaptive population sizes, where additional solutions are sampled from the current search distribution and evaluated until a certain number of total simulator interactions (i.e. timesteps) is reached. Use of this technique can be enabled by specifying the num_interactions parameter, as demonstrated by the following snippet:

pgpe = PGPE(
    solution_length=5,      # Our RL policy has 5 trainable parameters.
    popsize=20,             # Our base population size is 20.
                            # After evaluating a batch of 20 policies,
                            # if we do not reach our threshold of
                            # simulator interactions, we will keep sampling
                            # and evaluating more solutions, 20 at a time,
                            # until the threshold is finally satisfied.

    num_interactions=17500, # Threshold for simulator interactions.
    ...
)

# Let us run the evolutionary computation for 1000 generations
for generation in range(1000):

    # We begin the inner loop of asking for new solutions,
    # until the threshold of interactions count is reached.
    while True:

        # ask for new policies to evaluate in the simulator
        solutions = pgpe.ask()

        # This is the phase where we evaluate the policies,
        # and prepare a list of fitnesses and a list of
        # interaction counts.
        # Make sure that:
        #   fitnesses[i] stores the fitness of solutions[i];
        #   interactions[i] stores the number of interactions
        #       made with the simulator while evaluating the
        #       i-th solution.
        fitnesses = [...]
        interactions = [...]

        # Now we tell the result of our evaluations
        # to our solver, so that it updates the center solution
        # and the spread of the search distribution.
        interaction_limit_reached = pgpe.tell(fitnesses, interactions)

        # If the limit on number of interactions per generation is reached,
        # pgpelib has already updated the search distribution internally.
        # So we can stop creating new solutions and end this generation.
        if interaction_limit_reached:
            break

# After 1000 generations, we print the center solution (policy).
print(pgpe.center)
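
For RL tasks, the fitness and interaction count of a solution typically come from a simulator rollout. Below is a hedged sketch of such an evaluation, assuming the classic gym interface where env.step() returns (observation, reward, done, info); run_policy is a hypothetical helper that maps a parameter vector and an observation to an action, and is not part of pgpelib:

import gym

def evaluate(env, solution):
    # Run one episode and return (total_reward, number_of_timesteps).
    observation = env.reset()
    total_reward = 0.0
    timesteps = 0
    done = False
    while not done:
        # run_policy is a hypothetical user-defined helper, not part of pgpelib.
        action = run_policy(solution, observation)
        observation, reward, done, info = env.step(action)
        total_reward += reward
        timesteps += 1
    return total_reward, timesteps

env = gym.make("Walker2d-v2")
results = [evaluate(env, x) for x in solutions]
fitnesses = [r[0] for r in results]
interactions = [r[1] for r in results]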

Parallelization

Ease of parallelization is a major benefit of evolutionary search techniques. pgpelib is deliberately agnostic about parallelization: the choice of tool is left to the user. We provide thoroughly documented examples of using either multiprocessing or ray to parallelize evaluations across multiple cores of a single machine or across multiple machines. The ray example additionally demonstrates observation normalization when training RL agents.
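
As a minimal illustration (the repository's examples are more complete), evaluations within the ask-and-tell loop could be distributed over worker processes with the standard-library multiprocessing module; evaluate_solution here is a hypothetical user-defined function that returns a fitness value:

from multiprocessing import Pool

with Pool(processes=8) as pool:
    for generation in range(1000):
        # Ask for a batch of candidate solutions.
        solutions = pgpe.ask()

        # Evaluate the candidates in parallel across worker processes.
        # evaluate_solution is a hypothetical user-defined fitness function.
        fitnesses = pool.map(evaluate_solution, solutions)

        # Report the fitnesses back to the solver.
        pgpe.tell(fitnesses)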

Training RL agents

This repository also contains a Python script for training RL agents. The training script is configurable and executable from the command line. See the train_agents directory. Some pre-trained RL agents are also available for visualization in the agents directory.

Back to Contents

License

Please see: LICENSE.

The files optimizers.py, runningstat.py, and ranking.py contain code adapted from OpenAI's evolution-strategies-starter repository. The license terms of that adapted code can be found in those files.

Back to Contents

References

[1] Sehnke, F., Osendorfer, C., Rückstieß, T., Graves, A., Peters, J., & Schmidhuber, J. (2010). Parameter-exploring policy gradients. Neural Networks, 23(4), 551-559.

[2] Toklu, N.E., Liskowski, P., & Srivastava, R.K. (2020). ClipUp: A Simple and Powerful Optimizer for Distribution-based Policy Evolution. 16th International Conference on Parallel Problem Solving from Nature (PPSN 2020).

[3] Salimans, T., Ho, J., Chen, X., Sidor, S., & Sutskever, I. (2017). Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864.

[4] Ha, D. (2017). A Visual Guide to Evolution Strategies.

[5] Kingma, D.P., & Ba, J. (2015). Adam: A method for stochastic optimization. In Proceedings of 3rd International Conference on Learning Representations (ICLR 2015).

[6] Hansen, N., Akimoto, Y., Baudis, P. (2019). CMA-ES/pycma on Github. Zenodo, DOI:10.5281/zenodo.2559634, February 2019.

[7] Mania, H., Guy, A., & Recht, B. (2018). Simple random search provides a competitive approach to reinforcement learning. arXiv preprint arXiv:1803.07055.

Back to Contents

Citation

If you use this code, please cite us in your repository/paper as:

Toklu, N. E., Liskowski, P., & Srivastava, R. K. (2020, September). ClipUp: A Simple and Powerful Optimizer for Distribution-Based Policy Evolution. In International Conference on Parallel Problem Solving from Nature (pp. 515-527). Springer, Cham.

Bibtex:

@inproceedings{toklu2020clipup,
  title={ClipUp: A Simple and Powerful Optimizer for Distribution-Based Policy Evolution},
  author={Toklu, Nihat Engin and Liskowski, Pawe{\l} and Srivastava, Rupesh Kumar},
  booktitle={International Conference on Parallel Problem Solving from Nature},
  pages={515--527},
  year={2020},
  organization={Springer}
}

Back to Contents

Acknowledgements

We are thankful to the developers of these tools for inspiring this implementation.

Back to Contents
