PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems

Overview


Authors: David Biagioni, Xiangyu Zhang, Dylan Wald, Deepthi Vaidhynathan, Rohit Chintala, Jennifer King, Ahmed S. Zamzam

Corresponding author: David Biagioni

All authors are with the National Renewable Energy Laboratory (NREL).

Description

PowerGridworld provides users with a lightweight, modular, and customizable framework for creating power-systems-focused, multi-agent Gym environments that readily integrate with existing training frameworks for reinforcement learning (RL). Although many frameworks exist for training multi-agent RL (MARL) policies, none supports rapid prototyping and development of the environments themselves, especially in the context of heterogeneous (composite, multi-device) power systems where power flow solutions are required to define grid-level variables and costs. PowerGridworld is an open-source software package that helps to fill this gap. To highlight PowerGridworld's key features, we include two case studies and demonstrate learning MARL policies using both OpenAI's multi-agent deep deterministic policy gradient (MADDPG) and RLlib's proximal policy optimization (PPO) algorithms. In both cases, at least some subset of agents incorporates elements of the power flow solution at each time step as part of its reward (negative cost) structure.

Please refer to our preprint on arXiv for more details. Data and run scripts used to generate figures in the preprint are available in the paper directory.
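To make the environment interface concrete, the sketch below illustrates the dict-keyed, multi-agent Gym pattern that PowerGridworld-style environments follow: each agent has its own observation and action space, and a shared grid-level quantity (here a toy stand-in for a power flow result) enters every agent's reward as a negative cost. All class, agent, and variable names in the sketch are illustrative assumptions, not PowerGridworld's actual API.

import numpy as np
import gym


class ToyMultiAgentGridEnv(gym.Env):
    """Toy two-agent environment with a shared grid-level cost (illustrative only)."""

    def __init__(self, episode_len=24):
        self.agents = ["battery", "hvac"]          # hypothetical device agents
        self.episode_len = episode_len
        self.t = 0
        # One observation/action space per agent, keyed by agent ID.
        self.observation_space = {
            a: gym.spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
            for a in self.agents
        }
        self.action_space = {
            a: gym.spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
            for a in self.agents
        }

    def reset(self):
        self.t = 0
        return {a: np.zeros(2, dtype=np.float32) for a in self.agents}

    def step(self, action_dict):
        self.t += 1
        # Stand-in for a power flow solve: a grid-level quantity couples all agents.
        net_injection = sum(float(act[0]) for act in action_dict.values())
        grid_cost = net_injection ** 2
        obs = {
            a: np.array([net_injection, self.t / self.episode_len], dtype=np.float32)
            for a in self.agents
        }
        rew = {a: -grid_cost for a in self.agents}   # reward = negative cost
        done = {a: self.t >= self.episode_len for a in self.agents}
        done["__all__"] = self.t >= self.episode_len
        return obs, rew, done, {a: {} for a in self.agents}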

Basic installation instructions

Env setup:

conda create -n gridworld python=3.8 -y
conda activate gridworld

git clone [email protected]:NREL/PowerGridworld.git
cd PowerGridworld
pip install -e .
pip install -r requirements.txt

Run the pytest suite as a sanity check:

pytest tests/
pytest --nbmake examples/envs
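The notebook tests rely on the nbmake pytest plugin; if it is not already pulled in by requirements.txt, it can be installed separately:

pip install nbmake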

Examples

Examples of running various environments and MARL training algorithms can be found in examples.
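Before running the full training examples, the toy environment sketched above can be driven with random actions as a quick smoke test (again, purely illustrative):

env = ToyMultiAgentGridEnv(episode_len=4)
obs = env.reset()
done = {"__all__": False}
while not done["__all__"]:
    actions = {a: env.action_space[a].sample() for a in env.agents}
    obs, rew, done, info = env.step(actions)
    print({agent: round(r, 3) for agent, r in rew.items()})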

Funding Acknowledgement

This work was authored by the National Renewable Energy Laboratory (NREL), operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. This work was supported by the Laboratory Directed Research and Development (LDRD) Program at NREL.

Citation

If citing this work, please use the following:

@article{biagioni2021powergridworld,
  title={PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems},
  author={Biagioni, David and Zhang, Xiangyu and Wald, Dylan and Vaidhynathan, Deepthi and Chintala, Rohit and King, Jennifer and Zamzam, Ahmed S},
  journal={arXiv preprint arXiv:2111.05969},
  year={2021}
}