Overview

Gym-ANM

gym-anm is a framework for designing reinforcement learning (RL) environments that model Active Network Management (ANM) tasks in electricity distribution networks. It is built on top of the OpenAI Gym toolkit.

The gym-anm framework was designed with one goal in mind: to bridge the gap between research in RL and research in the management of power systems. We attempt to do this by providing RL researchers with an easy-to-work-with library of environments that model decision-making tasks in power grids.

Papers:

  • Gym-ANM: Reinforcement learning environments for active network management tasks in electricity distribution systems (Energy and AI, 2021)
  • Gym-ANM: Open-source software to leverage reinforcement learning for power system management in research and education (Software Impacts, 2021)

Key features

  • Very little background in electricity systems modelling is required. This makes gym-anm an ideal starting point for RL students and researchers looking to enter the field.
  • The environments (tasks) generated by gym-anm follow the OpenAI Gym framework, with which a large part of the RL community is already familiar.
  • The flexibility of gym-anm, with its different customizable components, makes it a suitable framework to model a wide range of ANM tasks, from simple ones suitable for educational purposes to complex ones designed for advanced research (see the sketch below).
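
To give a flavour of that flexibility, below is a condensed sketch of the custom-environment template described in the documentation. The argument names and the keys of the network dictionary follow the documented pattern but should be checked against the docs; all placeholder values here are illustrative only.

from gym_anm import ANMEnv

class CustomEnvironment(ANMEnv):
    """Sketch of a custom gym-anm task (placeholders, not a working grid)."""

    def __init__(self):
        network = {'baseMVA': ..., 'bus': ..., 'device': ..., 'branch': ...}  # grid description
        observation = 'state'  # observation space ('state' = fully observable, per the docs)
        K = 1                  # number of auxiliary variables
        delta_t = 0.25         # time interval between timesteps (hours)
        gamma = 0.995          # discount factor
        lamb = 100             # penalty weighting hyperparameter
        super().__init__(network, observation, K, delta_t, gamma, lamb)

    def init_state(self):
        ...  # return an initial state vector s_0

    def next_vars(self, s_t):
        ...  # return the next outcomes of the stochastic variables (loads, generation limits, aux vars)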

Documentation

Documentation is provided online at https://gym-anm.readthedocs.io/en/latest/.

Installation

Requirements

gym-anm requires Python 3.7+ and can run on Linux, macOS, and Windows.

We recommend installing gym-anm in a Python environment (e.g., virtualenv or conda).

Using pip

Using pip (preferably after activating your virtual environment):

pip install gym-anm

Building from source

Alternatively, you can build gym-anm directly from source:

git clone https://github.com/robinhenry/gym-anm.git
cd gym-anm
pip install -e .

Example

The following code snippet illustrates how gym-anm environments can be used. In this example, actions are randomly sampled from the action space of the environment ANM6Easy-v0. For more information about the agent-environment interface, see the official OpenAI Gym documentation.

import gym
import time

def run():
    env = gym.make('gym_anm:ANM6Easy-v0')
    o = env.reset()

    for i in range(100):
        a = env.action_space.sample()
        o, r, done, info = env.step(a)
        env.render()
        time.sleep(0.5)  # otherwise the rendering is too fast for the human eye.

    env.close()

if __name__ == '__main__':
    run()  # the main-guard avoids multiprocessing issues when rendering on Windows.

The above code renders the environment in your default web browser.

Additional example scripts can be found in examples/.

Testing the installation

All unit tests in gym-anm can be run from the project root directory with:

python -m tests

Contributing

Contributions are always welcome! Please read the contribution guidelines first.

Citing the project

All publications derived from the use of gym-anm should cite the following two 2021 papers:

@article{HENRY2021100092,
    title = {Gym-ANM: Reinforcement learning environments for active network management tasks in electricity distribution systems},
    journal = {Energy and AI},
    volume = {5},
    pages = {100092},
    year = {2021},
    issn = {2666-5468},
    doi = {10.1016/j.egyai.2021.100092},
    author = {Robin Henry and Damien Ernst},
}

@article{HENRY2021SIMPA100092,
    title = {Gym-ANM: Open-source software to leverage reinforcement learning for power system management in research and education},
    journal = {Software Impacts},
    volume = {9},
    pages = {100092},
    year = {2021},
    issn = {2665-9638},
    doi = {10.1016/j.simpa.2021.100092},
    author = {Robin Henry and Damien Ernst}
}

Maintainers

gym-anm is currently maintained by Robin Henry.

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Comments
  • Rendering Problem on Windows 10

    When running the example 'gym_anm:ANM6Easy-v0' given in the quickstart section, there is a problem when rendering the environment: the rendering tab that opens in the browser is blank.

    I am running Windows 10 and I tried running the script in a Jupyter Notebook (Python 3.8.5), in Google Colab, and in PyCharm (Python 3.9). The error log I am getting is:

     Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 116, in spawn_main
        exitcode = _main(fd, parent_sentinel)
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 125, in _main
        prepare(preparation_data)
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 236, in prepare
        _fixup_main_from_path(data['init_main_from_path'])
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
        main_content = runpy.run_path(main_path,
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 268, in run_path
        return _run_module_code(code, init_globals, run_name,
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 97, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Users\diego\PycharmProjects\thesis\main.py", line 16, in <module>
        env.render()
      File "C:\Users\diego\PycharmProjects\thesis\venv\lib\site-packages\gym_anm\envs\anm6_env\anm6.py", line 92, in render
        self._init_render(specs)
      File "C:\Users\diego\PycharmProjects\thesis\venv\lib\site-packages\gym_anm\envs\anm6_env\anm6.py", line 188, in _init_render
        rendering.start(title, dev_type, ps, qs, branch_rate,
      File "C:\Users\diego\PycharmProjects\thesis\venv\lib\site-packages\gym_anm\envs\anm6_env\rendering\py\rendering.py", line 54, in start
        http_server = HttpServer()
      File "C:\Users\diego\PycharmProjects\thesis\venv\lib\site-packages\gym_anm\envs\anm6_env\rendering\py\servers.py", line 171, in __init__
        self.process = self._start_http_process()
      File "C:\Users\diego\PycharmProjects\thesis\venv\lib\site-packages\gym_anm\envs\anm6_env\rendering\py\servers.py", line 184, in _start_http_process
        service.start()
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\process.py", line 121, in start
        self._popen = self._Popen(self)
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\context.py", line 224, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\context.py", line 327, in _Popen
        return Popen(process_obj)
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
        prep_data = spawn.get_preparation_data(process_obj._name)
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
        _check_not_importing_main()
      File "C:\Users\diego\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
        raise RuntimeError('''
    RuntimeError: 
            An attempt has been made to start a new process before the
            current process has finished its bootstrapping phase.
    
            This probably means that you are not using fork to start your
            child processes and you have forgotten to use the proper idiom
            in the main module:
    
                if __name__ == '__main__':
                    freeze_support()
                    ...
    
            The "freeze_support()" line can be omitted if the program
            is not going to be frozen to produce an executable.
    
    bug 
    opened by diegofz 2
  • ImportError while running tests

    When I run the test command python -m tests, I get the following error:

    ======================================================================
    ERROR: test_dcopf_agent (unittest.loader._FailedTest)
    ----------------------------------------------------------------------
    ImportError: Failed to import test module: test_dcopf_agent
    Traceback (most recent call last):
      File "/home/satan/miniconda3/envs/rl-algo-env/lib/python3.7/unittest/loader.py", line 434, in _find_test_path
        module = self._get_module_from_name(name)
      File "/home/satan/miniconda3/envs/rl-algo-env/lib/python3.7/unittest/loader.py", line 375, in _get_module_from_name
        __import__(name)
      File "/home/satan/Torch_Env_List/gym-anm/tests/test_dcopf_agent.py", line 6, in <module>
        from gym_anm import MPCAgent
    ImportError: cannot import name 'MPCAgent' from 'gym_anm' (/home/satan/Torch_Env_List/gym-anm/gym_anm/__init__.py)
    
    
    ----------------------------------------------------------------------
    Ran 82 tests in 10.757s
    
    FAILED (errors=1)
    
    opened by sprakashdash 2
  • AttributeError: 'numpy.random._generator.Generator' object has no attribute 'randint'

    I am running into the following issue in a couple of places. I am fixing it by changing np_random to np.random and using integers instead of randint. Is that correct?

    File C:\ProgramData\Anaconda3\envs\gym-anm\lib\site-packages\gym_anm\envs\anm6_env\anm6_easy.py:31, in ANM6Easy.init_state(self)
         27 n_dev, n_gen, n_des = 7, 2, 1
         29 state = np.zeros(2 * n_dev + n_des + n_gen + self.K)
    ---> 31 t_0 = self.np_random.randint(0, int(24 / self.delta_t))
         32 state[-1] = t_0
         34 # Load (P, Q) injections.

    AttributeError: 'numpy.random._generator.Generator' object has no attribute 'randint'

    Line 31 in gym-anm/gym_anm/envs/anm6_env/anm6_easy.py:

        def init_state(self):
            n_dev, n_gen, n_des = 7, 2, 1
    
            state = np.zeros(2 * n_dev + n_des + n_gen + self.K)
    
            t_0 = self.np_random.randint(0, int(24 / self.delta_t))
            state[-1] = t_0
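
    For reference, the newer NumPy Generator API (which self.np_random now returns) renamed randint() to integers(); switching to the global np.random instead would bypass the environment's seeding. A minimal sketch of the equivalent call, assuming delta_t = 0.25 as in ANM6Easy:

    import numpy as np

    # The legacy RandomState API exposes randint(); the newer Generator
    # API exposes integers() with the same (low, high) semantics.
    rng = np.random.default_rng(seed=42)

    delta_t = 0.25
    t_0 = rng.integers(0, int(24 / delta_t))  # random timestep in [0, 96)
    print(t_0)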
    
    opened by sifatron 1
  • Add possibility to model shunt elements in the power grid simulator

    This issue will track the addition of shunt elements to the power grid simulator, just like MATPOWER and other simulation packages do.

    Background

    Shunt elements were not originally included in gym-anm because we didn't want to over-complicate things for beginners with little experience in power system modeling. However, it seems that the feature would be useful to a number of people.

    Feel free to react to this comment if you would like to see this feature added, too!

    Plan

    The goal is to add the possibility to model shunt elements in the power grid simulator. It will follow the same mathematical representation as used by MATPOWER and others: shunt elements (e.g., capacitors or inductors) will be modeled as a fixed impedance connected to ground at a specific bus.

    More precisely, the modifications should follow equations (3.7) and (3.13) of the MATPOWER official documentation.
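
    For intuition, here is a minimal sketch of that standard model with illustrative values (not the eventual gym-anm implementation): a shunt at bus i is a fixed admittance to ground that simply adds to the diagonal entry Y[i, i] of the bus admittance matrix, and the power it draws scales with the squared voltage magnitude.

    import numpy as np

    # MATPOWER-style shunt: a fixed admittance to ground at bus i,
    #     y_sh = (g_sh + 1j * b_sh) / baseMVA   (per unit),
    # added to the diagonal entry Y[i, i] of the bus admittance matrix.
    baseMVA = 100.0
    g_sh, b_sh = 0.0, 19.0        # MW consumed / MVAr injected at V = 1 p.u.
    y_sh = (g_sh + 1j * b_sh) / baseMVA

    # Complex power the shunt draws at a given bus voltage phasor:
    V_i = 1.02 * np.exp(1j * 0.05)            # voltage (p.u.)
    s_drawn = abs(V_i) ** 2 * np.conj(y_sh)   # P + jQ consumed (p.u.)
    print(f"P consumed = {s_drawn.real:.4f} p.u., Q injected = {-s_drawn.imag:.4f} p.u.")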

    enhancement 
    opened by robinhenry 1
  • Update requirements

    • Switch to using poetry (documentation)
    • Update CI checks
    • Run black on source code, and add black check to CI checks
    • Add a Release GitHub Actions workflow to more easily publish to PyPI
    opened by robinhenry 0
  • The scalability of large-scale nodes system

    Based on gym-anm, I built my own 118-node system, with 153 devices, 92 loads, and 54 units, but I found that state initialization was very slow. I'm not sure what went wrong. Could you give me some help?

    opened by Kim-369 0
  • Replace MPCAgent with MPCAgentConstant

    This resolves the ImportError by replacing MPCAgent with MPCAgentConstant so that python -m tests runs. The base class does not implement forecast(), so importing MPCAgent in the __init__ file leads to NotImplementedError().
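
    A minimal sketch of the corrected import (the commented-out constructor call follows the usage pattern from the docs; its exact signature is an assumption and should be double-checked):

    # Fix: import a concrete MPC agent rather than the abstract base class.
    from gym_anm import MPCAgentConstant  # instead of: from gym_anm import MPCAgent

    # Hypothetical usage, to be checked against the documentation:
    # agent = MPCAgentConstant(env.simulator, env.action_space, env.gamma,
    #                          safety_margin=0.96, planning_steps=10)
    # a = agent.act(env)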

    opened by sprakashdash 0
  • Rendering Problem (Blank Screen)

    I am running the following code:

    import gym
    import time
    
    def run():
        env = gym.make('gym_anm:ANM6Easy-v0')
        o = env.reset()
        
        for i in range(100):
            a = env.action_space.sample()
            o, r, done, info = env.step(a)
            env.render()
            time.sleep(0.5)  # otherwise the rendering is too fast for the human eye.
        env.close()
    
    if __name__ == '__main__':
        run()
    

    I get a blank screen in my browser. This happens on both Windows 10 and 11.

    opened by sifatron 1
  • Running speed of large-scale nodes

    Based on gym-anm, I built my own 118-node system, with 153 devices, 92 loads, and 54 units, but I found that state initialization was very slow. I'm not sure what went wrong. Could you give me some help?

    opened by Kim-369 1
Releases (1.1.4)
  • 1.1.4(Nov 27, 2022)

  • 1.1.3(Nov 27, 2022)

  • 1.1.2(Nov 27, 2022)

  • 1.1.1(Nov 27, 2022)

  • 1.0.2(Nov 27, 2022)

    What's Changed

    • Replace MPCAgent with MPCAgentConstant by @sprakashdash in https://github.com/robinhenry/gym-anm/pull/2
    • Add if __name__ == ... guards to examples for windows multiprocessing bug by @robinhenry in https://github.com/robinhenry/gym-anm/pull/5

    New Contributors

    • @sprakashdash made their first contribution in https://github.com/robinhenry/gym-anm/pull/2

    Full Changelog: https://github.com/robinhenry/gym-anm/commits/1.0.2

Owner
Robin Henry
Master's student working on the control and optimization of complex systems.