CompilerGym is a library of easy to use and performant reinforcement learning environments for compiler tasks

Overview

CompilerGym

Reinforcement learning environments for compiler optimization tasks.

Check the website for more information.

Introduction

CompilerGym is a library of easy to use and performant reinforcement learning environments for compiler tasks. It allows ML researchers to interact with important compiler optimization problems in a language and vocabulary with which they are comfortable, and provides a toolkit for systems developers to expose new compiler tasks for ML research. We aim to act as a catalyst for making compilers faster using ML. Key features include:

  • Ease of use: built on the popular Gym interface - use Python to write your agent. With CompilerGym, building ML models for compiler research problems is as easy as building ML models to play video games.

  • Batteries included: includes everything required to get started. Wraps real world programs and compilers to provide millions of instances for training. Provides multiple kinds of pre-computed program representations: you can focus on end-to-end deep learning or features + boosted trees, all the way up to graph models. Appropriate reward functions and loss functions for optimization targets are provided out of the box.

  • Reproducible: provides validation for correctness of results, common baselines, and leaderboards for you to submit your results.

For a glimpse of what's to come, check out our roadmap.

Installation

Install the latest CompilerGym release using:

pip install -U compiler_gym
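
A quick way to sanity-check the installation is to import the package and print its version (the compiler_gym.__version__ attribute, noted in the v0.1.1 release notes below):

python -c "import compiler_gym; print(compiler_gym.__version__)"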

See INSTALL.md for further details.

Usage

Starting with CompilerGym is simple. If you are not already familiar with the gym interface, refer to the getting started guide for an overview of the key concepts.

In Python, import compiler_gym to use the environments:

>>> import gym
>>> import compiler_gym                      # imports the CompilerGym environments
>>> env = gym.make(                          # creates a new environment
...     "llvm-v0",                           # selects the compiler to use
...     benchmark="cbench-v1/qsort",         # selects the program to compile
...     observation_space="Autophase",       # selects the observation space
...     reward_space="IrInstructionCountOz", # selects the optimization target
... )
>>> env.reset()                              # starts a new compilation session
>>> env.render()                             # prints the IR of the program
>>> env.step(env.action_space.sample())      # applies a random optimization, updates state/reward/actions
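
For a fuller picture of the interaction loop, below is a minimal random-agent episode as a standalone script. It is a sketch using the same environment and spaces as above; the 100-step budget is arbitrary:

import compiler_gym

with compiler_gym.make(
    "llvm-v0",
    benchmark="cbench-v1/qsort",
    observation_space="Autophase",
    reward_space="IrInstructionCountOz",
) as env:
    observation = env.reset()
    episode_reward = 0.0
    for _ in range(100):                       # arbitrary step budget
        observation, reward, done, info = env.step(env.action_space.sample())
        episode_reward += reward
        if done:
            break
    print(f"Cumulative reward: {episode_reward:.3f}")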

See the documentation website for tutorials, further details, and API reference. See the examples directory for PyTorch integration, agent implementations, and more.

Leaderboards

These leaderboards track the performance of user-submitted algorithms for CompilerGym tasks. To submit a result please see this document.

LLVM Instruction Count

LLVM is a popular open source compiler used widely in industry and research. The llvm-ic-v0 environment exposes LLVM's optimizing passes as a set of actions that can be applied to a particular program. The goal of the agent is to select the sequence of optimizations that leads to the greatest reduction in instruction count in the program being compiled. Reward is the achieved reduction in instruction count, scaled by the reduction achieved by LLVM's builtin -Oz pipeline.

This leaderboard tracks the results achieved by algorithms on the llvm-ic-v0 environment on the 23 benchmarks in the cbench-v1 dataset.

| Author | Algorithm | Links | Date | Walltime (mean) | Codesize Reduction (geomean) |
| --- | --- | --- | --- | --- | --- |
| Facebook | Random search (t=10800) | write-up, results | 2021-03 | 10,512.356s | 1.062× |
| Facebook | Random search (t=3600) | write-up, results | 2021-03 | 3,630.821s | 1.061× |
| Facebook | Greedy search | write-up, results | 2021-03 | 169.237s | 1.055× |
| Facebook | Random search (t=60) | write-up, results | 2021-03 | 91.215s | 1.045× |
| Facebook | e-Greedy search (e=0.1) | write-up, results | 2021-03 | 152.579s | 1.041× |
| Jiadong Guo | Tabular Q (N=5000, H=10) | write-up, results | 2021-04 | 2,534.305s | 1.036× |
| Facebook | Random search (t=10) | write-up, results | 2021-03 | 42.939s | 1.031× |
| Patrick Hesse | DQN (N=4000, H=10) | write-up, results | 2021-06 | 91.018s | 1.029× |
| Jiadong Guo | Tabular Q (N=2000, H=5) | write-up, results | 2021-04 | 694.105s | 0.988× |
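
The codesize reduction column is a geometric mean over per-benchmark results. A minimal sketch of the aggregation, assuming (this is an assumption, not stated above) that each per-benchmark reduction is the -Oz instruction count divided by the instruction count reached by the algorithm, so that 1.0 means parity with -Oz:

import math

def geomean(values):
    """Geometric mean of a sequence of positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-benchmark reductions (instruction count after -Oz / after the search).
reductions = [1.10, 0.98, 1.05]
print(f"Codesize reduction (geomean): {geomean(reductions):.3f}x")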

Contributing

We welcome contributions to CompilerGym. If you are interested in contributing please see this document.

Citation

If you use CompilerGym in any of your work, please cite our paper:

@article{CompilerGym,
      title={{CompilerGym: Robust, Performant Compiler Optimization Environments for AI Research}},
      author={Chris Cummins and Bram Wasti and Jiadong Guo and Brandon Cui and Jason Ansel and Sahir Gomez and Somya Jain and Jia Liu and Olivier Teytaud and Benoit Steiner and Yuandong Tian and Hugh Leather},
      journal={arXiv:2109.08267},
      year={2021},
}
Comments
  • Add Building with CMake

    This is a WIP of the CMake building.

    This PR supersedes https://github.com/facebookresearch/CompilerGym/pull/478.

    Currently, building of the compiler_gym directory is done. Building of the tests, benchmarks, examples, and the Python package itself lies ahead. There is a bit of cleanup and renaming to do after that. Some of the functionality from the auto-conversion from Bazel to CMake may prove to be unnecessary.

    I plan not to use CMake's ctest functionality for tests, since it is not part of the build itself. I will make the tests and benchmarks proper build targets.

    CLA Signed 
    opened by sogartar 24
  • Refactor out Env interface from CompilerEnv

    The goal of this change is to have a clean separation between interface and implementation. It also allows for new environment implementations with approaches different from CompilerEnv.

    CLA Signed 
    opened by sogartar 23
  • Create an MLIR environment with matrix multiplication

    This also adds config flags to enable the MLIR and LLVM environments. They are mutually exclusive due to their dependence on different LLVM versions and the problem of their coexistence in one CMake configuration. They can be enabled through CMake with the flags

    COMPILER_GYM_ENABLE_MLIR_ENV
    COMPILER_GYM_ENABLE_LLVM_ENV
    

    The LLVM env is enabled by default. These flags are propagated to python where sub-modules are included conditionally.

    CLA Signed 
    opened by sogartar 20
  • Nested spaces in gRPC

    ❓ Questions and Help

    It looks like the proto definitions of action space, action, observation space, and observation do not support nesting. On the other hand, OpenAI Gym's interface has Dict, and by extension CompilerGym also has Dict, which makes deep hierarchies possible there.

    Does it make sense to extend the gRPC interface to support this type of structure?

    Additional Context

    This is not a very pressing issue, as the structure I currently want to represent is simple enough that I can use flat names like a.b and a.c. In the future there may be a need for more complex structures.
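
    For reference, this kind of nesting is easy to express on the Python side with Gym's Dict spaces. A sketch (the space names "a", "b", and "c" are hypothetical, mirroring the flat names above):

    from gym.spaces import Box, Dict, Discrete

    # A nested Dict space versus the flat-name workaround ("a.b", "a.c") described above.
    nested = Dict({
        "a": Dict({
            "b": Discrete(4),
            "c": Box(low=0.0, high=1.0, shape=(2,)),
        }),
    })
    print(nested.sample())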

    opened by sogartar 19
  • Enable and Fix Environment Examples in CMake CI

    Fixes #631

    • ensured that the example scripts are invoked in CMake CI by creating example_without_bazel_test.py scripts
    • ensured that unit tests in env_tests.py are invoked in CMake CI by copying them into env_without_bazel_test.py
    • disabled the test cases in Linux and macOS bazel builds for now (hopefully that will be fixed with #707)
    CLA Signed 
    opened by mostafaelhoushi 18
  • Is running CompilerGym intended to leave cache directories behind?

    ❓ Questions and Help

    Not sure if this is a bug or not, so submitting as a question. Running a CompilerGym experiment leaves behind many cache directories. When running a large experiment, this can create problems through the sheer number of directories in COMPILER_GYM_CACHE. I expected the COMPILER_GYM_CACHE to not have anything after the experiment exited cleanly.

    Is there a way to avoid the experiments leaving the directories behind?

    Steps to reproduce

    Running the following on my machine leaves behind about 270 cache directories.

    import compiler_gym
    import compiler_gym.wrappers
    from ray import tune
    from ray.rllib.agents.ppo import PPOTrainer
    
    
    def make_env(env_config):
        env = compiler_gym.make(env_config['cgym_id'])
        env = compiler_gym.wrappers.TimeLimit(env, env_config['timelimit'])
        dataset = env.datasets[env_config['dataset']]
        env = compiler_gym.wrappers.CycleOverBenchmarks(
            env, dataset.benchmarks())
        return env
    
    
    config = {
        "env_config": {
            "cgym_id": "llvm-autophase-ic-v0",
            "timelimit": 45,
            "dataset": "benchmark://cbench-v1",
        },
        "env": "CompilerGym",
    }
    
    stop = {
        "timesteps_total": 10_000,
    }
    
    tune.register_env("CompilerGym", make_env)
    tune.run(
        PPOTrainer,
        config=config,
        stop=stop,
        name='cgym_cache_dir_demo',
    )
    

    Environment

    Please fill in this checklist:

    • CompilerGym: 0.2.2
    • How you installed CompilerGym (conda, pip, source): pip
    • OS: Ubuntu 20.04.1 LTS (x86_64)
    • Python version: 3.9.7
    • Build command you used (if compiling from source): N/A
    • GCC/clang version (if compiling from source): N/A
    • Bazel version (if compiling from source): N/A
    • Versions of any other relevant libraries: ray: 1.10.0, gym: 0.20.0
    Question 
    opened by vuoristo 16
  • gRPC refactoring of actions and observations

    Fixes #526

    This is a draft of the protobuf message definitions. It makes actions and observations more closely resemble the CompilerGym Python environment interface.

    @ChrisCummins, could you take a look and see if this structure is appropriate? After I address your remarks, I will proceed to refactor the existing environments to use the new structure.

    CLA Signed 
    opened by sogartar 15
  • Implement building with CMake

    For context, I believe Anush (@powderluv) has already been in discussion with you on the subject of our proposed contributions of a CMake build system and an MLIR environment, where CMake is a soft but extremely useful prerequisite for the MLIR environment.

    I've put out this pull request before finishing implementation and cleanup of this commit to solicit feedback and to hopefully save some effort on our part (Nod.ai). These changes are not completely ready as-is.

    In our development of the CMake build system for CompilerGym, we've borrowed a system from another library that can be used to automatically (partially) migrate bazel BUILD files into equivalent CMakeLists. So, one major reason we're looking for feedback before finishing is to make sure this format (copying Bazel style into CMake instead of using idiomatic CMake) will be fine as doing so will save us a lot of effort.

    Additionally, if there are any patterns you'd like us to avoid, especially ones that you see us using in this pull request, please let us know so that we can implement your style requests and hopefully save ourselves from rewriting this PR once or twice.

    CLA Signed 
    opened by KyleHerndon 15
  • Split CompilerEnv.step() into two methods for singular or lists of actions

    CompilerEnv.step() currently accepts two types for the "action" argument:

    (1) a scalar action:

    >>> env.step(action)
    

    (2) an iterable of actions:

    >>> env.step([action1, action2])
    

    This PR splits this overloaded behavior into two methods: CompilerEnv.step() only takes a single action, and CompilerEnv.multistep() only takes an iterable sequence of actions:

    >>> env.step(action)
    >>> env.multistep([action1, action2])
    

    For now, calling CompilerEnv.step() with a list of actions still works, though with a deprecation warning. In the v0.2.4 release, support for lists of actions in CompilerEnv.step() will be removed.

    Benefits

    Passing a list of actions to execute in a single step enables them to be executed in a single RPC invocation, significantly reducing the overhead of round trips to the backend, and removing the need to calculate observation/rewards for each individual step. We measured speedups of ~3x on typical LLVM workloads using this (more details here).

    Drawbacks

    Adding a new method means we probably need to refactor all step wrappers to also overload multistep(), as otherwise the override could be missed:

    class MyStepWrapper(Wrapper):
        def step(self, action):
            ...  # my overload

    env = MyStepWrapper(env)
    env.multistep(actions)   # oh no! Not using the overload
    

    Instead, we can require that wrappers that wish to change the behavior of an environment's step function override the raw_step() method, as that method is the common denominator between step() and multistep(), and also has a tighter function signature with less room for mixing and matching different arg types.
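
    For illustration, one way a wrapper might keep the two entry points consistent today. This is a sketch which assumes that multistep() accepts any iterable of actions and that routing step() through multistep() preserves the return value; transform() is a hypothetical stand-in for the wrapper's own logic:

    class MyStepWrapper(Wrapper):
        def multistep(self, actions):
            actions = [transform(a) for a in actions]  # hypothetical per-action logic
            return self.env.multistep(actions)

        def step(self, action):
            # Route the single-action case through the same overload.
            return self.multistep([action])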

    Fixes #610.

    CLA Signed 
    opened by ChrisCummins 12
  • Add support for building CompilerGym using CMake

    🚀 Feature

    This is a tracking issue for the progress that @KyleHerndon and @sogartar have made on adding support for building CompilerGym using CMake.

    The initial discussions and plan were outlined in the comments of this PR #478.

    Progress:

    • [x] Support for building the CompilerGym wheel on Linux (#498)
    • [ ] Support for building the CompilerGym wheel on macOS
    • [ ] Support for building examples/
    • [ ] Support for building benchmarks/
    • [ ] Resolve the TODO(github.com/facebookresearch/CompilerGym/issues/506) issues in the source.
    • [ ] Speed up the build, especially the CI job (#595)
    • [ ] Rewrite the CMakeLists files to use more idiomatic CMake style
    Enhancement Testing & Tooling 
    opened by ChrisCummins 11
  • Separate the implementation of RPC service and compilation session

    🚀 Feature

    Make it easier for users to add support for new compilers by providing base classes that allow users to "fill in the blanks" for interacting with their compiler, without having to implement all of the other RPC interface boilerplate.

    Motivation

    To add support for a new compiler, a user must write a compiler service that performs two key functions:

    1. RPC Server: Implements a server that listens on a port for RPC requests that are then dispatched to compilation sessions.
    2. Compilation session: Implements the compilation session interface to do the actual task of applying actions and computing observations.

    This division of roles is obvious in the example compiler services. In the python example, the RPC server is implemented here and the compilation session here. The compilation session is where the actual interesting work gets done. The RPC server is boilerplate, and as can be seen in the example, is not an insubstantial amount of code (and must be implemented carefully to avoid concurrency bugs / performance issues). Given that the behavior of the RPC Server is the same across all compiler services, we should provide a builtin implementation so that the user can focus only on implementing the compilation session.

    Pitch

    Provide helper code for C++ and Python that makes it easier to add support for new compilers.

    Python

    Add a base class that represents a single interactive session with a compiler:

    from pathlib import Path
    from typing import List, Optional, Tuple

    # ActionSpace, ObservationSpace, Observation, and Benchmark are the service's RPC message types.


    class CompilationSession:
        """Base class that represents an interactive compilation session."""

        compiler_version: str = ""  # compiler version
        action_spaces: List[ActionSpace] = []  # what your compiler can do
        observation_spaces: List[ObservationSpace] = []  # what features you provide

        def __init__(self, working_dir: Path, action_space_index: int, benchmark: Benchmark):
            """Start a new compilation session with the given action space & benchmark."""

        def apply_action(self, action_index: int) -> Tuple[bool, Optional[ActionSpace], bool]:
            """Apply an action. Returns a tuple: (end_of_session, new_action_space, action_had_no_effect)."""

        def get_observation(self, observation_space_index: int) -> Observation:
            """Compute an observation."""

        def fork(self) -> "CompilationSession":
            """Optional. Create a copy of the current session state."""
    

    To add support for a new compiler, a user subclasses from this, provides implementations for the abstract methods, and then calls a helper function that will create and launch an RPC service using this class:

    from compiler_gym.service import CompilationSession
    from compiler_gym.service.runtime import create_and_run_compiler_gym_service
    
    class MyDerivedCompilationSession(CompilationSession):
        ...
    
    if __name__ == "__main__":
        create_and_run_compiler_gym_service(MyDerivedCompilationSession)
    

    The create_and_run_compiler_gym_service() function here performs all of the boilerplate of command line argument parsing, writing to the correct temporary files, handling RPC requests etc.

    C++

    Very similar to the python interface, only using out-params:

    class CompilationSession {
     public:
      virtual string getCompilerVersion() const;
      virtual vector<ActionSpace> getActionSpaces() const = 0;  // what your compiler can do
      virtual vector<ObservationSpace> getObservationSpaces() const = 0;  // features you provide
    
      [[nodiscard]] virtual Status init(size_t actionSpaceIndex, const Benchmark& benchmark) = 0;
    
      // apply an action
      [[nodiscard]] virtual Status applyAction(size_t actionIndex, bool* endOfEpisode,
                                                     bool* actionSpaceChanged,
                                                     bool* actionHadNoEffect) = 0;
    
      // compute an observation
      [[nodiscard]] virtual Status setObservation(size_t observationSpaceIndex,
                                                        Observation* observation) = 0;
    
      // Optional. Called after all actions / observations in a single step.
      [[nodiscard]] virtual Status endOfActions(bool* endOfEpisode, bool* actionSpaceChanged);
    
      // Optional. Initialize state from another session.
      [[nodiscard]] virtual Status init(CompilationSession* other);
    };
    

    and to use it:

    #include "compiler_gym/service/CompilationSession.h"
    #include "compiler_gym/service/runtime/Runtime.h"
    
    using namespace compiler_gym;
    
    namespace {
    
    class MyCompilationSession final : public CompilationSession {
     public:
      using CompilationSession::CompilationSession;
    
      ...
    };
    
    }  // namespace
    
    int main(int argc, char** argv) {
      createAndRunCompilerGymService<MyCompilationSession>(argc, argv, "My service");
    }
    
    Enhancement 
    opened by ChrisCummins 10
  • Can we identify node_id between current observation and next observation w.r.t Programl?

    ❓ Questions and Help

    The observation space is Programl.

    Let $G$ and $G'|a$ be the current observation and the next observation given some action $a$.

    $G$ was 'pruned' by $a$, so that $|G| > |G'|$:

    some nodes are gone, others remain.

    Can we map the remaining node ids to the next observation's node ids?

    It seems, looking at the Programl-generating part, that we may need some postprocessing to match node ids. Could you give me a little hint for that?

    Cheers! Anthony

    Question 
    opened by anthony0727 0
  • All CI jobs failing with "Package clang-9 is not available"

    🐛 Bug

    Error:

    Package clang-9 is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    
    Bug 
    opened by ChrisCummins 0
  • Discussion about future plans after deprecation of OpenAI Gym

    ❓ Questions and Help

    Should we decouple CompilerGym from (past) openai/gym?

    The team that has been maintaining Gym since 2021 has moved all future development to Gymnasium, a drop in replacement for Gym (import gymnasium as gym), and this repo isn't planned to receive any future updates. Please consider switching over to Gymnasium as you're able to do so. https://github.com/openai/gym#important-notice

    Additional Context

    I assume you are already aware of this: gym's maintainers have been making breaking API changes.

    def step(self, action: ActType) -> Tuple[ObsType, float, bool, bool, dict]:
    

    https://github.com/Farama-Foundation/Gymnasium/blob/1956e648f951eb64c45440997f8fe484ef3c7138/gymnasium/core.py#L71

    I reckon the changes are quite reasonable, but the previous API (returning observation, reward, done, info) served as a quite rigorous protocol for reinforcement learning.

    Is there already on-going discussion regarding this?

    TL;DR: Most of CompilerGym's inheritance from openai/gym was for typing rather than its functionality - openai/gym doesn't really have much functionality itself. So we can easily define our own Space, Env, etc.

    However, if we decouple, many RL libraries that depend on openai/gym might conflict; for example, ray[rllib] does strong type checking such as isinstance(obs_space, gym.spaces.Space).

    I hope we (CompilerGym users) can have a discussion here to help the FAIR team.

    Question 
    opened by anthony0727 5
  • [llvm] Adding missing global initializers when splitting a benchmark

    • Modify split_benchmark_by_function() so that it returns an extra "benchmark", which is a single module containing no executable instructions, but all global variable definitions. This is needed to make the result of merging the split benchmarks compilable.
    • Fix a bug in split_benchmark_by_function() whereby the definitions for functions marked with available_externally linkage were dropped.
    • Identify two passes, -strip and -strip-nondebug, which are unsafe to run before split+merge.
    • Add end-to-end tests that benchmark semantics are unchanged by splitting and merging.

    Follow up to #772.

    CLA Signed LLVM 
    opened by ChrisCummins 1
  • [www] Add support for user-provided benchmark source.

    This adds a benchmark_source attribute to the step API that enables users to provide their own code to use as a benchmark. To use your own benchmark source, you inline the contents of your source file into the benchmark_source attribute and set the benchmark attribute to the name of the local file:

    {
      "benchmark": "/tmp/foo.c",
      "benchmark_source": "int A() {\n  return 1;\n}",
       ...
    }
    

    The file name is important because we use the file extension to determine how to process it (e.g. .c files are compiled, .ll files are interpreted as bytecode, etc).

    Here are two example files for testing.

    The first is example1.c:

    int A() {
      return 0;
    }
    

    The second is example2.ll:

    ; ModuleID = 'example1.c'
    source_filename = "example1.c"
    target datalayout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128"
    target triple = "x86_64-unknown-linux-gnu"
    
    ; Function Attrs: noinline nounwind optnone uwtable
    define dso_local i32 @A() #0 {
      ret i32 0
    }
    
    attributes #0 = { noinline nounwind optnone uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "frame-pointer"="all" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" }
    
    !llvm.module.flags = !{!0}
    !llvm.ident = !{!1}
    
    !0 = !{i32 1, !"wchar_size", i32 4}
    !1 = !{!"clang version 10.0.0 "}
    
    CLA Signed 
    opened by ChrisCummins 4
Releases(v0.2.5)
  • v0.2.5(Nov 2, 2022)

    Release 0.2.5 (2022-11-01)

    CompilerGym v0.2.5 adds a new LLVM dataset, two new observation spaces, and includes numerous updates and bug fixes.

    Summary of Changes

    • [llvm] Added two new observation spaces, LexedIr and LexedIrTuple, providing access to a sequence of IR tokens (#742, thanks @fivosts!).
    • [llvm] Added the "Jotaibench" benchmark suite, providing 18,761 new executable C programs extracted from handwritten code on GitHub (#705, thanks @canesche!).
    • Added support for Python 3.10.
    • [llvm] Fixed a bug with non-terminating subprocesses (#741, thanks @thecoblack!).
    • [llvm] Fixed a bug where the incorrect number of runtimes were reported by reset() (#761), and an incorrect number of warm up runs were being performed (#717, thanks @lqwk!).
    • [llvm] New leaderboard submission using GATv2 and DD-PPO (#728, thanks @anthony0727!).
    • Added the ability to set timeout on each of the individual environment operations (#716, thanks @ricardoprins!).
    • Added support for loading URLs in CompilerEnvStateReader.read_paths() (#692, thanks @thecoblack!).
    • Simplified Makefile rules: renamed install-test to test and deprecated bazel test rules.
    • Fixed a bug where the TimeLimit wrapper would interfere with benchmark iterator wrappers (#739, thanks @nluu175!).
    • [ci] Added CI test coverage of example services (#695, #642, #699, thanks @mostafaelhoushi!).
    • [ci] Updated Github actions to use Node v16.
    • Reduced verbosity and wall time of CMake builds.
    • Updated and fixed dependent package conflicts (fixes #771, #768).

    Credits

    A huge thank you to all code contributors!

    • @anthony0727
    • @canesche made their first contribution in #705
    • @fivosts made their first contribution in #742
    • @jaopaulolc made their first contribution in #738
    • @lqwk made their first contribution in #717
    • @mostafaelhoushi
    • @nluu175 made their first contribution in #739
    • @ricardoprins made their first contribution in #716
    • @ryanrussell made their first contribution in #755
    • @sahirgomez1
    • @thecoblack
    • @youweiliang made their first contribution in #751

    Full Changelog: v0.2.4...v0.2.5

    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.2.5-py3-none-macosx_10_15_x86_64.whl(29.71 MB)
    compiler_gym-0.2.5-py3-none-manylinux2014_x86_64.whl(28.18 MB)
  • v0.2.4(May 25, 2022)

    This release adds a new compiler environment, new APIs, and a suite of backend improvements to improve the flexibility of CompilerGym environments. Many thanks to code contributors: @sogartar, @KyleHerndon, @SoumyajitKarmakar, @uduse, and @anthony0727!

    Highlights of this release include:

    • [mlir] Began work on a new environment for matrix multiplication using MLIR (#652, thanks @KyleHerndon and @sogartar!). Note this environment is not yet included in the pypi package and must be compiled from source.
    • [llvm] Added a new env.benchmark_from_clang_invocation() method (#577) that can be used for constructing LLVM environments automatically from C/C++ compiler invocations. This makes it much easier to integrate CompilerGym with your existing build scripts.
    • Added three new wrapper classes: Counter, which provides op counts for analysis (#683); SynchronousSqliteLogger, which logs environment interactions to a relational database (#679); and ForkOnStep, which provides an undo() operation (#682).
    • Added reward_space and observation_space parameters to env.reset() (#659, thanks @SoumyajitKarmakar!).
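
    The new reset() parameters might be used like this (a sketch; the benchmark and spaces are illustrative):

    import compiler_gym

    # Sketch: choose the observation and reward spaces per episode via reset() (#659).
    with compiler_gym.make("llvm-v0") as env:
        observation = env.reset(
            benchmark="cbench-v1/qsort",
            observation_space="Autophase",
            reward_space="IrInstructionCountOz",
        )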

    This release includes a number of improvements to the backend APIs that make it easier to write new CompilerGym environments:

    • Refactored the backend to make CompilerEnv an abstract interface, and ClientServiceCompilerEnv the concrete implementation of this interface. This enables new environments to be implemented without using gRPC (#633, thanks @sogartar!).
    • Extended the support for different types of action and observation spaces (#641, #643, thanks @sogartar!), including new Permutation and SpaceSequence spaces (#645, thanks @sogartar!).
    • Added a new disk/ subdirectory to compiler service's working directories, which is symlinked to an on-disk location for devices which support in-memory working directories. This fixes a bug with leftover temporary directories from LLVM (#672).

    This release also includes numerous bug fixes and improvements, many of which were reported or fixed by the community. For example, fixing a bug in cache file locations (#656, thanks @uduse!), and a missing flag definition in example code (#684, thanks @anthony0727!).

    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.2.4-py3-none-macosx_10_15_x86_64.whl(29.64 MB)
    compiler_gym-0.2.4-py3-none-manylinux2014_x86_64.whl(28.10 MB)
  • v0.2.3(Mar 18, 2022)

    This release introduces deprecating changes to the core env.step() routine and lays the groundwork for enabling new types of compiler optimizations to be exposed through CompilerGym. Many thanks to code contributors: @mostafaelhoushi, @sogartar, @KyleHerndon, @uduse, @parthchadha, and @xtremey!

    Highlights of this release include:

    • Added a new TextSizeInBytes observation space for LLVM (#575).
    • Added a new PPO leaderboard entry (#580). Thanks @xtremey!
    • Fixed a bug in which temporary directories created by the LLVM environment were not cleaned up (#592).
    • [Backend] The function createAndRunCompilerGymService now returns an int, which is the exit return code (#592).
    • Improvements to the examples documentation (#548) and FAQ (#586)

    Deprecations and breaking changes:

    • CompilerEnv.step no longer accepts a list of actions (#627). A new method, CompilerEnv.multistep, provides this functionality. This is to provide compatibility with environments whose action spaces are lists. To update your code, replace any calls to env.step() that take a list of actions with env.multistep(). Thanks @sogartar!
    • The arguments observations and rewards to step() have been renamed observation_spaces and reward_spaces, respectively (#627).
    • Reward.id has been renamed Reward.name (#565, #612). Thanks @parthchadha!
    • The backend protocol buffer schema has been updated to natively support more types of observation and action, and to support nested spaces (#531). Thanks @sogartar!

    Full Changelog: https://github.com/facebookresearch/CompilerGym/compare/v0.2.2...v0.2.3

    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.2.3-py3-none-macosx_10_15_x86_64.whl(29.01 MB)
    compiler_gym-0.2.3-py3-none-manylinux2014_x86_64.whl(27.54 MB)
  • v0.2.2(Jan 21, 2022)

    Amongst the highlights of this release are support for building with CMake and a new compiler environment based on loop unrolling. Many thanks to @sogartar, @mostafaelhoushi, @KyleHerndon, and @yqtianust for code contributions!

    • Added support for building CompilerGym from source on Linux using CMake (#498, #478). The new build system coexists with the bazel build and enables customization over the CMake configuration used to build the LLVM environment. See INSTALL.md for details. Credit: @sogartar, @KyleHerndon.
    • Added an environment for loop optimizations in LLVM (#530, #529, #517). This new example environment provides control over loop unrolling factors and demonstrates how to build a standalone LLVM binary using the new CMake build system. Credit: @mostafaelhoushi.
    • Added a new BenchmarkUri class and API for parsing URIs (#525). This enables benchmarks to have optional parameters that can be used by the backend services to modify their behavior.
    • [llvm] Enabled runtime reward to be calculated on systems where /dev/shm does not permit executables (#510).
    • [llvm] Added a new benchmark://mibench-v1 dataset and deprecated benchmark://mibench-v0 (#511). If you are using mibench-v0, please update to the new version.
    • [llvm] Enabled all 20 of the cBench runtime datasets to be used by the benchmark://cbench-v1 dataset (#525).
    • Made the site_data_base argument of the Dataset class constructor optional (#518).
    • Added support for building CompilerGym from source on macOS Monterey (#494).
    • Removed the legacy dataset scripts and APIs that were deprecated in v0.1.8. Please use the new dataset API. The following has been removed:
      • The compiler_gym.bin.datasets script.
      • The properties: CompilerEnv.available_datasets, and CompilerEnv.benchmarks.
      • The CompilerEnv.require_dataset(), CompilerEnv.require_datasets(), CompilerEnv.register_dataset(), and CompilerEnv.get_benchmark_validation_callback() methods.
    • Numerous other bug fixes and improvements.

    Full Change Log: v0.2.1...v0.2.2

    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.2.2-py3-none-macosx_10_15_x86_64.whl(37.65 MB)
    compiler_gym-0.2.2-py3-none-manylinux2014_x86_64.whl(35.59 MB)
  • v0.2.1(Nov 18, 2021)

    Highlights of this release include:

    • [Complex and composite action spaces] Added a new schema for describing action spaces (#369). This complete overhaul enables a much richer set of actions to be exposed, such as composite action spaces, dictionaries, and continuous actions.
    • [State Transition Dataset] We have released the first iteration of the state transition dataset, a large collection of (state,action,reward) tuples for the LLVM environments, suitable for large-scale supervised learning. We have added an example learned cost model using a graph neural network in examples/gnn_cost_model (#484, thanks @bcui19!).
    • [New examples] We have added several new examples to the examples/ directory, including a new loop unrolling demo based on LLVM (#477, thanks @mostafaelhoushi!), a loop tool demo (#457, thanks @bwasti!), micro-benchmarks for operations, and example reinforcement learning scripts (#484). See examples/README.md for details. We also overhauled the example compiler gym service (#467).
    • [New logo] Thanks Christy for designing a great new logo for CompilerGym! (#471)
    • [llvm] Added a new Bitcode observation space (#442).
    • Numerous bug fixes and improvements.

    Deprecations and breaking changes:

    • [Breaking change] Out-of-tree compiler services will require updating to the new action space API (#369).
    • The env.observation.add_derived_space() method has been deprecated and will be removed in a future release. Please use the new derived_observation_spaces argument to the CompilerEnv constructor (#463).
    • The compiler_gym.utils.logs module has been deprecated. Use compiler_gym.utils.runfiles_path instead (#453).
    • The compiler_gym.replay_search module has been deprecated and merged into the compiler_gym.random_search (#453).

    Full Changelog: https://github.com/facebookresearch/CompilerGym/compare/v0.2.0...v0.2.1

    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.2.1-py3-none-macosx_10_14_x86_64.whl(37.70 MB)
    compiler_gym-0.2.1-py3-none-manylinux2014_x86_64.whl(35.62 MB)
  • v0.2.0(Sep 29, 2021)

    This release adds two new compiler optimization problems to CompilerGym: GCC command line flag optimization and CUDA loop nest optimization.

    • [GCC] A new gcc-v0 environment, authored by @hughleat, exposes the command line flags of GCC as a reinforcement learning environment. GCC is a production-grade compiler for C and C++ used throughout industry. The environment provides several datasets and a large, high dimensional action space that works on several GCC versions. For further details check out the reference documentation.
    • [loop_tool] A new loop_tool-v0 environment, authored by @bwasti, provides an experimental intermediate representation of n-dimensional data computation that can be lowered to both CPU and GPU backends. This provides a reinforcement learning environment for manipulating nests of loop computations to maximize throughput. For further details check out the reference documentation.

    Other highlights of this release include:

    • [Docker] Published a chriscummins/compiler_gym docker image that can be used to run CompilerGym services in standalone isolated containers (#424).
    • [LLVM] Fixed a bug in the experimental Runtime observation space that caused observations to slow down over time (#398).
    • [LLVM] Added a new utility module to compute observations from bitcodes (#405).
    • Overhauled the continuous integration services to reduce computational requirements by 59.4% while increasing test coverage (#392).
    • Improved error reporting if computing an observation fails (#380).
    • Changed the return type of compiler_gym.random_search() to a CompilerEnv (#387).
    • Numerous other bug fixes and improvements.

    Many thanks to code contributors: @thecoblack, @bwasti, @hughleat, and @sahirgomez1!

    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.2.0-py3-none-macosx_10_14_x86_64.whl(37.66 MB)
    compiler_gym-0.2.0-py3-none-manylinux2014_x86_64.whl(35.59 MB)
  • v0.1.10(Sep 8, 2021)

    This release lays the foundation for several new exciting additions to CompilerGym:

    • [LLVM] Added experimental support for optimizing for runtime and compile time (#307). This is still a proof of concept and is not yet stable. For now, only the benchmark://cbench-v1 and generator://csmith-v0 datasets are supported.
    • [CompilerGym Explorer] Started development of a web frontend for the LLVM environments. The work-in-progress Flask API and React website can be found in the www directory.
    • [New Backend API] Added a mechanism for sending arbitrary data payloads to the compiler service backends (#313). This allows ad-hoc parameters that do not conform to the usual action space to be set for the duration of an episode. Add support for these parameters in the backend by implementing the optional handle_session_parameter() method, and then send parameters using the send_params() method.

    Other highlights of this release include:

    • [LLVM] The Csmith program generator is now shipped as part of the CompilerGym binary release, removing the need to compile it locally (#348).
    • [LLVM] A new ProgramlJson observation space provides the JSON node-link data of a ProGraML graph without parsing to a nx.MultiDiGraph (#332).
    • [LLVM] Added a leaderboard submission for a DQN agent (#292, thanks @phesse001!).
    • [Backend API Update] The Reward.reset() method now receives an observation view that can be used to compute initial states (#341, thanks @bwasti!).
    • [Datasets API] The size of infinite datasets has been changed from float("inf") to 0 (#347). This is a compatibility fix for __len__(), which requires integer values.
    • Prevent excessive growth of in-memory caches (#299).
    • Multiple compatibility fixes for compiler_gym.wrappers.
    • Numerous other bug fixes and improvements.
    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.1.10-py3-none-macosx_10_9_x86_64.whl(30.44 MB)
    compiler_gym-0.1.10-py3-none-manylinux2014_x86_64.whl(29.87 MB)
  • v0.1.9(Jun 4, 2021)

    This release of CompilerGym focuses on backend extensibility and adds a bunch of new features to make it easier to add support for new compilers:

    • Adds a new CompilationSession class that encapsulates a single incremental compilation session (#261).
    • Adds a common runtime for CompilerGym services that takes a CompilationSession subclass and handles all the RPC wrangling for you (#270).
    • Ports the LLVM service and example services to the new runtime (#277). This provides a net performance win with fewer lines of code.

    Other highlights of this release include:

    • [Core API] Adds a new compiler_gym.wrappers module that makes it easy to apply modular transformations to CompilerGym environments without modifying the environment code (#272).
    • [Core API] Adds a new Datasets.random_benchmark() method for selecting a uniform random benchmark from one or more datasets (#247).
    • [Core API] Adds a new compiler_gym.make() function, equivalent to gym.make() (#257).
    • [LLVM] Adds a new IrSha1 observation space that uses a fast, service-side C++ implementation to compute a checksum of the environment state (#267).
    • [LLVM] Adds 12 new C programs from the CHStone benchmark suite (#284).
    • [LLVM] Adds the anghabench-v1 dataset and deprecated anghabench-v0 (#242).
    • Numerous bug fixes and improvements.
    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.1.9-py3-none-macosx_10_9_x86_64.whl(29.96 MB)
    compiler_gym-0.1.9-py3-none-manylinux2014_x86_64.whl(29.46 MB)
  • v0.1.8(Apr 30, 2021)

    This release introduces some significant changes to the way that benchmarks are managed, introducing a new dataset API. This enabled us to add support for millions of new benchmarks and a more efficient implementation for the LLVM environment, but this will require some migrating of old code to the new interfaces (see "Migration Checklist" below).

    Highlights

    Some of the key changes of this release are:

    • [Core API change] We have added a Python Benchmark class (#190). The env.benchmark attribute is now an instance of this class rather than a string (#222).
    • [Core behavior change] Environments will no longer select benchmarks randomly. Now env.reset() will always select the last-used benchmark, unless the benchmark argument is provided or env.benchmark has been set. If no benchmark is specified, a default is used.
    • [API deprecations] We have added a new Dataset class hierarchy (#191, #192). All datasets are now available without needing to be downloaded first, and a new Datasets class can be used to iterate over them (#200). We have deprecated the old dataset management operations, the compiler_gym.bin.datasets script, and removed the --dataset and --ls_benchmark flags from the command line tools.
    • [RPC interface change] The StartSession RPC endpoint now accepts a list of initial observations to compute. This removes the need for an immediate call to Step, reducing environment reset time by 15-21% (#189).
    • [LLVM] We have added several new datasets of benchmarks, including the Csmith and llvm-stress program generators (#207), a dataset of OpenCL kernels (#208), and a dataset of compilable C functions (#210). See the docs for an overview.
    • CompilerEnv now takes an optional Logger instance at construction time for fine-grained control over logging output (#187).
    • [LLVM] The ModuleID and source_filename of LLVM-IR modules are now anonymized to prevent unintentional overfitting to benchmarks by name (#171).
    • [docs] We have added a Feature Stability section to the documentation (#196).
    • Numerous bug fixes and improvements.

    Migration Checklist

    Please use this checklist when updating code for the previous CompilerGym release:

    • [ ] Review code that accesses the env.benchmark property and update to env.benchmark.uri if a string name is required. Setting this attribute by string (env.benchmark = "benchmark://a-v0/b") and comparison to string types (env.benchmark == "benchmark://a-v0/b") still work.
    • [ ] Review code that calls env.reset() without first setting a benchmark. Previously, calling env.reset() would select a random benchmark. Now, env.reset() always selects the last used benchmark, or a predetermined default if none is specified.
    • [ ] Review code that relies on env.benchmark being None to select benchmarks randomly. Now, env.benchmark is always set to the previously used benchmark, or a predetermined default benchmark if none has been specified. Setting env.benchmark = None will raise an error. Select a benchmark randomly by sampling from the env.datasets.benchmark_uris() iterator (see the sketch after this checklist).
    • [ ] Remove calls to env.require_dataset() and related operations. These are no longer required.
    • [ ] Remove accesses to env.benchmarks. An iterator over available benchmark URIs is now available at env.datasets.benchmark_uris(), but the list of URIs cannot be relied on to be fully enumerable (the LLVM environments have over 2^32 URIs).
    • [ ] Review code that accesses env.observation_space and update to env.observation_space_spec where necessary (#228).
    • [ ] Update compiler service implementations to support the updated RPC interface by removing the deprecated GetBenchmarks RPC endpoint and replacing it with Dataset classes. See the example service for details.
    • [ ] [LLVM] Update references to the poj104-v0 dataset to poj104-v1.
    • [ ] [LLVM] Update references to the cBench-v1 dataset to cbench-v1.
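
    As a sketch of the random-benchmark point above (the 1,000-URI sample bound is arbitrary):

    import itertools
    import random

    import gym
    import compiler_gym  # noqa: F401  (registers the CompilerGym environments)

    # Sample a random benchmark URI from a bounded prefix of the iterator, since
    # benchmark_uris() cannot be fully enumerated for the LLVM environments.
    env = gym.make("llvm-v0")
    uris = list(itertools.islice(env.datasets.benchmark_uris(), 1000))
    env.reset(benchmark=random.choice(uris))
    env.close()
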
    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.1.8-py3-none-macosx_10_9_x86_64.whl(29.96 MB)
    compiler_gym-0.1.8-py3-none-manylinux2014_x86_64.whl(29.45 MB)
  • v0.1.7(Apr 2, 2021)

    This release introduces public leaderboards to track the performance of user-submitted algorithms on compiler optimization tasks.

    • Added a new compiler_gym.leaderboard package which contains utilities for preparing leaderboard submissions (#161).
    • Added a LLVM instruction count leaderboard and seeded it with a random search baseline (#117).
    • Added support for Python 3.9, extending the set of supported python versions to 3.6, 3.7, 3.8, and 3.9 (#160).
    • [llvm] Added a new InstCount observation space that contains the counts of each type of instruction (#159).

    Build dependencies update notice

    This release updates the required versions for a handful of build dependencies. If you are building from source and upgrading from an older version of CompilerGym, your build environment will need to be updated. The easiest way to do that is to remove your existing conda environment using conda remove --name compiler_gym --all and to repeat the steps in building from source.

    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.1.7-py3-none-macosx_10_9_x86_64.whl(29.94 MB)
    compiler_gym-0.1.7-py3-none-manylinux2014_x86_64.whl(29.43 MB)
  • v0.1.6(Mar 23, 2021)

    This release focuses on hardening the LLVM environments, providing improved semantics validation, and improving the datasets. Many thanks to @JD-at-work, @bwasti, and @mostafaelhoushi for code contributions.

    • [llvm] Added a new cBench-v1 dataset which changes the function attributes of the IR to permit inlining. cBench-v0 is deprecated and will be removed no earlier than v0.1.6.
    • [llvm] Removed 15 passes from the LLVM action space: -bounds-checking, -chr, -extract-blocks, -gvn-sink, -loop-extract-single, -loop-extract, -objc-arc-apelim, -objc-arc-contract, -objc-arc-expand, -objc-arc, -place-safepoints, -rewrite-symbols, -strip-dead-debug-info, -strip-nonlinetable-debuginfo, -structurizecfg. Passes are removed if they are irrelevant (e.g. used only for debugging), if they change the program semantics (e.g. inserting runtime bounds checking), or if they have been found to have nondeterministic behavior between runs.
    • Extended env.step() so that it can take a list of actions that are all performed in a single batch. This improves efficiency.
    • Added default reward spaces for CompilerEnv that are derived from scalar observations (thanks @bwasti!)
    • Added a new Q learning example (thanks @JD-at-work!).
    • Deprecation: The next release, v0.1.7, will introduce a new datasets API that is easier to use and more flexible. In preparation for this, the Dataset class has been renamed to LegacyDataset, and the following dataset operations have been marked deprecated: activate(), deactivate(), and delete(). The GetBenchmarks() RPC interface method has also been marked deprecated.
    • [llvm] Improved semantics validation using LLVM's memory, thread, address, and undefined behavior sanitizers.
    • Numerous bug fixes and improvements.
    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.1.6-py3-none-macosx_10_9_x86_64.whl(29.76 MB)
    compiler_gym-0.1.6-py3-none-manylinux2014_x86_64.whl(29.20 MB)
  • v0.1.3(Feb 26, 2021)

    This release adds numerous enhancements aimed at improving ease-of-use. Thanks to @broune, @hughleat, and @JD-ETH for contributions.

    • Added a new env.validate() API for validating the state of an environment. Added semantics validation for some LLVM benchmarks.
    • Added an env.fork() method to efficiently duplicate an environment state (see the sketch after this list).
    • The manual_env environment has been improved with new features such as hill climbing search and tab completion.
    • Ease of use improvements for string observation space and reward space names: Added new getter methods such as env.observation.Autophase() and generated constants such as llvm.observation_spaces.autophase.
    • Breaking change: Calculation of environment reward has been moved to Python. Reward functions have been removed from backend service implementations and replaced with equivalent Python classes.
    • Various bug fixes and improvements.
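
    A sketch of env.fork() in use (illustrative only; the actions are random):

    import gym
    import compiler_gym  # noqa: F401  (registers the CompilerGym environments)

    env = gym.make("llvm-v0")
    env.reset()
    env.step(env.action_space.sample())
    child = env.fork()                        # child continues from env's current state
    child.step(child.action_space.sample())   # explore independently of env
    child.close()
    env.close()
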
    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.1.3-py3-none-macosx_10_9_x86_64.whl(86.54 MB)
    compiler_gym-0.1.3-py3-none-manylinux2014_x86_64.whl(90.32 MB)
  • v0.1.2(Jan 25, 2021)

    • Add a new compiler_gym.views.ObservationView.add_derived_space(...) API for constructing derived observation spaces.
    • Added default reward and observation values for env.step() in case of service failure.
    • Extended the public compiler_gym.datasets API for managing datasets.
    • [llvm] Adds -Norm-suffixed rewards that are normalized to unoptimized cost.
    • Extended documentation and example codes.
    • Numerous bug fixes and improvements.
    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.1.2-py3-none-macosx_10_9_x86_64.whl(65.31 MB)
    compiler_gym-0.1.2-py3-none-manylinux2014_x86_64.whl(67.52 MB)
  • v0.1.1(Dec 28, 2020)

    • Expose the package version through compiler_gym.__version__, and the compiler version through CompilerEnv.compiler_version.
    • Add a notebook version of the "Getting Started" guide that can be run in colab.
    • [llvm] Reformulate reward signals to be cumulative.
    • [llvm] Add a new reward signal based on the size of the .text section of compiled object files.
    • [llvm] Add a LlvmEnv.make_benchmark() API for easily constructing custom benchmarks for use in environments.
    • Numerous bug fixes and improvements.
    Source code(tar.gz)
    Source code(zip)
    compiler_gym-0.1.1-py3-none-macosx_10_9_x86_64.whl(64.13 MB)
    compiler_gym-0.1.1-py3-none-manylinux2014_x86_64.whl(66.33 MB)
Owner
Facebook Research