Gin provides a lightweight configuration framework for Python

Overview

Gin Config

Authors: Dan Holtmann-Rice, Sergio Guadarrama, Nathan Silberman
Contributors: Oscar Ramirez, Marek Fiser

Gin provides a lightweight configuration framework for Python, based on dependency injection. Functions or classes can be decorated with @gin.configurable, allowing default parameter values to be supplied from a config file (or passed via the command line) using a simple but powerful syntax. This removes the need to define and maintain configuration objects (e.g. protos), or write boilerplate parameter plumbing and factory code, while often dramatically expanding a project's flexibility and configurability.

Gin is particularly well suited for machine learning experiments (e.g. using TensorFlow), which tend to have many parameters, often nested in complex ways.

This is not an official Google product.

Table of Contents

[TOC]

Basic usage

This section provides a high-level overview of Gin's main features, ordered roughly from "basic" to "advanced". More details on these and other features can be found in the user guide.

1. Setup

Install Gin with pip:

pip install gin-config

Install Gin from source:

git clone https://github.com/google/gin-config
cd gin-config
python setup.py install

Import Gin (without TensorFlow functionality):

import gin

Import additional TensorFlow-specific functionality via the gin.tf module:

import gin.tf

Import additional PyTorch-specific functionality via the gin.torch module:

import gin.torch

2. Configuring default values with Gin (@gin.configurable and "bindings")

At its most basic, Gin can be seen as a way of providing or changing default values for function or constructor parameters. To make a function's parameters "configurable", Gin provides the gin.configurable decorator:

@gin.configurable
def dnn(inputs,
        num_outputs,
        layer_sizes=(512, 512),
        activation_fn=tf.nn.relu):
  ...

This decorator registers the dnn function with Gin, and automatically makes all of its parameters configurable. To set ("bind") a value for the layer_sizes parameter above within a ".gin" configuration file:

# Inside "config.gin"
dnn.layer_sizes = (1024, 512, 128)

Bindings have syntax function_name.parameter_name = value. All Python literal values are supported as value (numbers, strings, lists, tuples, dicts). Once the config file has been parsed by Gin, any future calls to dnn will use the Gin-specified value for layer_sizes (unless a value is explicitly provided by the caller).
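
For illustration, any Python literal can appear on the right-hand side of a binding; the following bindings for a hypothetical my_fn show each supported type (the function and values are made up for this example):

# Inside "config.gin"
my_fn.count = 10                    # number
my_fn.name = 'adam'                 # string
my_fn.sizes = [1024, 512]           # list
my_fn.shape = (28, 28, 1)           # tuple
my_fn.options = {'beta1': 0.9}      # dict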

Classes can also be marked as configurable, in which case the configuration applies to constructor parameters:

@gin.configurable
class DNN(object):
  # Constructor parameters become configurable.
  def __init__(self,
               num_outputs,
               layer_sizes=(512, 512),
               activation_fn=tf.nn.relu):
    ...

  def __call__(self, inputs):
    ...

Within a config file, the class name is used when binding values to constructor parameters:

# Inside "config.gin"
DNN.layer_sizes = (1024, 512, 128)

Finally, after defining or importing all configurable classes or functions, parse your config file to bind your configurations (to also permit multiple config files and command line overrides, see gin.parse_config_files_and_bindings):

gin.parse_config_file('config.gin')

Note that no other changes are required to the Python code, beyond adding the gin.configurable decorator and a call to one of Gin's parsing functions.
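
Putting the pieces together, here is a minimal end-to-end sketch; it uses gin.parse_config with an inline binding string (equivalent to parsing a file with the same contents) and a stand-in body for dnn, with activation_fn defaulting to None so the sketch runs without TensorFlow:

import gin

@gin.configurable
def dnn(inputs, num_outputs, layer_sizes=(512, 512), activation_fn=None):
  # Stand-in body: just report the configuration received.
  return num_outputs, layer_sizes, activation_fn

gin.parse_config('dnn.layer_sizes = (1024, 512, 128)')

# num_outputs must still be supplied by the caller; layer_sizes now
# defaults to the Gin-bound value.
print(dnn(inputs=None, num_outputs=10))  # (10, (1024, 512, 128), None)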

3. Passing functions, classes, and instances ("configurable references")

In addition to accepting Python literal values, Gin also supports passing other Gin-configurable functions or classes. In the example above, we might want to change the activation_fn parameter. If we have registered, say, tf.nn.tanh with Gin (see registering external functions), we can pass it to activation_fn by referring to it as @tanh (or @tf.nn.tanh):

# Inside "config.gin"
dnn.activation_fn = @tf.nn.tanh

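For reference, registering an external function such as tf.nn.tanh is a one-liner with gin.external_configurable (a sketch; see the user guide for details):

import gin
import tensorflow as tf

# Register tf.nn.tanh with Gin, so configs can refer to it as
# @tf.nn.tanh (or just @tanh when unambiguous).
gin.external_configurable(tf.nn.tanh, module='tf.nn')
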
Gin refers to @name constructs as configurable references. Configurable references work for classes as well:

@gin.configurable
def train_fn(..., optimizer_cls, learning_rate):
  optimizer = optimizer_cls(learning_rate)
  ...

Then, within a config file:

# Inside "config.gin"
train_fn.optimizer_cls = @tf.train.GradientDescentOptimizer
...

Sometimes it is necessary to pass the result of calling a specific function or class constructor. Gin supports "evaluating" configurable references via the @name() syntax. For example, say we wanted to use the class form of DNN from above (which implements __call__ to "behave" like a function) in the following Python code:

def build_model(inputs, network_fn, ...):
  logits = network_fn(inputs)
  ...

We could pass an instance of the DNN class to the network_fn parameter:

# Inside "config.gin"
build_model.network_fn = @DNN()

To use evaluated references, all of the referenced function or class's parameters must be provided via Gin. The call to the function or constructor takes place just before the call to the function to which the result is passed. In the above example, this would be just before build_model is called.

The result is not cached, so a new DNN instance will be constructed for each call to build_model.
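
For the @DNN() reference above to work, all of DNN's required constructor parameters must be bound in the config, e.g. (values are illustrative):

# Inside "config.gin"
build_model.network_fn = @DNN()
DNN.num_outputs = 10
DNN.layer_sizes = (1024, 512)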

4. Configuring the same function in different ways ("scopes")

What happens if we want to configure the same function in different ways? For instance, imagine we're building a GAN, where we might have a "generator" network and a "discriminator" network. We'd like to use the dnn function above to construct both, but with different parameters:

def build_model(inputs, generator_network_fn, discriminator_network_fn, ...):
  ...

To handle this case, Gin provides "scopes", which name a specific set of bindings for a given function or class. In both bindings and references, the "scope name" precedes the function name, separated by a "/" (i.e., scope_name/function_name):

# Inside "config.gin"
build_model.generator_network_fn = @generator/dnn
build_model.discriminator_network_fn = @discriminator/dnn

generator/dnn.layer_sizes = (128, 256)
generator/dnn.num_outputs = 784

discriminator/dnn.layer_sizes = (512, 256)
discriminator/dnn.num_outputs = 1

dnn.activation_fn = @tf.nn.tanh

In this example, the generator network has increasing layer widths and 784 outputs, while the discriminator network has decreasing layer widths and 1 output.

Any parameters set on the "root" (unscoped) function name are inherited by scoped variants (unless explicitly overridden), so in the above example both the generator and the discriminator use the tf.nn.tanh activation function.
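
Scopes can also be entered from Python code using gin.config_scope; a short sketch (assuming dnn from above is defined and the config above has been parsed):

import gin

inputs = ...  # stand-in for a real input batch

with gin.config_scope('generator'):
  # Calls in this block pick up the generator/dnn bindings:
  # layer_sizes=(128, 256), num_outputs=784.
  generator_logits = dnn(inputs)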

5. Full hierarchical configuration {#full-hierarchical}

The greatest degree of flexibility and configurability in a project is achieved by writing small modular functions and "wiring them up" hierarchically via (possibly scoped) references. For example, this code sketches a generic training setup that could be used with the tf.estimator.Estimator API:

@gin.configurable
def build_model_fn(network_fn, loss_fn, optimize_loss_fn):
  def model_fn(features, labels):
    logits = network_fn(features)
    loss = loss_fn(labels, logits)
    train_op = optimize_loss_fn(loss)
    ...
  return model_fn

@gin.configurable
def optimize_loss(loss, optimizer_cls, learning_rate):
  optimizer = optimizer_cls(learning_rate)
  return optimizer.minimize(loss)

@gin.configurable
def input_fn(file_pattern, batch_size, ...):
  ...

@gin.configurable
def run_training(train_input_fn, eval_input_fn, estimator, steps=1000):
  estimator.train(train_input_fn, steps=steps)
  estimator.evaluate(eval_input_fn)
  ...

In conjunction with suitable external configurables to register TensorFlow functions/classes (e.g., Estimator and various optimizers), this could be configured as follows:

# Inside "config.gin"
run_training.train_input_fn = @train/input_fn
run_training.eval_input_fn = @eval/input_fn

input_fn.batch_size = 64  # Shared by both train and eval...
train/input_fn.file_pattern = ...
eval/input_fn.file_pattern = ...


run_training.estimator = @tf.estimator.Estimator()
tf.estimator.Estimator.model_fn = @build_model_fn()

build_model_fn.network_fn = @dnn
dnn.layer_sizes = (1024, 512, 256)

build_model_fn.loss_fn = @tf.losses.sparse_softmax_cross_entropy

build_model_fn.optimize_loss_fn = @optimize_loss

optimize_loss.optimizer_cls = @tf.train.MomentumOptimizer
MomentumOptimizer.momentum = 0.9

optimize_loss.learning_rate = 0.01

Note that it is straightforward to switch between different network functions, optimizers, datasets, loss functions, etc. via different config files.
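
A typical driver script for such a config might look like the following sketch (the file name is an assumption):

import gin

def main():
  # All of run_training's parameters (input functions, estimator, steps)
  # are supplied by the bindings in config.gin.
  gin.parse_config_file('config.gin')
  run_training()  # run_training as defined above

if __name__ == '__main__':
  main()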

6. Additional features

Additional features, described in more detail in the user guide, include macros, constants, config file inclusion, and module-based disambiguation of configurable names.

Best practices

At a high level, we recommend using the minimal feature set required to achieve your project's desired degree of configurability. Many projects may only require the features outlined in sections 2 or 3 above. Extreme configurability comes at some cost to understandability, and the tradeoff should be carefully evaluated for a given project.

Gin is still in alpha development and some corner-case behaviors may be changed in backwards-incompatible ways. We recommend the following best practices:

  • Minimize use of evaluated configurable references (@name()), especially when combined with macros (where the fact that the value is not cached may be surprising to new users).
  • Avoid nesting of scopes (i.e., scope1/scope2/function_name). While supported there is some ongoing debate around ordering and behavior.
  • When passing an unscoped reference (@name) as a parameter of a scoped function (some_scope/fn.param), the unscoped reference gets called in the scope of the function it is passed to... but don't rely on this behavior.
  • Wherever possible, prefer to use a function or class's name as its configurable name, instead of overriding it. In case of naming collisions, use module names (which are encouraged to be renamed to match common usage) for disambiguation.
  • In fact, to aid readability for complex config files, we gently suggest always including module names to help make it easier to find corresponding definitions in Python code.
  • When doing "full hierarchical configuration", structure the code to minimize the number of "top-level" functions that are configured without themselves being passed as parameters. In other words, the configuration tree should have only one root.

In short, use Gin responsibly :)

Syntax quick reference

A quick reference for syntax unique to Gin (which otherwise supports non-control-flow Python syntax, including literal values and line continuations). Note that where function and class names are used, these may include a dotted module name prefix (some.module.function_name).

| Syntax | Description |
| ------ | ----------- |
| `@gin.configurable` | Decorator in Python code that registers a function or class with Gin, wrapping/replacing it with a "configurable" version that respects Gin parameter overrides. A function or class annotated with `@gin.configurable` will have its parameters overridden by any provided configs even when called directly from other Python code. |
| `@gin.register` | Decorator in Python code that only registers a function or class with Gin, but does *not* replace it with its "configurable" version. Functions or classes annotated with `@gin.register` will *not* have their parameters overridden by Gin configs when called directly from other Python code. However, any references in config strings or files to these functions (`@some_name` syntax, see below) will apply any provided configuration. |
| `name.param = value` | Basic syntax of a Gin binding. Once this is parsed, when the function or class named `name` is called, it will receive `value` as the value for `param`, unless a value is explicitly supplied by the caller. Any Python literal may be supplied as `value`. |
| `@some_name` | A reference to another function or class named `some_name`. This may be given as the value of a binding, to supply function- or class-valued parameters. |
| `@some_name()` | An evaluated reference. Instead of supplying the function or class directly, the result of calling `some_name` is passed instead. Note that the result is not cached; it is recomputed each time it is required. |
| `scope/name.param = value` | A scoped binding. The binding is only active when `name` is called within scope `scope`. |
| `@scope/some_name` | A scoped reference. When this is called, the call will be within scope `scope`, applying any relevant scoped bindings. |
| `MACRO_NAME = value` | A macro. This provides a shorthand name for the expression on the right-hand side. |
| `%MACRO_NAME` | A reference to the macro `MACRO_NAME`. This has the effect of textually replacing `%MACRO_NAME` with whatever expression it was associated with. Note in particular that the results of evaluated references are not cached. |
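
As a brief illustration of the macro syntax in the table above (the macro name and value are examples only):

# Inside "config.gin"
LEARNING_RATE = 0.01

optimize_loss.learning_rate = %LEARNING_RATE
# Changing LEARNING_RATE in one place updates every binding that
# references %LEARNING_RATE.
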
Comments
  • Introduce picklable configurables

    What is the subject of this PR?

    I was trying to implement a mechanism that allows external configurables and scoped configurables to be pickled.

    Resolves #31

    Details

    Since the only thing we need is the decorated __new__ or __init__, I thought we could replace the instance that is actually returned with an instance of the original class. This instance gets initialized with the decorated function. Through this we get:

    • picklability of all configurable versions of picklable types (regardless of whether it is a scoped, external, or plain configurable)
    • more natural and intuitive inheritance: isinstance(subclass_instance, type(superclass_instance)) evaluates to True even if superclass is an external configurable

    Remarks

    ✅ I signed the CLA.

    cla: yes 
    opened by gmrukwa 24
  • AttributeError: module 'tensorflow._api.v1.io' has no attribute 'gfile'

    Problem Definition

    I receive the following error when running import gin.tf: AttributeError: module 'tensorflow._api.v1.io' has no attribute 'gfile'

    Install Process

    I installed gin-config via pip inside an existing Anaconda environment: pip install gin-config

    Traceback

    Python 3.6.7 |Anaconda custom (64-bit)| (default, Dec 10 2018, 20:35:02) [MSC v.1915 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import gym.tf
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ModuleNotFoundError: No module named 'gym.tf'
    >>> import gin.tf
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Users\18048\AppData\Local\Continuum\anaconda3\envs\base_3_6\lib\site-packages\gin_config-0.1.3-py3.6.egg\gin\tf\__init__.py", line 20, in <module>
        from gin.tf.utils import GinConfigSaverHook
      File "C:\Users\18048\AppData\Local\Continuum\anaconda3\envs\base_3_6\lib\site-packages\gin_config-0.1.3-py3.6.egg\gin\tf\utils.py", line 34, in <module>
        config.register_file_reader(tf.io.gfile.GFile, tf.io.gfile.exists)
    AttributeError: module 'tensorflow._api.v1.io' has no attribute 'gfile'
    

    Versions

    tensorflow                1.12.0
    gin-config                0.1.3               
    

    Any assistance is greatly appreciated!

    opened by tanh314 11
  • Installation problems due to google_type_annotations

    I'm running python 3.7.5, on Ubuntu 18.04. While installing gin-config, I ran into the following error:

    running install
    running bdist_egg
    running egg_info
    writing gin_config.egg-info/PKG-INFO
    writing dependency_links to gin_config.egg-info/dependency_links.txt
    writing requirements to gin_config.egg-info/requires.txt
    writing top-level names to gin_config.egg-info/top_level.txt
    reading manifest file 'gin_config.egg-info/SOURCES.txt'
    writing manifest file 'gin_config.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-x86_64/egg
    running install_lib
    running build_py
    creating build/bdist.linux-x86_64/egg
    creating build/bdist.linux-x86_64/egg/gin
    copying build/lib/gin/config_parser.py -> build/bdist.linux-x86_64/egg/gin
    copying build/lib/gin/resource_reader.py -> build/bdist.linux-x86_64/egg/gin
    creating build/bdist.linux-x86_64/egg/gin/tf
    copying build/lib/gin/tf/external_configurables.py -> build/bdist.linux-x86_64/egg/gin/tf
    copying build/lib/gin/tf/__init__.py -> build/bdist.linux-x86_64/egg/gin/tf
    copying build/lib/gin/tf/utils.py -> build/bdist.linux-x86_64/egg/gin/tf
    copying build/lib/gin/__init__.py -> build/bdist.linux-x86_64/egg/gin
    copying build/lib/gin/utils.py -> build/bdist.linux-x86_64/egg/gin
    copying build/lib/gin/config.py -> build/bdist.linux-x86_64/egg/gin
    creating build/bdist.linux-x86_64/egg/gin/torch
    copying build/lib/gin/torch/external_configurables.py -> build/bdist.linux-x86_64/egg/gin/torch
    copying build/lib/gin/torch/__init__.py -> build/bdist.linux-x86_64/egg/gin/torch
    creating build/bdist.linux-x86_64/egg/gin/testdata
    copying build/lib/gin/testdata/import_test_configurables.py -> build/bdist.linux-x86_64/egg/gin/testdata
    copying build/lib/gin/testdata/invalid_import.py -> build/bdist.linux-x86_64/egg/gin/testdata
    copying build/lib/gin/testdata/__init__.py -> build/bdist.linux-x86_64/egg/gin/testdata
    creating build/bdist.linux-x86_64/egg/gin/testdata/fake_package
    copying build/lib/gin/testdata/fake_package/__init__.py -> build/bdist.linux-x86_64/egg/gin/testdata/fake_package
    creating build/bdist.linux-x86_64/egg/gin/testdata/fake_package/fake_gin_package
    creating build/bdist.linux-x86_64/egg/gin/testdata/fake_package/fake_gin_package/config
    copying build/lib/gin/testdata/fake_package/fake_gin_package/config/__init__.py -> build/bdist.linux-x86_64/egg/gin/testdata/fake_package/fake_gin_package/config
    copying build/lib/gin/testdata/fake_package/fake_gin_package/__init__.py -> build/bdist.linux-x86_64/egg/gin/testdata/fake_package/fake_gin_package
    copying build/lib/gin/selector_map.py -> build/bdist.linux-x86_64/egg/gin
    byte-compiling build/bdist.linux-x86_64/egg/gin/config_parser.py to config_parser.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/resource_reader.py to resource_reader.cpython-37.pyc
      File "build/bdist.linux-x86_64/egg/gin/resource_reader.py", line 18
        from __future__ import google_type_annotations
                                                     ^
    SyntaxError: future feature google_type_annotations is not defined
    
    byte-compiling build/bdist.linux-x86_64/egg/gin/tf/external_configurables.py to external_configurables.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/tf/__init__.py to __init__.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/tf/utils.py to utils.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/__init__.py to __init__.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/utils.py to utils.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/config.py to config.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/torch/external_configurables.py to external_configurables.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/torch/__init__.py to __init__.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/testdata/import_test_configurables.py to import_test_configurables.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/testdata/invalid_import.py to invalid_import.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/testdata/__init__.py to __init__.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/testdata/fake_package/__init__.py to __init__.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/testdata/fake_package/fake_gin_package/config/__init__.py to __init__.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/testdata/fake_package/fake_gin_package/__init__.py to __init__.cpython-37.pyc
    byte-compiling build/bdist.linux-x86_64/egg/gin/selector_map.py to selector_map.cpython-37.pyc
    creating build/bdist.linux-x86_64/egg/EGG-INFO
    copying gin_config.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying gin_config.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying gin_config.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying gin_config.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying gin_config.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    zip_safe flag not set; analyzing archive contents...
    creating 'dist/gin_config-0.4.0-py3.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
    removing 'build/bdist.linux-x86_64/egg' (and everything under it)
    Processing gin_config-0.4.0-py3.7.egg
    Removing /usr/local/lib/python3.7/dist-packages/gin_config-0.4.0-py3.7.egg
    Copying gin_config-0.4.0-py3.7.egg to /usr/local/lib/python3.7/dist-packages
    gin-config 0.4.0 is already the active version in easy-install.pth
    
    Installed /usr/local/lib/python3.7/dist-packages/gin_config-0.4.0-py3.7.egg
    Processing dependencies for gin-config==0.4.0
    Finished processing dependencies for gin-config==0.4.0
    

    The error seems to be thrown by from __future__ import google_type_annotations. Commenting out this import in resource_reader.py resolves the problem. Seems similar to #77, though that issue is supposedly resolved.

    Also checked: installing gin-config 0.3.0 directly from pip allows me to import gin without any problems (though it lacks the bindings for torch).

    opened by mashrurmorshed 8
  • Skip Unknown ignoring @gin.register

    Hi! I've got a lot of functions decorated with @gin.register in ddsp, and I recently removed a kwarg from one of those functions (delta_delta_time_weight from spectral_ops.SpectralLoss). My operative_config for a pretrained model still has the old kwargs in its config:

    .
    .
    .
    # Parameters for SpectralLoss:
    # ==============================================================================
    SpectralLoss.delta_delta_freq_weight = 0.0
    SpectralLoss.delta_delta_time_weight = 0.0
    .
    .
    .
    

    but when I try to load the gin config with the new codebase that doesn't have the kwarg, it throws an error even though I use skip_unknown=True.

    Command:

    with gin.unlock_config():
      gin.parse_config_file(gin_file, skip_unknown=True)
    

    Error message:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-9-2893e878fb89> in <module>()
         55 # Parse gin config,
         56 with gin.unlock_config():
    ---> 57   gin.parse_config_file(gin_file, skip_unknown=True)
         58 
         59 # Assumes only one checkpoint in the folder, 'ckpt-[iter]`.
    
    9 frames
    /usr/local/lib/python3.6/dist-packages/gin/config.py in __new__(cls, binding_key)
        515     if not _might_have_parameter(configurable_.fn_or_cls, arg_name):
        516       err_str = "Configurable '{}' doesn't have a parameter named '{}'."
    --> 517       raise ValueError(err_str.format(selector, arg_name))
        518 
        519     if configurable_.whitelist and arg_name not in configurable_.whitelist:
    
    ValueError: Configurable 'SpectralLoss' doesn't have a parameter named 'delta_delta_freq_weight'.
      In file "/content/pretrained/operative_config-0.gin", line 92
        SpectralLoss.delta_delta_freq_weight = 0.0
    

    As far as I understand, this is the intended use case of skip_unknown, correct? Is it failing because SpectralLoss is an object wrapped in @gin.register instead of @gin.configurable? Is there a way to avoid this error without modifying the gin config file itself?

    opened by jesseengel 7
  • Added isolating states

    I'm not sure if this is a direction you'd like to take this, and some would say encapsulation kind of defeats the purpose of the package - but occasionally I want to limit the effect of importing bindings, and there's only so much scopes can do.

    There are a few changes that could be made - e.g. removing globals entirely and changing references to the members of the top stack state. Can't imagine the performance change would be great either way, but happy to be directed.

    opened by jackd 7
  • AttributeError: module 'tensorflow._api.v1.random' has no attribute 'categorical'

    Problem Definition

    I receive the following error when running import gin.tf.external_configurables : AttributeError: module 'tensorflow._api.v1.random' has no attribute 'categorical'

    Install Process

    I installed gin-config from GitHub, checking out an older version to avoid error #9:

    git clone https://github.com/google/gin-config.git
    git checkout 89e9c79d465263ce825e718d90413e2bcffadd64
    python -m setup.py install
    

    Traceback

    Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 05:52:31) 
    [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import gin.tf.external_configurables
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<frozen importlib._bootstrap>", line 971, in _find_and_load
      File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
      File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
      File "/Users/user-name/rl/lib/python3.6/site-packages/gin_config-0.1.2-py3.6.egg/gin/tf/external_configurables.py", line 99, in <module>
    AttributeError: module 'tensorflow._api.v1.random' has no attribute 'categorical'
    

    Versions

    tensorflow   1.12.0
    gin-config   0.1.2
    

    Any assistance is greatly appreciated!!

    opened by jinPrelude 5
  • Feature Request: Remove restriction to define all configurables before parsing config file?

    I'm working in a Jupyter notebook. I would like to import gin and call gin.parse_config_file('config.gin') at the top of the notebook. Then, inside the notebook, I would like to define a @gin.configurable function which is only needed inside this notebook. Currently, when I do this, I get an error at the point where I parse the config file (before defining the configurable function):

    ValueError: No configurable matching 'my_function'.
      In file "config.gin", line 3
        my_function.my_arg = 'my_string'
    

    Is this behavior intentional? If so, why? (I'm fairly new to python, so it could be that there's a great reason for this, but it's not obvious to me.)

    I think this feature request would make gin more flexible and easier to use in this setting, and it seems like a reasonable behavior given that gin already requires configurable names to be unique. It seems like one of the benefits of gin is providing deep configuration of your python functions and classes, meaning that I can, for example, configure the constructor of a particular class without changing any of the code which instantiates that class to know about the configuration. So it seems counter-intuitive, then, that the same library would require me to define all of my configurable functions and classes in advance of loading my config file, which essentially means that I need an executable script which is completely independent of any (configurable) function or class definitions.

    Without knowing the technical complexity of the change, it seems like the only trade-off in supporting this feature request is that we would lose the ability to give this kind of error message (shown above) upon parsing a config file, because we wouldn't know in advance whether a particular configurable will ever be defined.

    opened by colllin 5
  • Use stored configuration in another script

    Hi, is it possible to store some of the configuration in a file and load it into another script? For example, I have 'train.py' and a configuration file for this script.

    @gin.configurable  
    def train(agent, environment):  
        ...  
    

    gin:

    train.agent = @DQN
    train.environment = @Gym
    ... # some other arguments not needed in eval.py
    

    I have another script called 'eval.py', and I need to reload an agent and an environment.

    @gin.configurable
    def eval(agent, environment):
        ...
    

    However, since the two functions have different names, I could not reuse the configuration file.

    I tried applying @gin.configurable('main') to both functions, but since train takes additional arguments that are not needed in the eval function, it did not work.

    I also tried creating functions such as get_agent and get_environment that simply take an argument and return it, but this does not seem to be a good design choice.

    Do you have any suggestion to resolve this issue? Thank you.

    opened by jdubkim 4
  • Made some fixes:

    • A text reference to the "Full hierarchical configuration" section had the wrong number (4 instead of 5). Replaced it with a permalink to make it number-agnostic.
    • In the syntax quick reference table, replaced parameter by param in the description column for consistency with the syntax.
    cla: no 
    opened by copybara-service[bot] 4
  • Adds support for PyTorch

    Adds external configurables for PyTorch:

    • losses
    • optimizers
    • learning rate schedulers
    • activation functions
    • random distributions
    • constants

    This is an updated version of #21 - credit goes to @georgepar

    cla: yes 
    opened by valentindey 4
  • [Feature Requests] Support configuration aliases / names that persist through code refactoring

    Hello project maintainers,

    We started evaluating gin-config for wider use in some of our projects. One thing I noticed is that gin config files are tied to symbol names in code (i.e. class names and function names).

    In repositories with high change velocity, refactoring class and function names tends to break their associated config files. Could you provide functionality to specify stable aliases for configuration names? For example, like this:

    @gin.configurable(alias='model_training_config')
    def train_model(learning_rate):
      pass
    
    model_training_config.learning_rate=0.001
    

    The goal is that when we refactor and change the function name, its associated configuration should not break. This functionality is usually available in dependency injection frameworks in general.

    Thank you very much for your time.

    -Yi

    opened by yzhuang 4
  • Saving config to wandb/getting a dict of all config values

    Hello,

    How exactly would one save a config to a wandb format?

    More concretely, how would one get a dict-like object that gives access to all values that gin could potentially modify?

    Regards, Wamiq

    opened by wamiq-reyaz 0
  • Feature suggestion: defining arbitrary functions only using gin

    This may be kind of hacky, but what if I wanted to use some function and define it only via a gin file? This is the solution I came up with. Function:

    # utils.py
    import importlib

    import gin

    @gin.configurable
    def arbitrary_func(module_name, func_name, kwargs):
      """Returns the result of calling a function from a module."""
      module = importlib.import_module(module_name)
      func = getattr(module, func_name)
      return func(**kwargs)
    

    Gin file:

    import utils
    lr_schedule/utils.arbitrary_func:
        module_name = "optax"
        func_name = "linear_schedule"
        kwargs = { "init_value":0.005, "end_value":0.1, "transition_steps": 10000 }
    
    optimizer_config.AdamConfig:
        lr_schedule = @lr_schedule/utils.arbitrary_func()
    

    Let me know what you think.

    opened by OhadRubin 0
  • Evaluate functions with parameters in configs

    We are currently working on a project that uses random search to find some hyperparameters and are looking for a way to achieve that with gin. Our ideal solution would basically look like this:

    import Model
    import random_search
    
    Model.learning_rate = @random_search(1, 2, 3, 4, 5)
    Model.num_layers = @random_search(8, 16, 32)
    

    Where gin would call the function random_search we have defined somewhere in our project with the given arguments.

    I guess in general, being able to use functions that are evaluated while parsing would be nice to have; I can see other use cases benefiting as well.

    We explored different workarounds, first decorating random_search itself with @gin.configurable and having it add a macro to the config, which leads to something like this:

    import Model
    import random_search
    
    random_search.learning_rate = [1, 2, 3, 4, 5]
    Model.learning_rate = %LEARNING_RATE
    random_search.num_layers = [8, 16, 32]
    Model.num_layers = %NUM_LAYERS
    

    Which works but is just a bit more cumbersome, as every function I'd want to use like this needs to bind the macros now. Additionally, this syntax might be confusing to new users as it is unclear where the macro binding comes from.

    For a while we also preprocessed/parsed the gin files ourselves and passed the rewritten files to gin, allowing a syntax as in the example above (only for one specific function, not in general) that got replaced by the evaluated function call. E.g. the line Model.learning_rate = random_search(1, 2, 3, 4, 5) became Model.learning_rate = 4 before gin parsed the contents. However, our parser was quite brittle, and this didn't work for included files, as those were parsed only by gin and our syntax of course didn't work in the gin parser.

    This is why we changed to our current approach:

    ...
    # Optimizer params
    Adam.weight_decay = 1e-6
    optimizer/random_search.class_to_configure = @Adam
    optimizer/random_search.lr = [3e-4, 1e-4, 3e-5, 1e-5]
    
    # Encoder params
    LSTMNet.input_dim = %EMB
    LSTMNet.num_classes = %NUM_CLASSES
    model/random_search.class_to_configure = @LSTMNet
    model/random_search.hidden_dim = [32, 64, 128, 256]
    model/random_search.layer_dim = [1, 2, 3]
    
    run_random_searches.scopes = ["model", "optimizer"]
    

    run_random_searches.scopes defines the scopes that the random search runs in. Each scope represents a class which will get bindings with randomly searched parameters. In this example, we have the two scopes model and optimizer. For each scope a class_to_configure needs to be set to the class it represents, in this case LSTMNet and Adam respectively. We can add whichever parameter we want to the classes following this syntax:

    run_random_searches.scopes = ["<scope>", ...]
    <scope>/random_search.class_to_configure = @<SomeClass>
    <scope>/random_search.<param> = ['list', 'of', 'possible', 'values']
    

    The scopes take care of adding the parameters only to the pertinent classes, whereas the random_search() function actually randomly chooses a value and binds it to the gin configuration.

    If we want to overwrite the model configuration in a different file, this can be done easily:

    include "configs/models/LSTM.gin"
    
    Adam.lr = 1e-4
    
    model/random_search.hidden_dim = [100, 200]
    

    This configuration for example overwrites the lr parameter of Adam with a concrete value, while it only specifies a different search space for hidden_dim of LSTMNet to run the random search on.

    The code running the random search looks like this:

    from typing import Any

    import gin
    import numpy as np

    @gin.configurable
    def random_search(class_to_configure: type = gin.REQUIRED, **kwargs: list) -> list[tuple[str, Any]]:
        """Randomly searches parameters for a class and sets gin bindings.
    
        Args:
            class_to_configure: The class that gets configured with the parameters.
            kwargs: A dict containing the name of the parameter and a list of possible values.
    
        Returns:
            The randomly searched parameters.
        """
        randomly_searched_params = []
        for param, values in kwargs.items():
            param_to_set = f"{class_to_configure.__name__}.{param}"
            if f"{param_to_set}=" in gin.config_str().replace(" ", ""):
                continue  # hyperparameter is already set in the config (e.g. from experiment), so skip random search
            value = values[np.random.randint(len(values))]
            randomly_searched_params += [(param_to_set, value)]
        return randomly_searched_params
    
    
    @gin.configurable
    def run_random_searches(scopes: list[str] = gin.REQUIRED) -> list[tuple[str, Any]]:
        """Executes random searches for the different scopes defined in gin configs.
    
        Args:
            scopes: The gin scopes to explicitly set.
    
        Returns:
            The randomly searched parameters.
        """
        randomly_searched_params = []
        for scope in scopes:
            with gin.config_scope(scope):
                randomly_searched_params += random_search()
        return randomly_searched_params
    

    This works fairly well for our current setup, but natively supporting function evaluation with parameters would still be preferable.

    Have there been any discussions regarding this topic that I missed, or are there any counterarguments to supporting function calls? Or did I just plain miss some existing functionality that enables something like this?

    opened by HendrikSchmidt 0
  • Feature request: Read gin file from google cloud bucket path

    Hey, I've seen libraries like T5X that are able to read model configs from cloud buckets. As a product from Google, I kind of expected this feature to be implemented here and was surprised to find it wasn't. Implementation plan:

    1. Save the gin file from the remote path into a temporary file.
    2. Use that as the gin file.
    opened by OhadRubin 0
  • A gin config file syntax highlighter for VSCode

    Hello to the gin-config team 👋

    Thanks for the great tool.

    Because I am using gin in many of my projects and reading all-white text hurts my eyes, I have made a little syntax highlighter for VSCode, which I thought I'd share here: https://github.com/NielsPichon/gin-VSCode-Extension

    It is also available directly in the VSCode marketplace (for free, obviously): https://marketplace.visualstudio.com/items?itemName=NielsPichon.pink-pepper-gin

    Have a nice day

    opened by NielsPichon 0
  • Add a registration method for enum classes

    There are instances where we may want to pass an enum as the value of a parameter in a gin config. However, because enums are in most cases not derivable, it is usually not possible to decorate an enum class with gin.register.

    This MR introduces a new register_enum function which works around this problem. It essentially uses constant() under the hood to register the enum class. I think this is OK, as enum classes are not meant to be directly instantiated, so no arguments need to be parameterized in the gin config file.

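    As a rough sketch of the underlying idea (using a made-up enum; the actual register_enum implementation lives in this PR), each enum member can be exposed through gin.constant:

    import enum

    import gin

    class Activation(enum.Enum):
      RELU = 'relu'
      TANH = 'tanh'

    # Expose each member as a Gin constant, referenced in configs as
    # e.g. %Activation.RELU.
    for member in Activation:
      gin.constant(f'Activation.{member.name}', member)
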
    opened by NielsPichon 0