DeepOBS: A Deep Learning Optimizer Benchmark Suite

Overview

DeepOBS is a benchmarking suite that drastically simplifies, automates and improves the evaluation of deep learning optimizers.

It can evaluate the performance of new optimizers on a variety of real-world test problems and automatically compare them with realistic baselines.

DeepOBS automates several steps when benchmarking deep learning optimizers (a minimal usage sketch is shown below):

  • Downloading and preparing data sets.
  • Setting up test problems consisting of contemporary data sets and realistic deep learning architectures.
  • Running the optimizers on multiple test problems and logging relevant metrics.
  • Reporting and visualizing the results of the optimizer benchmark.

DeepOBS Output
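
A minimal usage sketch with the 1.2.0-beta PyTorch runner is shown below. It roughly follows the beta documentation; the hyperparameter specification format and the exact run() signature may differ between versions:

from torch.optim import SGD
from deepobs import pytorch as pt

# Declare which optimizer to benchmark and which hyperparameters it exposes.
optimizer_class = SGD
hyperparams = {"lr": {"type": float},
               "momentum": {"type": float, "default": 0.99},
               "nesterov": {"type": bool, "default": False}}

runner = pt.runners.StandardRunner(optimizer_class, hyperparams)

# Train on one test problem; DeepOBS prepares the data, builds the model,
# logs the relevant metrics and writes a .json output file for the analyzer.
runner.run(testproblem="quadratic_deep", hyperparams={"lr": 1e-2}, num_epochs=10)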

This branch contains the beta of version 1.2.0 with TensorFlow and PyTorch support. It is currently in a pre-release state: not all features are implemented yet, and most notably we do not yet provide baselines for this version.

The full documentation of this beta version is available on readthedocs: https://deepobs-with-pytorch.readthedocs.io/

The paper describing DeepOBS has been accepted for ICLR 2019 and can be found here: https://openreview.net/forum?id=rJg6ssC5Y7

If you find any bugs in DeepOBS, or find it hard to use, please let us know. We are always interested in feedback and ways to improve DeepOBS.

Installation

pip install -e git+https://github.com/fsschneider/[email protected]#egg=DeepOBS

We tested the package with Python 3.6, TensorFlow version 1.12, Torch version 1.1.0 and Torchvision version 0.3.0. Other versions might work, and we plan to expand compatibility in the future.
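
To verify that the installation worked, a quick import check should be enough (deepobs exposes a __version__ attribute, which the setup script itself imports):

import deepobs
print(deepobs.__version__)  # prints the installed version string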

Further tutorials and a suggested protocol for benchmarking deep learning optimizers can be found on https://deepobs-with-pytorch.readthedocs.io/

Comments
  • Request: Share the hyper-parameters found in the grid search

    To lessen the burden of re-running the benchmark, would it be possible to publish the optimal hyper-parameters somewhere?

    By reusing those hyper-parameters, one would avoid the most computationally demanding part of reproducing the results (by 1-2 orders of magnitude).

    opened by jotaf98 2
  • Add functionality to skip existing runs, plotting modes, some refactoring

    • Adding parameter skip_if_exists to runner.run
      • Default value is set such that the current behavior is maintained
      • By setting it to True, runs that already have a .json output file will not be executed again (see the usage sketch below)
    • Possible extensions
      • Make skip_if_exists arg-parsable
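
    For illustration, the proposed usage would look roughly like this (hypothetical sketch; the runner construction mirrors the usage sketch in the overview above, and exact signatures may differ):

    from torch.optim import SGD
    from deepobs import pytorch as pt

    runner = pt.runners.StandardRunner(SGD, {"lr": {"type": float}})

    # With the proposed flag, re-running an already-finished setting becomes a
    # no-op if its .json output file exists; the default keeps current behavior.
    runner.run(testproblem="quadratic_deep", hyperparams={"lr": 0.01},
               num_epochs=10, skip_if_exists=True)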
    opened by f-dangel 2
  • KeyError: 'optimizer_hyperparams'

    (Apologies for creating multiple issues in a row -- it seemed cleaner to keep them separate.)

    I downloaded the data from DeepOBS_Baselines, and attempted to run example_analyze_pytorch.py. Unfortunately DeepOBS seems to look for keys in the JSON files that don't exist:

    $ python example_analyze_pytorch.py
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:144: RuntimeWarning: Metric valid_accuracies does not exist for testproblem quadratic_deep. We now use fallback metric valid_losses
      default_metric), RuntimeWarning)
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:229: RuntimeWarning: All settings for /scratch/local/ssd/user/data/deepobs/quadratic_deep/SGD on test problem quadratic_deep have the same number of seeds runs. Mode 'most' does not make sense and we use the fallback mode 'final'
      .format(optimizer_path, testproblem_name), RuntimeWarning)
    {'Performance': 127.96759578159877, 'Speed': 'N.A.', 'Hyperparameters': {'lr': 0.01, 'momentum': 0.99, 'nesterov': False}, 'Training Parameters': {}}
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:144: RuntimeWarning: Metric valid_accuracies does not exist for testproblem quadratic_deep. We now use fallback metric valid_losses
      default_metric), RuntimeWarning)
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:229: RuntimeWarning: All settings for /scratch/local/ssd/user/data/deepobs/quadratic_deep/SGD on test problem quadratic_deep have the same number of seeds runs. Mode 'most' does not make sense and we use the fallback mode 'final'
      .format(optimizer_path, testproblem_name), RuntimeWarning)
    /users/user/Research/deepobs/deepobs/analyzer/shared_utils.py:150: RuntimeWarning: Cannot fallback to metric valid_losses for optimizer MomentumOptimizer on testproblem quadratic_deep. Will now fallback to metric test_losses
      testproblem_name), RuntimeWarning)
    /users/user/miniconda3/lib/python3.7/site-packages/numpy/core/_methods.py:193: RuntimeWarning: invalid value encountered in subtract
      x = asanyarray(arr - arrmean)
    /users/user/miniconda3/lib/python3.7/site-packages/numpy/lib/function_base.py:3949: RuntimeWarning: invalid value encountered in multiply
      x2 = take(ap, indices_above, axis=axis) * weights_above
    Traceback (most recent call last):
      File "example_analyze_pytorch.py", line 17, in <module>
        analyzer.plot_optimizer_performance(result_path, reference_path=base + '/deepobs/baselines/quadratic_deep/MomentumOptimizer')
      File "/users/user/Research/deepobs/deepobs/analyzer/analyze.py", line 514, in plot_optimizer_performance
        which=which)
      File "/users/user/Research/deepobs/deepobs/analyzer/analyze.py", line 462, in _plot_optimizer_performance
        optimizer_path, mode, metric)
      File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 206, in create_setting_analyzer_ranking
        setting_analyzers = _get_all_setting_analyzer(optimizer_path)
      File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 184, in _get_all_setting_analyzer
        setting_analyzers.append(SettingAnalyzer(sett_path))
      File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 260, in __init__
        self.aggregate = aggregate_runs(path)
      File "/users/user/Research/deepobs/deepobs/analyzer/shared_utils.py", line 101, in aggregate_runs
        aggregate['optimizer_hyperparams'] = json_data['optimizer_hyperparams']
    KeyError: 'optimizer_hyperparams'
    

    One of the JSON files in question looks like this (data points snipped for brevity):

    {
    "train_losses": [353.9337594168527, 347.5994306291853, 331.35902622767856, 307.2468915666853, ... 97.28871154785156, 91.45470428466797, 96.45774841308594, 86.27237701416016],
    "optimizer": "MomentumOptimizer",
    "testproblem": "quadratic_deep",
    "weight_decay": null,
    "batch_size": 128,
    "num_epochs": 100,
    "learning_rate": 1e-05,
    "lr_sched_epochs": null,
    "lr_sched_factors": null,
    "random_seed": 42,
    "train_log_interval": 1,
    "hyperparams": {"momentum": 0.99, "use_nesterov": false}
    }
    

    The obvious culprit seems to be the key hyperparams, as opposed to optimizer_hyperparams; this occurs only in some of the JSON files.

    Edit: Having fixed this, there is a further KeyError on training_params. Perhaps these files were generated with different versions of the package.
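
    A backwards-compatible read of both file formats could look like this sketch (key names taken from the traceback and the JSON above; training_params could be handled the same way):

    def read_optimizer_hyperparams(json_data):
        """Prefer the newer key; fall back to the older one written by earlier versions."""
        if "optimizer_hyperparams" in json_data:
            return json_data["optimizer_hyperparams"]
        return json_data.get("hyperparams", {})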

    opened by jotaf98 3
  • Installation error / unmentioned dependency "bayes_opt"

    Attempting to install by following the documentation's instructions, after installing all the mentioned dependencies with conda, results in the following error:

    (base) user@server:~$ pip install -e git+https://github.com/abahde/DeepOBS.git@master#egg=DeepOBS
    Obtaining DeepOBS from git+https://github.com/abahde/DeepOBS.git@master#egg=DeepOBS
      Cloning https://github.com/abahde/DeepOBS.git (to revision master) to ./src/deepobs
      Running command git clone -q https://github.com/abahde/DeepOBS.git /users/user/src/deepobs
        ERROR: Complete output from command python setup.py egg_info:
        ERROR: Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/users/user/src/deepobs/setup.py", line 5, in <module>
            from deepobs import __version__
          File "/users/user/src/deepobs/deepobs/__init__.py", line 5, in <module>
            from . import analyzer
          File "/users/user/src/deepobs/deepobs/analyzer/__init__.py", line 2, in <module>
            from . import analyze
          File "/users/user/src/deepobs/deepobs/analyzer/analyze.py", line 12, in <module>
            from ..tuner.tuner_utils import generate_tuning_summary
          File "/users/user/src/deepobs/deepobs/tuner/__init__.py", line 4, in <module>
            from .bayesian import GP
          File "/users/user/src/deepobs/deepobs/tuner/bayesian.py", line 3, in <module>
            from bayes_opt import UtilityFunction
        ModuleNotFoundError: No module named 'bayes_opt'
        ----------------------------------------
    ERROR: Command "python setup.py egg_info" failed with error code 1 in /users/user/src/deepobs/
    

    Is this bayes_opt package really necessary? It seems a bit tangential to the package's purpose (or at most optional).

    Edit: It turns out that bayesian-optimization has relatively few requirements so this is not a big issue; perhaps just the docs need updating.

    As an aside, it might be possible to suggest a single conda command that installs everything: conda install -c conda-forge seaborn matplotlib2tikz bayesian-optimization.

    opened by jotaf98 0
  • Wall-clock time plots

    Optimizers can have very different runtimes per iteration, especially 2nd-order ones.

    This means that sometimes, despite promises of "faster" convergence, the wall-clock time taken to converge is disappointingly long.

    Is there any chance DeepOBS could implement wall-clock time plots, in addition to per-epoch ones? (E.g. X axis in minutes or hours.)
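
    As a purely hypothetical illustration (not DeepOBS API), such a plot only needs per-epoch wall-clock times recorded during training:

    import time
    import matplotlib.pyplot as plt

    def train_one_epoch():
        """Placeholder for one epoch of training; returns the epoch's training loss."""
        time.sleep(0.1)  # stand-in for real work
        return 1.0

    epoch_times, train_losses = [], []
    for _ in range(5):
        start = time.time()
        train_losses.append(train_one_epoch())
        epoch_times.append(time.time() - start)

    minutes = [sum(epoch_times[:i + 1]) / 60 for i in range(len(epoch_times))]
    plt.plot(minutes, train_losses)  # loss against wall-clock time instead of epochs
    plt.xlabel("wall-clock time (minutes)")
    plt.ylabel("training loss")
    plt.show()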

    opened by jotaf98 4
  • Improve estimate_runtime()

    There are a couple of improvements that I suggest:

    • [ ] Return the results not as a string, but as a dict or an object (see the sketch after this list).
    • [ ] (Maybe; to be discussed) Include the ability to test multiple optimizers simultaneously.
    • [ ] Report standard deviation and individual runtimes for SGD.
    • [ ] Add a function that generates a figure, similar to https://github.com/ludwigbald/probprec/blob/master/code/exp_perf_prec/analyze.py
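
    A hypothetical return structure for the first point could look like this (the field names are illustrative, not the current API):

    runtime_estimate = {
        "optimizer": "SGD",
        "mean_runtime_per_epoch": 12.3,             # seconds
        "std_runtime_per_epoch": 0.8,               # standard deviation over repetitions
        "individual_runtimes": [11.9, 12.1, 12.9],  # seconds per repetition
        "overhead": 1.0,                            # ratio relative to the SGD baseline
    }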
    opened by ludwigbald 0
  • Implement validation set split also for TensorFlow

    In PyTorch, we randomly split a validation set off the training set; it has the same size as the test set. The tuner and the analyzer use the validation performance to determine the best hyperparameter setting. This split should be implemented for the TensorFlow data sets as well. We have already prepared the test problem and runner implementations for this change; the only change that still needs to be made to the runner is marked in the code with a ToDo flag.
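
    A framework-agnostic sketch of the split described above (the TensorFlow data sets would need equivalent logic; the sizes below are only examples):

    import numpy as np

    def train_valid_indices(num_train, test_size, seed=42):
        """Randomly reserve test_size training examples as the validation set."""
        rng = np.random.RandomState(seed)
        perm = rng.permutation(num_train)
        return perm[test_size:], perm[:test_size]  # train indices, validation indices

    train_idx, valid_idx = train_valid_indices(num_train=50000, test_size=10000)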

    bug enhancement 
    opened by abahde 0
Releases(v1.2.0-beta)
  • v1.2.0-beta(Sep 17, 2019)

    Draft of release notes:

    • A PyTorch implementation (though not for all test problems yet)
    • A refactored Analyzer module (more flexibility and interpretability)
    • A Tuning module that automates the tuning process
    • Some minor improvements to the TensorFlow code (important bugfix: fmnist_mlp now really uses F-MNIST and not MNIST)
    • A validation set metric for each test problem in the PyTorch code; the TensorFlow code, however, still comes without validation sets.
    • Runners now break from training if the loss becomes NaN.
    • Runners now return the output dictionary.
    • Additional training parameters can be passed as kwargs to the run() method.
    • Numpy is now also seeded.
    • Small and large benchmark sets are now global variables in DeepOBS.
    • Default test problem settings are now a global variable in DeepOBS.
    • JSON output is now dumped in human readable format.
    • Accuracy is now only printed if available.
    • Simplified Runner API.
    • Learning Rate Schedule Runner is now an extra class.
Owner
Aaron Bahde
Graduate student at the University of Tübingen, Methods of Machine Learning
Ranger deep learning optimizer rewrite to use newest components

Ranger21 - integrating the latest deep learning components into a single optimizer Ranger deep learning optimizer rewrite to use newest components Ran

Less Wright 266 Dec 28, 2022
ESGD-M - A stochastic non-convex second order optimizer, suitable for training deep learning models, for PyTorch

ESGD-M - A stochastic non-convex second order optimizer, suitable for training deep learning models, for PyTorch

Katherine Crowson 53 Dec 29, 2022
PyTorch implementation DRO: Deep Recurrent Optimizer for Structure-from-Motion

DRO: Deep Recurrent Optimizer for Structure-from-Motion This is the official PyTorch implementation code for DRO-sfm. For technical details, please re

Alibaba Cloud 56 Dec 12, 2022
AdamW optimizer and cosine learning rate annealing with restarts

AdamW optimizer and cosine learning rate annealing with restarts This repository contains an implementation of AdamW optimization algorithm and cosine

Maksym Pyrozhok 133 Dec 20, 2022
A mini library for Policy Gradients with Parameter-based Exploration, with reference implementation of the ClipUp optimizer from NNAISENSE.

PGPElib A mini library for Policy Gradients with Parameter-based Exploration [1] and friends. This library serves as a clean re-implementation of the

NNAISENSE 56 Jan 1, 2023
auto-tuning momentum SGD optimizer

YellowFin YellowFin is an auto-tuning optimizer based on momentum SGD which requires no manual specification of learning rate and momentum. It measure

Jian Zhang 288 Nov 19, 2022
Ranger - a synergistic optimizer using RAdam (Rectified Adam), Gradient Centralization and LookAhead in one codebase

Ranger-Deep-Learning-Optimizer Ranger - a synergistic optimizer combining RAdam (Rectified Adam) and LookAhead, and now GC (gradient centralization) i

Less Wright 1.1k Dec 21, 2022
Apollo optimizer in tensorflow

Apollo Optimizer in Tensorflow 2.x Notes: Warmup is important with Apollo optimizer, so be sure to pass in a learning rate schedule vs. a constant lea

Evan Walters 1 Nov 9, 2021
This is an implementation of Googles Yogi-Optimizer in Keras (tf.keras)

Yogi-Optimizer_Keras This is an implementation of Googles Yogi-Optimizer in Keras (tf.keras) The NeurIPS-Paper can be found here: http://papers.nips.c

null 14 Sep 13, 2022
An Implicit Function Theorem (IFT) optimizer for bi-level optimizations

iftopt An Implicit Function Theorem (IFT) optimizer for bi-level optimizations. Requirements Python 3.7+ PyTorch 1.x Installation $ pip install git+ht

The Money Shredder Lab 2 Dec 2, 2021
AdamW optimizer for bfloat16 models in pytorch.

Image source AdamW optimizer for bfloat16 models in pytorch. Bfloat16 is currently an optimal tradeoff between range and relative error for deep netwo

Alex Rogozhnikov 8 Nov 20, 2022
Storage-optimizer - Identify potential optimizations on the cloud storage accounts

Storage Optimizer Identify potential optimizations on the cloud storage accounts

Zaher Mousa 1 Feb 13, 2022
QuanTaichi evaluation suite

QuanTaichi: A Compiler for Quantized Simulations (SIGGRAPH 2021) Yuanming Hu, Jiafeng Liu, Xuanda Yang, Mingkuan Xu, Ye Kuang, Weiwei Xu, Qiang Dai, W

Taichi Developers 120 Jan 4, 2023
Semantic Scholar's Author Disambiguation Algorithm & Evaluation Suite

S2AND This repository provides access to the S2AND dataset and S2AND reference model described in the paper S2AND: A Benchmark and Evaluation System f

AI2 54 Nov 28, 2022
Evaluation suite for large-scale language models.

This repo contains code for running the evaluations and reproducing the results from the Jurassic-1 Technical Paper (see blog post), with current support for running the tasks through both the AI21 Studio API and OpenAI's GPT3 API.

null 71 Dec 17, 2022
Signals-backend - A suite of card games written in Python

Card game A suite of card games written in the Python language. Features coming

null 1 Feb 15, 2022
A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models.

WILDS is a benchmark of in-the-wild distribution shifts spanning diverse data modalities and applications, from tumor identification to wildlife monitoring to poverty mapping.

P-Lambda 437 Dec 30, 2022
DeepMind Alchemy task environment: a meta-reinforcement learning benchmark

The DeepMind Alchemy environment is a meta-reinforcement learning benchmark that presents tasks sampled from a task distribution with deep underlying structure.

DeepMind 188 Dec 25, 2022