BBopt

Black box hyperparameter optimization made easy.

Overview

BBopt aims to provide the easiest hyperparameter optimization you'll ever do. Think of BBopt like Keras (back when Theano was still a thing) for black box optimization: one universal interface for working with any black box optimization backend.

BBopt's features include:

  • a universal API for defining your tunable parameters based on the standard library random module (so you don't even have to learn anything new!),
  • tons of state-of-the-art black box optimization algorithms such as Gaussian Processes from scikit-optimize or Tree Structured Parzen Estimation from hyperopt for tuning parameters,
  • the ability to switch algorithms while retaining all previous trials and even dynamically choose the best algorithm for your use case,
  • multiprocessing-safe data saving to enable running multiple trials in parallel,
  • lots of data visualization methods, including support for everything in skopt.plots,
  • support for optimizing over conditional parameters that only appear during some runs,
  • support for all major Python versions (2.7 or 3.6+), and
  • a straightforward interface for extending BBopt with your own custom algorithms.

Once you've defined your parameters, training a black box optimization model on those parameters is as simple as

bbopt your_file.py

and serving your file with the optimized parameters is as easy as

import your_file

Questions? Head over to BBopt's Gitter if you have any questions/comments/etc. regarding BBopt.

Installation

To get going with BBopt, simply install it with

pip install bbopt

or, to also install the extra dependencies necessary for running BBopt's examples, run pip install bbopt[examples].

Basic Usage

To use BBopt, just add

# BBopt setup:
from bbopt import BlackBoxOptimizer
bb = BlackBoxOptimizer(file=__file__)
if __name__ == "__main__":
    bb.run()

to the top of your file, then call a random method like

x = bb.uniform("x", 0, 1)

for each of the tunable parameters in your model, and finally add

bb.maximize(y)      or      bb.minimize(y)

to set the value being optimized. Then, run

bbopt <your file here> -n <number of trials> -j <number of processes>

to train your model, and just

import <your module here>

to serve it!
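
For example, here's what a complete tunable file might look like (a minimal sketch; the file name and the toy quadratic objective are just for illustration):

# example_problem.py

# BBopt setup:
from bbopt import BlackBoxOptimizer
bb = BlackBoxOptimizer(file=__file__)
if __name__ == "__main__":
    bb.run()

# Tunable parameter:
x = bb.uniform("x", 0, 10)

# Toy objective with its maximum at x = 3:
y = -(x - 3) ** 2

# Value to optimize:
bb.maximize(y)

Running bbopt example_problem.py -n 20 trains for 20 trials, after which import example_problem serves the best x found so far.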

Note: Neither __file__ nor __name__ is available in Jupyter notebooks. In that case, just set up BBopt with:

import os

# BBopt setup:
from bbopt import BlackBoxOptimizer
bb = BlackBoxOptimizer(data_dir=os.getcwd(), data_name="my_project_name")

Examples

Some examples of BBopt in action:

  • random_example.py: Extremely basic example using the random backend.
  • skopt_example.py: Slightly more complex example making use of the gaussian_process algorithm from the scikit-optimize backend.
  • hyperopt_example.py: Example showcasing the tree_structured_parzen_estimator algorithm from the hyperopt backend.
  • meta_example.py: Example of using run_meta to dynamically choose an algorithm.
  • numpy_example.py: Example which showcases how to have numpy array parameters.
  • conditional_skopt_example.py: Example of having black box parameters that are dependent on other black box parameters using the gaussian_process algorithm from the scikit-optimize backend.
  • conditional_hyperopt_example.py: Example of doing conditional parameters with the tree_structured_parzen_estimator algorithm from the hyperopt backend.
  • bask_example.py: Example of using conditional parameters with a semi-random target using the bask_gp algorithm from the bayes-skopt backend.
  • pysot_example.py: Example of using the full API to implement an optimization loop and avoid the overhead of running the entire file multiple times while making use of the pySOT backend.
  • keras_example.py: Complete example of using BBopt to optimize a neural network built with Keras. Uses the full API to implement its own optimization loop and thus avoid the overhead of running the entire file multiple times.
  • any_fast_example.py: Example of using the default algorithm "any_fast" to dynamically select a good backend.
  • mixture_example.py: Example of using the mixture backend to randomly switch between different algorithms.
  • json_example.py: Example of using json instead of pickle to save parameters.

Full API

Command-Line Interface

The bbopt command is extremely simple in terms of what it actually does. For the command bbopt <file> -n <trials> -j <processes>, BBopt simply runs python <file> a number of times equal to <trials>, split across <processes> different processes.

Why does this work? If you're using the basic boilerplate, then running python <file> will trigger the if __name__ == "__main__": clause, which will run a training episode. But when you go to import your file, the if __name__ == "__main__": clause won't get triggered, and you'll just get served the best parameters found so far. Since the command-line interface is so simple, advanced users who want to use the full API instead of the boilerplate need not use the bbopt command at all. If you want more information on the bbopt command, just run bbopt -h.

Black Box Optimization Methods

Constructor

BlackBoxOptimizer(file, *, tag=None, protocol=None)

BlackBoxOptimizer(data_dir, data_name, *, tag=None, protocol=None)

Create a new bb object; this should be done at the beginning of your program as all the other functions are methods of this object.

file is used by BBopt to determine where to save and load data, and should usually just be set to __file__. tag allows additional customization of the BBopt data file for when multiple BBopt instances might be desired for the same file. Specifically, BBopt will save data to os.path.splitext(file)[0] + "_" + tag + extension.

Alternatively, data_dir and data_name can be used to specify where to save and load data to. In that case, BBopt will save data to os.path.join(data_dir, data_name + extension) if no tag is passed, or os.path.join(data_dir, data_name + "_" + tag + extension) if a tag is given.

protocol determines how BBopt serializes data. If None (the default), BBopt will use pickle protocol 2, which is the highest version that works on both Python 2 and Python 3 (unless a json file is present, in which case BBopt will use json). To use the newest protocol instead, pass protocol=-1. If protocol="json", BBopt will use json instead of pickle, which is occasionally useful if you want to access your data outside of Python.
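
For example (a sketch; the tags, paths, and names here are arbitrary):

from bbopt import BlackBoxOptimizer

# default: save data alongside this file
bb = BlackBoxOptimizer(file=__file__)

# two independent optimizers for the same file via tags
bb_a = BlackBoxOptimizer(file=__file__, tag="model_a")
bb_b = BlackBoxOptimizer(file=__file__, tag="model_b")

# explicit location, serialized as json for use outside of Python
bb = BlackBoxOptimizer(data_dir="/tmp/bbopt_data", data_name="my_project", protocol="json")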

run

BlackBoxOptimizer.run(alg="any_fast")

Start optimizing using the given black box optimization algorithm. Use algs to get the valid values for alg.

If this method is never called, or called with alg="serving", BBopt will just serve the best parameters found so far, which is how the basic boilerplate works. Note that, if no saved parameter data is found, and a guess is present, BBopt will use that, which is a good way of distributing your parameter values without including all your saved parameter data.

In addition to supporting all algorithms in algs, run also supports the following pseudo-algorithms which defer to run_meta:

  • "any_fast" (same as calling run_meta with a suite of algorithms selected for their speed except that some algorithms are ignored if unsupported parameter definition functions are used, e.g. normalvariate for scikit-optimize) (used if run is called with no args)
  • "any_hyperopt" (equivalent to calling run_meta with all hyperopt algorithms)
  • "any_skopt" (equivalent to calling run_meta with all scikit-optimize algorithms)
  • "any_pysot" (equivalent to calling run_meta with all pySOT algorithms)

algs

BlackBoxOptimizer.algs

A dictionary mapping the valid algorithms for use in run to the pair (backend, kwargs) of the backend and arguments to that backend that the algorithm corresponds to.

Supported algorithms are:

  • "serving" (serving backend) (used if run is never called)
  • "random" (random backend)
  • "tree_structured_parzen_estimator" (hyperopt backend)
  • "adaptive_tpe" (hyperopt backend; but only Python 3+)
  • "annealing" (hyperopt backend)
  • "gaussian_process" (scikit-optimize backend)
  • "random_forest" (scikit-optimize backend)
  • "extra_trees" (scikit-optimize backend)
  • "gradient_boosted_regression_trees" (scikit-optimize backend)
  • "bask_gaussian_process" (bayes-skopt backend)
  • "stochastic_radial_basis_function" (pySOT backend)
  • "expected_improvement" (pySOT backend)
  • "DYCORS" (pySOT backend)
  • "lower_confidence_bound" (pySOT backend)
  • "latin_hypercube" (pySOT backend)
  • "symmetric_latin_hypercube" (pySOT backend)
  • "two_factorial" (pySOT backend)
  • "epsilon_max_greedy" (mixture backend)
  • "epsilon_greedy" (bandit backend)
  • "boltzmann_exploration" (bandit backend)
  • "boltzmann_gumbel_exploration" (bandit backend) (the default meta_alg in run_meta)

Additionally, there are also some algorithms of the form safe_<other_alg> which use mixture to defer to <other_alg> if <other_alg> supports the parameter definition functions you're using, and otherwise fall back to a suitable replacement.

Note: The bayes-skopt backend is only available on Python 3.7+ and the pySOT backend is only available on Python 3+.

run_meta

BlackBoxOptimizer.run_meta(algs, meta_alg="boltzmann_gumbel_exploration")

run_meta is a special version of run that uses the meta_alg algorithm to dynamically pick an algorithm from among the given algs. Both algs and meta_alg can use any algorithms in algs.
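
For example, to let the default bandit algorithm pick between a scikit-optimize and a hyperopt algorithm on each run (a sketch):

bb.run_meta(["gaussian_process", "tree_structured_parzen_estimator"])

Since meta_alg defaults to "boltzmann_gumbel_exploration", that bandit algorithm is what chooses between the two as trials accumulate.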

run_backend

BlackBoxOptimizer.run_backend(backend, *args, **kwargs)

The base function behind run. Instead of specifying an algorithm, run_backend lets you specify the specific backend you want to call and the parameters you want to call it with. Different backends do different things with the remaining arguments:

  • scikit-optimize passes the arguments to skopt.Optimizer,
  • hyperopt passes the arguments to fmin,
  • mixture expects a distribution argument to specify the mixture of different algorithms to use, specifically a list of (alg, weight) tuples (and also admits a remove_erroring_algs bool to automatically remove erroring algorithms),
  • bayes-skopt passes the arguments to bask.Optimizer, and
  • pySOT expects a strategy (either a strategy class or one of "SRBF", "EI", "DYCORS", "LCB"), a surrogate (either a surrogate class or one of "RBF", "GP"), and a design (either an experimental design class or one of None, "latin_hypercube", "symmetric_latin_hypercube", "two_factorial").

Note: The bayes-skopt backend is only available on Python 3.7+ and the pySOT backend is only available on Python 3+.
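
For example (sketches only; the weights and strategy choices below are illustrative, and the keyword names follow the descriptions above):

# an even mixture of two algorithms, dropping any that error
bb.run_backend(
    "mixture",
    distribution=[("random", 1), ("gaussian_process", 1)],
    remove_erroring_algs=True,
)

# pySOT with an explicit strategy, surrogate, and design
bb.run_backend("pySOT", strategy="DYCORS", surrogate="RBF", design="latin_hypercube")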

minimize

BlackBoxOptimizer.minimize(value)

Finish optimizing and set the loss for this run to value. To start another run, call run again.

maximize

BlackBoxOptimizer.maximize(value)

Same as minimize but sets the gain instead of the loss.

remember

BlackBoxOptimizer.remember(info)

Update the current run's "memo" field with the given info dictionary. Useful for saving information about a run that shouldn't actually impact optimization but that you would like to have access to later (using get_best_run, for example).
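
For example, to record extra metrics from a training run (a sketch; the metric names and values are arbitrary stand-ins):

train_loss, val_loss = 0.25, 0.31  # stand-ins for metrics from your training loop
bb.remember({
    "train_loss": train_loss,
    "val_loss": val_loss,
})
bb.minimize(val_loss)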

plot_convergence

BlackBoxOptimizer.plot_convergence(ax=None, yscale=None)

Plot the running best gain/loss over the course of all previous trials. If passed, ax should be the matplotlib axis to plot on and yscale should be the scale for the y axis.

Run BBopt's keras example to generate an example plot.
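
For example, to draw the convergence and history plots (see plot_history below) side by side (a sketch assuming matplotlib is installed):

import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
bb.plot_convergence(ax=ax1)
bb.plot_history(ax=ax2, yscale="log")
plt.show()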

plot_history

BlackBoxOptimizer.plot_history(ax=None, yscale=None)

Plot the gain/loss at each point over the course of all previous trials. If passed, ax should be the matplotlib axis to plot on and yscale should be the scale for the y axis.

Run BBopt's keras example to generate an example plot.

partial_dependence

BlackBoxOptimizer.partial_dependence(i_name, j_name=None, sample_points=None, n_samples=250, n_points=40)

Calls skopt.plots.partial_dependence using previous trial data. The parameters i_name and j_name should be set to names of the parameters you want for the i and j arguments to skopt.plots.partial_dependence.

plot_partial_dependence_1D

BlackBoxOptimizer.plot_partial_dependence_1D(i_name, ax=None, yscale=None, sample_points=None, n_samples=250, n_points=40)

Plot the partial dependence of i_name on the given matplotlib axis ax and with the given y axis scale yscale. See partial_dependence for the meaning of the other parameters.

Run BBopt's keras example to generate an example plot.

plot_evaluations

BlackBoxOptimizer.plot_evaluations(bins=20)

Calls skopt.plots.plot_evaluations using previous trial data.

Run BBopt's keras example to generate an example plot.

plot_objective

BlackBoxOptimizer.plot_objective(levels=10, n_points=40, n_samples=250, size=2, zscale="linear")

Calls skopt.plots.plot_objective using previous trial data.

Run BBopt's keras example to generate an example plot.

plot_regret

BlackBoxOptimizer.plot_regret(ax=None, true_minimum=None, yscale=None)

Calls skopt.plots.plot_regret using previous trial data.

Run BBopt's keras example to generate an example plot.

get_skopt_result

BlackBoxOptimizer.get_skopt_result()

Gets an OptimizeResult object usable by skopt.plots functions. Allows for arbitrary manipulation of BBopt optimization results in scikit-optimize including any plotting functions not natively supported by BBopt.

get_current_run

BlackBoxOptimizer.get_current_run()

Get information on the current run, including the values of all parameters encountered so far and the loss/gain of the run if it has been set yet.

get_best_run

BlackBoxOptimizer.get_best_run()

Get information on the best run so far. These are the parameters that will be used if run is not called.
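
For example (a sketch; the "values" and "memo" keys are assumptions based on the data format described under get_data below):

best = bb.get_best_run()
print(best["values"])        # the best parameter values found so far
print(best.get("memo", {}))  # anything saved via remember, if present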

get_data

BlackBoxOptimizer.get_data(print_data=False)

Return a dictionary containing "params" (the parameters BBopt knows about, along with the random function and arguments each was initialized with) and "examples" (all the previous trial data BBopt has collected). If print_data is passed, the data is also pretty-printed in addition to being returned.

data_file

BlackBoxOptimizer.data_file

The path of the file where BBopt is saving data to.

is_serving

BlackBoxOptimizer.is_serving

Whether BBopt is currently using the "serving" algorithm.

tell_examples

BlackBoxOptimizer.tell_examples(examples)

Add the given examples as in get_data to memory, writing the new data to data_file. Must come before run if you want the new data to be included in the model for that run.
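
For example, to seed one optimizer with the trials gathered by another (a sketch; the data names are arbitrary):

import os
from bbopt import BlackBoxOptimizer

source_bb = BlackBoxOptimizer(data_dir=os.getcwd(), data_name="old_project")
bb = BlackBoxOptimizer(data_dir=os.getcwd(), data_name="new_project")
bb.tell_examples(source_bb.get_data()["examples"])
bb.run()  # this run's model now includes the old trials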

backend

BlackBoxOptimizer.backend

The backend object being used by the current BlackBoxOptimizer instance.

run_id

BlackBoxOptimizer.run_id

The id of the current run if started by the BBopt command-line interface.

Parameter Definition Methods

Every BBopt parameter definition method has the form

bb.<random function>(<name>, <args>, **kwargs)

where

  • the method itself specifies what distribution is being modeled,
  • the first argument is always name, a unique string identifying that parameter,
  • following name are whatever arguments are needed to specify the distribution's parameters, and
  • at the end are keyword arguments, which are the same for all the different methods. The supported kwargs are:
    • guess, which specifies the initial value for the parameter, and
    • placeholder_when_missing, which specifies what placeholder value a conditional parameter should be given if missing.

Important note: Once you bind a name to a parameter, you cannot change that parameter's options. Thus, if the options defining your parameters can vary from run to run, you must use a different name for each possible combination.
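
For example, a conditional parameter that only appears on runs where a boolean parameter is true (a sketch; the names and ranges are arbitrary):

use_dropout = bb.randbool("use_dropout", guess=True)
if use_dropout:
    # only defined on some runs, so give it a placeholder for the others
    dropout = bb.uniform("dropout_rate", 0, 0.5, placeholder_when_missing=0)
else:
    dropout = 0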

randrange

BlackBoxOptimizer.randrange(name, stop, **kwargs)

BlackBoxOptimizer.randrange(name, start, stop, step=1, **kwargs)

Create a new parameter modeled by random.randrange(start, stop, step).

Backends which support randrange: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

randint

BlackBoxOptimizer.randint(name, a, b, **kwargs)

Create a new parameter modeled by random.randint(a, b), which is equivalent to random.randrange(a, b + 1) since randint is inclusive on both ends.

Backends which support randint: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

getrandbits

BlackBoxOptimizer.getrandbits(name, k, **kwargs)

Create a new parameter modeled by random.getrandbits(k), which is equivalent to random.randrange(0, 2**k).

Backends which support getrandbits: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

choice

BlackBoxOptimizer.choice(name, seq, **kwargs)

Create a new parameter modeled by random.choice(seq), which chooses an element from seq.

Backends which support choice: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

randbool

BlackBoxOptimizer.randbool(name, **kwargs)

Create a new boolean parameter, modeled by the equivalent of random.choice([False, True]).

Backends which support randbool: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

sample

BlackBoxOptimizer.sample(name, population, k, **kwargs)

Create a new parameter modeled by random.sample(population, k), which chooses k elements from population.

Backends which support sample: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

shuffled

BlackBoxOptimizer.shuffled(name, population, **kwargs)

Create a new parameter modeled by random.shuffle(population) except that it returns the shuffled list instead of shuffling it in place. An in-place version as BlackBoxOptimizer.shuffle is also supported.

Backends which support shuffled: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

random

BlackBoxOptimizer.random(name, **kwargs)

Create a new parameter modeled by random.random(), which is equivalent to random.uniform(0, 1) except that 1 is disallowed.

Backends which support random: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

uniform

BlackBoxOptimizer.uniform(name, a, b, **kwargs)

Create a new parameter modeled by random.uniform(a, b), which uniformly selects a float in the range [a, b].

Backends which support uniform: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

loguniform

BlackBoxOptimizer.loguniform(name, min_val, max_val, **kwargs)

Create a new parameter modeled by

math.exp(random.uniform(math.log(min_val), math.log(max_val)))

which logarithmically selects a float between min_val and max_val.

Backends which support loguniform: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

normalvariate

BlackBoxOptimizer.normalvariate(name, mu, sigma, **kwargs)

Create a new parameter modeled by random.normalvariate(mu, sigma).

A shortcut for the standard normal distribution is also available via BlackBoxOptimizer.stdnormal.

Backends which support normalvariate: hyperopt, random.

lognormvariate

BlackBoxOptimizer.lognormvariate(name, mu, sigma, **kwargs)

Create a new parameter modeled by random.lognormvariate(mu, sigma) such that the natural log is a normal distribution with mean mu and standard deviation sigma.

Backends which support lognormvariate: hyperopt, random.

rand

BlackBoxOptimizer.rand(name, *shape, **kwargs)

Create a new parameter modeled by numpy.random.rand(*shape), which creates a numpy array of the given shape with entries generated uniformly in [0, 1).

Backends which support rand: scikit-optimize, hyperopt, bayes-skopt, pySOT, random.

randn

BlackBoxOptimizer.randn(name, *shape, **kwargs)

Create a new parameter modeled by numpy.random.randn(*shape), which creates a numpy array of the given shape with entries generated according to a standard normal distribution.

Backends which support randn: hyperopt, random.

param

BlackBoxOptimizer.param(name, func, *args, **kwargs)

Create a new parameter modeled by the parameter definition function func with the given arguments. This function is mostly useful if you want to use a custom backend that implements parameter definition functions not included in BBopt by default.

Writing Your Own Backend

BBopt's backend system is built to be extremely extensible, allowing anyone to write and register their own BBopt backends. The basic template for writing a BBopt backend is as follows:

from bbopt.backends.util import StandardBackend

class MyBackend(StandardBackend):
    backend_name = "my-backend"
    implemented_funcs = [
        # list the random functions you support here
        #  (you don't need to include all random functions,
        #  only base random functions, primarily randrange,
        #  choice, uniform, and normalvariate)
        ...,
    ]

    def setup_backend(self, params, **options):
        # initialize your backend; you can use params
        #  to get the args for each param
        ...

    def tell_data(self, new_data, new_losses):
        # load new data points into your backend; new_data is
        #  a list of dictionaries containing data and new_losses
        #  is a list of losses for each of those data points
        ...

    def get_next_values(self):
        # return the values you want to use for this run as a dict
        ...

MyBackend.register()
MyBackend.register_alg("my_alg")

Once you've written a BBopt backend as above, you simply need to import it to trigger the register calls and enable it to be used in BBopt. For some example BBopt backends, see BBopt's default backends (written in Coconut).
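
Once imported, the new algorithm behaves like any built-in one (a sketch; my_backend here is the hypothetical module containing the template above):

import my_backend  # triggers MyBackend.register() and MyBackend.register_alg("my_alg")

from bbopt import BlackBoxOptimizer
bb = BlackBoxOptimizer(file=__file__)
bb.run("my_alg")  # now dispatches to MyBackend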

Comments
  • run_backend with Mixture option

    Hello, first of all, very nice and easy-to-use package. For some time on Google Colab I have been trying to use the mixture functionality with multiple algorithms, but it always picks up "random". Some code which I have used:

    Jupyter cell 1:

    from bbopt.backends.mixture import MixtureBackend
    from bbopt import BlackBoxOptimizer

    Jupyter cell 2:

    bb = BlackBoxOptimizer("file", tag='index_' + str(comp_idx))

    Jupyter cell 3:

    bb.run_backend("mixture", [("random", 1), ('gaussian_process', 1), ('random_forest', 1), ('gradient_boosted_regression_trees', 1)])
    # (or with the same tuples in a different order)

    # ... model code ...

    if isinstance(bb.backend, MixtureBackend):
        bb.remember({
            "alg": bb.backend.selected_alg,
            "l1_reg": l1_reg,
            # ...
        })
    bb.maximize(vald_f1)

    Jupyter cell 4:

    loop = 70
    opt_verbose = 0
    for i in range(loop):
        try:
            run_me_RNN(opt_verbose)
            print("Summary of run {}/{}:".format(i + 1, loop))
            pprint(bb.get_current_run())
            print()
        except Exception as e:
            print('Something went wrong...', e)
            print("Summary of run {}/{}:".format(i + 1, loop))
            pprint(bb.get_current_run())
            print()

    print("\nSummary of best run:")
    pprint(bb.get_optimal_run())

    I have tried running the code a couple of times, but it always picks up the "random" algorithm. Do I need more iterations, or is there something missing at my end? On the other hand, bb.run(alg='gaussian_process') and gradient_boosted_regression_trees run perfectly fine. On a side note, I am doing multi-label classification (unbalanced dataset) and taking the validation F1 score as the value to maximize. I just want to make sure it's the right error metric that I am optimizing. Some other metrics I have are training & validation loss and training F1 score.

    Any help on the above issues will be greatly appreciated.

    Many thanks

    question 
    opened by akbaramed 8
  • TypeError: Cannot cast array data from dtype('<U4') to dtype('int64') according to the rule 'safe'

    Traceback (most recent call last):
      File "bayesian_opt_DL.py", line 137, in <module>
        run_trial()
      File "bayesian_opt_DL.py", line 89, in run_trial
        bb.run()
      File "/home/b7066789/.local/lib/python3.6/site-packages/bbopt/optimizer.py", line 107, in run
        self.run_backend(backend, **kwargs)
      File "/home/b7066789/.local/lib/python3.6/site-packages/bbopt/optimizer.py", line 95, in run_backend
        self.backend = init_backend(backend, self._examples, self._old_params, *args, **kwargs)
      File "/home/b7066789/.local/lib/python3.6/site-packages/bbopt/registry.py", line 105, in init_backend
        return backend_registry[name](examples, params, *args, **kwargs)
      File "/home/b7066789/.local/lib/python3.6/site-packages/bbopt/backends/hyperopt.py", line 137, in __init__
        (next)(FMinIter(algo, domain, trials, rstate, **kwargs))
      File "/home/b7066789/.local/lib/python3.6/site-packages/hyperopt/fmin.py", line 237, in __next__
        self.run(1, block_until_done=self.asynchronous)
      File "/home/b7066789/.local/lib/python3.6/site-packages/hyperopt/fmin.py", line 202, in run
        self.rstate.randint(2 ** 31 - 1))
      File "/home/b7066789/.local/lib/python3.6/site-packages/hyperopt/tpe.py", line 901, in suggest
        print_node_on_error=False)
      File "/home/b7066789/.local/lib/python3.6/site-packages/hyperopt/pyll/base.py", line 913, in rec_eval
        rval = scope._impls[node.name](*args, **kwargs)
      File "/home/b7066789/.local/lib/python3.6/site-packages/hyperopt/pyll/base.py", line 1076, in bincount
        return np.bincount(x, weights, minlength)
    TypeError: Cannot cast array data from dtype('<U4') to dtype('int64') according to the rule 'safe'

    bug 
    opened by sakalouski 4
  • Not able to run the optimizer more than 10 times for Keras implementation

    Hi!

    I don't know if I should post this here, but I encountered some difficulties trying to run your code on a Keras example more than 10 times: at the 11th time I get an error like this:

    TypeError: get_params() missing 1 required positional argument: 'self'

    I don't know why I get this if the code was able to run 10 times previously! I hope you can help.

    Thanks in advance :)

    bug 
    opened by eagq 3
  • [BUG]: `numpy.random.mtrand.RandomState` object has no attribute `integers` -> switch to `np.random.Generator`

    Hey,

    I found a bug which is easy to fix: hyperopt no longer uses the deprecated np.random.RandomState, having replaced it with np.random.Generator, which means the following line:

    https://github.com/evhub/bbopt/blob/408e210e57b7a2aaf3cfd3a3c225fc2af6b3c56d/bbopt/backends/hyperopt.py#L176

    should be changed to:

        def setup_backend(self, params, algo=tpe.suggest, rstate=np.random.default_rng(), show_progressbar=False, **options): 
    

    This is explained in the following issue on hyperopt:

    https://github.com/hyperopt/hyperopt/issues/838

    Best, Max

    bug 
    opened by themasterlink 1
  • [FEATURE]: Run Id should be available in a process

    While using BBopt, each run process has its own run id, which is not available during the run.

    It would be awesome if the run id could be accessed either via the BlackBoxOptimizer object or via an environment variable, which would then have to be set for each process.

    enhancement 
    opened by themasterlink 1
  • Add meta backend

    Should work like the mixture backend, except it uses a bandit strategy (eps greedy, boltzmann, boltzmann-gumbel) to pick between backends rather than a fixed mixture distribution.

    enhancement 
    opened by evhub 1
  • Support all scikit-optimize visualization methods

    See skopt.plots. Currently, BBopt only supports plot_convergence. The idea would be to use BBopt's internal machinery for converting to skopt objects in plotting.

    enhancement 
    opened by evhub 1
  • Add a Gitter chat badge to README.md

    evhub/bbopt now has a Chat Room on Gitter

    @evhub has just created a chat room. You can visit it here: https://gitter.im/evhub/bbopt.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

    opened by gitter-badger 0
  • Compatibility with Colab?

    Is BBopt compatible with Jupyter notebooks? I'm trying to test it in Colab, but I'm having issues getting it to work outside of a .py format.

    Thanks for the help and the very cool package.

    enhancement 
    opened by kmedved 4
  • Add a framework for dealing with converting old bbopt data to new bbopt data

    In case breaking changes are introduced to the data format, BBopt should have the ability to automatically convert data forward, with tests to check that the conversion works.

    enhancement 
    opened by evhub 0
  • bbopt seems to be stuck on old version of networkx

    I started playing with mycroft-precise for some personal stuff, and bbopt seems stuck on a really old version of networkx.

    That causes problems like this: https://github.com/MycroftAI/mycroft-precise/issues/92

    and makes for ugly workarounds of nailing down versions like this: https://github.com/MycroftAI/mycroft-precise/pull/93

    enhancement 
    opened by gcoon151 6
  • ValueError: 16 is not in list

    Hi, here is the problem I hit when using BBopt in my project:

    Traceback (most recent call last):
      File "I:/intention recognition/MyBBopt/testFile.py", line 167, in <module>
        run_trial()
      File "I:/intention recognition/MyBBopt/testFile.py", line 81, in run_trial
        bb.run()
      File "D:\Anaconda\lib\site-packages\bbopt\optimizer.py", line 255, in run
        self.run_backend(backend, **options)
      File "D:\Anaconda\lib\site-packages\bbopt\optimizer.py", line 242, in run_backend
        self.backend = init_backend(backend, self._examples, self._old_params, *args, **options)
      File "D:\Anaconda\lib\site-packages\bbopt\backends\hyperopt.py", line 132, in __init__
        trial_list = examples_to_trials(examples, params)
      File "D:\Anaconda\lib\site-packages\bbopt\backends\hyperopt.py", line 103, in examples_to_trials
        for k, v in zip(sorted(params), make_features(ex["values"], params, fallback_func=lambda name, func, *args, **kwargs: NA, converters={"choice": lambda val, choices: choices.index(val), "randrange": lambda val, start, stop, step: val - start}, convert_fallback=False)):
      File "D:\Anaconda\lib\site-packages\bbopt\backends\util.py", line 90, in make_features
        feature = converter_func(feature, *args)
      File "D:\Anaconda\lib\site-packages\bbopt\backends\hyperopt.py", line 103, in <lambda>
        for k, v in zip(sorted(params), make_features(ex["values"], params, fallback_func=lambda name, func, *args, **kwargs: NA, converters={"choice": lambda val, choices: choices.index(val), "randrange": lambda val, start, stop, step: val - start}, convert_fallback=False)):
    ValueError: 16 is not in list

    I am really confused; what is the meaning of 16? It doesn't exist anywhere in my project. Could you give me any advice?

    bug 
    opened by wunanfang 1
  • [Errno 9] Bad file descriptor

    Hi,

    I am running the code from the command line. I tried using json, which did not help. However, if I comment out self._load_data(), things seem to work. The question is: does commenting it out break the optimization process?

    Thank You!

    optimizer.py:

    def reload(self):
        """Completely reload the optimizer."""
        self._old_params = {}
        self._examples = []
        # self._load_data()  <------------------------------------------------- COMMENTED
        self.run(alg=None)  # backend is set to serving by default

    Traceback (most recent call last):
      File "test.py", line 26, in <module>
        bb = BlackBoxOptimizer(file=__file__, use_json=True)
      File "/home/b7066789/.local/lib/python3.6/site-packages/bbopt/optimizer.py", line 72, in __init__
        self.reload()
      File "/home/b7066789/.local/lib/python3.6/site-packages/bbopt/optimizer.py", line 78, in reload
        self._load_data()
      File "/home/b7066789/.local/lib/python3.6/site-packages/bbopt/optimizer.py", line 192, in _load_data
        with Lock(self.data_file, "rb", timeout=lock_timeout) as df:
      File "/home/b7066789/.local/lib/python3.6/site-packages/portalocker/utils.py", line 197, in __enter__
        return self.acquire()
      File "/home/b7066789/.local/lib/python3.6/site-packages/portalocker/utils.py", line 157, in acquire
        raise exceptions.LockException(exception)
    portalocker.exceptions.LockException: [Errno 9] Bad file descriptor

    System: CentOS 7

    pip freeze (relevant packages): bbopt==0.4.1, hyperopt==0.1.1, scikit-optimize==0.5.2, portalocker==1.4.0, networkx==1.11

    The code:

    X_train, Y_train, X_val, Y_val = data()

    def run_trial():
        """Run one trial of hyperparameter optimization."""
        # Start BBopt:
        bb.run()
        input_shape = 8208

        h_n_num = bb.randint('num_neur', 5, 1000)
        act = bb.choice('activ_func', ['selu', 'relu', 'elu'])
        num_lay = bb.randint('num_hidden_layers', 0, 10)
        dout = bb.uniform("dropout", 0, 1)
        lr = bb.uniform("init_learn_rate", 1e-5, 0.1)
        bsize = bb.choice('batch_size', [8, 16, 32, 64, 128])

        # Create model:
        a = Input(shape=(input_shape,))
        b = Dense(h_n_num, activation=act)(a)
        b = Dropout(dout)(b)
        for l in range(num_lay):
            b = Dense(h_n_num, activation=act)(b)
            b = Dropout(dout)(b)
        output = Dense(1, activation='linear', name='out')(b)

        model = keras.models.Model(inputs=a, outputs=output)
        opt = Nadam(lr)
        model.compile(optimizer=opt, loss=mse)

        # Train model:
        history = model.fit(x=X_train[:-70], y=Y_train[:-70], batch_size=bsize, epochs=1,
                            validation_data=(X_train[-70:], Y_train[-70:]),
                            verbose=0,
                            validation_split=0.4,
                            callbacks=[EarlyStopping(patience=30),
                                       ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=25, verbose=0)],
                            shuffle=False)

        train_loss = history.history["loss"][-1]
        val_loss = history.history["val_loss"][-1]

        bb.remember({
            "train_loss": train_loss,
            "val_loss": val_loss,
        })

        bb.minimize(val_loss)

    num_trials = 5
    result = []

    for i in tqdm(range(num_trials)):
        run_trial()
        result.append(bb.get_current_run())
        if len(result) > 1:
            [i['memo'].update(i['values']) for i in result]
            temp = [i['memo'] for i in result]
            pd.DataFrame(temp).to_csv('./transfer_learning/DL_optimization_reports/patch_weekly_5000_trials.csv')

    bug 
    opened by sakalouski 2