Dragonfly is an open source Python library for scalable Bayesian optimisation.

Overview



Bayesian optimisation is used for optimising black-box functions whose evaluations are usually expensive. Beyond vanilla optimisation techniques, Dragonfly provides an array of tools to scale up Bayesian optimisation to expensive large scale problems. These include features/functionality that are especially suited for high dimensional optimisation (optimising for a large number of variables), parallel evaluations in synchronous or asynchronous settings (conducting multiple evaluations in parallel), multi-fidelity optimisation (using cheap approximations to speed up the optimisation process), and multi-objective optimisation (optimising multiple functions simultaneously).
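For instance, the multi-objective API mirrors the single-objective one. Below is a minimal sketch, assuming the multiobjective_maximise_functions API defined in dragonfly/apis/moo.py; the two toy objectives and the domain are our own illustration, not from the library.

import numpy as np
from dragonfly import multiobjective_maximise_functions

# Two toy objectives over the same two-dimensional domain.
f1 = lambda x: -np.linalg.norm(np.asarray(x) - 1.0)
f2 = lambda x: -np.linalg.norm(np.asarray(x) + 1.0)
domain = [[-5, 5], [-5, 5]]
max_capital = 50

# Returns the Pareto-optimal values and points found within the budget.
pareto_vals, pareto_pts, history = multiobjective_maximise_functions(
    (f1, f2), domain, max_capital)
print(pareto_vals, pareto_pts)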

Dragonfly is compatible with Python 2 (>= 2.7) and Python 3 (>= 3.5) and has been tested on Linux, macOS, and Windows. For documentation, installation, and a getting-started guide, see our readthedocs page. For more details, see our paper.

 

Installation

See here for detailed instructions on installing Dragonfly and its dependencies.

Quick Installation: If you have done this kind of thing before, you should be able to install Dragonfly via pip.

$ sudo apt-get install python-dev python3-dev gfortran # On Ubuntu/Debian
$ pip install numpy
$ pip install dragonfly-opt -v

Testing the Installation: You can import Dragonfly in Python to test whether it was installed properly. If you installed from source, make sure that you move to a different directory to avoid naming conflicts.

$ python
>>> from dragonfly import minimise_function
>>> # The first argument below is the function, the second is the domain, and the third is the budget.
>>> min_val, min_pt, history = minimise_function(lambda x: x ** 4 - x**2 + 0.1 * x, [[-10, 10]], 10);  
...
>>> min_val, min_pt
(-0.32122746026750953, array([-0.7129672]))

Due to stochasticity in the algorithms, the above values for min_val, min_pt may be different. If you run it for longer (e.g. min_val, min_pt, history = minimise_function(lambda x: x ** 4 - x**2 + 0.1 * x, [[-10, 10]], 100)), you should get more consistent values for the minimum.

If the installation fails or if there are warning messages, see detailed instructions here.

 

Quick Start

Dragonfly can be used directly from the command line by calling dragonfly-script.py, imported in Python code via the maximise_function and minimise_function APIs in the main library, or used in ask-tell mode. To help you get started, we have provided some examples in the examples directory. See our readthedocs getting started pages (command line, Python, Ask-Tell) for examples and use cases.

Command line: Below is an example usage in the command line.

$ cd examples
$ dragonfly-script.py --config synthetic/branin/config.json --options options_files/options_example.txt

In Python code: The main APIs for Dragonfly are defined in dragonfly/apis. For their definitions and arguments, see dragonfly/apis/opt.py and dragonfly/apis/moo.py. You can import the main APIs in Python code as follows.

from dragonfly import minimise_function, maximise_function
func = lambda x: x ** 4 - x**2 + 0.1 * x
domain = [[-10, 10]]
max_capital = 100
min_val, min_pt, history = minimise_function(func, domain, max_capital)
print(min_val, min_pt)
max_val, max_pt, history = maximise_function(lambda x: -func(x), domain, max_capital)
print(max_val, max_pt)

Here, func is the function to be optimised, domain is the domain over which func is to be optimised, and max_capital is the capital available for optimisation. The domain can be specified via a JSON file or in code. See the demos in the examples directory and our readthedocs pages for more detailed examples.
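For instance, the same one-dimensional domain can be specified in code with named, typed variables. A minimal sketch, assuming the load_config helper that also appears in the issue threads below; the variable name 'x' is our own choice for illustration.

from dragonfly import load_config, minimise_function

# A named, typed variable equivalent to the domain [[-10, 10]] above.
domain_vars = [{'name': 'x', 'type': 'float', 'min': -10, 'max': 10}]
config = load_config({'domain': domain_vars})

# With a config, the function receives a list of variable values.
func = lambda x: x[0] ** 4 - x[0] ** 2 + 0.1 * x[0]
min_val, min_pt, history = minimise_function(func, config.domain, 100, config=config)
print(min_val, min_pt)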

In Ask-Tell Mode: Ask-tell mode gives you more control over your experiments: you supply past results to our API and receive a recommendation for the next evaluation. See the following example for more details.
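A minimal sketch of the ask-tell loop, assuming the EuclideanDomain, EuclideanFunctionCaller, and EuclideanGPBandit classes referenced in the readthedocs Ask-Tell guide; treat the exact signatures as illustrative.

from dragonfly.exd.domains import EuclideanDomain
from dragonfly.exd.experiment_caller import EuclideanFunctionCaller
from dragonfly.opt.gp_bandit import EuclideanGPBandit

# We evaluate the function ourselves, so no function handle is passed.
func = lambda x: -(x[0] ** 4 - x[0] ** 2 + 0.1 * x[0])
func_caller = EuclideanFunctionCaller(None, EuclideanDomain([[-10, 10]]))
opt = EuclideanGPBandit(func_caller, ask_tell_mode=True)
opt.initialise()

for _ in range(20):
    x = opt.ask()         # get a suggested point
    y = func(x)           # evaluate it externally
    opt.tell([(x, y)])    # report the observation back to the optimiser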

For a comprehensive list of use cases, including multi-objective optimisation, multi-fidelity optimisation, neural architecture search, and other optimisation methods (besides Bayesian optimisation), see our readthedocs pages (command line, Python, Ask-Tell).

 

Contributors

Kirthevasan Kandasamy: github, webpage
Karun Raju Vysyaraju: github, linkedin
Anthony Yu: github, linkedin
Willie Neiswanger: github, webpage
Biswajit Paria: github, webpage
Chris Collins: github, webpage

Acknowledgements

Research and development of the methods in this package were funded by DOE grant DESC0011114, NSF grant IIS1563887, the DARPA D3M program, and AFRL.

Citation

If you use any part of this code in your work, please cite our JMLR paper.

@article{JMLR:v21:18-223,
  author  = {Kirthevasan Kandasamy and Karun Raju Vysyaraju and Willie Neiswanger and Biswajit Paria and Christopher R. Collins and Jeff Schneider and Barnabas Poczos and Eric P. Xing},
  title   = {Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly},
  journal = {Journal of Machine Learning Research},
  year    = {2020},
  volume  = {21},
  number  = {81},
  pages   = {1-27},
  url     = {http://jmlr.org/papers/v21/18-223.html}
}

License

This software is released under the MIT license. For more details, please refer to LICENSE.txt.

For questions, please email [email protected].

"Copyright 2018-2019 Kirthevasan Kandasamy"

Comments
  • Add a DiscreteEuclideanDomain

    This is a very rough pass at adding a DiscreteEuclideanDomain. This is not currently functioning. This PR is mostly to check if I am heading in the right direction. I am very uncertain with the config code, so that is almost a pure guess.

    The end goal is to be able to create optimization problems like the following:

    import numpy as np
    from dragonfly.exd.domains import DiscreteEuclideanDomain
    from dragonfly import maximise_function
    
    size = 10
    dim = 3
    valid_points = np.random.rand(size, dim)
    dom = DiscreteEuclideanDomain(valid_points)
    maximise_function(lambda x: np.linalg.norm(x), dom, 10)
    
    opened by crcollins 15
  • installing dragonfly under pypy fails

    I've tried to accelerate dragonfly's execution by using pypy, a Python implementation that is much faster than the standard interpreter.

    The problem pypy has is compatibility, and that is precisely the case with dragonfly.

    Any idea if dragonfly will be supported under pypy at any time?

    opened by alelasantillan 13
  • Which version of tensorflow is used in neural architecture search?

    Which version of tensorflow is used in neural architecture search? I'm using tensorflow 1.14.0, and running demo_nas.py produced the following error:

      File "/home/albert_wei/WorkSpaces_2020/dragonfly-master/dragonfly/exd/exd_core.py", line 707, in run_experiments
        self.run_experiment_initialise()
      File "/home/albert_wei/WorkSpaces_2020/dragonfly-master/dragonfly/exd/exd_core.py", line 466, in run_experiment_initialise
        self.perform_initial_queries()
      File "/home/albert_wei/WorkSpaces_2020/dragonfly-master/dragonfly/exd/exd_core.py", line 350, in perform_initial_queries
        self._wait_for_a_free_worker()
      File "/home/albert_wei/WorkSpaces_2020/dragonfly-master/dragonfly/exd/exd_core.py", line 497, in _wait_for_a_free_worker
        self.worker_manager.get_poll_time_real())
      File "/home/albert_wei/WorkSpaces_2020/dragonfly-master/dragonfly/exd/exd_core.py", line 487, in _wait_till_free
        self._update_history(qinfo)
      File "/home/albert_wei/WorkSpaces_2020/dragonfly-master/dragonfly/exd/exd_core.py", line 229, in _update_history
        self._exd_child_update_history(qinfo)
      File "/home/albert_wei/WorkSpaces_2020/dragonfly-master/dragonfly/opt/blackbox_optimiser.py", line 95, in _exd_child_update_history
        self._update_opt_point_and_val(qinfo, query_is_at_fidel_to_opt)
      File "/home/albert_wei/WorkSpaces_2020/dragonfly-master/dragonfly/opt/blackbox_optimiser.py", line 118, in _update_opt_point_and_val
        if qinfo.val > self.curr_opt_val:
    TypeError: '>' not supported between instances of 'str' and 'float'
    

    How to fix this error? Thanks.

    opened by auroua 6
  • Update tests to run in a reasonable amount of time

    There are a few changes that are included in this PR:

    1. Using nosetests to collect all the tests instead of a bash script
    2. Annotating longer tests with skip (later on these can be annotated with more specific labels if warranted). After this change the tests now run in ~350 seconds instead of ~13200 seconds. It is possible to include the skipped tests by adding --no-skip to the nose call in run_all_tests.sh.
    3. Added a simple try/except catch for the NN imports failing in tests.
    4. Added requirements-dev.txt to enumerate the requirements for a development install (sklearn, matplotlib, nose, etc).
    5. Added an ENVVAR to set the backend for matplotlib (this may not be needed in the future, but it is for now).
    6. Adding all the tests to TravisCI builds

    This does not resolve the issues with stochastic tests, so this may cause false positives when running tests later.

    With this change, coverage tests also come for free, it is just a matter of adding --with-coverage --cover-package=dragonfly --cover-erase to the nose call. Down the road, this can also be used in conjunction with Coveralls to include that information in the same way that TravisCI is (if it is wanted).

    opened by crcollins 5
  • ip_mroute.c copying corrupt packet to userspace (IGMPMSG_NOCACHE)

    I have a bad feeling that ip_mroute.c is corrupting a packet when copying it from kernel to userspace, inside the IGMPMSG_NOCACHE upcall.

    I am currently writing an IGMP daemon and porting it to DragonFly. The code below works fine on Linux, FreeBSD, OpenBSD, and NetBSD.

    Somehow my daemon is receiving an incorrect destination IP address in the copied packet. Consider the following incoming packet and debug log snippet; note that only the last tuple (250) seems to be correct.

        13:55:54.286252 IP 192.168.1.105.3423 > 239.255.255.250.3423: UDP, length 54
        13:55:54:2863 Route activation request for group: 1.0.1.250 from src: 192.168.1.105 not valid. Ignoring

    Code for receiving the packet from the kernel:

        struct iovec ioVec[1] = { { recv_buf, BUF_SIZE } };
        struct msghdr msgHdr = (struct msghdr){ NULL, 0, ioVec, 1, &cmsgUn, sizeof(cmsgUn), MSG_DONTWAIT };
        int recvlen = recvmsg(pollFD[0].fd, &msgHdr, 0);
        acceptIgmp(recvlen, msgHdr);

    Code to process the packet:

        void acceptIgmp(int recvlen, struct msghdr msgHdr) {
            struct igmpmsg *igmpMsg = (struct igmpmsg *)(recv_buf);
            struct ip *ip = (struct ip *)recv_buf;
            register uint32_t src = ip->ip_src.s_addr, dst = ip->ip_dst.s_addr, group;

            switch (igmpMsg->im_msgtype) {
            case IGMPMSG_NOCACHE:
                for (i = 0; i < recvlen; i++)
                    sprintf(bla + i * 5, "0x%02hhx:", recv_buf[i]);
                my_log(LOG_DEBUG, 0, "BUFFER: %s", bla);

    This code then proceeds to output the buffer:

        13:55:54:2863 BUFFER: 0x45:0x00:0x52:0x00:0xda:0xed:0x00:0x40:0x04:0x11:0xe9:0xa1:0xc0:0xa8:0x01:0x69:0x01:0x00:0x01:0xfa:

    The data in this buffer is a correct IP header, except for the destination.

    Source in the packet copied from the kernel: 0xc0:0xa8:0x01:0x69 (192.168.1.105)
    Destination: 0x01:0x00:0x01:0xfa (1.0.1.250)
    Destination should be: 0xef:0xff:0xff:0xfa (239.255.255.250)

    It may be me, but I cannot seem to find any documentation pertaining to the multicast routing api on dragonfly.

    opened by Uglymotha 4
  • Skip evaluation errors

    These are minimal fixes to handle errors in the objective function evaluation, using the EVAL_ERROR_CODE constant defined in exd_utils.py. With these fixes, using a randomly failing objective appears to work reasonably well, provided that the acquisition optimization is random (i.e. Thompson sampling, or multiobjective optimization by MOORS).
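    For illustration, a randomly failing objective of the kind described above might look like the following sketch; the import path follows the exd_utils.py reference in this comment, and the failure rate is arbitrary.

        import numpy as np
        from dragonfly.exd.exd_utils import EVAL_ERROR_CODE

        def flaky_objective(x):
            # Simulate an evaluation failure roughly 10% of the time.
            if np.random.rand() < 0.1:
                return EVAL_ERROR_CODE
            return -np.linalg.norm(x)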

    opened by h-rummukainen 4
  • Passing integers to the function to be minimised

    I've defined the domain this way (as in the branin in_code_demo.py):

        def main():
          """ Main function. """
          domain_bounds = [[-5, 10], [0, 15]]
          max_capital = 100

    My objective function needs to get some integers as hyperparameters. Can we specify the type of data in domain_bounds?

    Some time passed, and I tried this, following supernova:

        def main():
          """ Main function. """
          max_capital = 200

          domain_vars = [{'name': 'date', 'type': 'int', 'min': 20070614, 'max': 20070614},
                         {'name': 'T', 'type': 'int', 'min': 8, 'max': 64},
                         {'name': 'B', 'type': 'int', 'min': 4, 'max': 24},
                         {'name': 'E', 'type': 'int', 'min': 8, 'max': 64},
                         {'name': 'sigmasquare', 'type': 'int', 'min': 80, 'max': 6400},
                         {'name': 'DeltaT', 'type': 'int', 'min': 9, 'max': 29}]
          config_params = {'domain': domain_vars}
          config = load_config(config_params)
          opt_val, opt_pt, _ = minimise_function(my_func_to_minimize, config.domain, max_capital, config=config) #alela
          print('Optimum Value in %d evals: %0.4f' % (max_capital, opt_val))
          print('Optimum Point: %s' % (opt_pt))

        if __name__ == '__main__':
          main()

    But for some reason it aborts the execution with the error:

    File "in_code_demo_alela2.py", line 37, in main() File "in_code_demo_alela2.py", line 32, in main opt_val, opt_pt, _ = minimise_function(my_func_to_minimize, config.domain, max_capital) #alela File "/Users/alejandrosantillaniturres/Desktop/programming/python/virtualenv_ataa/lib/python3.7/site-packages/dragonfly/apis/opt.py", line 215, in minimise_function max_val, opt_pt, history = maximise_function(func_to_max, *args, **kwargs) File "/Users/alejandrosantillaniturres/Desktop/programming/python/virtualenv_ataa/lib/python3.7/site-packages/dragonfly/apis/opt.py", line 170, in maximise_function domain_orderings=config.domain_orderings) AttributeError: 'NoneType' object has no attribute 'domain_orderings'

    Any idea what's going on? Thank you!

    opened by alelasantillan 4
  • Too much time spent in isolated iteration

    I was measuring the time spent in each iteration of minimise_function, and I've noticed that on my laptop it takes less than a second most of the time, but sometimes an iteration takes an enormous amount of time. For example:

        myfunction_spent_time: 3
        init dragonfly after computing the function: Thu Apr 25 18:18:35 EDT 2019
        final dragonfly after computing the function: Thu Apr 25 18:27:19 EDT 2019

    Almost 10 minutes. This is weird. Any idea where the time is being consumed? Is there any way to limit the execution time to some maximum amount of time?

    opened by alelasantillan 2
  • Parallel evaluation with ei

    Hi! I tried to run parallel Bayesian optimization for the branin function with the following options and code:

    options = [
        {'name': 'capital_type', 'default': 'return_value'},
        {'name': 'build_new_model_every', 'default': 17},
        {'name': 'init_capital', 'default': 10},
        {'name': 'initial_method', 'default': 'rand'},
        {'name': 'euc_init_method', 'default': 'latin_hc'},
        {'name': 'acq', 'default': 'ei'},
        {'name': 'handle_parallel', 'default': 'halluc'},
        {'name': 'acq_opt_max_evals', 'default': 3},
        {'name': 'domain_kernel_type', 'default': 'matern'},
        {'name': 'domain_matern_nu', 'default': 2.5}
    ]
    options = load_options(options)
    min_val, min_pt, history = minimise_function(func, domain, opt_method='bo', max_capital=60, options=options)
    

    in which func is the branin function and domain is the list of computation domains, but the acquisition function always returns the same position for evaluation; the same problem also happens with the ucb acquisition function.

    Evaluated inputs:

        [[-0.53138911 2.16230016] [ 8.00611623 3.71417474] [ 5.71641293 11.39269664] [ 5.1832132 14.4029029 ] [-2.62100231 5.23051845] [ 8.76691612 0.13967317] [ 1.67777453 12.11758913] [-4.62652079 9.34890705] [ 3.23838423 8.55028105] [ 0.08195912 6.08355315] [ 2.5 7.5 ] [ 6.25 3.75 ] [ 6.25 3.75 ] [ 2.5 7.5 ] [ 6.25 3.75 ] [ 2.5 3.75 ] [ 6.25 3.75 ] [ 6.25 3.75 ] [ 6.25 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 6.25 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [-1.25 3.75 ] [ 6.25 3.75 ] [-1.25 11.25 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 11.25 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ] [ 2.5 3.75 ]]

    If I change acq_opt_max_evals back to -1, or switch the acquisition function to ts, then everything works well.

    Evaluated inputs:

        [[ 6.48048264e+00 5.16685860e+00] [-1.20743781e-01 9.70861310e+00] [ 4.26112145e+00 7.47955916e+00] [-4.72670657e+00 7.80306038e+00] [ 3.07030981e+00 4.05256585e+00] [-1.86899555e+00 1.08101134e+01] [ 8.96877227e+00 1.21185092e+01] [-3.45598584e+00 1.09148145e+00] [ 2.12922567e+00 1.64593026e+00] [ 8.41270012e+00 1.42543831e+01] [ 2.90100098e+00 2.57904053e+00] [-4.99999999e+00 1.50000000e+01] [-3.59375000e+00 1.47656250e+01] [-4.99999821e+00 1.49999991e+01] [ 2.96875000e+00 2.34375000e+00] [-3.50889206e+00 1.27881145e+01] [-5.00000000e+00 1.50000000e+01] [ 2.94311523e+00 2.12219238e+00] [ 2.99827576e+00 2.81227112e+00] [ 3.17812009e+00 1.87500000e+00] [ 2.86293507e+00 2.60182142e+00] [ 2.92676700e+00 2.69622803e+00] [-1.25000000e+00 3.75000000e+00] [ 6.25000000e+00 3.75000000e+00] [ 6.25000000e+00 3.75000000e+00] [ 9.53125000e+00 3.28125000e+00] [-1.25000000e+00 1.12500000e+01] [ 2.50000000e+00 3.75000000e+00] [ 6.25000000e+00 1.03125000e+01] [ 1.00000000e+01 9.32647705e+00] [-3.30993652e+00 1.12481689e+01] [-1.25000000e+00 1.03125000e+01] [ 2.50000000e+00 8.57666016e+00] [-4.91210938e+00 1.06201172e+01] [-4.27455902e+00 1.06264114e+01] [-3.12500000e+00 1.31250000e+01] [ 2.50000000e+00 5.62500000e+00] [-3.45642112e+00 1.40625002e+01] [-2.10847855e-01 6.19105339e+00] [-2.10876493e-01 6.19166850e+00] [ 9.98535156e+00 7.49267578e+00] [-3.45626198e+00 1.40625002e+01]]

    Could anyone help me with this issue? Did I miss anything for parallel evaluation?

    opened by ZhenhuaLiu2512 2
  • Constraints do not work in code

    I took the code in examples/synthetic/hartmann6_4/in_code_demo.py and modified it to include the simple constraint from examples/synthetic/hartmann3_constrained/config.json

    I basically just added this in place of the old config_params:

    constraints = {
      "constraint_1": {
          "name": "quadrant",
          "constraint": "np.linalg.norm(x[0:2]) <= 0.5"
        }
    }
    config_params = {'domain': domain_vars, 'fidel_space': fidel_vars,
                       'fidel_to_opt': fidel_to_opt, 'domain_constraints': constraints}
    

    I get this error when I try to run it:

    Traceback (most recent call last):
      File "in_code_demo.py", line 58, in <module>
        main()
      File "in_code_demo.py", line 39, in main
        config = load_config(config_params)
      File "/home/chris/projects/dragonfly/dragonfly/exd/cp_domain_utils.py", line 102, in load_config
        domain_constraints=domain_constraints, domain_info=domain_info, *args, **kwargs)
      File "/home/chris/projects/dragonfly/dragonfly/exd/cp_domain_utils.py", line 262, in load_domain_from_params
        cp_domain = domains.CartesianProductDomain(list_of_domains, domain_info)
      File "/home/chris/projects/dragonfly/dragonfly/exd/domains.py", line 365, in __init__
        self._constraint_eval_set_up()
      File "/home/chris/projects/dragonfly/dragonfly/exd/domains.py", line 377, in _constraint_eval_set_up
        isinstance(self.domain_constraints[idx][1], str) and
    KeyError: 0
    

    I assume this should work. The same constraints work when calling with dragonfly-script.

    I did a bit of digging and it seems like the constraints are not getting processed when loaded with load_config.

    opened by crcollins 2
  • Could not import fortran direct library

    I installed with:

    pip.exe install git+https://github.com/dragonfly/dragonfly.git
    

    It appeared to install successfully:

    Successfully installed dragonfly-0.0.0 future-0.17.1
    

    When I try to run the example, I get some import warnings, and then the example does not appear to produce the correct result.

    >>> from dragonfly import minimise_function
    Could not import Python optimal transport library. May not be required for your application.
    Could not import fortran direct library.
    >>> min_val, min_pt, history = minimise_function(lambda x: x ** 4 - x**2 + 0.1 * x, [[-10, 10]], 10);
    Hyper-parameters for Algorithm -------------------------------------------------
      acq                              default
      acq_opt_max_evals                -1
      acq_opt_method                   default
      acq_probs                        adaptive
      add_group_size_criterion         sampled
      add_grouping_criterion           randomised_ml
      add_max_group_size               6
      build_new_model_every            17
      capital_type                     return_value
      esp_kernel_type                  se
      esp_matern_nu                    -1.0
      esp_order                        -1
      euc_init_method                  latin_hc
      get_initial_qinfos               None
      gpb_hp_tune_criterion            ml-post_sampling
      gpb_hp_tune_probs                0.3-0.7
      gpb_ml_hp_tune_opt               default
      gpb_post_hp_tune_burn            -1
      gpb_post_hp_tune_method          slice
      gpb_post_hp_tune_offset          25
      handle_non_psd_kernels           guaranteed_psd
      handle_parallel                  halluc
      hp_tune_criterion                ml
      hp_tune_max_evals                -1
      hp_tune_probs                    uniform
      init_capital                     default
      init_capital_frac                None
      init_method                      rand
      kernel_type                      default
      matern_nu                        -1.0
      max_num_steps                    10000000.0
      mean_func_const                  0.0
      mean_func_type                   tune
      mf_strategy                      boca
      ml_hp_tune_opt                   default
      mode                             asy
      next_pt_std_thresh               0.005
      noise_var_label                  0.05
      noise_var_type                   tune
      noise_var_value                  0.1
      num_groups_per_group_size        -1
      num_init_evals                   20
      perturb_thresh                   0.0001
      poly_order                       1
      post_hp_tune_burn                -1
      post_hp_tune_method              slice
      post_hp_tune_offset              25
      prev_evaluations                 None
      rand_exp_sampling_replace        False
      report_results_every             13
      shrink_kernel_with_time          0
      track_every_time_step            0
      use_additive_gp                  False
      use_same_bandwidth               False
      use_same_scalings                False
    Capital spent on initialisation: 5.0000(0.5000).
    C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\dragonfly\utils\oper_utils.py:127: UserWarning: Attempted to use direct, but fortran library could not be imported. Using PDOO optimiser instead of direct.
      warn(report_str)
    asy-bo(ei-ucb-ttei-add_ucb) (011/012) cap=1.100::  best_val=(e-0.000, t-0.000), acqs=[3 1 1 1],
    >>> min_val
    2.4868995751597324e-15
    >>> min_pt
    array([  2.48689958e-14])
    

    Python version (Windows!): Python 3.6.2 |Anaconda, Inc.| (default, Sep 19 2017, 08:03:39) [MSC v.1900 64 bit (AMD64)] on win32

    Pip version: 19.0.2

    Numpy version: 1.13.1

    opened by bayesfactor 2
  • Combine multi-objective and multi-fidelity optimization

    Can dragonfly be used to optimize a multi-objective function using multi-fidelity? For example, see this reference for an implementation in botorch.

    More generally, what are its advantages over botorch?

    opened by berceanu 0
  • TypeError during multi-objective minimization

    Currently getting the attached error during multi-objective minimization, but this does not occur if the same input is evaluated with multi-objective maximization. Any idea what could be causing this issue and how I can address it?

    Thanks. [Screenshot of the error attached.]

    opened by peraltae 0
  • Domain Parameterization Error in domain.

    Thank you for the wonderful framework. I am working on a MOO problem with BO as the optimizer, and I am defining the domain using a dictionary that incorporates the constraints. When I run the code without the constraint it works fine, but with the constraint it gives this warning and keeps trying to draw initial samples, in what looks like an infinite loop. [Screenshot of the issue attached.]

    opened by danial-amin 0
  • Integer optimization works on Mac but not on Windows.

    I am running an integer optimization on Mac and have decided to run some optimization on a Windows computer as well. This example works just fine on my Mac. I have the exact same Python version and the exact same package versions installed on Windows as on my Mac, yet the example does not run on Windows. I believe the issue is the domain. For integer optimization from the example I am using the domain:

        domain_vars = [{'type': 'int', 'min': 0, 'max': 114}]

    If I change 'int' to 'float' the code runs on Windows. The specific error I am getting is:

    experiment_caller.py", line 148, in _get_true_val_from_experiment_at_point assert self.domain.is_a_member(point) AssertionError

    Obviously there is some domain issue going on, but I can't figure out why my code will run fine on my Mac, but does not run on Windows unless I change integer optimization to floating point optimization.

    opened by LeviManring 0
  • Time comparison between ask tell mode and maximise function

    Hi, I have a lot of interest in Dragonfly. When I ran ask-tell mode with the branin function, the total time for max_capital=150 was 2-3 minutes. On the other hand, when I used maximise_function, the total time for 150 iterations was 20 minutes.

    What is the difference between ask-tell mode and plain maximise_function? (I want to understand the difference in process.)

    opened by Choihojun 0