Adaptive: parallel active learning of mathematical functions

Overview

adaptive is an open-source Python library designed to make adaptive parallel function evaluation simple. With adaptive you just supply a function with its bounds, and it will be evaluated at the “best” points in parameter space, rather than unnecessarily computing all points on a dense grid. With just a few lines of code you can evaluate functions on a computing cluster, live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.

adaptive shines on computations where each evaluation of the function takes at least ≈100 ms, since picking potentially interesting points carries some overhead of its own.

Run the adaptive example notebook live on Binder to see examples of how to use adaptive or visit the tutorial on Read the Docs.

Implemented algorithms

The core concept in adaptive is that of a learner. A learner samples a function at the best places in its parameter space to get maximum “information” about the function. As it evaluates the function at more and more points in the parameter space, it gets a better idea of where the best places are to sample next.

Of course, what qualifies as the “best places” will depend on your application domain! adaptive makes some reasonable default choices, but the details of the adaptive sampling are completely customizable.
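
For instance, the notion of loss that drives the sampling can be swapped out per learner. Below is a minimal sketch of a custom loss, assuming (as adaptive's built-in 1D losses do) that Learner1D's loss_per_interval callback receives the interval endpoints xs and the corresponding values ys:

import numpy as np
from adaptive import Learner1D

def sign_change_loss(xs, ys):
    # Weight an interval by its width, boosted where f changes sign,
    # so the learner zooms in on roots.
    dx = xs[1] - xs[0]
    return dx * (10 if ys[0] * ys[1] < 0 else 1)

learner = Learner1D(np.sin, bounds=(-10, 10), loss_per_interval=sign_change_loss)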

The following learners are implemented:

  • Learner1D, for 1D functions f: ℝ → ℝ^N,
  • Learner2D, for 2D functions f: ℝ^2 → ℝ^N,
  • LearnerND, for ND functions f: ℝ^N → ℝ^M,
  • AverageLearner, for random variables where you want to average the result over many evaluations,
  • AverageLearner1D, for stochastic 1D functions where you want to estimate the mean value of the function at each point,
  • IntegratorLearner, for when you want to integrate a 1D function f: ℝ → ℝ.

Meta-learners (to be used with other learners):

  • BalancingLearner, for when you want to run several learners at once, selecting the “best” one each time you get more points,
  • DataSaver, for when your function doesn't just return a scalar or a vector.
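
For example, DataSaver wraps another learner so your function may return extra metadata alongside the value being learned. A minimal sketch (arg_picker is part of DataSaver's API; the dict keys here are made up):

from operator import itemgetter
from adaptive import DataSaver, Learner1D

def f(x):
    # Return the value to learn plus arbitrary extra metadata.
    return {"y": x**2, "walltime": 0.1}

# arg_picker extracts the learnable value; the full dicts are kept
# in learner.extra_data.
learner = DataSaver(Learner1D(f, bounds=(-1, 1)), arg_picker=itemgetter("y"))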

In addition to the learners, adaptive also provides primitives for running the sampling across several cores and even several machines, with built-in support for concurrent.futures, mpi4py, loky, ipyparallel and distributed.
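
For example, pointing a Runner at a process pool is a one-line change; a sketch using concurrent.futures (the executor keyword is part of Runner's API):

from concurrent.futures import ProcessPoolExecutor
from adaptive import Learner1D, Runner

def slow_f(x):
    from time import sleep
    sleep(0.1)  # simulate an expensive evaluation
    return x**2

# On Windows and macOS, guard this in `if __name__ == "__main__":`.
learner = Learner1D(slow_f, bounds=(-1, 1))
runner = Runner(learner, goal=lambda l: l.loss() < 0.01,
                executor=ProcessPoolExecutor(max_workers=4))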

Examples

Adaptively learning a 1D function and live-plotting the process in a Jupyter notebook is as easy as

from adaptive import notebook_extension, Runner, Learner1D

notebook_extension()  # enable the notebook plotting integration

def peak(x, a=0.01):
    # A sharp peak on a linear background; hard to capture on a uniform grid.
    return x + a**2 / (a**2 + x**2)

learner = Learner1D(peak, bounds=(-1, 1))
runner = Runner(learner, goal=lambda l: l.loss() < 0.01)  # sample until loss < 0.01
runner.live_info()  # live widget showing the runner's status
runner.live_plot()  # live-updating plot of the data
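
Outside a notebook, the same computation can be run synchronously with the simple runner in adaptive.runner; a minimal sketch reusing peak from above:

from adaptive import Learner1D
from adaptive.runner import simple

learner = Learner1D(peak, bounds=(-1, 1))
simple(learner, goal=lambda l: l.loss() < 0.01)  # blocks until the goal is met
print(len(learner.data), "points evaluated")  # learner.data maps x -> f(x)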

Installation

adaptive works with Python 3.7 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.

The recommended way to install adaptive is using conda:

conda install -c conda-forge adaptive

adaptive is also available on PyPI:

pip install adaptive[notebook]

The [notebook] above will also install the optional dependencies for running adaptive inside a Jupyter notebook.

To use Adaptive in Jupyterlab, you need to install the following labextensions.

jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install @pyviz/jupyterlab_pyviz

Development

Clone the repository and run setup.py develop, which adds a link to the cloned repository to your Python path:

git clone [email protected]:python-adaptive/adaptive.git
cd adaptive
python3 setup.py develop

We highly recommend using a Conda environment or a virtualenv to manage the versions of your installed packages while working on adaptive.

In order to not pollute the history with the output of the notebooks, please set up the git filter by executing

python ipynb_filter.py

in the repository.

We also run several other checks in order to maintain a consistent code style. These are managed with pre-commit; enable them by executing

pre-commit install

in the repository.

Citing

If you used Adaptive in a scientific work, please cite it as follows.

@misc{Nijholt2019,
  doi = {10.5281/zenodo.1182437},
  author = {Bas Nijholt and Joseph Weston and Jorn Hoofwijk and Anton Akhmerov},
  title = {\textit{Adaptive}: parallel active learning of mathematical functions},
  publisher = {Zenodo},
  year = {2019}
}

Credits

We would like to credit the following people:

  • Pedro Gonnet for his implementation of CQUAD, “Algorithm 4” as described in “Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
  • Pauli Virtanen for his AdaptiveTriSampling script (no longer available online since SciPy Central went down) which served as inspiration for the adaptive.Learner2D.

For general discussion, we have a Gitter chat channel. If you find any bugs or have any feature suggestions please file a GitHub issue or submit a pull request.

Comments
  • Runner fails in the notebook

    Hi.

    Thank you very much for sharing your project!

    I have attempted to run some tutorials on my machine(s) (OS: Windows 10, Python 3.8.5), but the runner fails to execute (see the attached screenshot).

    Please note that when I tried running the examples on Binder, everything worked smoothly. Can you please help me resolve the issue because I would like to run a set of my experiments?

    Thank you very much for your attention!

    opened by ncuxomun 15
  • add support for complex numbers in Learner1D

    Description

    I ran into an issue interpolating scattering matrices using adaptive, where Learner1D.tell casts complex data to real. The reasoning for this is not immediately apparent, so I made a small change that seems to work fine in this case. This is by no means a complete implementation, but it seems to function fine with the loss function.

    Type of change

    Check relevant option(s).

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] This change requires a documentation update
    opened by stuartthomas25 12
  • Discontinuities at zero should be detected and approximated with some margin

    (original issue on GitLab)

    opened by Jorn Hoofwijk (@Jorn) at 2018-04-18T12:53:09.861Z

    If your function has a discontinuity around x=0 whose step is larger than the desired tolerance, the runner will keep refining the step ever more closely (points can get as close as 1.04e-322 in the sample below).

    Sample case:

    import adaptive
    import time

    adaptive.notebook_extension()

    def f(x):
        time.sleep(0.1)
        return 1 if x > 0 else -1

    l = adaptive.Learner1D(f, (-1, 1))
    r = adaptive.Runner(l, goal=lambda l: l.loss() < 0.05)
    r.live_info()
    r.live_plot(update_interval=0.1)

    Somehow discontinuities at other points are more or less detected; possibly this has something to do with floating-point accuracy. I think the source of the difference is that around zero a float can become really small, because the exponent just keeps getting more negative (e.g. 5.0e-200 is easily stored in a float), while around a non-zero number the float has much less room (e.g. 1.00000001 can be stored in a float, but 1 + 1e-50 evaluates to exactly 1).
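
    A possible workaround (an editor's sketch, not part of the original report): cap the refinement resolution with a custom loss so that intervals narrower than some min_dx are considered converged. This assumes, as for adaptive's built-in losses, that the loss_per_interval callback receives the interval endpoints and values, and it reuses f from the sample above:

    from adaptive import Learner1D

    def capped_loss(xs, ys):
        # First-difference loss, but treat intervals narrower than
        # min_dx as done so the runner stops zooming in on the jump.
        min_dx = 1e-6
        dx = xs[1] - xs[0]
        if dx < min_dx:
            return 0.0
        return abs(ys[1] - ys[0])

    l = Learner1D(f, (-1, 1), loss_per_interval=capped_loss)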

    opened by basnijholt 11
  • Make SequenceLearner points hashable by passing the sequence to the function.

    Description

    With this change, the function of the SequenceLearner contains the sequence.

    This is needed because points passed to the Runner must be hashable, whereas the entries of a SequenceLearner's sequence are not required to be.

    Fixes https://github.com/python-adaptive/adaptive/issues/265
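
    The approach, in sketch form (names here are illustrative, not the actual implementation): wrap the user's function so the learner hands out integer indices, which are always hashable, and look the sequence entry up at call time.

    class _CallWithIndex:
        # Illustrative wrapper: the Runner passes around the (hashable)
        # index; the wrapped function is evaluated on sequence[index].
        def __init__(self, function, sequence):
            self.function = function
            self.sequence = sequence

        def __call__(self, index):
            return self.function(self.sequence[index])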

    Checklist

    • [x] Fixed style issues using pre-commit run --all (first install using pip install pre-commit)
    • [x] pytest passed

    Type of change

    Check relevant option(s).

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] This change requires a documentation update
    enhancement 
    opened by basnijholt 10
  • make learners picklable

    Description

    I realized that in some cases it's very useful to pickle learners, for example to send one over the network when parallelizing code.

    With these changes, most learners become picklable.
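
    For example, a populated learner can then be serialized and restored; a minimal sketch:

    import pickle

    import adaptive

    def square(x):
        return x**2

    learner = adaptive.Learner1D(square, bounds=(-1, 1))
    learner.tell(0.5, 0.25)  # add one data point

    restored = pickle.loads(pickle.dumps(learner))  # roundtrip
    assert restored.data == learner.data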

    Checklist

    • [x] Fixed style issues using pre-commit run --all (first install using pip install pre-commit)
    • [ ] pytest passed

    Type of change

    Check relevant option(s).

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] This change requires a documentation update
    enhancement 
    opened by basnijholt 10
  • Optimize circumsphere and triangulation.py

    To speed up circumsphere, I directly called numpy's determinant function, as fast_det defaults to this case for >= 4 dimensions, which is always the case for circumsphere. I also took the norm directly, as this again defaults to sqrt(np.dot(vec, vec)) for 4 dimensions. I also removed np.delete in favour of masked indices, which means that numpy doesn't need to allocate new arrays for each determinant.

    For circumsphere, the speedup is about 33% on a four-dimensional sphere (≈1.5x faster):

    (50000 runs on arbitrary coordinates)

    opt  circumsphere: 8.907416s
    nopt circumsphere: 13.325046s
    opt  circumsphere: 7.633841s
    nopt circumsphere: 12.151468s
    

    For a five-dimensional sphere, the difference is more pronounced, at around 43% (≈1.75x faster): (50000 runs on arbitrary coordinates)

    opt  circumsphere: 11.039010s
    nopt circumsphere: 19.448841s
    opt  circumsphere: 10.594215s
    nopt circumsphere: 18.614510s
    

    I also added general optimizations by directly importing all numpy functions - for hot loops, this means that Python doesn't have to look up the function repeatedly, which can save a fair amount of time.
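
    The np.delete replacement looks roughly like this (an illustrative sketch, not the actual adaptive code):

    import numpy as np

    mat = np.random.rand(5, 5)

    # np.delete allocates and copies on every call.
    minor_a = np.delete(mat, 2, axis=1)

    # A boolean mask computes the same minor; the mask can be built once
    # and reused for every determinant that needs this column dropped.
    mask = np.ones(5, dtype=bool)
    mask[2] = False
    minor_b = mat[:, mask]

    assert np.array_equal(minor_a, minor_b)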

    opened by philippeitis 10
  • can't pickle lru_cache function with loky

    The following fails:

    from functools import lru_cache
    import adaptive
    adaptive.notebook_extension()
    
    @lru_cache
    def g(x):
        return x
    
    def f(x):
        return g(x)
    
    learner = adaptive.SequenceLearner(f, range(2))
    runner = adaptive.Runner(learner, adaptive.SequenceLearner.done)
    
    runner.live_info()
    

    Related to loky issue: https://github.com/joblib/loky/issues/268

    This worked fine with the concurrent.futures.ProcessPoolExecutor.
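
    A workaround (an editor's sketch, not from the thread): keep the module-level function a plain def so it pickles cleanly, and memoize through a dict instead of lru_cache:

    _cache = {}

    def g(x):
        # dict-based memoization; g itself stays a plain, picklable function
        if x not in _cache:
            _cache[x] = x  # expensive computation goes here
        return _cache[x]

    def f(x):
        return g(x)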

    bug Blocked 
    opened by basnijholt 9
  • Make learnerND datastructures immutable where possible

    (original issue on GitLab)

    opened by Joseph Weston (@jbweston) at 2018-07-10T13:16:25.518Z

    At the moment there are a few datastructures that are described as "sets". Where possible we should make these frozensets to remove the possibility of them being modified.

    LearnerND refactor priority: low 
    opened by basnijholt 9
  • AverageLearner1D added

    The AverageLearner1D has been added to the master branch (original code with design details and tests can be found here). Tutorial notebook added (tutorial_averagelearner1d_aux.py contains some auxiliary functions for the tutorial).

    Changes in existing files

    The __init__.py files of adaptive/ and learner/, and notebook_integration.py, were only modified to include the AverageLearner1D.

    opened by AlvaroGI 8
  • add a changelog

    We move fast and break things. For everyone's sanity it would be good to keep a changelog where we document what stuff we added, fixed and changed.

    TODO

    • [x] Add changelog entries for the previous releases (using the release notes)
    • [ ] Add changelog entry for the unreleased changes
    • [ ] Add the sphinx versionadded directive to relevant docstrings so that we know when stuff was added.
    opened by jbweston 8
  • ensure atomic writes when saving a file

    Right now, if a program crashes in the middle of saving, you lose all your data. This ensures that the old file is first moved, then the new file is saved, and only then the old file is removed.

    This is not a hypothetical scenario but happens every day for some people that I work with ATM.
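
    The pattern, in sketch form (illustrative, not the exact code of this PR): write to a temporary file in the same directory, flush it to disk, then atomically swap it into place:

    import os
    import tempfile

    def atomic_save(data: bytes, path: str) -> None:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())  # make sure the bytes hit the disk
            os.replace(tmp, path)  # atomic rename; the old file is never half-written
        except BaseException:
            os.unlink(tmp)
            raise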

    opened by basnijholt 8
  • WIP: Add AsyncRunner.block()

    Description

    Allow blocking the AsyncRunner until it is complete.

    Unfortunately, this requires nest_asyncio because of https://github.com/python/cpython/issues/93462.

    Checklist

    • [ ] Fixed style issues using pre-commit run --all (first install using pip install pre-commit)
    • [ ] pytest passed

    Type of change

    Check relevant option(s).

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] (Code) style fix or documentation update
    • [ ] This change requires a documentation update
    opened by basnijholt 0
  • WIP: Add learner1D.all_intervals_between

    Description

    Please include a summary of the change and which issue (if any) is fixed.

    Fixes #(ISSUE_NUMBER_HERE)

    Checklist

    • [ ] Fixed style issues using pre-commit run --all (first install using pip install pre-commit)
    • [ ] pytest passed

    Type of change

    Check relevant option(s).

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] (Code) style fix or documentation update
    • [ ] This change requires a documentation update
    opened by basnijholt 1
  • WIP: use sphinx_autodoc_typehints

    Description

    Please include a summary of the change and which issue (if any) is fixed.

    Fixes #(ISSUE_NUMBER_HERE)

    Checklist

    • [ ] Fixed style issues using pre-commit run --all (first install using pip install pre-commit)
    • [ ] pytest passed

    Type of change

    Check relevant option(s).

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] (Code) style fix or documentation update
    • [ ] This change requires a documentation update
    opened by basnijholt 1
  • WIP: Add Runner callbacks and option to cancel points

    Description

    cc @tlaeven

    This adds periodic callbacks to the Runner and a way to cancel points that are currently being calculated.

    import adaptive
    
    adaptive.notebook_extension()
    
    
    def depletion_voltage(x):
        """Calculates a depletion voltage curve.
        
        Whenever the result is 0, we know that for all points
        to the left of that, the result is also 0.
        """
        from time import sleep
        from random import random
    
        sleep(random())
    
        if x < -0.3:
            # Make some negative points really slow
            sleep(10)
    
        return max(0, x)
    
    
    learner = adaptive.Learner1D(depletion_voltage, bounds=(-1, 1))
    runner = adaptive.Runner(learner, npoints_goal=50)
    
    
    def cancel_depleted_futures(runner):
        if not runner.learner.data:
            return
        zeros = [x for x, y in sorted(learner.data.items()) if y <= 0]
        if not zeros:
            return
        for fut, x in runner.pending_points:
            if x < zeros[-1]:
                print(f"cancelling {x=} because f(x={zeros[-1]})=0")
                runner.cancel_point(fut)
    
    
    runner.start_periodic_callback(cancel_depleted_futures, 3)
    runner.live_info()
    

    Checklist

    • [ ] Fixed style issues using pre-commit run --all (first install using pip install pre-commit)
    • [ ] pytest passed

    Type of change

    Check relevant option(s).

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] (Code) style fix or documentation update
    • [ ] This change requires a documentation update
    opened by basnijholt 1
  • Add type-hints to LearnerND

    Description

    Please include a summary of the change and which issue (if any) is fixed.

    Fixes #(ISSUE_NUMBER_HERE)

    Checklist

    • [ ] Fixed style issues using pre-commit run --all (first install using pip install pre-commit)
    • [ ] pytest passed

    Type of change

    Check relevant option(s).

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] (Code) style fix or documentation update
    • [ ] This change requires a documentation update
    opened by basnijholt 1
Releases (latest: v0.15.1)
  • v0.15.1(Dec 2, 2022)

    What's Changed

    • Use default executor when Runner(..., executor=None) by @basnijholt in https://github.com/python-adaptive/adaptive/pull/389

    Full Changelog: https://github.com/python-adaptive/adaptive/compare/v0.15.0...v0.15.1

  • v0.15.0(Dec 2, 2022)

    What's Changed

    • Rename master -> main by @basnijholt in https://github.com/python-adaptive/adaptive/pull/384
    • Add docs section about executing coroutines by @juandaanieel in https://github.com/python-adaptive/adaptive/pull/364
    • Add loss_goal, npoints_goal, and an auto_goal function and use it in the runners by @basnijholt in https://github.com/python-adaptive/adaptive/pull/382
    • Add type-hints to Runner by @basnijholt in https://github.com/python-adaptive/adaptive/pull/370
    • Add support for Python 3.11 and test on it by @basnijholt in https://github.com/python-adaptive/adaptive/pull/387
    • Update CHANGELOG for 0.15.0 release by @basnijholt in https://github.com/python-adaptive/adaptive/pull/388

    Full Changelog: https://github.com/python-adaptive/adaptive/compare/v0.14.2...v0.15.0

  • v0.14.2(Oct 14, 2022)

    What's Changed

    • Typehint SequenceLearner by @basnijholt in https://github.com/python-adaptive/adaptive/pull/366
    • Optionally run tests with pandas by @basnijholt in https://github.com/python-adaptive/adaptive/pull/369
    • Type hint IntegratorLearner by @basnijholt in https://github.com/python-adaptive/adaptive/pull/372
    • Add type-hints to BalancingLearner by @basnijholt in https://github.com/python-adaptive/adaptive/pull/371
    • Add type-hints to DataSaver by @basnijholt in https://github.com/python-adaptive/adaptive/pull/373
    • Add type-hints to tests and misc by @basnijholt in https://github.com/python-adaptive/adaptive/pull/378
    • Use numbers module from stdlib as type by @basnijholt in https://github.com/python-adaptive/adaptive/pull/379
    • Add type-hints to Learner2D by @basnijholt in https://github.com/python-adaptive/adaptive/pull/375
    • Avoid unnecessary iteration in SequenceLearner by @jbweston in https://github.com/python-adaptive/adaptive/pull/380

    Full Changelog: https://github.com/python-adaptive/adaptive/compare/v0.14.1...v0.14.2

  • v0.14.1(Oct 11, 2022)

    What's Changed

    • Add Learner.new() method that returns an empty copy of the learner by @basnijholt in https://github.com/python-adaptive/adaptive/pull/365
    • Use typing.TypeAlias by @basnijholt in https://github.com/python-adaptive/adaptive/pull/367

    Full Changelog: https://github.com/python-adaptive/adaptive/compare/v0.14.0...v0.14.1

  • v0.14.0(Oct 5, 2022)

    What's Changed

    • Fix class name issue with modern versions of dask.distributed by @maiani in https://github.com/python-adaptive/adaptive/pull/351
    • Replacing atomicwrites with os.write by @juandaanieel in https://github.com/python-adaptive/adaptive/pull/353
    • Remove scipy deprecation warnings by @eendebakpt in https://github.com/python-adaptive/adaptive/pull/354
    • Docs in Markdown with Myst and change tutorials to Jupytext notebooks by @basnijholt in https://github.com/python-adaptive/adaptive/pull/355
    • Add transparent logo in WebM format (for dark mode) by @basnijholt in https://github.com/python-adaptive/adaptive/pull/356
    • Update pre-commit versions by @basnijholt in https://github.com/python-adaptive/adaptive/pull/359
    • Add getting learner's data as pandas.DataFrame; add learner.to_dataframe method by @basnijholt in https://github.com/python-adaptive/adaptive/pull/358
    • Allow to periodically save with any function by @basnijholt in https://github.com/python-adaptive/adaptive/pull/362
    • Release 0.14 by @basnijholt in https://github.com/python-adaptive/adaptive/pull/363

    New Contributors

    • @maiani made their first contribution in https://github.com/python-adaptive/adaptive/pull/351
    • @juandaanieel made their first contribution in https://github.com/python-adaptive/adaptive/pull/353
    • @eendebakpt made their first contribution in https://github.com/python-adaptive/adaptive/pull/354

    Full Changelog: https://github.com/python-adaptive/adaptive/compare/v0.13.2...v0.14.0

  • v0.13.2(May 31, 2022)

    What's Changed

    • Update pre-commit filters versions by @basnijholt in https://github.com/python-adaptive/adaptive/pull/345
    • use 'from future import annotations' by @basnijholt in https://github.com/python-adaptive/adaptive/pull/346
    • Switch from Tox to Nox by @basnijholt in https://github.com/python-adaptive/adaptive/pull/347
    • Skip ipyparallel test on MacOS by @basnijholt in https://github.com/python-adaptive/adaptive/pull/349
    • set loop to None in Python 3.10 by @basnijholt in https://github.com/python-adaptive/adaptive/pull/348
    • Run separate typeguard job (because it is much slower) by @basnijholt in https://github.com/python-adaptive/adaptive/pull/350

    Full Changelog: https://github.com/python-adaptive/adaptive/compare/v0.13.1...v0.13.2

  • v0.13.1(Jan 25, 2022)

    What's Changed

    • take out a cut from the 3D sphere, LearnerND example by @basnijholt in https://github.com/python-adaptive/adaptive/pull/327
    • Documentation conda environment update to latest versions by @basnijholt in https://github.com/python-adaptive/adaptive/pull/328
    • Splits up documentations page into "algo+examples" and rest by @basnijholt in https://github.com/python-adaptive/adaptive/pull/330
    • Add an animated logo that shows the working of Adaptive by @basnijholt in https://github.com/python-adaptive/adaptive/pull/329
    • fix 'asyncio.Task.current_task' -> 'asyncio.current_task' by @basnijholt in https://github.com/python-adaptive/adaptive/pull/331
    • Learner1D: return inf loss when the bounds aren't done by @basnijholt in https://github.com/python-adaptive/adaptive/pull/271
    • use jupyter-sphinx instead of custom Sphinx directive by @basnijholt in https://github.com/python-adaptive/adaptive/pull/332
    • pin scikit-learn to 0.24.2, because of https://github.com/scikit-optimize/scikit-optimize/issues/1059 by @basnijholt in https://github.com/python-adaptive/adaptive/pull/333
    • rename usage_examples -> gallery by @basnijholt in https://github.com/python-adaptive/adaptive/pull/334
    • add a code example to the examples page by @basnijholt in https://github.com/python-adaptive/adaptive/pull/335
    • fix tutorial about using loky.get_reusable_executor on Windows by @basnijholt in https://github.com/python-adaptive/adaptive/pull/336
    • Fix NaN issue for Learner1D R -> R^n by @Davide-sd in https://github.com/python-adaptive/adaptive/pull/340

    New Contributors

    • @Davide-sd made their first contribution in https://github.com/python-adaptive/adaptive/pull/340

    Full Changelog: https://github.com/python-adaptive/adaptive/compare/v0.13.0...v0.13.1

  • v0.13.0(Sep 10, 2021)

    Full Changelog

    Fixed bugs:

    • AverageLearner doesn't work with 0 mean #275
    • call self._process_futures on canceled futures when BlockingRunner is done #320 (basnijholt)
    • AverageLearner: fix zero mean #276 (basnijholt)

    Closed issues:

    • Runners should tell learner about remaining points at end of run #319
    • Cryptic error when importing lmfit #314
    • change CHANGELOG to KeepAChangelog format #306
    • jupyter notebook kernels dead after running "import adaptive" #298
    • Emphasis on when to use adaptive in docs #297
    • GPU acceleration #296

  • v0.12.2(Mar 23, 2021)

  • v0.12.1(Mar 23, 2021)

  • v0.12.0(Mar 23, 2021)

    Full Changelog

  • v0.11.3(Mar 7, 2021)

    Full Changelog

    Fixed bugs:

    • can't pickle lru_cache function with loky #292

    Closed issues:

    • ProcessPoolExecutor behaviour on MacOS in interactive environment changed between Python versions #301
    • Runner fails in the notebook #299

  • v0.11.2(Aug 7, 2020)

  • v0.11.1(Aug 7, 2020)

  • v0.11.0(May 20, 2020)

    Since 0.10.0 we fixed the following issues:

    • #273 add minimum number of points parameter to AverageLearner
    • #225 Error on windows: daemonic processes are not allowed to have children Runner bug
    • #267 Make Runner work with unhashable points Runner enhancement
    • #249 ipyparallel fails in Python 3.8 bug
    • #258 Release v0.10
    • #250 live_info is badly formatted in Jupyterlab
    • #233 SKOptLearner doesn't work for multi variate domain
    • #184 Time-based stop Runner enhancement
    • #206 Does not work with lambda functions

    and merged the following Pull requests:

    • #278 prevent ImportError due to scikit-optimize and sklearn incompatibility
    • #274 AverageLearner: implement min_npoints AverageLearner enhancement
    • #268 make the Runner work with unhashable points Runner enhancement
    • #264 make learners picklable
    • #270 minimally require ipyparallel 6.2.5
    • #261 fix docs build and pin pyviz_comms=0.7.2
    • #263 add support for loky
    • #245 Optimize circumsphere and triangulation.py

    and closed

    • #266 Make SequenceLearner points hashable by passing the sequence to the function.
    • #169 add building of test documentation of RTD
    • #262 test what happens in CI when trying to force-push
  • v0.10.0(Jan 16, 2020)

    Since 0.9.0 we fixed the following issues:

    • #217 Command-line tool
    • #211 Defining inside main() in multiprocess will report error
    • #208 Inquiry on implementation of parallelism on the cluster
    • #207 PyYAML yaml.load(input) Deprecation
    • #203 jupyter-sphinx update Documentation enhancement
    • #199 jupyter-sphinx is pinned to non-existing branch

    and merged the following Pull requests:

    • #257 add instructions for installing labextensions for Jupyterlab
    • #255 MNT: add vscode config directory to .gitignore
    • #253 disable test of runner using distributed
    • #252 color the overhead between red and green
    • #251 improve the style of the live_info widget, closes #250
    • #247 use tox, closes #238
    • #246 add a Pull Request template
    • #241 rename learner.ipynb -> example-notebook.ipynb
    • #239 correct short description in setup.py
    • #237 Power up pre-commit
    • #235 add a section of "How to cite" Adaptive
    • #234 Fix SKOptLearner for multi variate domain (issue #233)
    • #229 add a time-base stopping criterion for runners
    • #224 update packages in tutorial's landing page
    • #222 add _RequireAttrsABCMeta and make the BaseLearner use it
    • #221 2D: add triangle_loss

    @akhmerov, you opted for a patch release, but we didn't put anything in stable. Since there are quite a lot of changes I think it warrants a new release. Also, we are below 1.0, so we can do anything we want 😄

  • v0.9.0(Oct 7, 2019)

    Since 0.8.0 we fixed the following issues:

    • #217 Command-line tool
    • #211 Defining inside main() in multiprocess will report error
    • #208 Inquiry on implementation of parallelism on the cluster
    • #207 PyYAML yaml.load(input) Deprecation
    • #203 jupyter-sphinx update Documentation enhancement
    • #199 jupyter-sphinx is pinned to non-existing branch

    and merged the following Pull requests:

    • #219 pass value_scale to the LearnerND's loss_per_simplex function
    • #209 remove MPI4PY_MAX_WORKERS where it's not used
    • #204 use jupyter_sphinx v0.2.0 from conda instead of my branch
    • #200 ensure atomic writes when saving a file
    • #193 Add a SequenceLearner
    • #188 BalancingLearner: add a "cycle" strategy, sampling the learners one by one
    • #202 Authors
    • #201 Update tutorial.parallelism.rst
    • #197 Add option to display a progress bar when loading a BalancingLearner
    • #195 don't treat the no data case differently in the Learner1D Learner1D
    • #194 pin everything in the docs/environment.yml file
  • v0.8.0(May 7, 2019)

    Since 0.7.0 we fixed the following issues:

    • #7 suggested points lie outside of domain Learner2D
    • #39 What should learners do when fed the same point twice
    • #159 BalancingLearner puts all points in the first child-learner when asking for points with no data present
    • #148 Loading data file with no data results in an error for the BalancingLearner
    • #145 Returning np.nan breaks the 1D learner
    • #54 Make learnerND datastructures immutable where possible
    • gitlab:#134 Learner1D.load throws exception when file is empty
    • #166 live_plot broken with latest holoviews and bokeh
    • #156 Runner errors for Python 3.7 when done
    • #171 default loss of LearnerND changed?
    • #163 Add a page to the documentation of papers where adaptive is used
    • #179 set python_requires in setup.py
    • #175 Underlying algorithm and MATLAB integration

    and merged the following Pull requests:

    • gitlab:!141: change the simplex_queue to a SortedKeyList
    • gitlab:!142: make methods private in the LearnerND, closes #54
    • #162 test flat bands in the LearnerND
    • #161 import Iterable and Sized from collections.abc
    • #160 Distribute first points in a BalancingLearner
    • #153 invoke conda directly in CI
    • #152 fix bug in curvature_loss Learner1D bug
    • #151 handle NaN losses and add a test, closes #145
    • #150 fix _get_data for the BalancingLearner
    • #149 handle empty data files when loading, closes #148
    • #147 remove _deepcopy_fix and depend on sortedcollections >= 1.1
    • #168 Temporarily fix docs
    • #167 fix live_plot
    • #164 do not force shutdown the executor in the cleanup
    • #172 LearnerND: change the required loss to 1e-3 because the loss definition changed
    • #177 use the repo code in docs execute
    • #176 do not inline the HoloViews JS
    • #174 add a gallery page of Adaptive uses in scientific works
    • #170 Add logo to the documentation
    • #180 use setup(..., python_requires='>=3.6'), closes #179
    • #182 2D: do not return points outside the bounds, closes #181 bug
    • #185 Add support for neighbours in loss computation in LearnerND
    • #186 renormalize the plots value axis on every update
    • #189 use pytest rather than py.test
    • #190 add support for mpi4py
  • v0.7.5(Mar 19, 2019)

  • v0.7.3(Jan 29, 2019)

  • v0.7.0(Dec 19, 2018)

    Since 0.6.0 we fixed the following issues:

    • #122: Remove public 'fname' learner attribute
    • #119: (Learner1D) add possibility to use the direct neighbors in the loss
    • #114: (LearnerND) allow any convex hull as domain
    • #121: How to handle NaN?
    • #107: Make BaseRunner an abstract base class
    • #112: (LearnerND) add iso-surface plot feature
    • #56: Improve plotting for learners
    • #118: widgets don't show up on adaptive.readthedocs.io
    • #91: Set up documentation
    • #62: AverageLearner math domain error
    • #113: make BalancingLearner work with the live_plot
    • #111: (LearnerND) flat simplices are sometimes added on the surface of the triangulation
    • #103: (BalancingLearner) make new balancinglearner that looks at the total loss rather than loss improvement
    • #110: LearnerND triangulation incomplete
    • #127: Typo in documentation for adaptive.learner.learner2D.uniform_loss(ip)
    • #126: (Learner1D) improve time complexity
    • #104: Learner1D could in some situations return -inf as loss improvement, which would make balancinglearner never choose to improve
    • #128: (LearnerND) fix plotting of scaled domains
    • #78: (LearnerND) scale y-values

    and merged the following Merge Requests:

    • !131: Resolve "(Learner1D) add possibility to use the direct neighbors in the loss"
    • !137: adhere to PEP008 by using absolute imports
    • !135: test all the different loss functions in each test
    • !133: make 'fname' a parameter to 'save' and 'load' only
    • !136: build the Dockerimage used in CI
    • !134: change resolution_loss to a factory function
    • !118: add 'save' and 'load' to the learners and periodic saving to the Runner
    • !127: Resolve "(LearnerND) allow any convex hull as domain"
    • !130: save execution time on futures inside runners
    • !111: Resolve "Make BaseRunner an abstract base class"
    • !124: Resolve "(LearnerND) add iso-surface plot feature"
    • !108: exponentially decay message frequency in live_info
    • !129: add tutorials
    • !120: add documentation
    • !125: update to the latest miniver
    • !126: add check_whitespace
    • !123: add an option to plot a HoloMap with the BalancingLearner
    • !122: implement 'npoints' strategy for the 'BalancingLearner'
    • !119: (learnerND) no more (almost) flat simplices in the triangulation
    • !109: make a BalancingLearner strategy that compares the total loss rather than loss improvement
    • !117: Cache loss and display it in the live_info widget
    • !121: 2D: add loss that minimizes the area of the triangle in 3D
    • !139: Resolve "(Learner1D) improve time complexity"
    • !140: Resolve "(LearnerND) fix plotting of scaled domains"
    • !128: LearnerND scale output values before computing loss
  • v0.6.0(Oct 5, 2018)

    Since 0.5.0 we fixed the following issues:

    • #66: (refactor) learner.tell(x, None) might be renamed to learner.tell_pending(x)
    • #92: DeprecationWarning: sorted_dict.iloc is deprecated. Use SortedDict.keys() instead.
    • #94: Learner1D breaks if right bound is added before the left bound
    • #95: Learner1D's bound check algo in self.ask doesn't take self.data or self.pending_points
    • #96: Learner1D fails when function returns a list instead of a numpy.array
    • #97: Learner1D fails when a point (x, None) is added when x already exists
    • #98: Learner1D.ask breaks when adding points in some order
    • #99: Learner1D doesn't correctly set the interpolated loss when a point is added
    • #101: How should learners handle data that is outside of the domain
    • #102: No tests for the 'BalancingLearner'
    • #105: LearnerND fails for BalancingLearner test
    • #108: (BalancingLearner) loss is cached incorrectly
    • #109: Learner2D suggests same point twice

    and merged the following Merge Requests:

    • !93: add a release guide
    • !94: add runner.max_retries
    • !95: 1D: fix the rare case where the right boundary point exists before the left bound
    • !96: More efficient 'tell_many'
    • !97: Fix #97 and #98
    • !98: Resolve "DeprecationWarning: sorted_dict.iloc is deprecated. Use SortedDict.keys() instead."
    • !99: Resolve "Learner1D's bound check algo in self.ask doesn't take self.data or self.pending_points"
    • !100: Resolve "Learner1D doesn't correctly set the interpolated loss when a point is added"
    • !101: Resolve "Learner1D fails when function returns a list instead of a numpy.array"
    • !102: introduce 'runner.retries' and 'runner.raise_if_retries_exceeded'
    • !103: 2D: rename 'learner._interp' to 'learner.pending_points' as in other learners
    • !104: Make the AverageLearner only return new points ...
    • !105: move specific tests for a particular learner to separate files
    • !107: Introduce 'tell_pending' which replaces 'tell(x, None)'
    • !112: Resolve "LearnerND fails for BalancingLearner test"
    • !113: Resolve "(BalancingLearner) loss is cached incorrectly"
    • !114: update release guide to add a 'dev' tag on top of regular tags
    • !115: Resolve "How should learners handle data that is outside of the domain"
    • !116: 2D: fix #109

    New features

    • add learner.tell_pending which replaces learner.tell(x, None)
    • add error-handling with runner.retries and runner.raise_if_retries_exceeded
    • make learner.pending_points and runner.pending_points public API
    • rename learner.ask(n, add_data) -> learner.ask(n, tell_pending)
    • added the overhead method to the BlockingRunner
  • v0.5.0(Aug 31, 2018)

    • Introduce LearnerND (beta)
    • Add BalancingLearner.from_product (see learner.ipynb or doc-string for usage example)
    • runner.live_info() now shows the learner's efficiency
    • runner.task.print_stack() now displays the full traceback
    • Introduced learner.tell_many instead of having learner.tell figure out whether multiple points are added (#59)
    • Fixed a bug that occurred when a Learner1D had extremely narrow features

    And more bugs, see https://github.com/python-adaptive/adaptive/compare/v0.4.1...v0.5.0

  • v0.4.0(May 24, 2018)

  • v0.2.1(Mar 3, 2018)

    The Learner2D could be left in an inconsistent state if the learner's function errored before the function values at the bounds were present in learner.data.

  • v0.2.0(Feb 23, 2018)
