Efficiently computes derivatives of numpy code.

Overview

Note: Autograd is still being maintained but is no longer actively developed. The main developers (Dougal Maclaurin, David Duvenaud, Matt Johnson, and Jamie Townsend) are now working on JAX, with Dougal and Matt working on it full-time. JAX combines a new version of Autograd with extra features such as jit compilation.

Autograd can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation), which means it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments, as well as forward-mode differentiation, and the two can be composed arbitrarily. The main intended application of Autograd is gradient-based optimization. For more information, check out the tutorial and the examples directory.

Example use:

>>> import autograd.numpy as np  # Thinly-wrapped numpy
>>> from autograd import grad    # The only autograd function you may ever need
>>>
>>> def tanh(x):                 # Define a function
...     y = np.exp(-2.0 * x)
...     return (1.0 - y) / (1.0 + y)
...
>>> grad_tanh = grad(tanh)       # Obtain its gradient function
>>> grad_tanh(1.0)               # Evaluate the gradient at x = 1.0
0.41997434161402603
>>> (tanh(1.0001) - tanh(0.9999)) / 0.0002  # Compare to finite differences
0.41997434264973155
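
We can also compose these operators to get higher-order derivatives of array-valued inputs. A minimal sketch, continuing the session above (grad, jacobian, and hessian are the top-level wrappers described in the tutorial):

>>> from autograd import jacobian, hessian
>>> def f(x):                    # scalar-valued function of an array argument
...     return np.sum(np.sin(x) ** 2)
...
>>> x = np.array([0.1, 0.2, 0.3])
>>> g = grad(f)(x)               # reverse mode: gradient with the same shape as x
>>> H = jacobian(grad(f))(x)     # composing wrappers gives the (3, 3) Hessian
>>> H2 = hessian(f)(x)           # convenience wrapper for the same quantity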

We can continue to differentiate as many times as we like, and use numpy's vectorization of scalar-valued functions across many different input values:

>>> from autograd import elementwise_grad as egrad  # for functions that vectorize over inputs
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(-7, 7, 200)
>>> plt.plot(x, tanh(x),
...          x, egrad(tanh)(x),                                     # first  derivative
...          x, egrad(egrad(tanh))(x),                              # second derivative
...          x, egrad(egrad(egrad(tanh)))(x),                       # third  derivative
...          x, egrad(egrad(egrad(egrad(tanh))))(x),                # fourth derivative
...          x, egrad(egrad(egrad(egrad(egrad(tanh)))))(x),         # fifth  derivative
...          x, egrad(egrad(egrad(egrad(egrad(egrad(tanh))))))(x))  # sixth  derivative
>>> plt.show()

See the tanh example file for the code.

Documentation

You can find a tutorial here.

End-to-end examples

How to install

Just run `pip install autograd`

Authors

Autograd was written by Dougal Maclaurin, David Duvenaud, Matt Johnson, Jamie Townsend and many other contributors. The package is currently still being maintained, but is no longer actively developed. Please feel free to submit any bugs or feature requests. We'd also love to hear about your experiences with autograd in general. Drop us an email!

We want to thank Jasper Snoek and the rest of the HIPS group (led by Prof. Ryan P. Adams) for helpful contributions and advice; Barak Pearlmutter for foundational work on automatic differentiation and for guidance on our implementation; and Analog Devices Inc. (Lyric Labs) and Samsung Advanced Institute of Technology for their generous support.

Comments
  • Forward mode

    Forward mode

    This probably isn't ready to be pulled into the master branch yet, but I thought I'd submit a pr in case you want to track progress.

    TODO:

    • [x] Implement the rest of the numpy grads
    • [x] Tests for remaining grads
    • [x] Write a jacobian_vector_product convenience wrapper (see the usage sketch after this list)
    • [x] Update the hessian_vector_product wrapper to use forward mode
    • [x] Ensure that nodes with only forward mode grads don't refer to their parents (so that garbage collection can work)
    • [ ] Implement a jacobian matrix product
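
    As a usage sketch of the jacobian_vector_product wrapper from the TODO list above: in released versions of Autograd the forward-mode convenience wrapper is exposed as make_jvp; the exact name and its (value, jvp) return convention are assumptions here, not something stated in this PR.

    import autograd.numpy as np
    from autograd import make_jvp

    def f(x):                           # vector -> vector, elementwise
        return np.sin(x) * x

    x = np.array([1.0, 2.0, 3.0])
    v = np.array([1.0, 0.0, 0.0])       # tangent direction

    # Assuming make_jvp(f)(x) maps a tangent v to the pair (f(x), J(x) @ v).
    value, jvp_val = make_jvp(f)(x)(v)
    print(value)                        # f(x)
    print(jvp_val)                      # directional derivative of f at x along v
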
    opened by j-towns 83
  • Documenting CuPy wrapper progress

    Documenting CuPy wrapper progress

    Starting this issue to document progress on wrapping CuPy.

    • [x] import autograd.cupy as cp
    • [x] instantiate arrays from scalars, lists, and tuples.
    cp.array(1)
    cp.array([1, 2])
    cp.array([1, 3]) + cp.array([1, 1])
    
    • [x] check that gradients work
    import autograd.cupy as cp
    from autograd import elementwise_grad as egrad
    
    def f(x):
        return cp.sin(x)
    
    def g(x):
        return x + 2
    
    df = egrad(f)
    dg = egrad(g)
    
    a = cp.array([1, 1])
    
    print(f(a))
    print(df(a))
    
    print(g(a))
    print(dg(a))
    
    
    • [x] Check that higher derivatives work.
    import autograd.cupy as cp
    from autograd import elementwise_grad as egrad
    import numpy as np
    
    a = cp.arange(-2 * np.pi, 2 * np.pi, 0.01)
    
    def sin(x):
        return cp.sin(x)
    
    dsin = egrad(sin)
    ddsin = egrad(dsin)
    
    sin(a)
    dsin(a)
    ddsin(a)
    
    • [ ] Fix ValueError: object __array__ method not producing an array.
    • [ ] Run tests for all of the CuPy wrapped functions.
    opened by ericmjl 25
  • Decreasing autograd memory usage

    Decreasing autograd memory usage

    I don't mean "memory leak" in terms of unreachable memory after the Python process quits; I mean memory that is being allocated in the backwards pass when it should be freed. I mentioned this problem in #199, but I think it should be opened as an issue.

    For a simple function

    import autograd.numpy as np
    from autograd import grad
    
    def F(x,z):
        for i in range(100):
            z = np.dot(x,z)
        return np.sum(z)
    dF = grad(F)
    

    and a procedure to measure memory usage

    from memory_profiler import memory_usage
    def make_data():
        np.random.seed(0)
        D = 1000
        x = np.random.randn(D,D)
        x = np.dot(x,x.T)
        z = np.random.randn(D,D)
        return x,z
    
    def m():
        from time import sleep
        x,z = make_data()
        gx = dF(x,z)
        sleep(0.1)
        return gx
    
    mem_usage = np.array(memory_usage(m,interval=0.01))
    mem_usage -= mem_usage[0]
    

    and a manual gradient of the same function

    def grad_dot_A(g,A,B):
        ga = np.dot(g,B.T)
        ga = np.reshape(ga,np.shape(A))
        return ga
    
    def grad_dot_B(g,A,B):
        gb = np.dot(A.T,g)
        gb = np.reshape(gb, np.shape(B))
        return gb
    
    def dF(x, z):
        z_stack = []
        for i in list(range(100)):
            z_stack.append(z)
            z = np.dot(x, z)
        retval = np.sum(z)
    
        # Begin Backward Pass
        g_retval = 1
        g_x = 0
    
        # Reverse of: retval = np.sum(z)
        g_z = repeat_to_match_shape(g_retval, z)  # autograd-internal helper; here equivalent to np.full(np.shape(z), g_retval)
        for i in reversed(list(range(100))):
    
            # Reverse of: z = np.dot(x, z)
            z = z_stack.pop()
            tmp_g0 = grad_dot_A(g_z, x, z)
            tmp_g1 = grad_dot_B(g_z, x, z)
            g_z = 0
            g_x += tmp_g0
            g_z += tmp_g1
        return g_x
    

    I get the following memory usage profile:

    [memory usage profile plot]

    If I replace the dot gradient with the ones used in the manual code, I get the same memory profile, nothing improves.

    If I replace the dot product with element-wise multiply, I get a different memory profile, but still not what I would expect:

    [memory usage profile plot for the element-wise multiply version]

    I would love to help figure this out, but I'm not sure where to start. First thing is of course to document the problem.

    opened by alexbw 21
  • Memory issue?

    Memory issue?

    I've run into an issue with large matrices and memory. There seem to be two problems:

    1. Memory isn't being released on successive calls of grad. e.g.
    import autograd.numpy as np
    from autograd import grad
    na = np.newaxis

    a = 10000
    b = 10000
    A = np.random.randn(a)
    B = np.random.randn(b)

    def fn(x):
        M = A[:, na] + x[na, :]
        return M[0, 0]

    g = grad(fn)

    for i in range(100):
        g(B)
    

    is ramping up memory on each iteration.

    2. Memory isn't being released during the backwards pass, e.g.
    k = 10
    def fn(x):
        res = 0
        for i in range(k):
            res = res + np.sum(x)
        return res
    g = grad(fn)
    b = 200000
    g(np.random.randn(b))
    

    This seems to scale in memory (for each call) as O(k), which I don't think is the desired behaviour. For b=150000 this effect does not happen, however.

    opened by hughsalimbeni 16
  • Experimental reorganization

    Experimental reorganization

    This is mostly just a cosmetic reorganization. The main motivation was to expose a well-defined API for extending Autograd in a single module, extend.py. The only functions/classes that we should need for wrapping a numerical library are primitive/defvjp*/defjvp* for defining new primitive functions, Box/VSpace for defining new types, and SparseObject. In practice, we also use functions like vspace, getval and isbox, but we should try to avoid them.
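
    A minimal sketch of this extension API, following the pattern of Autograd's logsumexp example (in released versions of Autograd these names are importable from autograd.extend):

    import autograd.numpy as np
    from autograd.extend import primitive, defvjp
    from autograd import grad

    @primitive
    def logsumexp(x):
        """Numerically stable log(sum(exp(x))), treated as an opaque primitive."""
        m = np.max(x)
        return m + np.log(np.sum(np.exp(x - m)))

    # VJP w.r.t. x: the gradient of logsumexp is softmax(x), scaled by the cotangent g.
    defvjp(logsumexp, lambda ans, x: lambda g: g * np.exp(x - ans))

    print(grad(logsumexp)(np.array([1.0, 2.0, 3.0])))   # prints softmax of the input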

    This PR includes the commits from #293, so we need to be happy with the performance before merging with dev-1.2. I made a small optimization to defvjp which might help.

    While we're renaming things, I wouldn't mind finally changing our JVP/VJP convention to something more obvious. JO/JTO? fwd_op/rev_op?

    opened by dougalm 14
  • Experiment: combo VJPs

    Experiment: combo VJPs

    I'm not recommending we merge this yet! It's just an experiment for now, and I'm opening this PR to track progress.

    Check out the change to backward_pass. It seems like a good idea to allow users to write VJP functions that evaluate the VJP wrt multiple positional arguments simultaneously, mainly because that can allow for work sharing (instead of always having separate calls).
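
    For reference, a minimal sketch of the current one-VJP-per-argument convention that a combo VJP would generalize (my_dot is a stand-in primitive defined here for illustration, not part of Autograd):

    import autograd.numpy as anp
    from autograd.extend import primitive, defvjp
    from autograd import grad

    @primitive
    def my_dot(A, B):
        return anp.dot(A, B)

    # Today each positional argument gets its own VJP maker, registered separately.
    # A combo VJP would produce both cotangents in one call, so any work shared
    # between them would only be done once.
    defvjp(my_dot,
           lambda ans, A, B: lambda g: anp.dot(g, B.T),   # VJP w.r.t. A
           lambda ans, A, B: lambda g: anp.dot(A.T, g))   # VJP w.r.t. B

    A = anp.ones((3, 4))
    B = anp.ones((4, 2))
    print(grad(lambda A: anp.sum(my_dot(A, B)))(A).shape)   # (3, 4)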

    However, the implementation mechanism here seems to hurt performance a lot:

       before     after       ratio
      [fb7eccf6] [c163e986]
    +  170.82μs   266.18μs      1.56  bench_core.time_long_backward_pass
    +  536.46μs   827.32μs      1.54  bench_core.time_long_grad
    +    5.69μs     8.66μs      1.52  bench_core.time_exp_primitive_call_boxed
    +  312.14μs   446.58μs      1.43  bench_core.time_long_forward_pass
    +     2.02y      2.87y      1.42  bench_rnn.RNNSuite.peakmem_manual_rnn_grad
    +  129.84μs   178.60μs      1.38  bench_numpy_vjps.time_tensordot_1_1
    +   10.75μs    14.28μs      1.33  bench_core.time_short_backward_pass
    +  101.26μs   133.52μs      1.32  bench_numpy_vjps.time_tensordot_0_0
    +  274.72ms   349.04ms      1.27  bench_core.time_fan_out_fan_in_forward_pass
    +   67.22μs    82.32μs      1.22  bench_numpy_vjps.time_tensordot_0
    +   64.15μs    78.47μs      1.22  bench_numpy_vjps.time_dot_0
    +  447.36ms   545.34ms      1.22  bench_core.time_fan_out_fan_in_grad
    +  116.58μs   136.20μs      1.17  bench_numpy_vjps.time_dot_1_2
    +  126.47μs   143.47μs      1.13  bench_numpy_vjps.time_tensordot_1_2
    +  281.75ms   319.34ms      1.13  bench_core.time_fan_out_fan_in_backward_pass
    +  127.47μs   144.33μs      1.13  bench_numpy_vjps.time_tensordot_1_0
    +  118.83μs   134.47μs      1.13  bench_numpy_vjps.time_dot_1_0
    +   23.82μs    26.71μs      1.12  bench_core.time_short_forward_pass
    +  121.13μs   135.21μs      1.12  bench_numpy_vjps.time_dot_1_1
    +  102.82μs   114.14μs      1.11  bench_numpy_vjps.time_tensordot_0_2
    +  102.39μs   113.46μs      1.11  bench_numpy_vjps.time_tensordot_0_1
    +     1.93y      2.13y      1.11  bench_rnn.RNNSuite.peakmem_rnn_grad
    +   69.91μs    77.02μs      1.10  bench_numpy_vjps.time_tensordot_1
    -     2.67s      2.23s      0.83  bench_rnn.RNNSuite.time_rnn_grad
    
    SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
    
    opened by mattjj 14
  • Simplify util.flatten

    Simplify util.flatten

    Use the vspace flatten functionality for util.flatten. This enables flattening of complex values which wasn't previously possible.

    Am I missing some reason why this isn't ok? The only difference in functionality which I can see is that with this change, calling flatten on scalars will give an unflatten which returns scalar values wrapped in an array. In particular:

    On master

    In [1]: from autograd.util import flatten
    
    In [2]: v, unflatten = flatten(3.)
    
    In [3]: v
    Out[3]: array([ 3.])
    
    In [4]: unflatten(v)
    Out[4]: 3.0
    

    With this change:

    In [1]: from autograd.util import flatten
    
    In [2]: v, unflatten = flatten(3.)
    
    In [3]: v
    Out[3]: array([ 3.])
    
    In [4]: unflatten(v)
    Out[4]: array(3.0)
    

    Is this a big deal? This type of behaviour occurs in other situations where vspace is applied to scalars, for example:

    In [7]: from autograd import grad
    
    In [8]: def f(x):
       ...:     return 3.
       ...:
    
    In [9]: grad(f)(2.)
    /Users/jamietownsend/dev/autograd/autograd/core.py:16: UserWarning: Output seems independent of input.
      warnings.warn("Output seems independent of input.")
    Out[9]: array(0.0)
    

    and I don't think that's really a problem.

    opened by j-towns 14
  • Dynd support

    Dynd support

    Dynd is a next-generation array library for Python, with lots of cool features like JIT compilation, heterogeneous data, user-defined data types, missing data support, type checking, etc.: https://speakerdeck.com/izaid/dynd

    Wondering if there is interest from either HIPS or the Dynd devs @izaid, @insertinterestingnamehere or @mwiebe in integrating this with Dynd (or at least leaving in hooks for future functionality or cooperation later on)?

    Both are cool libraries, and I would hate for the Python community to have to choose between them for projects.

    enhancement 
    opened by datnamer 14
  • Inconsistent handling of complex functions

    Inconsistent handling of complex functions

    My complex analysis is a bit rusty, and I'm getting confused by the handling of complex functions. Many functions are differentiable as complex functions (i.e. they are holomorphic) and their complex derivatives are implemented in autograd.

    However there also seem to be functions, like abs, var, angle and others, which are not differentiable as complex functions, but they also have derivatives implemented. I'm assuming these derivatives treat the complex inputs as if they were 2D real valued vectors? This seems inconsistent to me.

    Users can fairly easily replicate the second behaviour without these derivatives being defined, by manually decomposing their numbers into real and imaginary parts, so I would tentatively propose removing these pseudo-derivatives...

    Apologies if I'm making some dumb mistake here...

    opened by j-towns 13
  • Further correcting grad_eigh to support hermitian matrices and the UPLO kwarg properly

    Further correcting grad_eigh to support hermitian matrices and the UPLO kwarg properly

    Edit: as discussed in the comments below, the issue with the complex eigenvectors is the gauge, which is arbitrary. However, this updated code should work for complex-valued matrices and functions that do not depend on the gauge. So for example, the test for the complex case uses np.abs(v).

    What this update does:

    • fix the vjp computation for numpy.linalg.eigh in accordance with the behavior of the function, which always takes only the upper/lower part of the matrix
    • fix the tests to take random matrices as opposed to random symmetric matrices
    • fix the computation to work for Hermitian matrices as per this pull request, on which I've built

    However:

    • the gradient for Hermitian matrices works only for the eigenvalues and not (always) for the eigenvectors
    • so I've added a test, but I take a random complex matrix and check only the eigenvalue gradient flow
    • the problem with the eigenvectors probably has to do with their being complex; this has not been dealt with anywhere that I looked (PyTorch, TensorFlow, or the original reference https://people.maths.ox.ac.uk/gilesm/files/NA-08-01.pdf, where the eigenvectors are also assumed to be real in the tests)

    The gradient for the eigenvectors does not pass a general test. However, it works in some cases. For example, this code

    import autograd.numpy as npa
    from autograd import grad
    
    def fn(a):
        # Define an array with some random operations 
        mat = npa.array([[(1+1j)*a, 2, a], 
                        [1j*a, 2 + npa.abs(a + 1j), 1], 
                        [npa.conj(a), npa.exp(a), npa.abs(a)]])
        [eigs, vs] = npa.linalg.eigh(mat)
        return npa.abs(vs[0, 0])
    
    a = 2.1 + 1.1j # Some random test value
    
    # Compute the numerical gradient of fn(a)
    grad_num = (fn(a + 1e-5)-fn(a))/1e-5 - 1j*(fn(a + 1e-5*1j)-fn(a))/1e-5
    
    print('Autograd gradient:  ', grad(fn)(a))
    print('Numerical gradient: ', grad_num)
    print('Difference:         ', npa.linalg.norm(grad(fn)(a)-grad_num))
    

    returns a difference smaller than 1e-6 for any individual component of vs that is put in the return statement. However, it breaks for a more complicated function, e.g. return npa.abs(vs[0, 0] + vs[1, 1]).

    It would be great if someone can address this further. Still, for now this PR is a significant improvement in the behavior of the linalg.eigh function.

    opened by momchilmm 12
  • Calling `np.array(..., np.float64)` can fail

    Calling `np.array(..., np.float64)` can fail

    This works:

    import autograd.numpy as np
    from autograd import jacobian
    
    th = np.array([1., 2., 3., 4.])
    A = lambda th: [ [th[0], th[1]], [th[2], th[3]] ]
    B = lambda th: np.array(A(th), np.float64)
    jacobian(B)(th)
    

    But this does not:

    import autograd.numpy as np
    from autograd import jacobian
    
    th = np.array([1., 2., 3., 4])
    A = lambda th: [ [th[0], th[1]], [th[2], th[3]] ]
    B = lambda f: lambda th: np.array(f(th), np.float64)
    C = B(A)
    jacobian(C)(th)
    

    throwing AutogradHint: This error *might* be caused by assigning into arrays, which autograd doesn't support.

    I have a list of functions such as A() that return lists. What I want to do is wrap each of them into a function such as B() so that I get a numpy array on return. Can I achieve this another way?

    opened by konstunn 12
  • Is it possible to see the gradient function?

    Is it possible to see the gradient function?

    Hi, when I use autograd, is it possible to see its gradient function? Or, in other words, is it possible to see the derivative of that function? Or is it possible to see the computational graph?

    For example, I want to see grad_tanh function

    import autograd.numpy as np  # Thinly-wrapped numpy
    from autograd import grad    # The only autograd function you may ever need
    
    def tanh(x):                 # Define a function
        y = np.exp(-2.0 * x)
        return (1.0 - y) / (1.0 + y)
    
    grad_tanh = grad(tanh)       # Obtain its gradient function
    

    Thank you

    opened by Samuel-Bachorik 0
  • Gradient becomes NaN for 0 value test

    Gradient becomes NaN for 0 value test

    The following code is not working:

    import autograd.numpy as np
    from autograd import grad

    def loss(x):
        return np.linalg.norm(x) ** 2

    x = np.zeros([3])
    a = grad(loss)(x)
    print(a)

    Error Message:

    /Users/.local/lib/python3.6/site-packages/autograd/numpy/linalg.py:100: RuntimeWarning: invalid value encountered in double_scalars
      return expand(g / ans) * x
    array([nan, nan, nan])

    opened by Yuhang-7 1
  • support for Jax-like custom forward pass definition?

    support for Jax-like custom forward pass definition?

    Is there a way to define a custom forward pass, like in JAX, where one can output a residual that may be used by the backward pass?

    For example, is the following example (from the JAX docs) implementable in autograd?

    from jax import custom_vjp
    import jax.numpy as jnp

    @custom_vjp
    def f(x, y):
        return jnp.sin(x) * y

    def f_fwd(x, y):
        # Returns primal output and residuals to be used in backward pass by f_bwd.
        return f(x, y), (jnp.cos(x), jnp.sin(x), y)

    def f_bwd(res, g):
        cos_x, sin_x, y = res  # Gets residuals computed in f_fwd
        return (cos_x * g * y, sin_x * g)

    f.defvjp(f_fwd, f_bwd)
    
    opened by tylerflex 0
  • Evaluating a section of a jacobian

    Evaluating a section of a jacobian

    Let's say that I have some function f(x) that takes a vector x and returns a vector. I can evaluate the Jacobian of this function fairly simply, as demonstrated below.

    from autograd import numpy as np
    import autograd as ag

    def f(x):
        return np.array([x.sum(), (x[:3] ** 2).sum(), np.log(np.exp(x).sum())])

    xtest = np.array([0, .5, .3, .2])
    print(f(xtest))

    print(ag.jacobian(f)(xtest))

    My question is if there's some way of evaluating only some columns of this jacobian. For example, let's say I only wanted the first and last columns of it. So far I haven't found any way of evaluating this more efficiently than just evaluating the whole jacobian and throwing some away. If anyone can help please let me know!

    opened by darcykenworthy 0
  • Bug when raising zero to powers?

    Bug when raising zero to powers?

    Consider these two seemingly equivalent functions:

    def fa(x):
        return x ** 1.5
    
    def fb(x):
        return x * x ** 0.5
    

    We can see that one of them is differentiated ok, while the other produces warnings and nans at zero:

    >>> [ga, gb] = map(autograd.grad, [fa, fb])
    >>> print(ga(2.), ga(0.))
    2.121320343559643 0.0
    >>> print(gb(2.), gb(0.))
    .../autograd/numpy/numpy_vjps.py:59: RuntimeWarning: divide by zero encountered in power
      lambda ans, x, y : unbroadcast_f(x, lambda g: g * y * x ** anp.where(y, y - 1, 1.)),
    .../autograd/numpy/numpy_vjps.py:59: RuntimeWarning: invalid value encountered in double_scalars
      lambda ans, x, y : unbroadcast_f(x, lambda g: g * y * x ** anp.where(y, y - 1, 1.)),
    2.121320343559643 nan
    

    I'm not sure whether this is a proper fix, but if, in the defvjp call for anp.power, we change anp.where(y, y - 1, 1.) to anp.where(x, anp.where(y, y - 1, 1.), 1.), then gb(0.) does produce the same result as ga(0.).
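
    A runnable sketch of that guard, written here as a user-level primitive rather than as a patch to numpy_vjps.py (safe_power is a hypothetical name introduced only for illustration):

    import autograd.numpy as anp
    from autograd.extend import primitive, defvjp
    from autograd import grad

    @primitive
    def safe_power(x, y):
        return x ** y

    # Same VJP w.r.t. x as anp.power, but guarded on x as well, so 0 ** (y - 1)
    # is never evaluated for negative exponents. (Only the VJP w.r.t. x is defined.)
    defvjp(safe_power,
           lambda ans, x, y: lambda g: g * y * x ** anp.where(x, anp.where(y, y - 1, 1.), 1.))

    def fb(x):
        return x * safe_power(x, 0.5)

    print(grad(fb)(2.), grad(fb)(0.))   # ~2.1213 and 0.0, with no warnings or nans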

    I concede that 0 ** -0.5 and ZeroDivisionError in general is a delicate topic. But my fix still seems consistent to me. 🤷‍♂️

    opened by yairchu 0
  • ModuleNotFoundError

    ModuleNotFoundError

    Hi, I failed to run autograd's test cases due to a ModuleNotFoundError.

    [screenshot of the ModuleNotFoundError traceback]

    I tried to install all the requirements files but some packages were still missing. Can you complete the requirements file or suggest some other solutions?

    Thanks for your help. Best, SmartPycg

    opened by SmartPycg 0