tree-math: mathematical operations for JAX pytrees

Overview

tree-math makes it easy to implement numerical algorithms that work on JAX pytrees, such as iterative methods for optimization and equation solving. It does so by providing a wrapper class tree_math.Vector that defines array operations such as infix arithmetic and dot-products on pytrees as if they were vectors.

Why tree-math

In a library like SciPy, numerical algorithms are typically written to handle fixed-rank arrays, e.g., scipy.integrate.solve_ivp requires inputs of shape (n,). This is convenient for implementors of numerical methods, but not for users, because 1d arrays are typically not the best way to keep track of state for non-trivial functions (e.g., neural networks or PDE solvers).

tree-math provides an alternative to flattening and unflattening these more complex data structures ("pytrees") for use in numerical algorithms. Instead, the numerical algorithm itself can be written in a way that handles arbitrary collections of arrays stored in pytrees. This avoids unnecessary memory copies, and gives the user more control over the memory layouts used in computation. In practice, this often makes a big difference for computational efficiency as well, which is why support for flexible data structures is so prevalent inside libraries that use JAX.

Installation

tree-math is implemented in pure Python, and only depends upon JAX.

You can install it from PyPI: pip install tree-math.

User guide

tree-math is simple to use. Just pass arbitrary pytree objects into tree_math.Vector to create an object that supports arithmetic as if all leaves of the pytree were flattened and concatenated together:

>>> import tree_math as tm
>>> import jax.numpy as jnp
>>> v = tm.Vector({'x': 1, 'y': jnp.arange(2, 4)})
>>> v
tree_math.Vector({'x': 1, 'y': DeviceArray([2, 3], dtype=int32)})
>>> v + 1
tree_math.Vector({'x': 2, 'y': DeviceArray([3, 4], dtype=int32)})
>>> v.sum()
DeviceArray(6, dtype=int32)
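
Dot products work the same way, as if the leaves were flattened and concatenated into a single 1D vector:

>>> v @ v
DeviceArray(14, dtype=int32)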

You can also find a few functions defined on vectors in tree_math.numpy, which implements a very restricted subset of jax.numpy. If you're interested in more functionality, please open an issue to discuss before sending a pull request. (In the long term, this separate module might disappear if we can support Vector objects directly inside jax.numpy.)
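For instance, tnp.maximum (also used in the conjugate gradient example below) maps elementwise over the leaves of two vectors. A small sketch, assuming it mirrors jnp.maximum:

>>> import tree_math.numpy as tnp
>>> w = tm.Vector({'x': jnp.asarray(3), 'y': jnp.array([0, 5])})
>>> tnp.maximum(v, w)
tree_math.Vector({'x': DeviceArray(3, dtype=int32), 'y': DeviceArray([2, 5], dtype=int32)})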

Vector objects are pytrees themselves, which means they are compatible with JAX transformations like jit, vmap and grad, and control flow like while_loop and cond.
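
For example, here is a minimal sketch of jit-compiling and differentiating a function of a Vector (the function name is illustrative); the gradient comes back as a Vector with the same tree structure:

import jax
import jax.numpy as jnp
import tree_math as tm

def squared_norm(v):
  return v @ v  # tree-math dot product over all leaves

v = tm.Vector({'x': jnp.array(1.0), 'y': jnp.arange(2.0, 4.0)})
print(jax.jit(squared_norm)(v))   # 14.0
print(jax.grad(squared_norm)(v))  # 2 * v, as a tree_math.Vector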

When you're done manipulating vectors, you can pull out the underlying pytrees from the .tree property:

>>> v.tree
{'x': 1, 'y': DeviceArray([2, 3], dtype=int32)}

As an alternative to manipulating Vector objects directly, you can also use the functional transformations wrap and unwrap (see the "Example usage" below).
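
For instance, here is a minimal sketch (the normalize function and its argument name are illustrative, not part of the library): wrap converts the pytree argument x into a Vector on the way in and unwraps the Vector result on the way out.

import functools
import jax.numpy as jnp
import tree_math as tm

@functools.partial(tm.wrap, vector_argnames=['x'])
def normalize(x):
  # Inside the function, x is a tm.Vector; callers pass and receive plain pytrees.
  return x / (x @ x) ** 0.5

print(normalize({'a': jnp.array([3.0, 4.0])}))  # {'a': DeviceArray([0.6, 0.8], dtype=float32)}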

One important difference between tree_math and jax.numpy is that dot products in tree_math default to full precision on all platforms, rather than defaulting to bfloat16 precision on TPUs. This is useful for writing most numerical algorithms, and will likely be JAX's default behavior in the future.

In the near term, we also plan to add a Matrix class that will make it possible to use tree-math for numerical algorithms, such as L-BFGS, which use matrices to represent stacks of vectors.

Example usage

Here is how we could write the preconditioned conjugate gradient method. Notice how similar the implementation is to the pseudocode from Wikipedia, unlike the implementation in JAX:

import functools
from jax import lax
import tree_math as tm
import tree_math.numpy as tnp

@functools.partial(tm.wrap, vector_argnames=['b', 'x0'])
def cg(A, b, x0, M=lambda x: x, maxiter=5, tol=1e-5, atol=0.0):
  """jax.scipy.sparse.linalg.cg, written with tree_math."""
  A = tm.unwrap(A)
  M = tm.unwrap(M)

  atol2 = tnp.maximum(tol**2 * (b @ b), atol**2)

  def cond_fun(value):
    x, r, gamma, p, k = value
    return (r @ r > atol2) & (k < maxiter)

  def body_fun(value):
    x, r, gamma, p, k = value
    Ap = A(p)
    alpha = gamma / (p.conj() @ Ap)
    x_ = x + alpha * p
    r_ = r - alpha * Ap
    z_ = M(r_)
    gamma_ = r_.conj() @ z_
    beta_ = gamma_ / gamma
    p_ = z_ + beta_ * p
    return x_, r_, gamma_, p_, k + 1

  r0 = b - A(x0)
  p0 = z0 = M(r0)
  gamma0 = r0 @ z0
  initial_value = (x0, r0, gamma0, p0, 0)

  x_final, *_ = lax.while_loop(cond_fun, body_fun, initial_value)
  return x_final
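
Here is a hypothetical usage sketch (the diagonal operator A and the pytree layout are made up for illustration). A operates on plain pytrees, and b and x0 are plain pytrees too, since wrap converts them at the boundary:

import jax
import jax.numpy as jnp

def A(x):
  # A simple diagonal, positive-definite linear operator on a dict pytree.
  return {'u': 3.0 * x['u'], 'v': 2.0 * x['v']}

b = {'u': jnp.ones(2), 'v': jnp.ones(3)}
x0 = jax.tree_map(jnp.zeros_like, b)
x = cg(A, b, x0)
# x should approximately satisfy A(x) == b, i.e. {'u': [1/3, 1/3], 'v': [1/2, 1/2, 1/2]}
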
Comments
  • VectorMixin

    Hey! Continuing the discussion about VectorMixin for pytree classes, I see two ways to implement this:

    1. VectorMixin inherits from Vector and overrides the tree property to return self. Roughly implemented as:
    class VectorMixin(tm.Vector):
        """A mixin class for vector operations that works with any pytree class"""
    
        @property
        def tree(self):
            return self
    
2. VectorMixin does not inherit from Vector; instead of using .tree, it assumes self is a pytree and implements all the operators on that basis. Vector inherits from VectorMixin and implements the pytree protocol, roughly as:
    @tree_util.register_pytree_node_class
    class Vector(VectorMixin):
      """A wrapper for treating an arbitrary pytree as a 1D vector."""
    
      def __init__(self, tree):
        self._tree = tree
    
      def tree_flatten(self):
        return (self._tree,), None
    
      @classmethod
      def tree_unflatten(cls, _, args):
        return cls(*args)
      
      ...
    

The first option is the easiest; the second requires a refactor, but it's more in the spirit of what a mixin "should be". WDYT?

    opened by cgarciae 5
  • Unexpected input type for array?

    I'm not sure if I'm using this right, but here's what I have:

from collections.abc import Mapping
from typing import Any, Callable, TypeVar

from jax.scipy.optimize import minimize
from tree_math import Vector

T = TypeVar('T')
RealNumeric = Any  # placeholder for the real-scalar type alias in my codebase


def tree_minimize(fun: Callable[[T], RealNumeric], x0: T,
                  *,
                  method: str,
                  tol: float | None = None,
                  options: None | Mapping[str, Any] = None) -> T:
    wrapped = Vector(x0)
    def wrapped_fun(x: Vector) -> RealNumeric:
        return fun(x.tree)
    optimize_result = minimize(wrapped_fun, wrapped, method=method, tol=tol, options=options)
    return optimize_result.x.tree  # type: ignore[attr-defined]
    

    When I call this, I get:

      File "/home/neil/.cache/pypoetry/virtualenvs/cmm-tspD8tmv-py3.10/lib/python3.10/site-packages/jax/_src/scipy/optimize/minimize.py", line 103, in minimize
        results = minimize_bfgs(fun_with_args, x0, **options)
      File "/home/neil/.cache/pypoetry/virtualenvs/cmm-tspD8tmv-py3.10/lib/python3.10/site-packages/jax/_src/scipy/optimize/bfgs.py", line 102, in minimize_bfgs
        converged=jnp.linalg.norm(g_0, ord=norm) < gtol,
      File "/home/neil/.cache/pypoetry/virtualenvs/cmm-tspD8tmv-py3.10/lib/python3.10/site-packages/jax/_src/numpy/linalg.py", line 438, in norm
        x, = _promote_dtypes_inexact(jnp.asarray(x))
      File "/home/neil/.cache/pypoetry/virtualenvs/cmm-tspD8tmv-py3.10/lib/python3.10/site-packages/jax/_src/numpy/lax_numpy.py", line 1924, in asarray
        return array(a, dtype=dtype, copy=False, order=order)
      File "/home/neil/.cache/pypoetry/virtualenvs/cmm-tspD8tmv-py3.10/lib/python3.10/site-packages/jax/_src/numpy/lax_numpy.py", line 1903, in array
        raise TypeError(f"Unexpected input type for array: {type(object)}")
    TypeError: Unexpected input type for array: <class 'tree_math._src.vector.Vector'>
    

    I thought Vector was supposed to pretend to be a jax.Array? Is it impossible to add a __jax_array__ method to tree_math.VectorMixin?

    opened by NeilGirdhar 2
  • Add VectorMixin base class

    As discussed in #3 this PR adds the VectorBase class which both contains the base vector logic and can act as a mixin that can be applied to any pytree class. The Vector class is now a simple pytree container that inherits from VectorBase.

I've used VectorBase here as it reads better, but we can still consider the VectorMixin option.

    Pending:

    • [x] Add a test to check that this works with classes other than Vector.
    pull ready 
    opened by cgarciae 2
  • Refactor navier_stokes_explicit_terms() out of semi_implicit_navier_stokes().

    Enables additional options for writing systems of equations that use explicit Navier-Stokes, i.e. by writing a custom state via tree_math.

    opened by copybara-service[bot] 1
  • `tm.unwrap`: Error when `out_vectors` is list [documentation]

    import jax.numpy as jnp 
    import tree_math as tm
    
    def f(x, y):
      return x, y
      
    x = y = tm.Vector(jnp.array(0.))
    
    tm.unwrap(f, out_vectors = (True, False))(x, y)
    # (tree_math.Vector(DeviceArray(0., dtype=float32, weak_type=True)), DeviceArray(0., dtype=float32, weak_type=True))
    tm.unwrap(f, out_vectors = [True, False])(x, y)
    # ValueError: Expected list, got (DeviceArray(0., dtype=float32, weak_type=True), DeviceArray(0., dtype=float32, weak_type=True)).
    
    opened by deasmhumhna 1
  • import struct

    How should I go about importing struct? Thanks!

    My attempt below fails --

    import tree_math
    
    @tree_math.struct
    class Point: 
        x: float 
        y: float 
    

    AttributeError: module 'tree_math' has no attribute 'struct'

    opened by pharringtonp19 0
  • How well does tree-math support computation on multiple devices?

    Wondering how well tree-math supports computation on multiple devices?

Let's say we have a pytree of tensors of different dimensions and want to perform some operations on each of them with tree-math. Can we distribute those tasks across multiple devices (GPUs, for instance)?

    opened by connection-on-fiber-bundles 1
  • Transform for defining dataclasses with VectorMixin like flax.struct

    It would be nice to have an easy way to define dataclasses that are also tree-math vectors.

    We could borrow the syntax of flax.struct here: https://flax.readthedocs.io/en/latest/flax.struct.html

    Example usage:

    from tree_math import struct
    
    @struct
    class FluidState:
      velocity_x: Array
      velocity_y: Array
      pressure: Array
    
    opened by shoyer 1
  • Operations should allow shape broadcasting

It seems like the current implementation doesn't allow broadcasting arguments. Here's an example for normalizing leaves.

    import tree_math as tm
    import jax
    import jax.numpy as jnp
    
    a = jnp.ones(10)
    b = jnp.ones(5)
    
    v = tm.Vector({'a': a, 'b': b})
    v / jax.tree_map(jnp.linalg.norm, v)
    

    returns the following error:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-19-c2a8ad9c2f8f> in <module>()
    ----> 1 v / jax.tree_map(jnp.linalg.norm, v)
    
    2 frames
    /usr/local/lib/python3.7/dist-packages/tree_math/_src/vector.py in wrapper(self, other)
         72   """Implement a forward binary method, e.g., __add__."""
         73   def wrapper(self, other):
    ---> 74     return broadcasting_map(func, self, other)
         75   wrapper.__name__ = f"__{name}__"
         76   return wrapper
    
    /usr/local/lib/python3.7/dist-packages/tree_math/_src/vector.py in broadcasting_map(func, *args)
         65   if not vector_args:
         66     return func2()  # result is a scalar
    ---> 67   _flatten_together(*[arg.tree for arg in vector_args])  # check shapes
         68   return tree_util.tree_map(func2, *vector_args)
         69 
    
    /usr/local/lib/python3.7/dist-packages/tree_math/_src/vector.py in _flatten_together(*args)
         37   if not all(shapes == all_shapes[0] for shapes in all_shapes[1:]):
         38     shapes_str = " vs ".join(map(str, all_shapes))
    ---> 39     raise ValueError(f"tree leaves have different array shapes: {shapes_str}")
         40 
         41   return all_values, all_treedefs[0]
    
    ValueError: tree leaves have different array shapes: [(10,), (5,)] vs [(), ()]
    
    opened by GeoffNN 5