A library for differentiable nonlinear optimization.

Overview


Theseus

A library for differentiable nonlinear optimization built on PyTorch to support constructing various problems in robotics and vision as end-to-end differentiable architectures.

The current focus is on nonlinear least squares, with support for sparsity, batching, GPU acceleration, and several backward modes: unrolling, truncated, implicit, and sampling-based differentiation. This library is in beta, with a full release expected in mid-2022.

Getting Started

  • Prerequisites

    • We strongly recommend installing Theseus in a venv or conda environment.
    • Theseus requires a torch installation. To install it for your particular CPU/CUDA configuration, follow the instructions on the PyTorch website.
  • Installing

    git clone https://github.com/facebookresearch/theseus.git && cd theseus
    pip install -e .
  • Running unit tests

    pytest theseus
  • See tutorials and examples to learn about the API and usage.
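Conceptually, the nonlinear least-squares machinery at the core of the library can be sketched in a few lines of plain Python. The toy Gauss-Newton loop below is illustrative only (it is not Theseus's API; see the tutorials for that) and fits y = exp(a * x) to noise-free data:

```python
import math

def gauss_newton_1d(xs, ys, a0, iters=30):
    """Fit y = exp(a * x) by Gauss-Newton on the residuals r_i = y_i - exp(a * x_i)."""
    a = a0
    for _ in range(iters):
        r = [y - math.exp(a * x) for x, y in zip(xs, ys)]
        J = [-x * math.exp(a * x) for x in xs]   # dr_i/da
        JTJ = sum(j * j for j in J)
        JTr = sum(j * ri for j, ri in zip(J, r))
        a -= JTr / JTJ                           # solve the 1-D normal equations
    return a

xs = [0.1 * i for i in range(10)]
ys = [math.exp(0.7 * x) for x in xs]             # data generated with a = 0.7
a_hat = gauss_newton_1d(xs, ys, a0=0.0)          # converges close to 0.7
```

Theseus wraps this kind of iteration in differentiable optimizers so the solve itself can sit inside an end-to-end trained model.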

Additional Information

  • Use GitHub issues for questions, suggestions, and bug reports.
  • See CONTRIBUTING if interested in helping out.
  • Theseus is being developed with the help of many contributors, see THANKS.

License

Theseus is MIT licensed. See the LICENSE for details.

Comments
  • Override Vector operators in Point2 and Point3

    Override Vector operators in Point2 and Point3

    Motivation and Context

    Overrides vector operations for Point2, Point3 #113

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed 
    opened by jeffin07 15
  • error and errorSquaredNorm can optionally take in variable data

    error and errorSquaredNorm can optionally take in variable data

    🚀 Feature

    API improvement in Objective: error and errorSquaredNorm can optionally take in var_data, which, if passed, calls update internally. Document the behavior that passing var_data updates the objective.

    Motivation

    Facilitates usage for cases where only the error needs to be queried (without running optimization or even updating the variables).

    Pitch

    Ways to use this API afterwards:

    1. Get the error on the current internal values: call error without passing any var_data.
    2. Get the error on new values, with an update to the objective: call error and pass var_data.
    3. Get the error on new values, without updating the objective: call error, pass var_data, and set the optional flag that skips the update.
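The three cases can be sketched with a toy stand-in class (the names, signature, and flag below are illustrative, not Theseus's final API):

```python
class ToyObjective:
    """Illustrative stand-in for Objective; error() optionally accepts new data."""
    def __init__(self, values):
        self._values = dict(values)

    def update(self, var_data):
        self._values.update(var_data)

    def error(self, var_data=None, also_update=True):
        if var_data is None:
            values = self._values                  # case 1: current internal values
        elif also_update:
            self.update(var_data)                  # case 2: new values, objective updated
            values = self._values
        else:
            values = {**self._values, **var_data}  # case 3: new values, objective untouched
        return sum(v * v for v in values.values())  # placeholder squared-norm "error"

obj = ToyObjective({"a": 1.0})
obj.error()                             # case 1
obj.error({"a": 2.0}, also_update=False)  # case 3: internal state unchanged
obj.error({"a": 2.0})                   # case 2: internal state now updated
```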
    enhancement good first issue 
    opened by luisenp 13
  • Allow constant inputs to cost functions to be passed as floats

    Allow constant inputs to cost functions to be passed as floats

    Motivation and Context

    To close #38
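One plausible shape for this change is a small coercion helper applied to cost-function inputs at construction time. The sketch below is hedged: the Variable class here is a minimal stand-in, not Theseus's class, and as_variable is a hypothetical name:

```python
class Variable:
    """Minimal stand-in for a wrapped tensor-like value."""
    def __init__(self, data, name=None):
        self.data = data
        self.name = name

def as_variable(value, name=None):
    # Accept raw floats (and ints) anywhere a Variable is expected;
    # pass existing Variables through unchanged.
    if isinstance(value, (int, float)):
        return Variable(float(value), name=name)
    return value
```

A cost function's constructor could then call this on each constant input, so users can write `0.5` instead of constructing a wrapped variable by hand.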

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [x] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [x] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed refactor 
    opened by jeffin07 10
  • error and errorSquaredNorm optional data

    error and errorSquaredNorm optional data

    Motivation and Context

    API improvement in Objective: error and errorSquaredNorm can optionally take in var_data which if passed would call update internally. Document the behavior that if var_data is passed this will update the objective. #4

    How Has This Been Tested

    Tested using unit tests and a custom example.

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [x] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by jeffin07 10
  • Updating just `aux_vars` isn't sufficient to re-solve with some data changed

    Updating just `aux_vars` isn't sufficient to re-solve with some data changed

    🐛 Bug

    Using the quadratic fit example, I thought it would be reasonable to update the data in just aux_vars and re-solve, but it seems like there's a dependence on the global data_x.

    Steps to Reproduce

    I included an MWE below; with only aux_inputs changed, it outputs the incorrect solution shown in the middle block:

    optimal a:  tensor([[1.0076]], grad_fn=<AddBackward0>)
    == Only changing aux_vars["x"] (this should not be the same solution)
    optimal a:  tensor([[1.0076]], grad_fn=<AddBackward0>)
    == Globally updating data_x (this is the correct solution)
    optimal a:  tensor([[0.0524]], grad_fn=<AddBackward0>)
    

    Expected behavior

    I was pretty confused at first when my code wasn't working and didn't realize it was because of this. We should make updating aux_inputs sufficient to re-solve the problem, or, if this is challenging, we should consider 1) raising a warning/adding a check when aux_inputs doesn't match, or 2) removing the duplicated passing of aux_inputs when it doesn't do anything.
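The pitfall can be distilled into a toy sketch (plain Python, no Theseus): a layer that holds references to variable objects will silently ignore a freshly passed dict unless its forward explicitly copies the new data into those references.

```python
class Var:
    def __init__(self, data):
        self.data = data

class ToyLayer:
    def __init__(self, var):
        self.var = var                      # the layer keeps a reference to the variable

    def forward_stale(self, aux_vars=None):
        # Buggy pattern: aux_vars is accepted but never written into self.var.
        return self.var.data * 2

    def forward_fixed(self, aux_vars=None):
        # Fixed pattern: explicitly update the referenced variable first.
        if aux_vars is not None:
            self.var.data = aux_vars["x"]
        return self.var.data * 2

v = Var(1.0)
layer = ToyLayer(v)
layer.forward_stale(aux_vars={"x": 5.0})    # new data silently ignored
v.data = 5.0                                # in-place "global" update does take effect
layer.forward_stale()
```

This mirrors the MWE below: mutating data_x in place works because the cost function still reads the original tensor, while passing a new dict alone does not.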

    Code

    #!/usr/bin/env python3
    
    import torch
    import theseus as th
    import theseus.optimizer.nonlinear as thnl
    
    import numpy as np
    import numdifftools as nd
    
    def generate_data(num_points=10, a=1., b=0.5, noise_factor=0.01):
        data_x = torch.rand((1, num_points))
        noise = torch.randn((1, num_points)) * noise_factor
        data_y = a * data_x.square() + b + noise
        return data_x, data_y
    
    num_points = 10
    data_x, data_y = generate_data(num_points)
    
    x = th.Variable(data_x.requires_grad_(), name="x")
    y = th.Variable(data_y.requires_grad_(), name="y")
    a = th.Vector(1, name="a")
    b = th.Vector(1, name="b")
    
    def quad_error_fn(optim_vars, aux_vars):
        a, b = optim_vars
        x, y = aux_vars
        est = a.data * x.data.square() + b.data
        err = y.data - est
        return err
    
    optim_vars = a, b
    aux_vars = x, y
    cost_function = th.AutoDiffCostFunction(
        optim_vars, quad_error_fn, num_points, aux_vars=aux_vars, name="quadratic_cost_fn"
    )
    objective = th.Objective()
    objective.add(cost_function)
    optimizer = th.GaussNewton(
        objective,
        max_iterations=15,
        step_size=0.5,
    )
    
    theseus_inputs = {
        "a": 2 * torch.ones((1, 1)).requires_grad_(),
        "b": torch.ones((1, 1)).requires_grad_(),
    }
    aux_vars = {
        "x": data_x,
        "y": data_y,
    }
    theseus_optim = th.TheseusLayer(optimizer)
    updated_inputs, info = theseus_optim.forward(
        theseus_inputs, aux_vars=aux_vars,
        track_best_solution=True, verbose=False,
        backward_mode=thnl.BackwardMode.FULL,
    )
    print('optimal a: ', updated_inputs['a'])
    
    aux_vars = {
        "x": data_x + 10.,
        "y": data_y,
    }
    updated_inputs, info = theseus_optim.forward(
        theseus_inputs, aux_vars=aux_vars,
        track_best_solution=True, verbose=False,
        backward_mode=thnl.BackwardMode.FULL,
    )
    print('== Only changing aux_vars["x"] (this should not be the same solution)')
    print('optimal a: ', updated_inputs['a'])
    
    data_x.data += 10.
    aux_vars = {
        "x": data_x,
        "y": data_y,
    }
    updated_inputs, info = theseus_optim.forward(
        theseus_inputs, aux_vars=aux_vars,
        track_best_solution=True, verbose=False,
        backward_mode=thnl.BackwardMode.FULL,
    )
    print('== Globally updating data_x (this is the correct solution)')
    print('optimal a: ', updated_inputs['a'])
    
    documentation question refactor 
    opened by bamos 7
  • Refactored SE3.log_map_impl() to avoid in place operations

    Refactored SE3.log_map_impl() to avoid in place operations

    Fixes @exhaustin torch backward errors in this script.

    The script still has other errors: the final system is not positive definite, so it cannot be solved with CholeskyDense. It is unclear whether this is related to the Lie groups or something else.

    bug CLA Signed 
    opened by luisenp 6
  • Installation in Docker : error in compilation of extlib/mat_mul.cu

    Installation in Docker : error in compilation of extlib/mat_mul.cu

    Hi! Would it be possible to have a Dockerfile with the right config? I've tried many compatible versions of PyTorch and CUDA, but I always get the same error when building theseus-ai. In my last trial, I started from the NVIDIA NGC PyTorch container nvcr.io/nvidia/pytorch:21.06-py3 (Ubuntu 20.04 with CUDA 11.3 and Python 3.8) and reinstalled torch==1.10.1+cu113.

    Here's the full error :

        /home/dir/theseus/theseus/extlib/mat_mult.cu(74): error: no instance of overloaded function "atomicAdd" matches the argument list
                    argument types are: (double *, double)
        /home/dir/theseus/theseus/extlib/mat_mult.cu(239): error: no instance of overloaded function "atomicAdd" matches the argument list
                    argument types are: (double *, double)
        2 errors detected in the compilation of "/home/dir/theseus/theseus/extlib/mat_mult.cu".
        ninja: build stopped: subcommand failed.
        Traceback (most recent call last):
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1717, in _run_ninja_build
            subprocess.run(
          File "/opt/conda/lib/python3.8/subprocess.py", line 516, in run
            raise CalledProcessError(retcode, process.args,
        subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
        The above exception was the direct cause of the following exception:
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/home/dir/theseus/setup.py", line 60, in <module>
            setuptools.setup(
          File "/opt/conda/lib/python3.8/site-packages/setuptools/__init__.py", line 163, in setup
            return distutils.core.setup(**attrs)
          File "/opt/conda/lib/python3.8/distutils/core.py", line 148, in setup
            dist.run_commands()
          File "/opt/conda/lib/python3.8/distutils/dist.py", line 966, in run_commands
            self.run_command(cmd)
          File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
            cmd_obj.run()
          File "/opt/conda/lib/python3.8/site-packages/setuptools/command/develop.py", line 38, in run
            self.install_for_development()
          File "/opt/conda/lib/python3.8/site-packages/setuptools/command/develop.py", line 140, in install_for_development
            self.run_command('build_ext')
          File "/opt/conda/lib/python3.8/distutils/cmd.py", line 313, in run_command
            self.distribution.run_command(command)
          File "/opt/conda/lib/python3.8/distutils/dist.py", line 985, in run_command
            cmd_obj.run()
          File "/opt/conda/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 87, in run
            _build_ext.run(self)
          File "/opt/conda/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
            _build_ext.build_ext.run(self)
          File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 340, in run
            self.build_extensions()
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 735, in build_extensions
            build_ext.build_extensions(self)
          File "/opt/conda/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 194, in build_extensions
            self.build_extension(ext)
          File "/opt/conda/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
            _build_ext.build_extension(self, ext)
          File "/opt/conda/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
            objects = self.compiler.compile(sources,
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 556, in unix_wrap_ninja_compile
            _write_ninja_file_and_compile_objects(
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1399, in _write_ninja_file_and_compile_objects
            _run_ninja_build(
          File "/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1733, in _run_ninja_build
            raise RuntimeError(message) from e
        RuntimeError: Error compiling objects for extension
        ----------------------------------------
    ERROR: Command errored out with exit status 1: /opt/conda/bin/python3.8 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/dir/theseus/setup.py'"'"'; __file__='"'"'/home/dir/theseus/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
    
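For context (this is general CUDA background, not necessarily the fix the maintainers chose): this compiler error typically means the extension is being built for a GPU architecture below sm_60, where atomicAdd on double is not a built-in. Two common remedies are restricting the build to sm_60+ architectures (e.g. via the TORCH_CUDA_ARCH_LIST environment variable) or adding the compare-and-swap emulation documented in NVIDIA's CUDA C++ Programming Guide:

```cuda
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ < 600
// Emulate double-precision atomicAdd on pre-Pascal GPUs via atomicCAS.
__device__ double atomicAdd(double* address, double val) {
    unsigned long long int* address_as_ull = (unsigned long long int*)address;
    unsigned long long int old = *address_as_ull, assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
    } while (assumed != old);  // retry until no other thread intervened
    return __longlong_as_double(old);
}
#endif
```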
    
    
    opened by fmagera 5
  • Homography example with functorch

    Homography example with functorch

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed 
    opened by fantaosha 5
  • Add robust cost function

    Add robust cost function

    Motivation and Context
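
Robust cost functions downweight large residuals so that outliers do not dominate the least-squares objective. The classic example is the Huber loss, sketched below as a generic recipe (this is not this PR's implementation):

```python
def huber(r, delta=1.0):
    """Huber robust loss: quadratic for |r| <= delta, linear beyond,
    which bounds the influence of any single outlier."""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)
```

Inliers keep their ordinary squared cost, while an outlier's cost grows only linearly with its residual.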

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed 
    opened by fantaosha 5
  • Add ManifoldGaussian class for messages in belief propagation

    Add ManifoldGaussian class for messages in belief propagation

    Motivation and Context

    It would be useful to have an optional covariance / precision matrix as part of the Manifold class, as Gaussian belief propagation involves sending Gaussian distributions over the manifold variables. Currently I'm using a wrapper Gaussian class, but could it be more widely useful to have the covariance / precision matrix as an attribute of the Manifold class?
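For background, belief-propagation messages are usually combined in information (canonical) form, where precisions and information vectors simply add. A minimal 1-D sketch (purely illustrative, unrelated to whatever API ManifoldGaussian ends up with):

```python
def combine_info(eta1, lam1, eta2, lam2):
    """Product of two 1-D Gaussians in information form.
    eta = lam * mean is the information vector, lam the precision;
    the product's parameters are just the element-wise sums."""
    return eta1 + eta2, lam1 + lam2

# Combining N(mean=1, var=1) with N(mean=3, var=1):
eta, lam = combine_info(1.0, 1.0, 3.0, 1.0)
posterior_mean = eta / lam   # recover the mean from information form
```

On a manifold, the same update would act on tangent-space quantities, which is why attaching a precision matrix to manifold variables is convenient for GBP.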

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by joeaortiz 5
  • Override `Vector` operators in `Point2` and `Point3` with the correct return type

    Override `Vector` operators in `Point2` and `Point3` with the correct return type

    🚀 Feature

    Something like

    from typing import cast

    class Point2(Vector):
        ...

        def __add__(self, other: Vector) -> "Point2":
            return cast(Point2, super().__add__(other))
    

    Motivation

    Eliminates unnecessary casting when using typing.

    Alternatives

    There might be some way for mypy to do the correct thing that wouldn't require overriding these methods.
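One such alternative is annotating Vector.__add__ with a TypeVar bound to the instance type, so every subclass gets the narrowed return type without per-class overrides. A toy sketch (the Vector here is a stand-in; whether this fits the real class hierarchy is an assumption):

```python
from typing import TypeVar

TVector = TypeVar("TVector", bound="Vector")

class Vector:
    def __init__(self, data: float):
        self.data = data

    def __add__(self: TVector, other: "Vector") -> TVector:
        # Construct the result from the *runtime* subclass, so Point2 + Point2
        # both is and is typed as Point2, with no cast at call sites.
        return type(self)(self.data + other.data)

class Point2(Vector):
    pass

p = Point2(1.0) + Point2(2.0)   # mypy infers Point2 here
```

On newer Python/mypy versions, `typing.Self` expresses the same idea more directly.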

    Additional context

    enhancement good first issue 
    opened by luisenp 5
  • Add differentiable forward kinematics

    Add differentiable forward kinematics

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by fantaosha 0
  • CUDA kernel for differentiable sparse matrix vector product

    CUDA kernel for differentiable sparse matrix vector product

    The backward pass can be made more efficient on GPU if we write a custom CUDA kernel for it, but this should be reasonable enough for now.

    Originally posted by @luisenp in https://github.com/facebookresearch/theseus/pull/392
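For reference, the backward pass of a sparse matrix-vector product y = A x is itself sparse: grad_x = Aᵀ grad_y, and the gradient of each stored value A[i, j] is grad_y[i] * x[j]. A plain-Python CSR sketch of both passes (illustrative; the kernel discussed here would compute the same quantities on GPU):

```python
def spmv(crow, col, vals, x):
    """y = A @ x for a CSR matrix (crow: row pointers, col: column indices)."""
    y = [0.0] * (len(crow) - 1)
    for i in range(len(y)):
        for k in range(crow[i], crow[i + 1]):
            y[i] += vals[k] * x[col[k]]
    return y

def spmv_backward(crow, col, vals, x, grad_y):
    """Gradients of y = A @ x w.r.t. x and the stored values of A."""
    grad_x = [0.0] * len(x)
    grad_vals = [0.0] * len(vals)
    for i in range(len(grad_y)):
        for k in range(crow[i], crow[i + 1]):
            grad_x[col[k]] += vals[k] * grad_y[i]   # accumulate A^T grad_y
            grad_vals[k] = grad_y[i] * x[col[k]]    # only entries stored in A
    return grad_x, grad_vals

# A = [[1, 0], [2, 3]] in CSR form:
crow, col, vals = [0, 1, 3], [0, 0, 1], [1.0, 2.0, 3.0]
x = [1.0, 2.0]
y = spmv(crow, col, vals, x)
gx, gv = spmv_backward(crow, col, vals, x, [1.0, 1.0])
```

The accumulation into grad_x is what a GPU kernel would need atomics (or a transposed traversal) for.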

    opened by mhmukadam 0
  • Add robot model

    Add robot model

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    CLA Signed 
    opened by fantaosha 0
  • Add prismatic joint

    Add prismatic joint

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by fantaosha 0
  • Add revolute joint

    Add revolute joint

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by fantaosha 0
  • Add se3.log()

    Add se3.log()

    Motivation and Context

    How Has This Been Tested

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [ ] My code follows the code style of this project.
    • [ ] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have completed my CLA (see CONTRIBUTING)
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    enhancement CLA Signed 
    opened by fantaosha 0
Releases(0.1.3)
  • 0.1.3(Nov 9, 2022)

    Major Updates

    • Adaptive damping for Levenberg-Marquardt by @luisenp in https://github.com/facebookresearch/theseus/pull/328
    • Moved all unit tests to a separate folder by @luisenp in https://github.com/facebookresearch/theseus/pull/352
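
Adaptive damping refers to adjusting the Levenberg-Marquardt parameter per iteration based on whether a step reduced the cost. A 1-D sketch of the generic textbook scheme (not necessarily the exact update rule in PR #328), fitting y = exp(a * x):

```python
import math

def lm_fit(xs, ys, a0, iters=50):
    """Levenberg-Marquardt with simple adaptive damping: shrink lam after a
    step that reduces the cost, grow it after a step that doesn't."""
    def cost(a):
        return sum((y - math.exp(a * x)) ** 2 for x, y in zip(xs, ys))

    a, lam = a0, 1e-3
    for _ in range(iters):
        r = [y - math.exp(a * x) for x, y in zip(xs, ys)]
        J = [-x * math.exp(a * x) for x in xs]
        JTJ = sum(j * j for j in J)
        JTr = sum(j * ri for j, ri in zip(J, r))
        step = -JTr / (JTJ + lam)          # damped normal equations
        if cost(a + step) < cost(a):
            a, lam = a + step, lam * 0.5   # good step: trust the quadratic model more
        else:
            lam *= 10.0                    # bad step: damp harder (more gradient-like)
    return a

xs = [0.1 * i for i in range(10)]
ys = [math.exp(0.7 * x) for x in xs]       # data generated with a = 0.7
a_lm = lm_fit(xs, ys, a0=0.0)
```

Large lam makes the step small and gradient-descent-like; small lam recovers the Gauss-Newton step.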

    Other Changes

    • Removed manual cmake install for CPU tests. by @luisenp in https://github.com/facebookresearch/theseus/pull/338
    • Fixed vmap related bug breaking homography ex. with sparse solvers. by @luisenp in https://github.com/facebookresearch/theseus/pull/337
    • Small vectorization improvements by @luisenp in https://github.com/facebookresearch/theseus/pull/336
    • Change CI to separately handle torch >= 1.13 by @luisenp in https://github.com/facebookresearch/theseus/pull/345
    • Fixed quaternion bug at pi by @fantaosha in https://github.com/facebookresearch/theseus/pull/344
    • Expose Lie Groups checks at root level by @luisenp in https://github.com/facebookresearch/theseus/pull/335
    • Added option for making LieGroup checks silent. by @luisenp in https://github.com/facebookresearch/theseus/pull/351
    • Added a few other CUDA versions to build script. by @luisenp in https://github.com/facebookresearch/theseus/pull/349
    • Set vectorization off by default when using optimizers w/o TheseusLayer by @luisenp in https://github.com/facebookresearch/theseus/pull/350
    • Some more cleanup before 0.1.3 by @luisenp in https://github.com/facebookresearch/theseus/pull/353
    • Add tests for wheels in CI by @luisenp in https://github.com/facebookresearch/theseus/pull/354
    • #355 Add device parameter to UrdfRobotModel by @thomasweng15 in https://github.com/facebookresearch/theseus/pull/356
    • Added th.device to represent both str and torch.device. by @luisenp in https://github.com/facebookresearch/theseus/pull/357
    • Update to 0.1.3 by @luisenp in https://github.com/facebookresearch/theseus/pull/358

    New Contributors

    • @thomasweng15 made their first contribution in https://github.com/facebookresearch/theseus/pull/356

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.2...0.1.3

    Source code(tar.gz)
    Source code(zip)
  • 0.1.2(Oct 20, 2022)

    Major Updates

    • Add support for BaSpaCho sparse solver by @maurimo in https://github.com/facebookresearch/theseus/pull/324
    • Set vmap as the default autograd mode for autodiff cost function. by @luisenp in https://github.com/facebookresearch/theseus/pull/313

    Other Changes

    • Changed homography example to allow benchmarking only cost computation by @luisenp in https://github.com/facebookresearch/theseus/pull/311
    • Run isort on all files by @luisenp in https://github.com/facebookresearch/theseus/pull/312
    • Added usort:skip tags. by @luisenp in https://github.com/facebookresearch/theseus/pull/314
    • Fixed checkout tag syntax in build script. by @luisenp in https://github.com/facebookresearch/theseus/pull/315
    • Removed redundant directory in homography gif save. by @luisenp in https://github.com/facebookresearch/theseus/pull/316
    • Fixing simple example by @Gralerfics in https://github.com/facebookresearch/theseus/pull/320
    • AutodiffCostFunction now expands tensors with batch size 1 before running vmap by @luisenp in https://github.com/facebookresearch/theseus/pull/327
    • Deprecated FULL backward mode (now UNROLL). by @luisenp in https://github.com/facebookresearch/theseus/pull/332
    • Using better names for CHOLMOD solver python files. by @luisenp in https://github.com/facebookresearch/theseus/pull/333

    New Contributors

    • @Gralerfics made their first contribution in https://github.com/facebookresearch/theseus/pull/320

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.1...0.1.2

    Source code(tar.gz)
    Source code(zip)
  • 0.1.1(Sep 28, 2022)

    Highlights

    • Added pip install theseus-ai instructions. by @luisenp in https://github.com/facebookresearch/theseus/pull/276
    • Add functorch support for AutoDiffCostFunction by @fantaosha in https://github.com/facebookresearch/theseus/pull/268
    • Profile AutoDiffCostFunction and refactor the homography example by @fantaosha in https://github.com/facebookresearch/theseus/pull/296

    What's Changed

    • update pose graph data link by @fantaosha in https://github.com/facebookresearch/theseus/pull/256
    • [homography] Use kornia lib properly for perspective transform by @ddetone in https://github.com/facebookresearch/theseus/pull/258
    • Small Update to 00_introduction.ipynb by @NeilPandya in https://github.com/facebookresearch/theseus/pull/259
    • Changed SDF constructor to accept more convenient data types. by @luisenp in https://github.com/facebookresearch/theseus/pull/260
    • Fixed small bugs in MotionPlanner class by @luisenp in https://github.com/facebookresearch/theseus/pull/261
    • Added option to visualize SDF to trajectory visualization function by @luisenp in https://github.com/facebookresearch/theseus/pull/263
    • update readme by @mhmukadam in https://github.com/facebookresearch/theseus/pull/264
    • Added MotionPlanner.forward() method. by @luisenp in https://github.com/facebookresearch/theseus/pull/267
    • Small bug fixes and tweaks to generate_trajectory_figs. by @luisenp in https://github.com/facebookresearch/theseus/pull/271
    • Added a script for building wheels on a new docker image. by @luisenp in https://github.com/facebookresearch/theseus/pull/257
    • Bugfix: homography estimation - create data folder before downloading data by @luizgh in https://github.com/facebookresearch/theseus/pull/275
    • Added pip install theseus-ai instructions. by @luisenp in https://github.com/facebookresearch/theseus/pull/276
    • Refactored MotionPlanner so that objective can be passed separately. by @luisenp in https://github.com/facebookresearch/theseus/pull/277
    • add numel() to Manifold and Lie groups by @fantaosha in https://github.com/facebookresearch/theseus/pull/280
    • Add support for SE2 poses in Collision2D by @luisenp in https://github.com/facebookresearch/theseus/pull/278
    • Probabilistically correct SO(3) sampling by @brentyi in https://github.com/facebookresearch/theseus/pull/286
    • Refactor SO3 and SE3 to be consistent with functorch by @fantaosha in https://github.com/facebookresearch/theseus/pull/266
    • Add SE2 support in MotionPlanner by @luisenp in https://github.com/facebookresearch/theseus/pull/282
    • Fixed bug in visualization of SE2 motion plans. by @luisenp in https://github.com/facebookresearch/theseus/pull/293
    • Add functorch support for AutoDiffCostFunction by @fantaosha in https://github.com/facebookresearch/theseus/pull/268
    • Changed requirements so that main.txt only includes essential dependencies by @luisenp in https://github.com/facebookresearch/theseus/pull/294
    • Add to_quaternion, rotation, translation and convention comment by @fantaosha in https://github.com/facebookresearch/theseus/pull/295
    • Added th.as_variable() function to simplify creating new variables. by @luisenp in https://github.com/facebookresearch/theseus/pull/299
    • Added an optional end-of-step callback to NonlinearOptimizer.optimize(). by @luisenp in https://github.com/facebookresearch/theseus/pull/297
    • Add AutogradMode to AutoDiffCostFunction by @fantaosha in https://github.com/facebookresearch/theseus/pull/300
    • Profile AutoDiffCostFunction and refactor the homography example by @fantaosha in https://github.com/facebookresearch/theseus/pull/296
    • Changed unit tests so that the batch sizes to tests are defined in a central import by @luisenp in https://github.com/facebookresearch/theseus/pull/298
    • enhance the efficiency of Objectve.add() by @Christopher6488 in https://github.com/facebookresearch/theseus/pull/303
    • Added missing end newlines by @luisenp in https://github.com/facebookresearch/theseus/pull/307
    • Rename BackwardMode.FULL --> UNROLL and simplify backward mode config by @luisenp in https://github.com/facebookresearch/theseus/pull/305
    • Simplified autograd mode specification. by @luisenp in https://github.com/facebookresearch/theseus/pull/306
    • Clean up test_theseus_layer by @luisenp in https://github.com/facebookresearch/theseus/pull/308
    • update readme and bump version by @mhmukadam in https://github.com/facebookresearch/theseus/pull/309

    New Contributors

    • @NeilPandya made their first contribution in https://github.com/facebookresearch/theseus/pull/259
    • @luizgh made their first contribution in https://github.com/facebookresearch/theseus/pull/275
    • @brentyi made their first contribution in https://github.com/facebookresearch/theseus/pull/286
    • @Christopher6488 made their first contribution in https://github.com/facebookresearch/theseus/pull/303

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.0...0.1.1

    Source code(tar.gz)
    Source code(zip)
  • 0.1.0(Jul 20, 2022)

    What's Changed

    • Add SO3 support by @luisenp in https://github.com/facebookresearch/theseus/pull/46
    • Taoshaf.add so3 class by @fantaosha in https://github.com/facebookresearch/theseus/pull/65
    • Add SO3 rotate and unrotate by @fantaosha in https://github.com/facebookresearch/theseus/pull/57
    • add so3._hat_matrix_check() by @fantaosha in https://github.com/facebookresearch/theseus/pull/59
    • Encapsulated data loading functions in tactile pushing example into a new class by @luisenp in https://github.com/facebookresearch/theseus/pull/51
    • Added a TactilePoseEstimator class to easily create TheseusLayer for tactile pushing by @luisenp in https://github.com/facebookresearch/theseus/pull/52
    • Refactor tactile pushing model interface by @luisenp in https://github.com/facebookresearch/theseus/pull/55
    • Minor fixes 02/01/2022 by @luisenp in https://github.com/facebookresearch/theseus/pull/66
    • add adjoint, hat and vee for SE3 by @fantaosha in https://github.com/facebookresearch/theseus/pull/68
    • Add SE3.exp_map() and SE3.log_map() by @fantaosha in https://github.com/facebookresearch/theseus/pull/71
    • Add SE3.compose() by @fantaosha in https://github.com/facebookresearch/theseus/pull/72
    • add SE3.transform_from and SE3.transform_to by @fantaosha in https://github.com/facebookresearch/theseus/pull/80
    • Updated to CircleCI's next-gen images. by @luisenp in https://github.com/facebookresearch/theseus/pull/89
    • Updated README with libsuitesparse installation instructions. by @luisenp in https://github.com/facebookresearch/theseus/pull/90
    • Added kwarg to NonlinearOptimizer.optimizer() for tracking error history by @luisenp in https://github.com/facebookresearch/theseus/pull/82
    • Merge infos results for truncated backward modes by @luisenp in https://github.com/facebookresearch/theseus/pull/83
    • Fix Issue 88 by @maurimo in https://github.com/facebookresearch/theseus/pull/97
    • Fixed bug in Variable.update() that was breaking torch graph... by @luisenp in https://github.com/facebookresearch/theseus/pull/96
    • Forced Gauss-Newton step for last iterations of truncated backward. by @luisenp in https://github.com/facebookresearch/theseus/pull/81
    • Add automatic differentiation on the Lie group tangent space by @fantaosha in https://github.com/facebookresearch/theseus/pull/74
    • Add rand() to LieGroup by @fantaosha in https://github.com/facebookresearch/theseus/pull/95
    • Fix SO2 rotate and unrotate jacobian by @fantaosha in https://github.com/facebookresearch/theseus/pull/58
    • Add projection for sparse Jacobian matrices by @fantaosha in https://github.com/facebookresearch/theseus/pull/98
    • Add LieGroup Support for AutoDiffFunction by @fantaosha in https://github.com/facebookresearch/theseus/pull/99
    • Add SE2.transform_from() and fix shape bugs in SE2.transform_to() by @fantaosha in https://github.com/facebookresearch/theseus/pull/103
    • Switch SE3.transform_from and SE3.transform_to by @fantaosha in https://github.com/facebookresearch/theseus/pull/104
    • Enabled back ellipsoidal damping in LM with linear solvers support checks by @luisenp in https://github.com/facebookresearch/theseus/pull/87
    • Changed name of pytest mark for CUDA extensions. by @luisenp in https://github.com/facebookresearch/theseus/pull/102
    • Fix backpropagation bugs in SO3 and SE3 log_map by @fantaosha in https://github.com/facebookresearch/theseus/pull/109
    • Add analytical jacobians for LieGroup.exp_map by @fantaosha in https://github.com/facebookresearch/theseus/pull/110
    • error and errorsquaredNorm optional data by @jeffin07 in https://github.com/facebookresearch/theseus/pull/105
    • Add analytical derivatives for LieGroup.log_map() by @fantaosha in https://github.com/facebookresearch/theseus/pull/114
    • Refactor SO3.to_quaternion to fix backward bugs and improve the accuracy around pi by @fantaosha in https://github.com/facebookresearch/theseus/pull/116
    • Change RobotModel.forward_kinematics() interface by @luisenp in https://github.com/facebookresearch/theseus/pull/94
    • Added error handling for missing matplotlib and omegaconf installation when using theg by @luisenp in https://github.com/facebookresearch/theseus/pull/119
    • Add jacobians argument to exp_map and log_map functions by @joeaortiz in https://github.com/facebookresearch/theseus/pull/122
    • Add pose graph optimization example by @fantaosha in https://github.com/facebookresearch/theseus/pull/118
    • Added 'secret' option to keep constant step size when using truncated… by @luisenp in https://github.com/facebookresearch/theseus/pull/130
    • Add jacobians for LieGroup.local() by @fantaosha in https://github.com/facebookresearch/theseus/pull/129
    • Added method to update SE2 from x_y_theta. by @luisenp in https://github.com/facebookresearch/theseus/pull/131
    • A complete version of Bundle Adjustment by @luisenp in https://github.com/facebookresearch/theseus/pull/117
    • Fixed jacobians for Between and VariableDifference by @fantaosha in https://github.com/facebookresearch/theseus/pull/133
    • Added batching support to tactile pushing example. by @luisenp in https://github.com/facebookresearch/theseus/pull/132
    • Changed tactile pose example optim var initialization to use start pose for all vars by @luisenp in https://github.com/facebookresearch/theseus/pull/137
    • Override Vector operators in Point2 and Point3 by @jeffin07 in https://github.com/facebookresearch/theseus/pull/124
    • Merge Between and VariableDiff with RelativePoseError and PosePriorError by @fantaosha in https://github.com/facebookresearch/theseus/pull/136
    • Fix a bug in Objective.copy() by @fantaosha in https://github.com/facebookresearch/theseus/pull/139
    • Added option to force max iterations for TactilePoseEstimator. by @luisenp in https://github.com/facebookresearch/theseus/pull/141
    • black version bump by @jeffin07 in https://github.com/facebookresearch/theseus/pull/144
    • Added code to split tactile pushing trajectories data into train/val by @luisenp in https://github.com/facebookresearch/theseus/pull/143
    • Removed RobotModel.dim() by @luisenp in https://github.com/facebookresearch/theseus/pull/156
    • [bug-fix] Fixed wrong data shape initialization for GPCostWeight.dt by @luisenp in https://github.com/facebookresearch/theseus/pull/157
    • Made AutoDiffCostFunction._tmp_optim_vars copies of original by @luisenp in https://github.com/facebookresearch/theseus/pull/155
    • Add forward kinematics using a URDF to theseus.embodied.kinematics. by @exhaustin in https://github.com/facebookresearch/theseus/pull/84
    • Fixed dtype error in se3.py that came up in unit tests by @joeaortiz in https://github.com/facebookresearch/theseus/pull/158
    • Add-ons for backward experiments on Tactile Pose Estimation by @luisenp in https://github.com/facebookresearch/theseus/pull/164
    • Change unit tests to avoid making mypy a main requirement. by @luisenp in https://github.com/facebookresearch/theseus/pull/168
    • Update readme and contrib by @luisenp in https://github.com/facebookresearch/theseus/pull/169
    • Add ManifoldGaussian class for messages in belief propagation by @joeaortiz in https://github.com/facebookresearch/theseus/pull/121
    • More efficient implementation of forward kinematics by @exhaustin in https://github.com/facebookresearch/theseus/pull/175
    • Changing setup virtualenv command. by @luisenp in https://github.com/facebookresearch/theseus/pull/178
    • Updated SDF object in collision cost functions whenever an aux var is updated by @luisenp in https://github.com/facebookresearch/theseus/pull/177
    • Fixed device bug that occurred when merging info in TRUNCATED backward modes by @luisenp in https://github.com/facebookresearch/theseus/pull/181
    • Allow constant inputs to cost functions to be passed as floats by @jeffin07 in https://github.com/facebookresearch/theseus/pull/150
    • Minor changes to core test code. by @luisenp in https://github.com/facebookresearch/theseus/pull/197
    • adding logo by @mhmukadam in https://github.com/facebookresearch/theseus/pull/200
    • Added aliases for Difference and Between by @luisenp in https://github.com/facebookresearch/theseus/pull/199
    • Fixed infinite recursion in GPMotionModel.copy() by @luisenp in https://github.com/facebookresearch/theseus/pull/201
    • Fix bug in diagonal cost weight by @luisenp in https://github.com/facebookresearch/theseus/pull/203
    • Added a check in TheseusFunction that enforces copy() also copies the variables by @luisenp in https://github.com/facebookresearch/theseus/pull/202
    • Fixed bug in Objective.error() that was updating data unnecessarily. by @luisenp in https://github.com/facebookresearch/theseus/pull/204
    • DLM gradients by @rtqichen in https://github.com/facebookresearch/theseus/pull/161
    • Setup now uses torch for checking CUDA availability, and CI runs py3.9 tests by @luisenp in https://github.com/facebookresearch/theseus/pull/206
    • Update README to specify python versions and CUDA during install step + by @cpaxton in https://github.com/facebookresearch/theseus/pull/207
    • Vectorization refactor by @luisenp in https://github.com/facebookresearch/theseus/pull/205
    • Implement Robust Cost Function by @luisenp in https://github.com/facebookresearch/theseus/pull/148
    • Added option for auto resetting LUCudaSparseSolver if the batch size needs to change by @luisenp in https://github.com/facebookresearch/theseus/pull/212
    • Address comments on RobustCostFunction implementation by @luisenp in https://github.com/facebookresearch/theseus/pull/213
    • Moved the method that retracts all variables with a given delta to Objective by @luisenp in https://github.com/facebookresearch/theseus/pull/214
    • Fixed flaky unit test for Collision2D jacobians. by @luisenp in https://github.com/facebookresearch/theseus/pull/216
    • Bundle Adjustment using RobustCostFunction by @luisenp in https://github.com/facebookresearch/theseus/pull/149
    • Fixed the cross product bug for SE3.exp_map and SE3.log_map by @fantaosha in https://github.com/facebookresearch/theseus/pull/217
    • Vectorize optimization variables retraction step by @luisenp in https://github.com/facebookresearch/theseus/pull/215
    • Moved Vectorize(objective) to the Optimizer class. by @luisenp in https://github.com/facebookresearch/theseus/pull/218
    • Added on-demand vectorization and also vectorized Objective.error(). by @luisenp in https://github.com/facebookresearch/theseus/pull/221
    • Vectorize PGO by @luisenp in https://github.com/facebookresearch/theseus/pull/211
    • Destroy cusolver context in CusolverLUSolver destructor by @luisenp in https://github.com/facebookresearch/theseus/pull/222
    • Jacobian computation using loop by @fantaosha in https://github.com/facebookresearch/theseus/pull/225
    • Vectorization handles singleton costs by @luisenp in https://github.com/facebookresearch/theseus/pull/226
    • Added closed-form jacobian for DLM perturbation cost. by @luisenp in https://github.com/facebookresearch/theseus/pull/224
    • SE2/SE3/SO3 - consolidate EPS, add dtype-conditioned EPS and add float32 unit tests by @luisenp in https://github.com/facebookresearch/theseus/pull/220
    • Add normalization to Lie group by @fantaosha in https://github.com/facebookresearch/theseus/pull/227
    • Add normalization to Lie group constructor by @fantaosha in https://github.com/facebookresearch/theseus/pull/228
    • Benchmarking PGO on main branch by @fantaosha in https://github.com/facebookresearch/theseus/pull/233
    • Renamed Variable.data as Variable.tensor by @luisenp in https://github.com/facebookresearch/theseus/pull/229
    • More data -> tensor renaming by @luisenp in https://github.com/facebookresearch/theseus/pull/230
    • Added state history tracking by @luisenp in https://github.com/facebookresearch/theseus/pull/234
    • Unified all cost functions so that cost weight is the last non-default argument by @luisenp in https://github.com/facebookresearch/theseus/pull/235
    • Ensure that CHOLMOD python interface casts to 64-bit precision by @luisenp in https://github.com/facebookresearch/theseus/pull/238
    • Add sphinx and readthedocs configuration by @luisenp in https://github.com/facebookresearch/theseus/pull/237
    • Fixed bug in state history for matrix data tensors. by @luisenp in https://github.com/facebookresearch/theseus/pull/240
    • Added isort for examples folder by @luisenp in https://github.com/facebookresearch/theseus/pull/243
    • Avoid batching SDF data, since it's shared by all trajectories. by @luisenp in https://github.com/facebookresearch/theseus/pull/246
    • Fixed device bug in DLM perturbation's jacobians by @luisenp in https://github.com/facebookresearch/theseus/pull/247
    • Benchmark PGO on the main branch by @fantaosha in https://github.com/facebookresearch/theseus/pull/244
    • Added checks to enforce 32- or 64-bit dtype by @luisenp in https://github.com/facebookresearch/theseus/pull/245
    • Update readme by @mhmukadam in https://github.com/facebookresearch/theseus/pull/251
    • Added MANIFEST.in and changed project name to theseus-ai. by @luisenp in https://github.com/facebookresearch/theseus/pull/252
    • Adding simple example by @mhmukadam in https://github.com/facebookresearch/theseus/pull/253
    • Added option to clear cuda cache when vectorization cache is cleared. by @luisenp in https://github.com/facebookresearch/theseus/pull/249
    • Added evaluation directory by @luisenp in https://github.com/facebookresearch/theseus/pull/241
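Several entries above concern robust cost functions (#148, #213, #149). As a hedged illustration of the underlying idea only, and not of Theseus code, a Huber-style robust loss is quadratic for small residuals but grows linearly for large ones, limiting the influence of outliers:

```python
def huber(r, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond (robust to outliers)."""
    a = abs(r)
    if a <= delta:
        return 0.5 * r * r
    return delta * (a - 0.5 * delta)

small = huber(0.5)   # quadratic regime
large = huber(10.0)  # linear regime: grows like delta * |r|, not r**2
```

Compared to a plain squared error, a residual of 10 contributes 9.5 rather than 50 to the objective, so a single outlier cannot dominate the fit.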

    New Contributors

    • @jeffin07 made their first contribution in https://github.com/facebookresearch/theseus/pull/105
    • @joeaortiz made their first contribution in https://github.com/facebookresearch/theseus/pull/122
    • @exhaustin made their first contribution in https://github.com/facebookresearch/theseus/pull/84
    • @rtqichen made their first contribution in https://github.com/facebookresearch/theseus/pull/161
    • @cpaxton made their first contribution in https://github.com/facebookresearch/theseus/pull/207

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.0-b.2...0.1.0

  • 0.1.0-b.2(Feb 1, 2022)

    Major Additions

    • Initial implicit/truncated backward modes by @bamos in https://github.com/facebookresearch/theseus/pull/29
    • Adds support for energy based learning with NLL loss (LEO) by @psodhi in https://github.com/facebookresearch/theseus/pull/30
    • cusolver based batched LU solver by @maurimo in https://github.com/facebookresearch/theseus/pull/22
    • CUDA batch matrix multiplication and ops by @maurimo in https://github.com/facebookresearch/theseus/pull/23
    • CUDA-based solver class and autograd function by @maurimo in https://github.com/facebookresearch/theseus/pull/24

    What Else Changed

    • Added clearer explanation at the end of Tutorial 0 and fixed doc typos by @luisenp in https://github.com/facebookresearch/theseus/pull/2
    • Default SE2/SO2 is zero element rather than torch empty. by @luisenp in https://github.com/facebookresearch/theseus/pull/3
    • Add plots to tutorials by @bamos in https://github.com/facebookresearch/theseus/pull/25
    • update text in Tutorial 2 per issue #27 by @vshobha in https://github.com/facebookresearch/theseus/pull/31
    • Update contrib and add gitattributes by @mhmukadam in https://github.com/facebookresearch/theseus/pull/33
    • update continuous integration by @maurimo in https://github.com/facebookresearch/theseus/pull/21
    • Changed TheseusLayer.forward() to receive optimizer_kwargs as a single dict by @luisenp in https://github.com/facebookresearch/theseus/pull/45
    • [hotfix] fix lint issues by @maurimo in https://github.com/facebookresearch/theseus/pull/54
    • Update version by @mhmukadam in https://github.com/facebookresearch/theseus/pull/63

    New Contributors

    • @luisenp made their first contribution in https://github.com/facebookresearch/theseus/pull/2
    • @bamos made their first contribution in https://github.com/facebookresearch/theseus/pull/25
    • @vshobha made their first contribution in https://github.com/facebookresearch/theseus/pull/31
    • @maurimo made their first contribution in https://github.com/facebookresearch/theseus/pull/21
    • @mhmukadam made their first contribution in https://github.com/facebookresearch/theseus/pull/33
    • @psodhi made their first contribution in https://github.com/facebookresearch/theseus/pull/30

    Full Changelog: https://github.com/facebookresearch/theseus/compare/0.1.0-b.1...0.1.0-b.2

  • 0.1.0-b.1(Dec 3, 2021)

    Initial beta release.

    • Core data structures.
    • Vector and 2D rigid body representations.
    • Gauss-Newton and LM nonlinear optimizers.
    • LU and Cholesky dense linear solvers.
    • Cholmod sparse linear solver (CPU only).
    • Cost functions for planar motion planning and tactile estimation in planar pushing.
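To give a sense of what the Gauss-Newton optimizer listed above computes, here is a minimal pure-Python sketch of a Gauss-Newton iteration on a one-parameter nonlinear least-squares fit. It uses no Theseus APIs; all names are illustrative and the analytic Jacobian is specific to this toy model:

```python
import math

def gauss_newton_1d(xs, ys, a0, iters=20):
    """Fit y = exp(a * x) by Gauss-Newton on the single parameter a.

    Residuals r_i = exp(a * x_i) - y_i, Jacobian J_i = x_i * exp(a * x_i).
    Each step solves the 1x1 normal equation (J^T J) da = -J^T r.
    """
    a = a0
    for _ in range(iters):
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        if JtJ == 0.0:
            break
        a -= Jtr / JtJ  # Gauss-Newton update
    return a

# Recover a = 0.5 from noiseless samples of y = exp(0.5 * x).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]
a_est = gauss_newton_1d(xs, ys, a0=0.0)
```

Theseus generalizes this pattern to batched, multi-variable problems with sparse linear solvers and differentiable updates, but the per-step structure (residuals, Jacobians, normal equations) is the same.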