Square Root Bundle Adjustment for Large-Scale Reconstruction


RootBA: Square Root Bundle Adjustment

Project Page | Paper | Poster | Video | Code

teaser image

Table of Contents

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{demmel2021rootba,
 author = {Nikolaus Demmel and Christiane Sommer and Daniel Cremers and Vladyslav Usenko},
 title = {Square Root Bundle Adjustment for Large-Scale Reconstruction},
 booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
 year = {2021}
}

Note: The initial public release in this repository corresponds to the code version evaluated in the CVPR'21 paper, after refactoring and cleanup. Except for minor numerical differences, the results should be reproducible on comparable hardware. As the code evolves, runtime differences might become larger.

Dependencies

The following describes the needed dependencies in general, followed by concrete instructions to install them on Linux or macOS.

Toolchain

  • C++17 compiler
  • CMake 3.13 or newer

Included as submodule or copy

See the external folder and the scripts/build-external.sh script.

The following libraries are submodules:

Some external libraries have their source copied directly as part of this repository, see the external/download_copied_sources.sh script:

Externally supplied

The following dependencies are expected to be supplied externally, e.g. from a system-wide install:

  • TBB

    Note: You can control the location where TBB is found by setting the environment variable TBB_ROOT, e.g. export TBB_ROOT=/opt/intel/tbb.

  • glog

  • BLAS and LAPACK routines are needed by SuiteSparse, and optionally used by Eigen and Ceres directly for some operations.

    On UNIX OSes other than macOS we recommend ATLAS, which includes BLAS and LAPACK routines. It is also possible to use OpenBLAS, but be careful to turn off its internal threading, as it conflicts with the use of threads in RootBA and Ceres. For example, export OPENBLAS_NUM_THREADS=1.

    macOS ships with an optimized LAPACK and BLAS implementation as part of the Accelerate framework. The Ceres build system will automatically detect and use it.

Python

Python dependencies are needed for scripts and tools to generate config files, run experiments, plot results, etc. For generating result tables and plots you additionally need latexmk and a LaTeX distribution.

Developer Tools

These additional dependencies are useful if you plan to work on the code:

  • ccache helps to speed up re-compilation by caching the compilation results for unchanged translation units.
  • ninja is an alternative cmake generator that has better parallelization of your builds compared to standard make.
  • clang-format version >= 10 is used for formatting C++ code.
  • clang-tidy version >= 12 is used to style-check C++ code.
  • yapf is used for formatting Python code.

There are scripts to help apply formatting and style checks to all source code files:

  • scripts/clang-format-all.sh
  • scripts/clang-tidy-all.sh
  • scripts/yapf-all.sh

Installing dependencies on Linux

Ubuntu 20.04 and newer are supported.

Note: Ubuntu 18.04 should also work, but you need to additionally install GCC 9 from the Toolchain test builds PPA.

Toolchain and libraries

# for RootBA and Ceres
sudo apt install \
    libgoogle-glog-dev \
    libgflags-dev \
    libtbb-dev \
    libatlas-base-dev \
    libsuitesparse-dev
# for Pangolin GUI
sudo apt install \
    libglew-dev \
    ffmpeg \
    libavcodec-dev \
    libavutil-dev \
    libavformat-dev \
    libswscale-dev \
    libavdevice-dev \
    libjpeg-dev \
    libpng-dev \
    libtiff5-dev \
    libopenexr-dev

To get a recent version of cmake, you can install it from pip.

sudo apt install python3-pip
python3 -m pip install --user -U cmake

# put this in your .bashrc to ensure cmake from pip is found
export PATH="$HOME/.local/bin:$PATH"

Note: If you run this in a plain Ubuntu docker container you might need to install some additional basic packages (which should already be installed on a desktop system):

sudo apt install git-core wget curl time software-properties-common

Python (optional)

Other python dependencies (for tools and scripts) can also be installed via pip.

python3 -m pip install --user -U py_ubjson matplotlib numpy munch scipy pylatex toml

For generating result tables and plots you additionally need latexmk and a LaTeX distribution.

sudo apt install texlive-latex-extra latexmk

Developer tools (optional)

For developer tools, you can install ninja and ccache from apt:

sudo apt install ccache ninja-build

You can install yapf from pip:

python3 -m pip install --user -U yapf

For clang-tidy you need at least version 12, so even on Ubuntu 20.04 you need to get it from the llvm website:

wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
sudo add-apt-repository "deb http://apt.llvm.org/$(lsb_release -sc)/ llvm-toolchain-$(lsb_release -sc)-12 main"
sudo apt update
sudo apt install clang-tidy-12

On Ubuntu 20.04 and above, clang-format from apt is recent enough (we require at least version 10):

sudo apt install clang-format

Note: on 18.04 you need to install clang-format version 10 or newer from the llvm website. For example, after following the instructions above for installing clang-tidy, you can install clang-format version 12:

sudo apt install clang-format-12

Installing dependencies on macOS

We support macOS 10.15 "Catalina" and newer.

Note: We have not yet tested this codebase on M1 macs.

Toolchain and libraries

Install Homebrew, then use it to install dependencies:

brew install cmake glog gflags tbb suitesparse
brew install glew ffmpeg libjpeg libpng libtiff

Python (optional)

Python dependencies (for tools and scripts) can be installed via pip after installing python 3 from homebrew.

brew install python
python3 -m pip install --user -U py_ubjson matplotlib numpy munch scipy pylatex toml

For generating result tables and plots you additionally need latexmk and a LaTeX distribution.

brew install --cask mactex

Developer tools (optional)

Developer tools can be installed with homebrew.

brew install ccache ninja clang-format clang-tidy yapf

Building

Build dependencies

./scripts/build-external.sh [BUILD_TYPE]

You can optionally pass the cmake BUILD_TYPE used to compile the third party libraries as the first argument. If you don't pass anything the default is Release. This build script will use ccache and ninja automatically if they are found on PATH.

Note: The build-external.sh build script will init, synchronize and update all submodules, so usually you don't have to worry about submodules. For example, you don't have to run git submodule update --recursive manually when the submodules were updated upstream, as long as you run the build-external.sh script. But there is a small caveat, should you ever want to update a submodule yourself (e.g. update Eigen to a new version). In that case you need to commit that change before running this script, else the script will revert the submodule back to the committed version.
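
For illustration, a minimal sketch of that workflow, assuming you want to bump the Eigen submodule (the submodule path is an assumption; check the external folder for the actual location):

# hypothetical example: update the Eigen submodule to a newer upstream version
cd external/eigen
git fetch origin
git checkout <desired-version>   # tag or commit to pin
cd ../..
git add external/eigen
git commit -m "Update Eigen submodule"
# only after committing, rebuild the third-party libraries
./scripts/build-external.sh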

Build RootBA option a)

Use the build script.

./scripts/build-rootba.sh [BUILD_TYPE]

You can optionally pass the cmake BUILD_TYPE used to compile RootBA as the first argument. If you don't pass anything the default is Release. The cmake build folder is build, inside the project root. This build script will use ccache and ninja automatically if they are found on PATH.

Build RootBA option b)

Manually build with the standard cmake workflow.

mkdir build && cd build
cmake ..
make -j8

The cmake project will automatically use ccache if it is found on PATH (unless you override by manually specifying CMAKE_C_COMPILER_LAUNCHER/CMAKE_CXX_COMPILER_LAUNCHER). To use ninja instead of make, you can use:

cmake .. -G Ninja
ninja
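
If you want to control the compiler launcher explicitly (for example, to force ccache), you can set the standard CMake variables yourself:

cmake .. -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache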

CMake Options

You can set the following options when calling cmake. For setting option OPTION to a value of VALUE, add the command line argument -DOPTION=VALUE to the cmake call above.

  • ROOTBA_DEVELOPER_MODE: Presets for convenience during development. If enabled, the binaries are not placed in the cmake's default location in the cmake build folder, but instead inside the source folder, in <PROJECT_ROOT>/bin. Turn off if you prefer to work directly in multiple build folders at the same time. Default: ON
  • ROOTBA_ENABLE_TESTING: Build unit tests. Default: ON
  • ROOTBA_INSTANTIATIONS_DOUBLE: Instantiate templates with Scalar = double. If disabled, running with config option use_double = true will cause a runtime error. But, disabling it may reduce compile times and memory consumption during compilation significantly. While developing, we recommend leaving only one of ROOTBA_INSTANTIATIONS_DOUBLE or ROOTBA_INSTANTIATIONS_FLOAT enabled, not both. Default: ON
  • ROOTBA_INSTANTIATIONS_FLOAT: Instantiate templates with Scalar = float. If disabled, running with config option use_double = false will cause a runtime error. But, disabling it may reduce compile times and memory consumption during compilation significantly. While developing, we recommend leaving only one of ROOTBA_INSTANTIATIONS_DOUBLE or ROOTBA_INSTANTIATIONS_FLOAT enabled, not both. Default: ON
  • ROOTBA_INSTANTIATIONS_STATIC_LMB: Instantiate statically sized specializations for small landmark block sizes. If disabled, all sizes use the dynamically sized implementation, which, depending on the problem, might have slightly higher runtime (maybe around 10%). But, disabling it might reduce compile times and memory consumption during compilation significantly. We recommend turning this off during development. Default: ON
  • BUILD_SHARED_LIBS: Build all rootba modules as shared libraries (see the cmake documentation). Default: ON
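
For example, a possible configure call for faster development builds (a sketch; pick the options that match your own workflow) could look like:

cmake .. -G Ninja \
    -DROOTBA_INSTANTIATIONS_FLOAT=OFF \
    -DROOTBA_INSTANTIATIONS_STATIC_LMB=OFF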

Running Unit Tests

Unit tests are implemented with the GoogleTest framework and can be run with CMake's ctest command after compilation.

cd build
ctest
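
ctest's standard options can be handy here, for example to show the output of failing tests and run only tests whose name matches a pattern:

ctest --output-on-failure -R <name-pattern>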

BAL Problems

In the "Bundle Adjustment in the Large" (BAL) problem formulation cameras are represented as world-to-cam poses and points as 3D points in world frame, and each camera has its own set of independent intrinsics, using the "Snavely projection" function with one focal length f and two radial distortion parameters k1 and k2. This is implemented in the BalProblem class. Besides the BAL format, we also implement a reader for "bundle" files, but the internal representation is the same.

Note: In our code we follow the convention that the positive z-axis points forward in the camera viewing direction. Both BAL and bundle files specify the projection function assuming the negative z-axis points in the viewing direction. We convert to our convention when reading the datasets.
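
For reference, a sketch of the "Snavely projection" as specified in the BAL problem formulation (shown here in the original negative-z-forward convention), with world-to-cam pose (R, t), world point X, focal length f and radial distortion coefficients k1, k2:

P = R\,X + t, \qquad
p = -\frac{1}{P_z} \begin{pmatrix} P_x \\ P_y \end{pmatrix}, \qquad
r(p) = 1 + k_1 \lVert p \rVert^2 + k_2 \lVert p \rVert^4, \qquad
\pi(X) = f \, r(p) \, p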

For testing and development, two example datasets from BAL are included in the data/rootba/bal folder:

data/rootba/bal/ladybug/problem-49-7776-pre.txt
data/rootba/bal/final/problem-93-61203-pre.txt

We moreover include a download-bal-problems.sh script to conveniently download the BAL datasets. See the batch evaluation tutorial below for more details.

We additionally provide a mirror of BAL and some other publicly available datasets: https://gitlab.vision.in.tum.de/rootba/rootba_data

Please refer to the README files in the corresponding folders of that repository for further details on the data source, licensing and any preprocessing we applied. Large files in that repository are stored with Git LFS. Beware that the full download including LFS objects is around 15GB.

Note: The tutorial examples below assume that the data is found in a rootba_data folder parallel to the source folder, so if you decide to clone the data git repository, you can use (after installing and enabling Git LFS):

cd ..
git clone https://gitlab.vision.in.tum.de/rootba/rootba_data.git

Testing Bundle Adjustment

Visualization of BAL Problems

With a simple GUI application you can visualize the BAL problems, including 3D camera poses and landmark positions, as well as feature detections and landmark reprojections.

./bin/bal_gui --input data/rootba/bal/final/problem-93-61203-pre.txt

Plots

Running Bundle Adjustment

The main executable to run bundle adjustment is bal. This implements bundle adjustment in all evaluated variants and can be configured from the command line and/or a rootba_config.toml file.

There are also three additional variants, bal_qr, bal_sc and bal_ceres, which override the solver_type option accordingly. They can be useful during development, since they only link the corresponding modules and thus might have faster compile times.

For example, you can run the square root solver with default parameters on one of the included test datasets with:

./bin/bal --input data/rootba/bal/ladybug/problem-49-7776-pre.txt

This generates a ba_log.json file with per-iteration log information that can be evaluated and visualized.

Config Options

Options can be configured in a rootba_config.toml configuration file or from the command line, where the command line takes precedence.

The --help command line argument provides comprehensive documentation of available options and you can generate a config file with default values with:

./bin/bal --dump-config --config /dev/null > rootba_config.toml
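
For example, you can then load that config and still override individual options on the command line (using flags that appear elsewhere in this README):

./bin/bal --config rootba_config.toml --solver-type SCHUR_COMPLEMENT --input data/rootba/bal/ladybug/problem-49-7776-pre.txt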

For further details and a discussion of the options corresponding to the evaluated solver variants from the CVPR'21 paper see Configuration.md.

Visualization of Results

The different variants of bundle adjustment all log their progress to a ba_log.json or ba_log.ubjson file. Some basic information can be displayed with the plot-logs.py script:

./scripts/plot-logs.py ba_log.json

You can also pass multiple files, or folders, which are searched for ba_log.json and ba_log.ubjson files. In the plots, the name of the containing folder is used as a label for each ba_log.json file.

Let's run a small example and compare solver performance:

mkdir -p ../rootba_testing/qr32/
mkdir -p ../rootba_testing/sc64/
./bin/bal -C ../rootba_testing/qr32/ --no-use-double --input ../../rootba/data/rootba/bal/ladybug/problem-49-7776-pre.txt
./bin/bal -C ../rootba_testing/sc64/ --solver-type SCHUR_COMPLEMENT --input ../../rootba/data/rootba/bal/ladybug/problem-49-7776-pre.txt
./scripts/plot-logs.py ../rootba_testing/

On this small example problem both solvers converge to the same cost and are similarly fast:

Plots

Batch Evaluation

For scripts to run systematic experiments and do more sophisticated analysis of the generated log files, please follow the Batch Evaluation Tutorial.

This also includes instructions to reproduce the results presented in the CVPR'21 paper.

PDF Preview

Repository Layout

The following gives a brief overview over the layout of top-level folders in this repository.

  • bin: default destination for compiled binaries
  • build: default cmake build folder
  • ci: utilities for CI such as scripts and docker files
  • cmake: cmake utilities and find modules; note in particular SetupDependencies.cmake, which sets up cmake targets for third-party libraries
  • data: sample datasets for testing
  • docs: documentation beyond the main README, including resources such as images
  • examples: example config files
  • external: third-party libraries included as submodules or copies; also build and install folders generated by the build-external.sh scripts
  • python: Python module for plotting and generating result tables from batch experiments
  • scripts: various utility scripts for building, developing, running experiments and plotting results
  • src: the main implementation, including headers, source files, and unit tests
  • test: additional tests

Code Layout

The main modules in the src folder are as follows.

Corresponding header and source files are found in the same folder with extension .hpp and .cpp. If there are corresponding unit tests they are found in the same folder with a .test.cpp file extension.
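
For illustration, a hypothetical module foo/bar would then be laid out as follows (names purely illustrative):

src/rootba/foo/bar.hpp        # header
src/rootba/foo/bar.cpp        # corresponding source file
src/rootba/foo/bar.test.cpp   # corresponding unit tests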

  • app: executables
  • rootba: libraries
    • bal: data structures for the optimization state; options; common utilities and logging
    • ceres: everything related to our implementation with Ceres
    • cg: custom CG implementation including data structures for preconditioners
    • cli: common utils for command line parsing and automatically registering options with the command line
    • options: generic options framework
    • pangolin: everything related to the GUI implementation
    • qr: custom QR solver main implementation details
    • sc: custom SC solver main implementation details
    • solver: custom Levenberg-Marquardt solver loop and interface to the QR and SC implementations
    • util: generic utilities

License

The code of the RootBA project is licensed under a BSD 3-Clause License.

Parts of the code are derived from Ceres Solver. Please also consider licenses of used third-party libraries. See ACKNOWLEDGEMENTS.

Comments
  • CANNOT add more cli parameters

    Hi, I've been trying to add another CLI parameter to set a ground-truth file for system benchmarking, but it does not work.

      VISITABLE_META(std::string, input, help("input dataset file to load"));
      VISITABLE_META(std::string, ground_truth, help("input ground_truth file to load"));
      VISITABLE_META(DatasetType, input_type,
                     init(DatasetType::AUTO).help("type of dataset to load"));
    

    It's mentioned that I have to add it to the docstring, so I also added it to Configuration.md. Still, it does not work. After some debugging, the following code returned false, but I don't understand why it failed.

      // parse arguments
      if (!parse(argc, argv, cli)) {
        auto executable_name = std::filesystem::path(argv[0]).filename();
        auto fmt = doc_formatting{}.doc_column(22);
        auto filter = param_filter{}.has_doc(tri::either);
        if (!application_summary.empty()) {
          std::cout << application_summary << "\n\n";
        }
    
        std::cout<<"ffffff\n";
        std::cout << "SYNOPSIS:\n"
                  << usage_lines(cli, executable_name) << "\n\n"
                  << "OPTIONS:\n"
                  << documentation(cli, fmt, filter) << '\n';
        return false;
      }
    
    

    So could you give me some hints about how to make it work?

    opened by BayRanger 7
  • Why invert the coordinates to add perturbation

    Hi Nikolaus,

    In the code file bal_problem.cpp, there is an inverse transformation before adding the perturbation, I think in order to add the noise in camera-to-world coordinates. My question is: is it necessary to do this transformation? What is the motivation for this step?

    Bests

    Reference code

    if (rotation_sigma > 0 || translation_sigma > 0) {
        for (auto& cam : cameras_) {
          // perturb camera center in world coordinates
          if (translation_sigma > 0) {
            SE3 T_w_c = cam.T_c_w.inverse();
            T_w_c.translation() += perturbation<Scalar, 3>(translation_sigma, eng);
            cam.T_c_w = T_w_c.inverse();
          }
          // local rotation perturbation in camera frame
          if (rotation_sigma > 0) {
            cam.T_c_w.so3() =
                SO3::exp(perturbation<Scalar, 3>(rotation_sigma, eng)) *
                cam.T_c_w.so3();
          }
        }
      }
    
    opened by BayRanger 2
  • QR decomposition via Givens rotation vs Householder transform?

    Hi,

    Thanks for your excellent work.

    In your work, you use the Householder transform to perform the QR decomposition, while the MSCKF-related literature usually uses Givens rotations.

    After some searching, I realized that they have the same level of accuracy (both are better than Gram-Schmidt orthogonalization). So why do you use the Householder transform instead of Givens rotations? Is the Householder transform really faster than Givens rotations?

    Best, Deshun

    opened by hitdshu 2
  • Some questions about the implementation and comparison to other solvers

    Hi,

    Thanks for your very excellent work. I have some questions about the current implementation and the performance.

    1. rootba/ceres use the Jacobian squared sums to scale the problem data before solving. But g2o and srrg2_solver seem not to scale at all. Is this scaling necessary, and to what extent?

    2. It is understood that a trust-region (dogleg) type algorithm is more efficient than the LM algorithm, as it only needs to factor the big sparse matrix once per iteration. However, almost all solvers use LM in their default settings. For ceres, the dogleg method is actually slower than LM in my experiments. Could you please give some reasons for this?

    3. In a recent paper, https://github.com/srrg-sapienza/srrg2_solver, the authors test several solvers, and from their experiments one can conclude that ceres is very inefficient. With a single thread, rootba might have similar performance to ceres and hence might be inefficient as well... It would be great if you could provide some clues...

    Thanks again for your excellent works, like rootba/basalt, etc...

    Have a nice day, Deshun

    opened by hitdshu 2
  • Question

    Hi Nikolaus,

    Thanks for open-sourcing the code.

    Would you please explain how the rootba method differs from the DENSE_QR solver in Ceres? I presume it also avoids computing the Hessian, similar to rootba, but is not scalable to large problems.

    opened by melhashash 2
  • Modifying reprojection error

    Hello, I would like to use your bundle adjustment solution with a different projection error, based on other research in my field of study. The parameters of the camera pose do not change, only the projection mapping. Is there a way to add this kind of modification? Regards, Zachi

    opened by ShtainZ 1
  • Could you help me figure out why my experiments show rootba is slower than ceres and schur complement

    I am reading your paper "Square Root Bundle Adjustment for Large-Scale Reconstruction" (CVPR 2021). Your idea of using QR decomposition instead of the traditional Schur complement is awesome. I have run your source code rootba; the result image is shown at the end of this issue. From the picture, we can see that QR-32 (single-precision QR in rootba) is slower than ceres and the Schur complement. I was puzzled by this. Could you help me figure it out?

    #!/usr/bin/env bash
    
    MY_EXAM_DATA_FOLDER="./rootba_testing_data_thread16"
    declare -a my_exames=("qr32" "qr64" "sc64" "sc32" "ceres")
    for i in "${my_exames[@]}"
    do
        mkdir -p $MY_EXAM_DATA_FOLDER/$i
    done
    
    DATA_ROOT_PATH=/home/shaoping/readcode/rootba/data
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/qr32/ --num-threads 0 --no-debug --no-use-double --use-householder-marginalization --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/qr64/ --num-threads 0 --no-debug --use-double --use-householder-marginalization --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/sc64/ --num-threads 0 --no-debug --solver-type SCHUR_COMPLEMENT  --use-double  --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/sc32/ --num-threads 0 --no-debug --solver-type SCHUR_COMPLEMENT --no-use-double  --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    ./bin/bal -C $MY_EXAM_DATA_FOLDER/ceres/ --num-threads 0 --no-debug --solver-type CERES --use-double  --input "$DATA_ROOT_PATH/rootba/bal/ladybug/problem-49-7776-pre.txt"
    
    ./scripts/plot-logs.py $MY_EXAM_DATA_FOLDER
    

    image

    opened by varyshare 8