graph-theoretic framework for robust pairwise data association

Overview


CLIPPER: A Graph-Theoretic Framework for Robust Data Association

Data association is a fundamental problem in robotics and autonomy. CLIPPER provides a framework for robust, pairwise data association and is applicable to a wide variety of problems (e.g., point cloud registration, sensor calibration, place recognition, etc.). By leveraging the notion of geometric consistency, a graph is formed and the data association problem is reduced to the maximum clique problem. This NP-hard problem has been studied in many fields, including data association, and solution techniques are either exact (and not scalable) or approximate (and potentially imprecise). CLIPPER relaxes this problem in a way that (1) allows guarantees to be made on the solution of the problem and (2) is applicable to weighted graphs, avoiding the loss of information due to binarization, which is common in other data association work. These features allow CLIPPER to achieve high performance, even in the presence of extreme outliers.
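In rough terms, each putative association becomes a vertex of a consistency graph, and edge weights encode how geometrically consistent two associations are with each other; CLIPPER then searches for the densest set of mutually consistent associations. As a sketch only (the precise formulation, penalty terms, and guarantees are given in the paper), the relaxed problem has the general form

    \max_{u \in \mathbb{R}^n_{\ge 0}} \; \frac{u^\top M u}{u^\top u} \quad \text{s.t.} \quad u_i u_j = 0 \ \text{whenever associations } i \text{ and } j \text{ are inconsistent}

where M is the weighted affinity matrix of pairwise consistency scores; the largest entries of the solution u indicate the selected associations, and CLIPPER optimizes this relaxation with projected gradient ascent.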

This repo provides both MATLAB and C++ implementations of the CLIPPER framework. In addition, Python bindings and Python, C++, and MATLAB examples are included.

Citation

If you find this code useful in your research, please cite our paper:

  • P.C. Lusk, K. Fathian, and J.P. How, "CLIPPER: A Graph-Theoretic Framework for Robust Data Association," arXiv preprint arXiv:2011.10202, 2020. (pdf) (presentation)
@inproceedings{lusk2020clipper,
  title={CLIPPER: A Graph-Theoretic Framework for Robust Data Association},
  author={Lusk, Parker C and Fathian, Kaveh and How, Jonathan P},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021}
}

Getting Started

After cloning this repo, please build using cmake:

$ mkdir build
$ cd build
$ cmake ..
$ make

Once successful, the C++ tests can be run with ./test/tests (if -DBUILD_TESTS=ON is added to the cmake .. command).

Python Bindings

If Python bindings are built (see configuration options below), then the clipper Python module will need to be installed before using. This can be done with

$ cd build
$ make pip-install

# or directly using pip (e.g., to control which python version)
$ python3 -m pip install build/bindings/python # 'python3 -m' ensures appropriate pip version is used

Note: if using Python 2 (e.g., < ROS Noetic), you must tell pybind11 to use Python 2.7. Do this by adding the flag -DPYBIND11_PYTHON_VERSION=2.7 to the cmake .. command. You may have to remove your build directory and start over to ensure nothing is cached. You should see that pybind11 finds a Python 2.7 interpreter and libraries.

A Python example notebook can be found in examples.
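For orientation, the sketch below shows the rough shape of the Python workflow. It is a minimal, unverified sketch: apart from clipper.invariants.EuclideanDistanceParams (which also appears in the discussions below), the class and method names are assumptions; defer to the example notebook for the actual API.

# Minimal sketch of the CLIPPER Python workflow (assumed names marked below);
# see the example notebook for the authoritative API.
import numpy as np
import clipper

# Invariant that scores pairwise consistency using Euclidean distances
iparams = clipper.invariants.EuclideanDistanceParams()
invariant = clipper.invariants.EuclideanDistance(iparams)  # assumed name

params = clipper.Params()               # assumed name
c = clipper.CLIPPER(invariant, params)  # assumed name

# Two 3xN point sets related by small noise, plus putative associations A
# (row k associates column k of `model` with column k of `data`)
model = np.random.rand(3, 100)
data = model + 0.001 * np.random.randn(3, 100)
A = np.column_stack((np.arange(100), np.arange(100)))

c.score_pairwise_consistency(model, data, A)  # build the affinity matrix M (assumed name)
c.solve()                                     # projected gradient ascent dense-cluster search
inliers = c.get_selected_associations()       # associations kept as inliers (assumed name)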

MATLAB Bindings

If MATLAB is installed on your computer and MATLAB bindings are requested (see configuration options below), then cmake will attempt to find your MATLAB installation and subsequently generate a set of MEX files so that CLIPPER can be used in MATLAB.

Note that in addition to the C++/MEX version of CLIPPER's dense cluster finder, we provide a reference MATLAB version of our projected gradient ascent approach to finding dense clusters.

Please find MATLAB examples here.

Configuring the Build

The following cmake options are available when building CLIPPER:

Option                | Description                                                                                                                      | Default
----------------------|----------------------------------------------------------------------------------------------------------------------------------|--------
BUILD_BINDINGS_PYTHON | Uses pybind11 to create Python bindings for CLIPPER.                                                                             | ON
BUILD_BINDINGS_MATLAB | Attempts to build the MEX files required for the MATLAB examples. Requires a MATLAB installation; gracefully fails if not found. | ON
BUILD_TESTS           | Builds the C++ tests.                                                                                                             | OFF
ENABLE_MKL            | Attempts to use Intel MKL (if installed) with Eigen for accelerated linear algebra.                                              | OFF
ENABLE_BLAS           | Attempts to use a BLAS with Eigen for accelerated linear algebra.                                                                 | OFF

Note: The options ENABLE_MKL and ENABLE_BLAS are mutually exclusive.

These cmake options can be set using the syntax cmake -DENABLE_MKL=ON .. or using the ccmake . command (both from the build dir).

Performance with MKL vs BLAS

On Intel CPUs, MKL should be preferred as it offers superior performance over other general BLAS packages. Also note that on Ubuntu, OpenBLAS (sudo apt install libopenblas-dev) provides better performance than the default BLAS.

With MKL, we have found an almost 2x improvement in runtime over the MATLAB implementation. On an i9, the C++/MKL implementation can solve problems with 1000 associations in 70 ms.

Note: Currently, MATLAB bindings do not work if either BLAS or MKL are enabled. Python bindings do not work if MKL is enabled.

Including in Another C++ Project

A simple way to include clipper as a shared library in another C++ project is via cmake. This method will automatically clone and build clipper, making the resulting library accessible in your main project. In the project CMakeLists.txt you can add

set(CLIPPER_DIR "${CMAKE_CURRENT_BINARY_DIR}/clipper-download" CACHE INTERNAL "CLIPPER build dir" FORCE)
set(BUILD_BINDINGS_MATLAB OFF CACHE BOOL "")
set(BUILD_TESTS OFF CACHE BOOL "")
set(ENABLE_MKL OFF CACHE BOOL "")
set(ENABLE_BLAS OFF CACHE BOOL "")
configure_file(cmake/clipper.cmake.in ${CLIPPER_DIR}/CMakeLists.txt IMMEDIATE @ONLY)
execute_process(COMMAND "${CMAKE_COMMAND}" -G "${CMAKE_GENERATOR}" . WORKING_DIRECTORY ${CLIPPER_DIR})
execute_process(COMMAND "${CMAKE_COMMAND}" --build . WORKING_DIRECTORY ${CLIPPER_DIR})
add_subdirectory(${CLIPPER_DIR}/src ${CLIPPER_DIR}/build)

where cmake/clipper.cmake.in looks like

cmake_minimum_required(VERSION 3.10)
project(clipper-download NONE)

include(ExternalProject)
ExternalProject_Add(clipper
    GIT_REPOSITORY      "https://github.com/mit-acl/clipper"
    GIT_TAG             master
    SOURCE_DIR          "${CMAKE_CURRENT_BINARY_DIR}/src"
    BINARY_DIR          "${CMAKE_CURRENT_BINARY_DIR}/build"
    CONFIGURE_COMMAND   ""
    BUILD_COMMAND       ""
    INSTALL_COMMAND     ""
    TEST_COMMAND        ""
)

Then, you can link your project with clipper using the syntax target_link_libraries(yourproject clipper).


This research is supported by Ford Motor Company.

Comments
  • Sparsity-aware clipper achieves 6x speed-up on bunny example


    Hello @plusk01,

    Enjoyed the paper and the examples in this repo. Nice work!

    I took the time to write and contribute a sparsity-aware version of clipper (sparsity here is with regard to the consistency score matrices M and C). This version achieves a ~6x speed-up on the bunny (Python notebook) example and almost 2.5x on the LargePointCloud example in the tests (see below for exact numbers).

    For the Bunny Python example:

    Affinity matrix creation took 0.062 seconds
    CLIPPER selected 41 inliers from 1000 putative associations (precision 1.00, recall 0.82) in 0.092 s

    with sparsity-aware clipper:

    Affinity matrix creation took 0.063 seconds
    sparse-aware CLIPPER selected 40 inliers from 1000 putative associations (precision 1.00, recall 0.80) in 0.014 s
    Speed-up: 6.780588566204909

    For C++ Tests:

    Let me know your thoughts and if you have any questions.

    Best, Abdullah

    opened by ash-aldujaili 3
  • Orthogonal projected gradient ascent


    Hello! Thanks for your open-source work.

    You mentioned in your paper that orthogonal projected gradient ascent is applied. However, in this code https://github.com/mit-acl/clipper/blob/e1015411ebac4940039e07f8efb0e5c391cc1ade/src/clipper.cpp#L179, it seems to be standard gradient ascent. Am I right?

    opened by qiaozhijian 2
  • Setting diagonal values in the affinity matrix


    Hi,

    Thanks a lot for your great work! I have a question regarding the diagonal values in the affinity matrix. The function CLIPPER::setMatrixData sets all of the diagonal entries of the M and C matrices to zero. However, the paper explains that those values measure the similarity of the data points in each association. Since I have that information, I'd like to be able to use it.

    I have tried commenting out those lines and making clipper work with my forced values in M_ and C_. However, I'm getting wrong results. Could you please let me know how I should set those values? Thanks in advance!

    opened by JoseAndresMR 2
  • make error with matlab bindings


    Thanks for the good work. Very interesting and impressive!

    I want to compile with the MATLAB bindings and ran into the following issue:

    cmake .. -DBUILD_BINDINGS_MATLAB=ON succeeds, but the subsequent make fails with an error (see the attached screenshots).

    opened by HuanYin94 2
  • Invariant choices


    Hi,

    Thanks for your great work! I have a question regarding the invariant choices. I am using the Python API and cannot find documentation that lists the different invariant choices.

    Right now I am using clipper.invariants.EuclideanDistanceParams() as in the tutorial. Are there any other invariants already implemented? I know we can implement our own.

    opened by songanz 2
  • Could not open the PLY file


    Hi, thanks for open-sourcing this great work. I tried to run the Python example but I got the error "[Open3D WARNING] Read PLY failed: unable to open file: ../data/bun1k.ply" and the number of points in the point cloud was zero. I tested with another, unrelated PLY file and open3d worked fine. Are there any suggestions to fix the problem?

    opened by intrepidChw 2
  • algo updates


    • gradF update: After the most recent optimizations, the gradF update was not using the gradFnew calculated in the line search. From what I can tell, this fix creates no appreciable difference, but values are now aligned with the MATLAB implementation.

    • d steps: changed the update strategy for d steps from using the min (which meant that no elements of the u vector would need to be projected back onto the positive orthant) to using the mean step (which means half of the elements will have to be projected onto the positive orthant). This seems to have produced a significant (2-4x) speedup on the bunny benchmark, with no change in accuracy -- see report below.

    • rescale_u0: new parameter added to turn the initial power-method step that rescales u0 on or off. This does not seem to make an appreciable difference on the bunny benchmark.


    this version

    Benchmarking over 20 trials
    
    Benchmarking ρ = 90% [██████████████████████████████████████████████████] 100% [00m:44s]
    
    +-------+---------+---------------+-------------------+---------------+------------+
    | ρ [%] | # assoc | affinity [ms] | dense clique [ms] | precision [%] | recall [%] |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 0     | 64      |  0.19  ±  0.0 |      0.06  ±  0.0 | 100           | 90         |
    | 0     | 256     |  1.05  ±  0.0 |      0.63  ±  0.0 | 100           | 89         |
    | 0     | 512     |  3.95  ±  0.1 |      2.41  ±  0.0 | 100           | 89         |
    | 0     | 1024    | 15.64  ±  1.0 |      9.86  ±  0.1 | 100           | 89         |
    | 0     | 2048    | 94.63  ±  1.4 |     32.53  ±  0.4 | 100           | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 20    | 64      |  0.20  ±  0.0 |      0.06  ±  0.0 | 100           | 90         |
    | 20    | 256     |  1.88  ±  3.9 |      0.63  ±  0.1 | 99            | 89         |
    | 20    | 512     |  3.69  ±  0.1 |      2.35  ±  0.4 | 100           | 89         |
    | 20    | 1024    | 14.84  ±  0.6 |     13.51  ±  6.3 | 99            | 89         |
    | 20    | 2048    | 85.49  ±  1.5 |     59.20  ± 46.9 | 100           | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 40    | 64      |  0.19  ±  0.0 |      0.04  ±  0.0 | 100           | 90         |
    | 40    | 256     |  1.00  ±  0.1 |      0.42  ±  0.1 | 100           | 89         |
    | 40    | 512     |  3.46  ±  0.1 |      1.58  ±  0.4 | 100           | 89         |
    | 40    | 1024    | 13.77  ±  0.4 |      9.27  ±  3.6 | 99            | 89         |
    | 40    | 2048    | 71.14  ±  1.1 |     55.42  ± 27.5 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 80    | 64      |  0.29  ±  0.5 |      0.05  ±  0.0 | 100           | 91         |
    | 80    | 256     |  0.89  ±  0.0 |      0.44  ±  0.1 | 100           | 89         |
    | 80    | 512     |  3.22  ±  0.1 |      1.62  ±  0.3 | 99            | 90         |
    | 80    | 1024    | 12.54  ±  0.1 |      5.71  ±  1.3 | 99            | 90         |
    | 80    | 2048    | 66.70  ±  0.5 |     24.55  ± 15.5 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 90    | 64      |  0.18  ±  0.0 |      0.05  ±  0.0 | 100           | 90         |
    | 90    | 256     |  0.95  ±  0.3 |      0.38  ±  0.1 | 99            | 90         |
    | 90    | 512     |  3.17  ±  0.1 |      1.44  ±  0.5 | 99            | 90         |
    | 90    | 1024    | 12.37  ±  0.1 |      6.12  ±  1.7 | 99            | 90         |
    | 90    | 2048    | 66.18  ±  0.3 |     27.09  ± 10.1 | 99            | 90         |
    +-------+---------+---------------+-------------------+---------------+------------+
    

    previous version

    Benchmarking over 20 trials
    
    
    Benchmarking ρ = 90% [██████████████████████████████████████████████████] 100% [00m:54s]                                                                                                                           
    
    +-------+---------+---------------+-------------------+---------------+------------+
    | ρ [%] | # assoc | affinity [ms] | dense clique [ms] | precision [%] | recall [%] |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 0     | 64      |  0.20  ±  0.0 |      0.06  ±  0.0 | 100           | 90         |
    | 0     | 256     |  1.12  ±  0.3 |      0.63  ±  0.3 | 100           | 89         |
    | 0     | 512     |  4.37  ±  2.0 |      2.12  ±  0.9 | 100           | 89         |
    | 0     | 1024    | 16.27  ±  2.8 |     10.21  ±  4.6 | 100           | 89         |
    | 0     | 2048    | 94.61  ±  2.4 |     43.25  ± 19.7 | 100           | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 20    | 64      |  0.19  ±  0.0 |      0.15  ±  0.0 | 100           | 89         |
    | 20    | 256     |  1.02  ±  0.1 |      2.00  ±  0.5 | 100           | 89         |
    | 20    | 512     |  3.81  ±  0.7 |      8.84  ±  1.9 | 100           | 89         |
    | 20    | 1024    | 14.63  ±  0.6 |     44.83  ± 17.2 | 100           | 89         |
    | 20    | 2048    | 84.86  ±  0.9 |    223.11  ± 63.0 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 40    | 64      |  0.28  ±  0.4 |      0.15  ±  0.0 | 100           | 89         |
    | 40    | 256     |  0.99  ±  0.2 |      1.81  ±  0.3 | 99            | 89         |
    | 40    | 512     |  3.51  ±  0.1 |      7.50  ±  1.8 | 100           | 89         |
    | 40    | 1024    | 13.88  ±  1.0 |     32.58  ±  7.6 | 99            | 89         |
    | 40    | 2048    | 71.31  ±  1.0 |    192.51  ± 44.8 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 80    | 64      |  0.18  ±  0.0 |      0.11  ±  0.0 | 100           | 91         |
    | 80    | 256     |  0.93  ±  0.1 |      1.11  ±  0.2 | 99            | 89         |
    | 80    | 512     |  3.20  ±  0.1 |      4.01  ±  0.7 | 99            | 90         |
    | 80    | 1024    | 12.61  ±  0.2 |     18.62  ±  4.3 | 99            | 89         |
    | 80    | 2048    | 67.52  ±  1.5 |     87.97  ± 15.6 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 90    | 64      |  0.18  ±  0.0 |      0.13  ±  0.0 | 99            | 91         |
    | 90    | 256     |  0.90  ±  0.0 |      1.08  ±  0.2 | 99            | 91         |
    | 90    | 512     |  3.24  ±  0.1 |      3.76  ±  0.5 | 99            | 90         |
    | 90    | 1024    | 12.44  ±  0.1 |     16.57  ±  3.9 | 100           | 90         |
    | 90    | 2048    | 66.41  ±  0.4 |     77.50  ± 15.0 | 99            | 90         |
    +-------+---------+---------------+-------------------+---------------+------------+
    
    opened by plusk01 0
  • Updates


    various updates, including a version bump to 0.2.1

    • prepend CMake variables with CLIPPER_
    • use FetchContent in CMake
    • fix python parallelization
    • set M and C during the same function call
    • allows providing an initial condition u0 to the gradient optimizer
    • added tests
    opened by plusk01 0
  • refactoring for additional speed improvements


    These changes are API-breaking.

    Further speed improvements over #3 are obtained by improving the use of sparsity (i.e., only populating the upper triangle) and by limiting the amount of data that is copied.

    Note: this PR leaves the MATLAB bindings broken for now.

    Sparse and avoiding mat-vec multiplies (#3)

    Benchmarking over 20 trials
    
    
    Benchmarking ρ = 90% [██████████████████████████████████████████████████] 100% [00m:56s]               
    
    +-------+---------+----------------+-------------------+---------------+------------+
    | ρ [%] | # assoc |  affinity [ms] | dense clique [ms] | precision [%] | recall [%] |
    +-------+---------+----------------+-------------------+---------------+------------+
    | 0     | 64      |   0.52  ±  0.9 |      0.07  ±  0.0 | 100           | 89         |
    | 0     | 256     |   1.41  ±  0.4 |      0.92  ±  0.2 | 100           | 89         |
    | 0     | 512     |   5.30  ±  0.3 |      3.62  ±  0.8 | 100           | 89         |
    | 0     | 1024    |  42.30  ±  1.7 |      9.63  ±  1.5 | 100           | 89         |
    | 0     | 2048    | 174.11  ±  9.8 |     47.51  ±  9.4 | 100           | 89         |
    +-------+---------+----------------+-------------------+---------------+------------+
    | 20    | 64      |   0.25  ±  0.0 |      0.24  ±  0.0 | 100           | 89         |
    | 20    | 256     |   1.18  ±  0.1 |      3.89  ±  0.8 | 100           | 89         |
    | 20    | 512     |   5.10  ±  0.3 |     15.27  ±  2.5 | 99            | 89         |
    | 20    | 1024    |  23.18  ±  2.5 |     54.81  ± 11.1 | 99            | 89         |
    | 20    | 2048    | 133.73  ± 12.5 |    345.16  ± 77.3 | 99            | 89         |
    +-------+---------+----------------+-------------------+---------------+------------+
    | 40    | 64      |   5.34  ±  5.6 |      0.25  ±  0.0 | 100           | 89         |
    | 40    | 256     |   1.04  ±  0.1 |      3.36  ±  0.7 | 99            | 89         |
    | 40    | 512     |   4.22  ±  0.4 |     12.42  ±  1.5 | 100           | 89         |
    | 40    | 1024    |  21.38  ±  3.6 |     48.93  ± 11.1 | 99            | 89         |
    | 40    | 2048    |  92.75  ±  4.5 |    264.75  ± 55.5 | 99            | 89         |
    +-------+---------+----------------+-------------------+---------------+------------+
    | 80    | 64      |   0.37  ±  0.6 |      0.16  ±  0.0 | 99            | 91         |
    | 80    | 256     |   0.86  ±  0.1 |      1.71  ±  0.2 | 100           | 90         |
    | 80    | 512     |   3.46  ±  0.3 |      7.44  ±  1.7 | 99            | 89         |
    | 80    | 1024    |  15.76  ±  0.4 |     23.73  ±  3.6 | 99            | 89         |
    | 80    | 2048    |  63.44  ±  4.6 |    113.56  ± 28.4 | 99            | 89         |
    +-------+---------+----------------+-------------------+---------------+------------+
    | 90    | 64      |   0.23  ±  0.0 |      0.17  ±  0.0 | 99            | 91         |
    | 90    | 256     |   0.86  ±  0.1 |      1.64  ±  0.2 | 99            | 90         |
    | 90    | 512     |   3.22  ±  0.4 |      6.25  ±  1.0 | 99            | 90         |
    | 90    | 1024    |  15.01  ±  0.5 |     21.29  ±  2.9 | 99            | 90         |
    | 90    | 2048    |  61.48  ±  3.3 |     93.10  ± 13.6 | 99            | 90         |
    +-------+---------+----------------+-------------------+---------------+------------+
    

    this version

    Benchmarking over 20 trials
    
    
    Benchmarking ρ = 90% [██████████████████████████████████████████████████] 100% [00m:46s]               
    
    +-------+---------+---------------+-------------------+---------------+------------+
    | ρ [%] | # assoc | affinity [ms] | dense clique [ms] | precision [%] | recall [%] |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 0     | 64      |  0.29  ±  0.3 |      0.07  ±  0.0 | 100           | 89         |
    | 0     | 256     |  0.88  ±  0.1 |      0.79  ±  0.2 | 100           | 89         |
    | 0     | 512     |  3.42  ±  0.3 |      3.18  ±  1.1 | 100           | 89         |
    | 0     | 1024    | 16.52  ±  1.2 |     11.79  ±  4.5 | 100           | 89         |
    | 0     | 2048    | 73.97  ±  2.5 |     44.81  ± 20.4 | 100           | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 20    | 64      |  0.81  ±  2.6 |      0.21  ±  0.0 | 100           | 89         |
    | 20    | 256     |  0.88  ±  0.1 |      3.15  ±  0.7 | 100           | 89         |
    | 20    | 512     |  3.63  ±  0.4 |     13.58  ±  2.3 | 100           | 89         |
    | 20    | 1024    | 16.22  ±  0.3 |     40.62  ± 10.0 | 99            | 89         |
    | 20    | 2048    | 69.55  ±  4.3 |    235.73  ± 50.8 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 40    | 64      |  0.22  ±  0.0 |      0.20  ±  0.0 | 100           | 89         |
    | 40    | 256     |  0.86  ±  0.1 |      2.49  ±  0.3 | 99            | 89         |
    | 40    | 512     |  3.49  ±  0.4 |     11.55  ±  2.3 | 100           | 89         |
    | 40    | 1024    | 15.40  ±  0.4 |     39.22  ±  8.8 | 100           | 89         |
    | 40    | 2048    | 57.30  ±  2.6 |    187.87  ± 50.2 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 80    | 64      |  0.22  ±  0.0 |      0.14  ±  0.0 | 100           | 91         |
    | 80    | 256     |  0.83  ±  0.1 |      1.50  ±  0.2 | 99            | 90         |
    | 80    | 512     |  3.10  ±  0.3 |      5.81  ±  1.0 | 99            | 89         |
    | 80    | 1024    | 15.39  ±  3.1 |     23.94  ±  6.1 | 99            | 89         |
    | 80    | 2048    | 53.72  ±  1.4 |     86.29  ± 20.8 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 90    | 64      |  0.22  ±  0.0 |      0.16  ±  0.0 | 100           | 92         |
    | 90    | 256     |  0.98  ±  0.8 |      1.48  ±  0.2 | 99            | 89         |
    | 90    | 512     |  3.43  ±  1.1 |      5.57  ±  1.0 | 99            | 91         |
    | 90    | 1024    | 14.61  ±  1.2 |     21.31  ±  2.4 | 99            | 90         |
    | 90    | 2048    | 53.18  ±  1.8 |     83.43  ± 15.7 | 99            | 90         |
    +-------+---------+---------------+-------------------+---------------+------------+
    
    opened by plusk01 0
  • benchmark suite


    introduces a benchmark suite to test and verify algorithmic changes and their effects on timing and precision/recall.

    For example, when run on 14d0e6ffe72a9cc26b1610a3b72de42893d04fb3, the output looks like

    Benchmarking over 20 trials
    
    
    Benchmarking ρ = 90% [██████████████████████████████████████████████████] 100% [01m:16s]               
    
    +-------+---------+---------------+-------------------+---------------+------------+
    | ρ [%] | # assoc | affinity [ms] | dense clique [ms] | precision [%] | recall [%] |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 0     | 64      |  0.28  ±  0.3 |      0.03  ±  0.0 | 100           | 89         |
    | 0     | 256     |  0.81  ±  0.1 |      0.33  ±  0.0 | 100           | 89         |
    | 0     | 512     |  2.97  ±  0.7 |      1.76  ±  0.0 | 100           | 89         |
    | 0     | 1024    | 21.83  ±  1.2 |     11.33  ±  0.0 | 100           | 89         |
    | 0     | 2048    | 91.30  ±  3.3 |     73.38  ±  4.1 | 100           | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 20    | 64      |  0.22  ±  0.0 |      0.08  ±  0.0 | 100           | 89         |
    | 20    | 256     |  0.81  ±  0.1 |      1.19  ±  0.2 | 100           | 89         |
    | 20    | 512     |  3.20  ±  0.3 |      8.98  ±  1.3 | 100           | 89         |
    | 20    | 1024    | 22.34  ±  0.6 |     48.96  ±  5.7 | 100           | 89         |
    | 20    | 2048    | 90.52  ±  3.1 |    356.03  ± 60.0 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 40    | 64      |  0.23  ±  0.0 |      0.10  ±  0.0 | 100           | 89         |
    | 40    | 256     |  0.80  ±  0.1 |      1.48  ±  0.1 | 100           | 89         |
    | 40    | 512     |  3.32  ±  0.4 |     10.43  ±  1.4 | 99            | 89         |
    | 40    | 1024    | 22.54  ±  0.3 |     57.56  ±  6.4 | 100           | 89         |
    | 40    | 2048    | 90.50  ±  3.1 |    415.87  ± 71.5 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 80    | 64      |  0.21  ±  0.0 |      0.14  ±  0.0 | 99            | 91         |
    | 80    | 256     |  0.80  ±  0.1 |      1.74  ±  0.2 | 100           | 90         |
    | 80    | 512     |  3.48  ±  0.4 |     12.50  ±  1.5 | 99            | 89         |
    | 80    | 1024    | 22.26  ±  0.2 |     77.04  ± 18.3 | 99            | 89         |
    | 80    | 2048    | 90.14  ±  2.3 |    473.96  ± 66.6 | 99            | 89         |
    +-------+---------+---------------+-------------------+---------------+------------+
    | 90    | 64      |  0.22  ±  0.0 |      0.24  ±  0.0 | 100           | 92         |
    | 90    | 256     |  0.81  ±  0.1 |      1.94  ±  0.3 | 99            | 90         |
    | 90    | 512     |  3.62  ±  0.2 |     13.95  ±  1.3 | 99            | 90         |
    | 90    | 1024    | 22.17  ±  0.2 |     78.01  ± 10.8 | 99            | 90         |
    | 90    | 2048    | 90.82  ±  2.4 |    521.04  ± 81.2 | 99            | 90         |
    +-------+---------+---------------+-------------------+---------------+------------+
    
    opened by plusk01 0
  • Refactor


    • breaking api changes
    • generalizes KnownScalePointCloud to EuclideanDistance invariant consistency scoring
    • replaces PlaneCloud with PointNormalDistance invariant consistency scoring
    • unifies affinity matrix construction
    • allows extending the PairwiseInvariant base class in python for rapid prototyping (although parallelism is disabled when using a Python invariant due to GIL, therefore can be on the order of seconds or even tens of seconds - still, nice for quick tests before implementing in C++)
    • incorporates mindist criterion into EuclideanDistance
    • make notes about MKL/BLAS breaking MATLAB and MKL breaking Python
    opened by plusk01 0
Owner
MIT Aerospace Controls Laboratory
see more code at https://gitlab.com/mit-acl