Overview

TorchPQ

TorchPQ is a Python library for Approximate Nearest Neighbor Search (ANNS) and Maximum Inner Product Search (MIPS) on GPU using the Product Quantization (PQ) algorithm. TorchPQ is implemented mainly in PyTorch, with extra CUDA kernels to accelerate clustering, indexing and searching.

Install

First, install the version of the CuPy library that matches your CUDA version:

pip install cupy-cuda90
pip install cupy-cuda100
pip install cupy-cuda101
...

Then install TorchPQ:

pip install torchpq

For a full list of cupy-cuda versions, see the CuPy Installation Guide.

Quick Start

IVFPQ

InVerted File Product Quantization (IVFPQ) is a type of ANN search algorithm designed for fast and efficient vector search in million- or even billion-scale vector sets. Check the original paper for more details.

Training

import torch
from torchpq import IVFPQ

n_data = 1000000 # number of data points
d_vector = 128 # dimensionality / number of features

index = IVFPQ(
  d_vector=d_vector,
  n_subvectors=64,
  n_cq_clusters=1024,
  n_pq_clusters=256,
  blocksize=128,
  distance="euclidean",
)

x = torch.randn(d_vector, n_data, device="cuda:0")
index.train(x)

There are some important parameters that need to be explained:

  • d_vector: dimensionality of input vectors. There are two constraints on d_vector: (1) it needs to be divisible by n_subvectors; (2) it needs to be a multiple of 4.*
  • n_subvectors: number of subquantizers; essentially, this is the byte size of each quantized vector (64 bytes per vector in the above example).**
  • n_cq_clusters: number of coarse quantizer clusters
  • n_pq_clusters: number of product quantizer clusters; this is assumed to be 256 throughout the entire project, and should NOT be changed.
  • blocksize: initial capacity assigned to each Voronoi cell of the coarse quantizer. n_cq_clusters * blocksize is the number of vectors that can be stored initially. If any cell reaches its capacity, it is automatically expanded. If you need to add vectors frequently, a larger blocksize is recommended.

Remember that the shape of any tensor that contains data points has to be [d_vector, n_data].

* The second constraint may be removed in the future.
** The actual byte size is (n_subvectors + 9) bytes: 8 bytes for the ID and 1 byte for the is_empty flag.
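
As a rough sanity check, footnote ** implies the following storage estimate (a sketch; it simply reuses the numbers from the example above):

bytes_per_vector = 64 + 9 # 64 code bytes + 8-byte ID + 1-byte is_empty flag
total_mb = n_data * bytes_per_vector / 1024**2
print(f"~{total_mb:.0f} MB for {n_data} vectors") # ~70 MB for 1,000,000 vectors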

Adding new vectors

ids = torch.arange(n_data, device="cuda")
index.add(x, input_ids=ids)

Each ID in ids needs to be a unique int64 (torch.long) value that identifies a vector in x. If input_ids is not provided, it will be set to torch.arange(n_data, device="cuda") + previous_max_id.
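
For example, a second batch can be added without explicit IDs, in which case the index assigns them automatically (a sketch; new_x is a hypothetical name):

new_x = torch.randn(d_vector, 10000, device="cuda:0")
index.add(new_x) # IDs assigned automatically, continuing from the previous max ID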

Removing vectors

index.remove(ids)

index.remove(ids) virtually removes the vectors with the specified IDs from storage. IDs that don't exist are ignored.

Topk search

index.n_probe = 32
n_query = 10000
query = torch.randn(d_vector, n_query, device="cuda:0")
topk_values, topk_ids = index.topk(query, k=100)
  • when distance="inner", topk_values are inner product of queries and topk closest data points.
  • when distance="euclidean", topk_values are negative squared L2 distance between queries and topk closest data points.
  • when distance="manhattan", topk_values are negative L1 distance between queries and topk closest data points.
  • when distance="cosine", topk_values are cosine similarity between queries and topk closest data points.

Encode and Decode

You can use IVFPQ as a vector codec for lossy compression of vectors:

code = index.encode(query)   # compression
reconstruction = index.decode(code) # reconstruction
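
Since PQ is lossy, the reconstruction only approximates the input. A quick way to gauge the distortion (a sketch, assuming decode returns a tensor with the same shape as query):

mse = (query - reconstruction).pow(2).mean()
print(mse.item()) # mean squared reconstruction error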

Save and Load

Most TorchPQ modules inherit from torch.nn.Module, which means you can save and load them just like a regular PyTorch model.

# Save to PATH
torch.save(index.state_dict(), PATH)
# Load from PATH
index.load_state_dict(torch.load(PATH))
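
Note that load_state_dict restores parameters into an existing module, so the index should be constructed with the same arguments before loading (a sketch, reusing the constructor arguments from the training example above):

index = IVFPQ(
  d_vector=128,
  n_subvectors=64,
  n_cq_clusters=1024,
  n_pq_clusters=256,
  blocksize=128,
  distance="euclidean",
)
index.load_state_dict(torch.load(PATH))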

Clustering

K-means

from torchpq.kmeans import KMeans
import torch

n_data = 1000000 # number of data points
d_vector = 128 # dimensionality / number of features
x = torch.randn(d_vector, n_data, device="cuda")

kmeans = KMeans(n_clusters=4096, distance="euclidean")
labels = kmeans.fit(x)

Notice that the shape of the tensor that contains data points has to be [d_vector, n_data]; this is consistent throughout TorchPQ.
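
For example, you can check how balanced the resulting clustering is with plain PyTorch (a sketch, assuming labels is a 1-D integer tensor of cluster assignments):

sizes = torch.bincount(labels, minlength=4096) # number of points per cluster
print(sizes.min().item(), sizes.max().item()) # smallest and largest cluster sizes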

Multiple concurrent K-means

Sometimes we have multiple independent datasets that need to be clustered; instead of running multiple KMeans sequentially, we can perform multiple k-means concurrently with MultiKMeans:

from torchpq.kmeans import MultiKMeans
import torch

n_data = 1000000
n_kmeans = 16
d_vector = 64
x = torch.randn(n_kmeans, d_vector, n_data, device="cuda")
kmeans = MultiKMeans(n_clusters=256, distance="euclidean")
labels = kmeans.fit(x)
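
Here labels should contain one row of cluster assignments per k-means instance (a sketch, assuming fit returns a [n_kmeans, n_data] tensor):

print(labels.shape) # expected: torch.Size([16, 1000000])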

Prediction with K-means

labels = kmeans.predict(x)
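
predict assigns each point to its nearest learned centroid; the input must keep the same layout used for fitting (a sketch; new_x is a hypothetical batch for the MultiKMeans above):

new_x = torch.randn(n_kmeans, d_vector, 1000, device="cuda")
new_labels = kmeans.predict(new_x)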

Benchmark

All experiments were performed with a Tesla T4 GPU.

SIFT1M

IVFPQ

Faiss is one of the best-known ANN search libraries, and it also has a GPU implementation of IVFPQ, so we ran some comparison experiments against faiss.

How to read the plot:

  • the plot format follows the style of ann-benchmarks
  • X axis is recall@1.0, Y axis is queries/second
  • the closer to the top right corner, the better
  • indexes with the same parameters from different libraries have similar colors
  • different libraries have different line styles (TorchPQ is a solid line with circle markers, faiss is a dashed line with triangle markers)
  • each node on a line represents a different n_probe, starting from 1 at the leftmost node and doubling at each subsequent node (n_probe = 1, 2, 4, 8, 16, ...)

Summary:

  • for all the IVF16384 variants, TorchPQ outperforms faiss when n_probe > 16.
  • for IVF4096, TorchPQ has lower recall@1.0 compared to faiss; this could be caused by not encoding residuals. An option to encode residuals will be added soon.

IVFPQ+R

GIST1M

coming soon...

Issues
  • Import Error in Minibatch K means

    just tried this today

    Traceback (most recent call last):
      File "/datadrive/phd-projects/PiCIE/eval_minimal.py", line 18, in <module>
        from torchpq.clustering import MinibatchKMeans
      File "/anaconda/envs/py38_pytorch/lib/python3.8/site-packages/torchpq/__init__.py", line 11, in <module>
        from . import experimental
    ImportError: cannot import name 'experimental' from partially initialized module 'torchpq' (most likely due to a circular import) (/anaconda/envs/py38_pytorch/lib/python3.8/site-packages/torchpq/__init__.py)
    
    opened by mhamilton723 3
  • cupy.util doesn't exist anymore.

    When getting the latest version of cupy with pip I get an error saying that cupy.util doesn't exist.
    The function seems to have been moved: https://docs.cupy.dev/en/stable/reference/generated/cupy.memoize.html

    opened by francisr 1
  • remove .util from cupy decorator and bump version in setup.py

    pip install is not picking up @francisr's changes and he missed one of the other cp.util decorators which causes the following error when trying to import torchpq with cupy-cuda112: AttributeError: module 'cupy' has no attribute 'util'

    @DeMoriarty could you ensure the pip package is updated with this pull request please? Thanks!

    opened by McHughes288 1
  • Error while importing torchpq.clustering

    I see the following error when I try to import torchpq.clustering.

    ---------------------------------------------------------------------------
    FileNotFoundError                         Traceback (most recent call last)
    /tmp/ipykernel_7302/3376715144.py in <module>
    ----> 1 from torchpq import clustering
    
    ~/.local/lib/python3.8/site-packages/torchpq/__init__.py in <module>
         18 from .CustomModule import CustomModule
         19 
    ---> 20 topk = fn.Topk()
    
    ~/.local/lib/python3.8/site-packages/torchpq/fn/Topk.py in __init__(self)
          4 class Topk:
          5   def __init__(self):
    ----> 6     self._top32_cuda = TopkSelectCuda(
          7       tpb = 32,
          8       queue_capacity = 4,
    
    ~/.local/lib/python3.8/site-packages/torchpq/kernels/TopkSelectCuda.py in __init__(self, tpb, queue_capacity, buffer_size)
         23     self.buffer_size = buffer_size
         24 
    ---> 25     with open(get_absolute_path("kernels", "cuda", "topk_select.cu"),'r') as f: ###
         26       self.kernel = f.read()
         27 
    
    FileNotFoundError: [Errno 2] No such file or directory: '/home/XXXX/.local/lib/python3.8/site-packages/torchpq/kernels/cuda/topk_select.cu'
    

    Installation details:

    • Used pip to install cupy-cuda110
    • PyTorch version: 1.7.1
    • CUDA: 11.0

    However, I am able to run from torchpq.index import IVFPQIndex without any issue. Can you please help me fix this?

    opened by abhinavvs 1
  • Fix undefined device variable

    device -> data.device.

    opened by francisr 0
  • merge main to v0.2dev

    opened by DeMoriarty 0
  • Fix bug in MinibatchKMeans.py

    opened by mhamilton723 0
  • fix minibatch clustering

    opened by mhamilton723 0
  • V030release


    opened by DeMoriarty 0
  • torchPQ in a deep nets

    Hello,

    Thank you very much for sharing the project. I am interested in using torchPQ inside a deep nets (implemented in pytorch) where in each forward pass, I will call torchPQ. I was wondering is this possible?

    Also, I saw https://ai.googleblog.com/2020/07/announcing-scann-efficient-vector.html, have you tried some comparison with other methods?

    Thank you!

    opened by Chen-Cai-OSU 9
Releases (v0.3.0)
  • v0.3.0(Oct 22, 2021)

  • v0.2.0.2(Oct 22, 2021)

  • v0.2.0(Jul 11, 2021)

    What's new in v0.2.0?

    1. IVFPQIndex search speed is greatly improved, now at least 2 times faster than before.

    2. IVFPQIndex now supports encoding residuals; pass pq_use_residual=True to the initializer to enable residual encoding. This improves recall, especially for small code sizes. You can see the difference between residual encoding on and off in the benchmark results.

    3. Added new submodules, such as index, clustering, codec, transform and more.

    4. Old implementations from v0.1 are moved to torchpq/legacy/.

    5. More thorough benchmark results on the SIFT1M dataset.
