SPTAG: A library for fast approximate nearest neighbor search

Overview


SPTAG (Space Partition Tree And Graph) is a library for large-scale approximate nearest neighbor search over vectors, released by Microsoft Research (MSR) and Microsoft Bing.

(architecture diagram)

Introduction

This library assumes that samples are represented as vectors and that vectors can be compared by L2 distance or cosine distance. The vectors returned for a query are those with the smallest L2 distance or cosine distance to the query vector.
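
For reference, the two supported distance measures look like this in NumPy (an illustrative sketch only, not SPTAG code; the array shapes are made up for the example):

    # Illustrative NumPy sketch of the two distance measures SPTAG supports.
    import numpy as np

    vectors = np.random.rand(1000, 128).astype(np.float32)   # indexed vectors
    query = np.random.rand(128).astype(np.float32)            # query vector

    l2 = np.linalg.norm(vectors - query, axis=1)               # smaller = closer
    cosine = 1.0 - (vectors @ query) / (
        np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))  # smaller = closer

    nearest_by_l2 = np.argsort(l2)[:3]
    nearest_by_cosine = np.argsort(cosine)[:3]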

SPTAG provides two methods: kd-tree with relative neighborhood graph (SPTAG-KDT) and balanced k-means tree with relative neighborhood graph (SPTAG-BKT). SPTAG-KDT is advantageous in index-building cost, while SPTAG-BKT is advantageous in search accuracy for very high-dimensional data.

How it works

SPTAG is inspired by the NGS approach [WangL12]. It contains two basic modules: an index builder and a searcher. The relative neighborhood graph (RNG) is built on the k-nearest neighbor graph [WangWZTGL12, WangWJLZZH14] to boost connectivity. Balanced k-means trees are used instead of kd-trees to avoid the inaccurate distance-bound estimation that kd-trees suffer from on very high-dimensional vectors. A search first probes the space-partition trees to find several seeds, which then start the search in the RNG; the tree and graph searches are conducted iteratively.
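
The following is a heavily simplified Python sketch of that tree-then-graph flow, included only to illustrate the idea; trees, graph, and find_seeds are hypothetical stand-ins, and SPTAG's actual search is implemented in C++ and alternates between the trees and the graph rather than running each stage once:

    # Heavily simplified sketch of the tree-then-graph search described above.
    import heapq
    import numpy as np

    def ann_search(query, trees, graph, vectors, k, max_check=2048):
        # Step 1: each space-partition tree contributes a few seed vertices.
        seeds = set()
        for tree in trees:
            seeds.update(tree.find_seeds(query))   # hypothetical helper

        def dist(i):
            return float(np.linalg.norm(vectors[i] - query))

        # Step 2: best-first search over the relative neighborhood graph (RNG).
        frontier = [(dist(i), i) for i in seeds]
        heapq.heapify(frontier)
        visited = set(seeds)
        best = []          # max-heap (negated distances) holding the k best so far
        checked = 0
        while frontier and checked < max_check:
            d, i = heapq.heappop(frontier)
            heapq.heappush(best, (-d, i))
            if len(best) > k:
                heapq.heappop(best)
            for j in graph[i]:                     # RNG neighbors of vertex i
                if j not in visited:
                    visited.add(j)
                    heapq.heappush(frontier, (dist(j), j))
            checked += 1
        return sorted(((i, -nd) for nd, i in best), key=lambda p: p[1])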

Highlights

  • Fresh update: Support online vector deletion and insertion
  • Distributed serving: Search over multiple machines

Build

Requirements

  • swig >= 3.0
  • cmake >= 3.12.0
  • boost >= 1.67.0

Fast clone

set GIT_LFS_SKIP_SMUDGE=1
git clone https://github.com/microsoft/SPTAG

OR

git config --global filter.lfs.smudge "git-lfs smudge --skip -- %f"
git config --global filter.lfs.process "git-lfs filter-process --skip"

Install

For Linux:

mkdir build
cd build && cmake .. && make

It will generate a Release folder in the code directory which contains all the build targets.

For Windows:

mkdir build
cd build && cmake -A x64 ..

It will generate a SPTAGLib.sln in the build directory. Compiling the ALL_BUILD project in Visual Studio (2019 or later) will generate a Release directory that contains all the build targets.

For detailed instructions on installing Windows binaries, please see here

Using Docker:

docker build -t sptag .

This builds a Docker image with the binaries in /app/Release/.

Verify

Run SPTAGTest (or Test.exe) in the Release folder to verify that all the tests pass.

Usage

The detailed usage can be found in Get started. There is also an end-to-end tutorial for building a vector search online service using the Python wrapper in Python Tutorial. Detailed parameter tuning can be found in Parameters.
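
As a quick orientation, here is a condensed sketch of building and searching an index through the Python wrapper, based on the samples in Get started; method and parameter signatures have changed between SPTAG releases, so treat it as a guide rather than the definitive API:

    # Condensed sketch based on the Python samples in "Get started"; signatures may
    # differ between SPTAG versions.
    import numpy as np
    import SPTAG

    dim, n, k = 10, 100, 3
    data = np.random.rand(n, dim).astype(np.float32)

    index = SPTAG.AnnIndex('BKT', 'Float', dim)     # 'BKT' or 'KDT'
    index.SetBuildParam("NumberOfThreads", '4')
    index.SetBuildParam("DistCalcMethod", 'L2')      # or 'Cosine'
    if index.Build(data, n):
        index.Save('sptag_index')                    # hypothetical output directory

    query = np.random.rand(dim).astype(np.float32)
    loaded = SPTAG.AnnIndex.Load('sptag_index')
    result = loaded.Search(query, k)
    print(result[0])   # ids of the k nearest vectors
    print(result[1])   # their distances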

References

Please cite SPTAG in your publications if it helps your research:

@inproceedings{ChenW21,
  author = {Qi Chen and 
            Bing Zhao and 
            Haidong Wang and 
            Mingqin Li and 
            Chuanjie Liu and 
            Zengzhong Li and 
            Mao Yang and 
            Jingdong Wang},
  title = {SPANN: Highly-efficient Billion-scale Approximate Nearest Neighbor Search},
  booktitle = {35th Conference on Neural Information Processing Systems (NeurIPS 2021)},
  year = {2021}
}

@manual{ChenW18,
  author    = {Qi Chen and
               Haidong Wang and
               Mingqin Li and 
               Gang Ren and
               Scarlett Li and
               Jeffery Zhu and
               Jason Li and
               Chuanjie Liu and
               Lintao Zhang and
               Jingdong Wang},
  title     = {SPTAG: A library for fast approximate nearest neighbor search},
  url       = {https://github.com/Microsoft/SPTAG},
  year      = {2018}
}

@inproceedings{WangL12,
  author    = {Jingdong Wang and
               Shipeng Li},
  title     = {Query-driven iterated neighborhood graph search for large scale indexing},
  booktitle = {ACM Multimedia 2012},
  pages     = {179--188},
  year      = {2012}
}

@inproceedings{WangWZTGL12,
  author    = {Jing Wang and
               Jingdong Wang and
               Gang Zeng and
               Zhuowen Tu and
               Rui Gan and
               Shipeng Li},
  title     = {Scalable k-NN graph construction for visual descriptors},
  booktitle = {CVPR 2012},
  pages     = {1106--1113},
  year      = {2012}
}

@article{WangWJLZZH14,
  author    = {Jingdong Wang and
               Naiyan Wang and
               You Jia and
               Jian Li and
               Gang Zeng and
               Hongbin Zha and
               Xian{-}Sheng Hua},
  title     = {Trinary-Projection Trees for Approximate Nearest Neighbor Search},
  journal   = {{IEEE} Trans. Pattern Anal. Mach. Intell.},
  volume    = {36},
  number    = {2},
  pages     = {388--403},
  year      = {2014}
}

Contribute

This project welcomes contributions and suggestions from all users.

We use GitHub issues for tracking suggestions and bugs.

License

The entire codebase is under the MIT license.

Comments
  • Test search with metadata - can someone explain what metadata is stored/associated?


    I am still trying to figure out SPTAG, but I am unclear on what the metadata methods provide. For example, in the Test() method in the sample:

    the metadata is generated thus:

    m = ''  # build one newline-terminated record per index i
    for i in range(n):
        m += str(i) + '\n'
    m = m.encode()
    

    Does this mean each line is associated with the corresponding row in the vectors? Can someone provide a simple example of storing metadata with the vectors?
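
    For what it is worth, the older Python samples associate the i-th newline-terminated record with the i-th row of the data matrix. A hedged sketch along those lines (BuildWithMetaData and SearchWithMetaData follow those samples and may be named or shaped differently in current releases):

        # Hedged sketch of attaching one metadata record per vector, following the
        # older SPTAG Python samples; exact signatures may differ across releases.
        import numpy as np
        import SPTAG

        dim, n, k = 10, 100, 3
        data = np.random.rand(n, dim).astype(np.float32)

        # One newline-terminated record per row of `data`, in the same order.
        meta = ''.join(str(i) + '\n' for i in range(n)).encode()

        index = SPTAG.AnnIndex('BKT', 'Float', dim)
        index.SetBuildParam("NumberOfThreads", '4')
        index.SetBuildParam("DistCalcMethod", 'L2')
        if index.BuildWithMetaData(data, meta, n, False):
            index.Save('sptag_index_meta')

        loaded = SPTAG.AnnIndex.Load('sptag_index_meta')
        result = loaded.SearchWithMetaData(data[0], k)
        print(result[0])   # ids
        print(result[1])   # distances
        print(result[2])   # metadata records of the returned vectors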

    question 
    opened by shashi-netra 5
  • Error while running cmake - undefined reference to 'SPTAG::Helper::DefaultReader::DefaultReader


    Describe the bug: error while running cmake (screenshot attached).


    Desktop (please complete the following information):

    • OS: Linux (Ubuntu)
    • Version 18.04

    I am getting the same error when running in Docker and on an Ubuntu 18.04 machine.

    All the prerequisites (cmake, boost, and swig) are installed properly. The CMakeOutput.log is also attached.

    CMakeOutput.log

    customer raised answered 
    opened by javedsha 4
  • Minor rework of SpinLock class for making it more efficient on Unlock()


    • atomic_flag is initialized using the recommended ATOMIC_FLAG_INIT
    • Unlock() clears the atomic_flag using the more efficient std::memory_order_release
    • Unlock() and Lock() methods are now both noexcept
    • Removed unnecessary include files
    opened by noSTALKER 4
  • boost 1.70.0 causes a cmake error; fixed by installing boost 1.67.0


    Describe the bug: when I install SPTAG with boost 1.70.0, cmake produces the error message below on an Ubuntu machine.

    /usr/local/include/boost/asio/detail/io_object_impl.hpp:88:53: error: ‘class boost::asio::execution_context’ has no member named ‘get_executor’

    Removing 1.70.0 and installing boost 1.67.0 solved the build issue.

    Here is the detailed error message.

    /usr/local/include/boost/asio/io_context_strand.hpp:89:19: note:   no known conversion for argument 1 from ‘boost::asio::execution_context’ to ‘const boost::asio::io_context::strand&’
    
    In file included from /usr/local/include/boost/asio/basic_socket.hpp:21:0,
                     from /usr/local/include/boost/asio/basic_socket_acceptor.hpp:19,
                     from /usr/local/include/boost/asio/ip/tcp.hpp:19,
                     from /home/USERNAME/project/SPTAG/AnnService/inc/Socket/Connection.h:13,
                     from /home/USERNAME/project/SPTAG/AnnService/src/Socket/Connection.cpp:4:
    
    /usr/local/include/boost/asio/detail/io_object_impl.hpp: In instantiation of ‘boost::asio::detail::io_object_impl<IoObjectService, Executor>::io_object_impl(ExecutionContext&, typename std::enable_if<std::is_convertible<ExecutionContext&, boost::asio::execution_context&>::value>::type*) [with ExecutionContext = boost::asio::execution_context; IoObjectService = boost::asio::detail::deadline_timer_service<boost::asio::time_traits<boost::posix_time::ptime> >; Executor = boost::asio::executor; typename std::enable_if<std::is_convertible<ExecutionContext&, boost::asio::execution_context&>::value>::type = void]’:
    
    /usr/local/include/boost/asio/basic_deadline_timer.hpp:174:20:   required from ‘boost::asio::basic_deadline_timer<Time, TimeTraits, Executor>::basic_deadline_timer(ExecutionContext&, typename std::enable_if<std::is_convertible<ExecutionContext&, boost::asio::execution_context&>::value>::type*) [with ExecutionContext = boost::asio::execution_context; Time = boost::posix_time::ptime; TimeTraits = boost::asio::time_traits<boost::posix_time::ptime>; Executor = boost::asio::executor; typename std::enable_if<std::is_convertible<ExecutionContext&, boost::asio::execution_context&>::value>::type = void]’
    
    /home/USERNAME/project/SPTAG/AnnService/src/Socket/Connection.cpp:29:31:   required from here
    
    /usr/local/include/boost/asio/detail/io_object_impl.hpp:88:53: error: ‘class boost::asio::execution_context’ has no member named ‘get_executor’
             is_same<ExecutionContext, io_context>::value)
                                                         ^
    
    AnnService/CMakeFiles/server.dir/build.make:348: recipe for target 'AnnService/CMakeFiles/server.dir/src/Socket/Connection.cpp.o' failed
    make[2]: *** [AnnService/CMakeFiles/server.dir/src/Socket/Connection.cpp.o] Error 1
    CMakeFiles/Makefile2:176: recipe for target 'AnnService/CMakeFiles/server.dir/all' failed
    make[1]: *** [AnnService/CMakeFiles/server.dir/all] Error 2
    Makefile:129: recipe for target 'all' failed
    make: *** [all] Error 2
    

    To reproduce: install boost 1.70.0 and follow the install instructions at https://github.com/microsoft/SPTAG:

    mkdir build
    cd build && cmake .. && make
    

    Test OS version information: Microsoft Azure default Ubuntu 14 distribution OS: Linux dw-ubuntu14 4.15.0-1045-azure #49-Ubuntu SMP Mon May 13 16:30:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

    Boost install steps I followed https://stackoverflow.com/questions/12578499/how-to-install-boost-on-ubuntu/24086375#24086375

    As noted above, removing 1.70.0 and installing boost 1.67.0 resolved the build issue.

    opened by CloudBreadPaPa 4
  • support data compression & delta-encoding of posting lists


    New Features in This PR

    • Delta-encoding
    • Data Compression/Decompression with zstd
      • share dictionary or not
    • Rearrange vid/vector in the posting list:
      • vid0, vector0, vid1, vector1... -> vector0, vector1..., vid0, vid1...

    All these features are disabled by default.
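
    To illustrate the idea behind delta-encoding plus zstd compression of a posting list's vector ids, here is a conceptual Python sketch (illustration only: the PR implements this in C++ inside SPTAG, and dictionary training is omitted); the actual switches are listed in the config format below:

        # Conceptual sketch of delta-encoding + zstd compression of sorted vector ids.
        import struct
        import zstandard as zstd

        def delta_encode(sorted_vids):
            # Store the first id and then successive gaps; small gaps compress well.
            deltas = [sorted_vids[0]] + [b - a for a, b in zip(sorted_vids, sorted_vids[1:])]
            return struct.pack('<%di' % len(deltas), *deltas)

        def delta_decode(blob):
            deltas = struct.unpack('<%di' % (len(blob) // 4), blob)
            vids, cur = [], 0
            for d in deltas:
                cur += d
                vids.append(cur)
            return vids

        vids = [10023, 10025, 10026, 10100, 10103]
        compressed = zstd.ZstdCompressor(level=19).compress(delta_encode(vids))
        restored = delta_decode(zstd.ZstdDecompressor().decompress(compressed))
        assert restored == vids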

    Config format:

    [BuildSSDIndex]
    EnableDeltaEncoding=true
    EnablePostingListRearrange=true
    EnableDataCompression=true
    EnableDictTraining=true
    MinDictTrainingBufferSize=1024000
    DictBufferCapacity=10240
    ZstdCompressLevel=19
    

    Evaluation

    tail_30M

    • no dict share, no delta-encoding, CompressLevel: 0, compression ratio: 0.7326
    • MinDictTrainingBufferSize=1024000, DictBufferCapacity: 1024. CompressLevel: 19, compression ratio: 0.7235
    • MinDictTrainingBufferSize=102400, DictBufferCapacity: unknown. CompressLevel: 19, compression ratio: 0.7250
    • MinDictTrainingBufferSize=1024000, DictBufferCapacity: 10240. CompressLevel: 19, compression ratio: 0.7254

    precision_30M

    | EnableDeltaEncoding | EnableDataCompression | Compression Ratio | Avg Search Latency | Latency Regression |
    | ------------------- | --------------------- | ----------------- | ------------------ | ------------------ |
    | False               | False                 | 1                 | 1.953              | 0                  |
    | False               | True                  | 0.7437            | 2.107              | ~8%                |
    | True                | True                  | 0.7314            | 2.438              | ~25%               |

    Key Observations

    • Regression on the search latency:
      • avg: 1.953 -> 2.438
    • Less disk page access:
      • avg: 63.368 -> 52.865

    Notes

    • config detail: MinDictTrainingBufferSize=1024000, DictBufferCapacity: 10240. CompressLevel: 19
    • gzip: 0.7401 (tested with dumped posting lists, with delta-encoding)

    Evaluation Details on Precision_30M

    • Without delta-encoding, without data compression
    Head Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 1.751       1.621   2.132   2.290   3.519   3.981   4.284
    [1]
    Ex Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 0.202       0.196   0.235   0.249   0.318   0.394   0.453
    [1]
    Total Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 1.953       1.815   2.359   2.519   3.843   4.291   4.603
    [1]
    Total Disk Page Access Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 63.368        63      74      78      83      89      92
    [1]
    Total Disk IO Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 32.000        32      32      32      32      32      32
    
    • Without Delta-encoding, with data-compression

    BuildIndex: Total used time: 121.88 minutes (about 2.03 hours)

    Head Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 1.720       1.599   2.129   2.224   3.381   3.740   4.138
    [1]
    Ex Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 0.387       0.382   0.454   0.478   0.555   0.794   1.925
    [1]
    Total Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 2.107       1.983   2.546   2.652   3.972   4.350   4.690
    [1]
    Total Disk Page Access Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 53.318        53      63      66      72      81      87
    [1]
    Total Disk IO Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 32.000        32      32      32      32      32      32
    
    • With delta-encoding, with data compression

    BuildIndex: Total used time: 121.60 minutes (about 2.03 hours).
    Head Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 1.706       1.607   2.073   2.180   2.407   3.611   4.305
    [1]
    Ex Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 0.732       0.724   0.878   0.925   1.029   1.323   3.567
    [1]
    Total Latency Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 2.438       2.352   2.846   2.973   3.272   4.693   6.239
    [1]
    Total Disk Page Access Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 52.847        52      63      66      72      80      85
    [1]
    Total Disk IO Distribution:
    [1] Avg 50tiles 90tiles 95tiles 99tiles 99.9tiles       Max
    [1] 32.000        32      32      32      32      32      32
    
    opened by suiguoxin 3
  • Windows = "Could not find TBB!"

    I installed the latest version of TBB through anaconda (https://anaconda.org/intel/tbb), but for some reason, every time I try to build SPTAG, it keeps stopping with "CMake Error at CMakeLists.txt:109 (message): Could not find TBB!"

    Does anyone know a workaround for this? In case this is due to a faulty TBB installation, does anyone have insight on how to properly install it on a Windows machine (Visual Studio 2019)? All the TBB installation documentation I found is very outdated.

    opened by robertdetroit 3
  • DLL load failed when import SPTAG


    I have built the project successfully, but when I import SPTAG I get this error:

        import sys
        sys.path.append(r"D:\library\github_repositories\SPTAG-2\build\Release")
        import SPTAG

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "D:\library\github_repositories\SPTAG-2\build\Release\SPTAG.py", line 15, in <module>
            import _SPTAG
        ImportError: DLL load failed: The specified module could not be found.

    The files in folder build/Release are:

    _SPTAG.exp _SPTAG.lib _SPTAG.pyd _SPTAGClient.exp _SPTAGClient.lib _SPTAGClient.pyd aggregator.exe client.exe indexbuilder.exe search.exe server.exe SPTAG.py SPTAGClient.py SPTAGLib.dll SPTAGLib.lib test.exe

    My environment is:

    VS2015, Python 3.6, Windows 7

    Could anyone tell me the solution for this error?

    opened by Arctanxy 3
  • pip install sptag. Wheels for Windows 10 fails to import _SPTAG


    Describe the bug

    @PhilipBAdams it seems like you are the only one working on this project. Basically, I cannot access SPTAG through the Python package.

    I think this has to do with Python 3.8; the SWIG version that generates the interface needs to be updated. I would do it, but I am not sure how I would go about modifying the build system to use SWIG 4.0.1.

    Currently it uses SWIG 3.0.x.

    https://docs.python.org/3/whatsnew/3.8.html#bpo-36085-whatsnew

    from sys import version_info
    if version_info >= (2, 7, 0):
        def swig_import_helper():
            import importlib
            pkg = __name__.rpartition('.')[0]
            mname = '.'.join((pkg, '_SPTAG')).lstrip('.') 
            return importlib.import_module(mname)  # <-- fails at this line here
        _SPTAG = swig_import_helper()
        del swig_import_helper
    elif version_info >= (2, 6, 0):
        def swig_import_helper():
            from os.path import dirname
            import imp
            fp = None
            try:
                fp, pathname, description = imp.find_module('_SPTAG', [dirname(__file__)])
            except ImportError:
                import _SPTAG
                return _SPTAG
            if fp is not None:
                try:
                    _mod = imp.load_module('_SPTAG', fp, pathname, description)
                finally:
                    fp.close()
                return _mod
        _SPTAG = swig_import_helper()
        del swig_import_helper
    else:
        import _SPTAG
    del version_info
    
    

    To reproduce: pip install sptag using Python 3.8 (or build from source) and import SPTAG from a test.py file.

    Expected behavior: it should import without a DLL-not-found error.
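
    A possible workaround sketch (an assumption based on the bpo-36085 change linked above, not a confirmed fix for this package): register the directory containing SPTAGLib.dll explicitly before importing, since Python 3.8+ on Windows no longer resolves a module's dependent DLLs from PATH:

        # Workaround sketch (assumption, not a confirmed fix): register the DLL directory
        # explicitly, since Python 3.8+ on Windows no longer searches PATH for dependent DLLs.
        import os
        import sys

        release_dir = r"C:\path\to\SPTAG\Release"   # hypothetical path; adjust to your build
        sys.path.append(release_dir)
        if hasattr(os, "add_dll_directory"):         # available on Python 3.8+ on Windows
            os.add_dll_directory(release_dir)

        import SPTAG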


    Desktop: Windows 10

    opened by EmElleE 2
  • CMake Error while building the code


    Please assist

    Hardware & Software

    1. building the code on a google Colab notebook
    2. installed cmake-3.14.4-Linux-x86_64
    3. boost 1.67
    4. swig 3.0

    I used the code below:

    
    !set GIT_LFS_SKIP_SMUDGE=1
    !git clone https://github.com/microsoft/SPTAG
    
    !sudo apt-get install swig
    !pip install cmake==3.14.4
    
    !wget "https://github.com/Kitware/CMake/releases/download/v3.14.4/cmake-3.14.4-Linux-x86_64.tar.gz" -q -O - \
            | tar -xz --strip-components=1 -C /usr/local
    
    !wget "https://netix.dl.sourceforge.net/project/boost/boost/1.67.0/boost_1_67_0.tar.gz" -q -O - \
            | tar -xz && \
            cd boost_1_67_0 && \
            ./bootstrap.sh && \
            ./b2 install && \
            ldconfig && \
            cd .. && rm -rf boost_1_67_0
    
    %cd /content/SPTAG-2
    !mkdir build && cd build && cmake .. && make && cd ..
    
    

    Below is the output produced immediately after running the above:

    /content/SPTAG
    -- The C compiler identification is GNU 7.5.0
    -- The CXX compiler identification is GNU 7.5.0
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- Build type: Release
    -- Platform type: x64
    -- Found OpenMP_C: -fopenmp (found version "4.5") 
    -- Found OpenMP_CXX: -fopenmp (found version "4.5") 
    -- Found OpenMP: TRUE (found version "4.5")  
    -- Found openmp.
    -- Looking for pthread.h
    -- Looking for pthread.h - found
    -- Looking for pthread_create
    -- Looking for pthread_create - found
    -- Found Threads: TRUE  
    -- Boost version: 1.67.0
    -- Found the following Boost libraries:
    --   system
    --   thread
    --   serialization
    --   wserialization
    --   regex
    --   filesystem
    --   chrono
    --   date_time
    --   atomic
    -- Found Boost.
    -- Include Path: /usr/local/include
    -- Library Path: /usr/local/lib
    -- Library: /usr/local/lib/libboost_system.so;/usr/local/lib/libboost_thread.so;/usr/local/lib/libboost_serialization.so;/usr/local/lib/libboost_wserialization.so;/usr/local/lib/libboost_regex.so;/usr/local/lib/libboost_filesystem.so;/usr/local/lib/libboost_chrono.so;/usr/local/lib/libboost_date_time.so;/usr/local/lib/libboost_atomic.so
    -- Found MPI_C: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so (found version "3.1") 
    -- Found MPI_CXX: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so (found version "3.1") 
    -- Found MPI: TRUE (found version "3.1")  
    -- Found MPI.
    -- MPI Include Path: /usr/lib/x86_64-linux-gnu/openmpi/include/openmpi;/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent;/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi/opal/mca/event/libevent2022/libevent/include;/usr/lib/x86_64-linux-gnu/openmpi/include
    -- MPI Libraries: /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi_cxx.so;/usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so
    -- BOOST_TEST_DYN_LINK
    -- Boost version: 1.67.0
    -- Found the following Boost libraries:
    --   system
    --   thread
    --   serialization
    --   wserialization
    --   regex
    --   filesystem
    --   unit_test_framework
    --   chrono
    --   date_time
    --   atomic
    -- Found Boost.
    -- Include Path: /usr/local/include
    -- Library Path: /usr/local/lib
    -- Library: /usr/local/lib/libboost_system.so;/usr/local/lib/libboost_thread.so;/usr/local/lib/libboost_serialization.so;/usr/local/lib/libboost_wserialization.so;/usr/local/lib/libboost_regex.so;/usr/local/lib/libboost_filesystem.so;/usr/local/lib/libboost_unit_test_framework.so;/usr/local/lib/libboost_chrono.so;/usr/local/lib/libboost_date_time.so;/usr/local/lib/libboost_atomic.so
    -- Found CUDA: /usr/local/cuda (found version "11.1") 
    -- Found cuda.
    -- Include Path:/usr/local/cuda/include
    -- Library Path:/usr/local/cuda/lib64/libcudart_static.adl/usr/lib/x86_64-linux-gnu/librt.so
    -- CUDA_NVCC_FLAGS: -Xcompiler -fPIC -Xcompiler -fopenmp -std=c++14 -Xptxas -O3 --use_fast_math --disable-warnings
            -gencode arch=compute_70,code=sm_70
            -gencode arch=compute_61,code=sm_61
            -gencode arch=compute_60,code=sm_60
    -- Found Python2: /usr/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so (found version "2.7.17") found components:  Development 
    -- Found cuda.
    -- Found Python3: /usr/lib/python3.7/config-3.7m-x86_64-linux-gnu/libpython3.7m.so (found version "3.7.12") found components:  Development 
    -- Found Python.
    -- Include Path: /usr/include/python3.7m
    -- Library Path: /usr/lib/python3.7/config-3.7m-x86_64-linux-gnu/libpython3.7m.so
    -- Could NOT find JNI (missing: JAVA_AWT_INCLUDE_PATH) 
    -- Could not find JNI.
    -- Could not find C#.
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /content/SPTAG/build
    

    It then begins building. It usually builds to 100%, but this time it stops at 92%.

    Below is the error log

    Scanning dependencies of target gpussdserving
    [ 92%] Building CXX object GPUSupport/CMakeFiles/gpussdserving.dir/__/AnnService/src/SSDServing/main.cpp.o
    In file included from /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:29:0,
                     from /content/SPTAG/AnnService/inc/Core/SPANN/../Common/BKTree.h:183,
                     from /content/SPTAG/AnnService/inc/Core/SPANN/Index.h:13,
                     from /content/SPTAG/AnnService/src/SSDServing/main.cpp:8:
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:42:22: error: ‘__host__’ does not name a type; did you mean ‘CUhostFn’?
     template<typename T> __host__ __device__ T INFTY() {}
                          ^~~~~~~~
                          CUhostFn
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:43:12: error: ‘__forceinline__’ does not name a type
     template<> __forceinline__ __host__ __device__ int INFTY<int>() {return INT_MAX;}
                ^~~~~~~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:44:12: error: ‘__forceinline__’ does not name a type
     template<> __forceinline__ __host__ __device__ long long int INFTY<long long int>() {return LLONG_MAX;}
                ^~~~~~~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:45:12: error: ‘__forceinline__’ does not name a type
     template<> __forceinline__ __host__ __device__ float INFTY<float>() {return FLT_MAX;}
                ^~~~~~~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:47:12: error: ‘__forceinline__’ does not name a type
     template<> __forceinline__ __host__ __device__ uint8_t INFTY<uint8_t>() {return 255;}
                ^~~~~~~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:60:3: error: ‘__host__’ does not name a type; did you mean ‘CUhostFn’?
       __host__ void load(vector<T> data) {
       ^~~~~~~~
       CUhostFn
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:66:3: error: ‘__host__’ does not name a type; did you mean ‘CUhostFn’?
       __host__ __device__ void loadChunk(T* data, int exact_dim) {
       ^~~~~~~~
       CUhostFn
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:75:3: error: ‘__host__’ does not name a type; did you mean ‘CUhostFn’?
       __host__ __device__ Point& operator=( const Point& other ) {
       ^~~~~~~~
       CUhostFn
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Distance.hxx:84:3: error: ‘__device__’ does not name a type; did you mean ‘CUdevice’?
       __device__ __host__ SUMTYPE l2(Point<T,SUMTYPE,Dim>* other) {
       ^~~~~~~~~~
       CUdevice
    
    ... [the same error repeats for a while] ...
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:210:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_weightedCounts));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:44:15: error: ‘cudaSuccess’ was not declared in this scope
         if (rt != cudaSuccess) {                                                   \
                   ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:210:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_weightedCounts));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:44:15: note: suggested alternative: ‘euidaccess’
         if (rt != cudaSuccess) {                                                   \
                   ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:210:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_weightedCounts));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:43:5: error: ‘cudaError_t’ was not declared in this scope
         cudaError_t rt = (func);                                                   \
         ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:211:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_newCenters));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:43:5: note: suggested alternative: ‘cudaError_enum’
         cudaError_t rt = (func);                                                   \
         ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:211:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_newCenters));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:44:9: error: ‘rt’ was not declared in this scope
         if (rt != cudaSuccess) {                                                   \
             ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:211:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_newCenters));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:44:15: error: ‘cudaSuccess’ was not declared in this scope
         if (rt != cudaSuccess) {                                                   \
                   ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:211:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_newCenters));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:44:15: note: suggested alternative: ‘euidaccess’
         if (rt != cudaSuccess) {                                                   \
                   ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:211:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_newCenters));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:43:5: error: ‘cudaError_t’ was not declared in this scope
         cudaError_t rt = (func);                                                   \
         ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:212:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_currDist));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:43:5: note: suggested alternative: ‘cudaError_enum’
         cudaError_t rt = (func);                                                   \
         ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:212:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_currDist));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:44:9: error: ‘rt’ was not declared in this scope
         if (rt != cudaSuccess) {                                                   \
             ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:212:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_currDist));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:44:15: error: ‘cudaSuccess’ was not declared in this scope
         if (rt != cudaSuccess) {                                                   \
                   ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:212:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_currDist));
         ^~~~~~~~~~
    /content/SPTAG/AnnService/inc/Core/Common/cuda/params.h:44:15: note: suggested alternative: ‘euidaccess’
         if (rt != cudaSuccess) {                                                   \
                   ^
    /content/SPTAG/AnnService/inc/Core/Common/cuda/Kmeans.hxx:212:5: note: in expansion of macro ‘CUDA_CHECK’
         CUDA_CHECK(cudaFree(d_currDist));
         ^~~~~~~~~~
    In file included from /content/SPTAG/AnnService/inc/Core/SPANN/Index.h:13:0,
                     from /content/SPTAG/AnnService/src/SSDServing/main.cpp:8:
    /content/SPTAG/AnnService/inc/Core/SPANN/../Common/BKTree.h: In function ‘float SPTAG::COMMON::KmeansAssign(const SPTAG::COMMON::Dataset<T>&, std::vector<int>&, SPTAG::SizeType, SPTAG::SizeType, SPTAG::COMMON::KmeansArgs<T>&, bool, float)’:
    /content/SPTAG/AnnService/inc/Core/SPANN/../Common/BKTree.h:197:9: warning: no return statement in function returning non-void [-Wreturn-type]
             }
             ^
    /content/SPTAG/AnnService/inc/Core/SPANN/../Common/BKTree.h: In function ‘float SPTAG::COMMON::InitCenters(const SPTAG::COMMON::Dataset<T>&, std::vector<int>&, SPTAG::SizeType, SPTAG::SizeType, SPTAG::COMMON::KmeansArgs<T>&, int, int)’:
    /content/SPTAG/AnnService/inc/Core/SPANN/../Common/BKTree.h:292:58: error: call of overloaded ‘min(int, const SizeType&)’ is ambiguous
                 SizeType batchEnd = min(first + samples, last);
                                                              ^
    In file included from /content/SPTAG/AnnService/src/SSDServing/main.cpp:6:0:
    /content/SPTAG/AnnService/inc/Core/Common.h:44:10: note: candidate: T min(T, T) [with T = int]
     inline T min(T a, T b) {
              ^~~
    In file included from /usr/include/c++/7/bits/char_traits.h:39:0,
                     from /usr/include/c++/7/ios:40,
                     from /usr/include/c++/7/ostream:38,
                     from /usr/include/c++/7/iostream:39,
                     from /content/SPTAG/AnnService/src/SSDServing/main.cpp:4:
    /usr/include/c++/7/bits/stl_algobase.h:195:5: note: candidate: constexpr const _Tp& std::min(const _Tp&, const _Tp&) [with _Tp = int]
         min(const _Tp& __a, const _Tp& __b)
         ^~~
    In file included from /content/SPTAG/AnnService/inc/Core/SPANN/Index.h:13:0,
                     from /content/SPTAG/AnnService/src/SSDServing/main.cpp:8:
    /content/SPTAG/AnnService/inc/Core/SPANN/../Common/BKTree.h: In function ‘float SPTAG::COMMON::TryClustering(const SPTAG::COMMON::Dataset<T>&, std::vector<int>&, SPTAG::SizeType, SPTAG::SizeType, SPTAG::COMMON::KmeansArgs<T>&, int, float, bool, SPTAG::IAbortOperation*)’:
    /content/SPTAG/AnnService/inc/Core/SPANN/../Common/BKTree.h:321:58: error: call of overloaded ‘min(int, const SizeType&)’ is ambiguous
                 SizeType batchEnd = min(first + samples, last);
                                                              ^
    In file included from /content/SPTAG/AnnService/src/SSDServing/main.cpp:6:0:
    /content/SPTAG/AnnService/inc/Core/Common.h:44:10: note: candidate: T min(T, T) [with T = int]
     inline T min(T a, T b) {
              ^~~
    In file included from /usr/include/c++/7/bits/char_traits.h:39:0,
                     from /usr/include/c++/7/ios:40,
                     from /usr/include/c++/7/ostream:38,
                     from /usr/include/c++/7/iostream:39,
                     from /content/SPTAG/AnnService/src/SSDServing/main.cpp:4:
    /usr/include/c++/7/bits/stl_algobase.h:195:5: note: candidate: constexpr const _Tp& std::min(const _Tp&, const _Tp&) [with _Tp = int]
         min(const _Tp& __a, const _Tp& __b)
         ^~~
    In file included from /content/SPTAG/AnnService/inc/Core/SPANN/Index.h:13:0,
                     from /content/SPTAG/AnnService/src/SSDServing/main.cpp:8:
    /content/SPTAG/AnnService/inc/Core/SPANN/../Common/BKTree.h:332:119: error: call of overloaded ‘min(float&, float&)’ is ambiguous
                     currDist = KmeansAssign(data, indices, first, batchEnd, args, true, min(adjustedLambda, originalLambda));
                                                                                                                           ^
    In file included from /content/SPTAG/AnnService/src/SSDServing/main.cpp:6:0:
    /content/SPTAG/AnnService/inc/Core/Common.h:44:10: note: candidate: T min(T, T) [with T = float]
     inline T min(T a, T b) {
              ^~~
    In file included from /usr/include/c++/7/bits/char_traits.h:39:0,
                     from /usr/include/c++/7/ios:40,
                     from /usr/include/c++/7/ostream:38,
                     from /usr/include/c++/7/iostream:39,
                     from /content/SPTAG/AnnService/src/SSDServing/main.cpp:4:
    /usr/include/c++/7/bits/stl_algobase.h:195:5: note: candidate: constexpr const _Tp& std::min(const _Tp&, const _Tp&) [with _Tp = float]
         min(const _Tp& __a, const _Tp& __b)
         ^~~
    GPUSupport/CMakeFiles/gpussdserving.dir/build.make:62: recipe for target 'GPUSupport/CMakeFiles/gpussdserving.dir/__/AnnService/src/SSDServing/main.cpp.o' failed
    make[2]: *** [GPUSupport/CMakeFiles/gpussdserving.dir/__/AnnService/src/SSDServing/main.cpp.o] Error 1
    CMakeFiles/Makefile2:720: recipe for target 'GPUSupport/CMakeFiles/gpussdserving.dir/all' failed
    make[1]: *** [GPUSupport/CMakeFiles/gpussdserving.dir/all] Error 2
    Makefile:129: recipe for target 'all' failed
    make: *** [all] Error 2
    
    
    opened by xXSnehalXx 2
  • ImportError: dynamic module does not define init function (init_SPTAG)


    When I run the official test Python code, I get this error:

        Traceback (most recent call last):
          File "testSPTAG.py", line 4, in <module>
            import SPTAG
          File "SPTAG/Release/SPTAG.py", line 17, in <module>
            _SPTAG = swig_import_helper()
          File "SPTAG/Release/SPTAG.py", line 16, in swig_import_helper
            return importlib.import_module('_SPTAG')
          File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
            __import__(name)
        ImportError: dynamic module does not define init function (init_SPTAG)

    opened by MrHwc 2
  • Error when running make


    Thanks for this great project; I like it very much! When I run cd build && cmake .. && make, I get an error at 86% (see the attached SPTAG_err screenshot). As the error message says, this is caused by a missing SPTAG.py, and I found that SPTAG.py is listed in .gitignore. I don't know how to solve this; do you have any ideas? Thank you so much.

    opened by AmigoCDT 2
  • Illegal instruction (core dumped) in kubernetes pod


    Describe the bug: the Kubernetes pod crashes when a tree is loaded.

    To Reproduce Steps to reproduce the behavior:

    1. Build a tree on any machine
    2. save the tree to folder
    3. make a kubernetes deploy which builds the tree inside the image

    Expected behavior: it should load the tree without problems.

    opened by MohamedAliRashad 0
  • Bump pillow from 9.0.1 to 9.3.0 in /docs/examples


    Bumps pillow from 9.0.1 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • The maxcheck parameter has no effect when searching.


    I ran the code following the example, but while searching I found that the MaxCheck parameter doesn't adjust the recall as described.

    [query]    [maxcheck]  [avg]   [99%]   [95%]   [recall]  [qps]      [mem]
    0-10000    16384       0.0028  0.0090  0.0064  0.8103    2886.7361  0GB
    0-10000    8192        0.0024  0.0086  0.0056  0.8103    3368.8591  0GB
    0-10000    4096        0.0015  0.0058  0.0033  0.8103    5320.0259  0GB
    0-10000    2048        0.0015  0.0060  0.0035  0.8103    5267.8604  0GB
    0-10000    1024        0.0016  0.0055  0.0036  0.8103    5104.4990  0GB
    0-10000    512         0.0014  0.0050  0.0032  0.8103    5527.0239  0GB
    0-10000    256         0.0016  0.0054  0.0037  0.8103    4964.4180  0GB
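
    For reference, MaxCheck is normally raised as a search parameter on the loaded index before querying; a hedged sketch following the older Python samples (the SetSearchParam signature may differ across SPTAG versions):

        # Hedged sketch: raising MaxCheck before searching (based on older Python samples).
        import numpy as np
        import SPTAG

        k = 10
        query = np.random.rand(128).astype(np.float32)    # hypothetical 128-dim query

        index = SPTAG.AnnIndex.Load('sptag_index')         # hypothetical index directory
        index.SetSearchParam("MaxCheck", '8192')            # larger MaxCheck: higher recall, slower
        ids, dists = index.Search(query, k)[:2]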

    opened by LLLjun 1
  • Bug when debugging SSDServing in visual studio


    Describe the bug: Hi, I am trying to set some breakpoints and step through the SSDServing code. However, I get this bug (see the screenshot below). The bug disappears if I simply remove this line: https://github.com/microsoft/SPTAG/blob/55ca655c6613b54db9d2aff3494a6ec2286db09d/AnnService/src/SSDServing/main.cpp#L172

    My questions are:

    1. Why does my build run without any issue but debugging fails? 2. This line has been defined twice; does anyone have a clue why?

    To Reproduce Steps to reproduce the behavior:

    1. set up SSDServing as start up project
    2. Debug
    3. Start debugging.

    Expected behavior: execution goes into the main function.

    Screenshots: (attached)

    Additional context: none.

    opened by jinwei14 1
  • https://www.ailab.microsoft.com doesn't work


    Sorry to report it here, but since I got to this site from https://vectorsearch.azurewebsites.net/search/ae192dfa-16ed-45ca-ba8b-22b073e8e594?category=first_tab, which also points to https://www.ailab.microsoft.com, hopefully someone here knows who the best contact is.

    When browsing to https://www.ailab.microsoft.com the site returns a security error:

    (screenshot of the security error)

    opened by dluc 0