Runtime type annotations for the shape, dtype etc. of PyTorch Tensors.

Overview

torchtyping

Type annotations for a tensor's shape, dtype, names, ...

Turn this:

def batch_outer_product(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # x has shape (batch, x_channels)
    # y has shape (batch, y_channels)
    # return has shape (batch, x_channels, y_channels)

    return x.unsqueeze(-1) * y.unsqueeze(-2)

into this:

def batch_outer_product(x:   TensorType["batch", "x_channels"],
                        y:   TensorType["batch", "y_channels"]
                        ) -> TensorType["batch", "x_channels", "y_channels"]:

    return x.unsqueeze(-1) * y.unsqueeze(-2)

with programmatic checking that the shape (dtype, ...) specification is met.

Bye-bye bugs! Say hello to enforced, clear documentation of your code.

If (like me) you find yourself littering your code with comments like # x has shape (batch, hidden_state) or statements like assert x.shape == y.shape, just to keep track of what shape everything is, then this is for you.


Installation

pip install torchtyping

Requires Python 3.7+ and PyTorch 1.7.0+.

Usage

torchtyping allows for type annotating:

  • shape: size, number of dimensions;
  • dtype (float, integer, etc.);
  • layout (dense, sparse);
  • names of dimensions as per named tensors;
  • arbitrary number of batch dimensions with ...;
  • ...plus anything else you like, as torchtyping is highly extensible.

If typeguard is (optionally) installed then at runtime the types can be checked to ensure that the tensors really are of the advertised shape, dtype, etc.

# EXAMPLE

from torch import rand
from torchtyping import TensorType, patch_typeguard
from typeguard import typechecked

patch_typeguard()  # use before @typechecked

@typechecked
def func(x: TensorType["batch"],
         y: TensorType["batch"]) -> TensorType["batch"]:
    return x + y

func(rand(3), rand(3))  # works
func(rand(3), rand(1))
# TypeError: Dimension 'batch' of inconsistent size. Got both 1 and 3.
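
Dtype checks work the same way. Here is a minimal sketch of checking a floating-point input (the function and dimension names are illustrative, not part of the API):

import torch
from torchtyping import TensorType, patch_typeguard
from typeguard import typechecked

patch_typeguard()

@typechecked
def to_probs(logits: TensorType["batch", "classes", float]
             ) -> TensorType["batch", "classes", float]:
    # softmax preserves shape and dtype, so the return annotation is satisfied
    return logits.softmax(dim=-1)

to_probs(torch.rand(2, 5))             # works: float32 is the default dtype
to_probs(torch.randint(0, 5, (2, 5)))  # TypeError: int64 tensor, not float32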

typeguard also has an import hook that can be used to automatically test an entire module, without needing to manually add @typeguard.typechecked decorators.
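
For example, a minimal sketch of the import-hook approach ("your_module" is a placeholder for your own module):

from torchtyping import patch_typeguard
from typeguard.importhook import install_import_hook

patch_typeguard()                    # order of the patch and the hook doesn't matter
install_import_hook("your_module")   # must run before your_module is imported

import your_module  # annotated functions in your_module are now checked at runtime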

If you're not using typeguard then torchtyping.patch_typeguard() can be omitted altogether, and torchtyping just used for documentation purposes. If you're not already using typeguard for your regular Python programming, then strongly consider using it. It's a great way to squash bugs. Both typeguard and torchtyping also integrate with pytest, so if you're concerned about any performance penalty then they can be enabled during tests only.

API

torchtyping.TensorType[shape, dtype, layout, details]

The core of the library.

Each of shape, dtype, layout, details is optional.

  • The shape argument can be any of:
    • An int: the dimension must be of exactly this size. If it is -1 then any size is allowed.
    • A str: the size of the dimension passed at runtime will be bound to this name, and all tensors checked that the sizes are consistent.
    • A ...: an arbitrary number of dimensions, each of any size.
    • A str: int pair (technically it's a slice), combining both str and int behaviour. (Just a str on its own is equivalent to str: -1.)
    • A str: ... pair, in which case the multiple dimensions corresponding to ... will be bound to the name specified by str, and again checked for consistency between arguments.
    • None, which when used in conjunction with is_named below, indicates a dimension that must not have a name in the sense of named tensors.
    • A None: int pair, combining both None and int behaviour. (Just a None on its own is equivalent to None: -1.)
    • A typing.Any: Any size is allowed for this dimension (equivalent to -1).
    • Any tuple of the above. For example, TensorType["batch": ..., "length": 10, "channels", -1]. If you just want to specify the number of dimensions then use for example TensorType[-1, -1, -1] for a three-dimensional tensor.
  • The dtype argument can be any of:
    • torch.float32, torch.float64 etc.
    • int, bool, float, which are converted to their corresponding PyTorch types. float is specifically interpreted as torch.get_default_dtype(), which is usually float32.
  • The layout argument can be either torch.strided or torch.sparse_coo, for dense and sparse tensors respectively.
  • The details argument offers a way to pass an arbitrary number of additional flags that customise and extend torchtyping. Two flags are built-in by default. torchtyping.is_named causes the names of tensor dimensions to be checked, and torchtyping.is_float can be used to check that arbitrary floating point types are passed in. (Rather than just a specific one as with e.g. TensorType[torch.float32].) For discussion on how to customise torchtyping with your own details, see the further documentation.

Check multiple things at once by just putting them all together inside a single []. For example TensorType["batch": ..., "length", "channels", float, is_named].
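
As an illustration, here is a sketch combining several of the shape specifications above (all dimension names and sizes are illustrative, not part of the API):

from typing import Any
from torchtyping import TensorType, is_named, is_float

a: TensorType["batch", 10]              # named dim, then a dim of exactly size 10
b: TensorType["batch": 5]               # str: int pair: named and fixed at size 5
c: TensorType["dims": ..., "channels"]  # named group of arbitrarily many dims
d: TensorType[3, 3, Any]                # Any (like -1) allows any size
e: TensorType[-1, is_float]             # one dim of any size, any floating dtype
f: TensorType["batch": ..., "length", "channels", float, is_named]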

torchtyping.patch_typeguard()

torchtyping integrates with typeguard to perform runtime type checking. torchtyping.patch_typeguard() should be called at the global level, and will patch typeguard to check TensorTypes.

This function is safe to run multiple times. (It does nothing after the first run).

  • If using @typeguard.typechecked, then torchtyping.patch_typeguard() should be called any time before using @typeguard.typechecked. For example you could call it at the start of each file using torchtyping.
  • If using typeguard.importhook.install_import_hook, then torchtyping.patch_typeguard() should be called any time before defining the functions you want checked. For example you could call torchtyping.patch_typeguard() just once, at the same time as the typeguard import hook. (The order of the hook and the patch doesn't matter.)
  • If you're not using typeguard then torchtyping.patch_typeguard() can be omitted altogether, and torchtyping just used for documentation purposes.
pytest --torchtyping-patch-typeguard

torchtyping offers a pytest plugin to automatically run torchtyping.patch_typeguard() before your tests. pytest will automatically discover the plugin, you just need to pass the --torchtyping-patch-typeguard flag to enable it. Packages can then be passed to typeguard as normal, either by using @typeguard.typechecked, typeguard's import hook, or the pytest flag --typeguard-packages="your_package_here".
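
For example, to patch typeguard and check a single package during tests only (the package name is a placeholder):

pytest --torchtyping-patch-typeguard --typeguard-packages=your_package_here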

Further documentation

See the further documentation for:

  • FAQ;
    • Including flake8 and mypy compatibility;
  • How to write custom extensions to torchtyping;
  • Resources and links to other libraries and materials on this topic;
  • More examples.
Comments
  • vscode/pylance/pyright don't consider a Tensor to be compatible with TensorType


    Using torchtyping in vscode, I've found that passing a Tensor to a TensorType generates an error in the type checker:

    [screenshot of the Pylance error]

    Tagging the TensorType import with type: ignore as recommended in the FAQ for mypy compatibility doesn't help. Is there any other way to suppress these errors short of tagging every use of a tensor with a tensortype'd sig with type: ignore?

    Reproduction

    vscode's Pylance language server backs onto the pyright project, and so we can get an easier-to-examine reproduction by using pyright directly.

    Here's a quick script to set up an empty conda env with just torch and torchtyping

    mkdir tmp
    cd tmp
    conda create -p ./.env 
    conda activate ./.env
    pip install torch==1.9.0 torchtyping==0.1.3
    

    and one more command to install pyright

    sudo npm install -g pyright
    

    Then create two files, pyrightconfig.json with contents

    {
        "useLibraryCodeForTypes": true,
        "exclude": [".env"]
    }
    

    and test.py with contents

    import torch
    from torchtyping import TensorType
    
    def f(a: TensorType):
        pass
    
    f(torch.zeros())
    

    With that all done, running pyright test.py will give the error:

    Loading configuration file at /Users/andy/code/tmp/pyrightconfig.json
    Assuming Python version 3.9
    Assuming Python platform Darwin
    stubPath /Users/andy/code/tmp/typings is not a valid directory.
    Searching for source files
    Found 1 source file
    /Users/andy/code/tmp/test.py
      /Users/andy/code/tmp/test.py:7:3 - error: Argument of type "Tensor" cannot be assigned to parameter "a" of type "TensorType" in function "f"
        "Tensor" is incompatible with "TensorType" (reportGeneralTypeIssues)
    1 error, 0 warnings, 0 infos 
    Completed in 0.715sec
    
    opened by andyljones 7
  • allow named sizes for named dimensions


    (Thanks for this awesome work!)

    I've often seen applications in which multiple dimensions of the same tensor will be the same size (which size is only known at runtime). It is useful to be able to separately document that those sizes must be the same while acknowledging the differing purposes of these dimensions.

    The following example demonstrates the idea---showing three related features that already work in torchtyping as well as a proposed feature:

    def func(feats: TensorType["b": ..., "annotator": 3, "word": "words", "feature"],
             predicates: TensorType["b": ..., "annotator": 3, "predicate": "words", "feature"],
             pred_arg_pairs: TensorType["b": ..., "annotator": 3, "predicate": "words", "argument": "words"]):
        # feats has shape (..., 3, words, features)
        # predicates has shape (..., 3, words, features)
        # pred_arg_pairs has shape (..., 3, words, words)
        # the ... b dimensions are checked to be of the same size.
    

    Things that already work:

    • dimensions that share names (like "feature") are enforced to share the same dimension size
    • named dimensions (like "annotator") can specify a specific dimension size which is enforced
    • named ellipses (like "batch") can be used to represent a fixed (but only known at runtime) set of dimensions and corresponding sizes [this is very close to what I want, but (1) I would prefer to not have the extra power of matching an unspecified number of dimensions and (2) as I understand it, ellipses only represent a single variable set of dimensions, but I want to be able to separately constrain multiple sets of dimensions to share respective sizes]

    Proposed:

    • named dimensions (like "token", "predicate", and "argument") should be able to declare a shared-but-unspecified dimension size given by name ("words" in this example)

    Additionally, you would probably want to enforce that, if the specified "size name" matches the name of another dimension (like "word" in the following example), then the sizes of those dimensions should be the same:

    def func(feats: TensorType["b": ..., "annotator": 3, "word", "feature"],
             predicates: TensorType["b": ..., "annotator": 3, "predicate": "word", "feature"],
             pred_arg_pairs: TensorType["b": ..., "annotator": 3, "predicate": "word", "argument": "word"]):
        # feats has shape (..., 3, words, features)
        # predicates has shape (..., 3, words, features)
        # pred_arg_pairs has shape (..., 3, words, words)
        # the ... b dimensions are checked to be of the same size.
    

    Thoughts?

    opened by teichert 6
  • Add Any type support for shape


    We can specify the arbitrary shape of a dimension using the Any type. Example: any_tensor = TensorType[3, 3, Any, torch.float32]

    I did not include the tests; to be honest, I am a bit lost on all the tests that you want. Could you clarify them for me, @patrick-kidger?

    So that I can do them in the next commit.

    Edit: I guess a test/test_any.py and a function test_any_dim in test/test_shape.py

    opened by AdilZouitine 5
  • how does pep-646 affect this project?


    Hi! How does the inclusion of variadic generics (PEP 646) in Python 3.11 affect this project?

    Will this allow mypy to statically check tensor shapes? Maybe a plugin will be required?

    Just curious! Cheers

    opened by gchaperon 4
  • Python 3.7+ compat


    Hi again :smile: ,

    You will find in this pull request some modifications to the code and tests to make them compatible with Python 3.7+. My code is available in the code review.

    However, versions 3.7 and 3.8 are missing from the CI (I tested locally on Python 3.7 and 3.9).

    opened by AdilZouitine 4
  • Support Any for shape


    Hi, I would like to thank you for this cool library. I was disappointed not to find shape typing for PyTorch, and I had planned to code it myself if it didn't exist.

    I think your API is great; however, I find that specifying a dimension of arbitrary size as -1 is not very intuitive (I saw that you have many other ways to declare it). One idea is to declare a dimension of any size using typing.Any, as the nptyping library does:

    from typing import Any
    import numpy as np 
    from nptyping import NDArray
    NDArray[(3, 3, Any), np.float32]
    

    In this case we have typed our array with no constraints on the last dimension. If we apply this modification to your library:

    from typing import Any
    from torchtyping import TensorType
    import torch
    
    TensorType[3, 3, Any, torch.float32]
    # Instead of 
    TensorType[3, 3, -1, torch.float32]
    

    What do you think of this? If you're interested I can try to make a pull request!

    I thank you again for developing this wonderful library.

    opened by AdilZouitine 4
  • Basic mypy integration tests don't work?


    Since the documentation suggests that mypy integration mostly works, I would have expected the following basic things to work.

    import torch
    from torchtyping import TensorType  # type: ignore
    
    
    # I'd expect that mixing TensorType with torch.Tensor works. However in
    # strict mode this errors with:
    # error: Returning Any from function declared to return "Tensor"
    def simple_test_a(x: TensorType[(), float]) -> torch.Tensor:
        return x
    
    
    # I'd expect that .item() has the type in the annotation, and that e.g.
    # annotation the function with `-> str` would be a type error. However
    # in strict mode this errors with:
    # error: Returning Any from function declared to return "float"
    def simple_test_b(x: TensorType[(), float]) -> float:
        return x.item()
    
    
    # I'd expect that mypy is aware of the actual methods of a tensor and
    # gives a type check error when calling garbage.
    def simple_test_c(x: TensorType[(), float]) -> None:
        x.asdfasdfasdf()
    

    Note that the latter two work when using the native torch.Tensor as a type annotation.

    From what I can tell so far, mypy doesn't actually understand the meaning of TensorType at all. I thought that "mostly works" means that I can expect mypy to produce meaningful type check errors based on the annotations (operations with illegal shape or dtype combinations, etc.).

    Am I doing something wrong here, or does "mostly works" mean that it simply ignores the type annotations? Kind of unexpected, considering the primary motivation of type annotations is to be used in type checking.

    opened by bluenote10 3
  • Type checking based on names


    Hello

    I'd like to know if there's an easy way to check tensors by name:

    import torch
    from torch import rand
    from torchtyping import TensorType, patch_typeguard, is_named
    from typeguard import typechecked
    
    patch_typeguard()  # use before @typechecked
    
    
    def test():
        t = Test()
        b = t.return_batch()
        o = t.return_other()
        v = t.func(b, o)
        u = t.func(o, b)  # can we have it raise TypeError?
    
    
    class Test:
        def __init__(self):
            pass
    
        @typechecked
        def func(
            self,
            x: TensorType["batch"],
            y: TensorType["other"],
        ) -> TensorType["batch", "other"]:
            return torch.outer(x, y)
    
        def return_batch(self) -> TensorType["batch"]:
            return rand(4)
    
        def return_other(self) -> TensorType["other"]:
            return rand(3)
    
    
    test()
    

    Right now, IIUC only dimensions are checked, so in this example there is no error...

    I think that I could use is_named in TensorType, but it gets very cumbersome because we also need to use names=... every time we declare a tensor. This could be OK... but it can get even worse because some PyTorch operations don't seem to work with named tensors (outer here! at least with 1.9.1), so we need to rename tensors every two lines...

    Is it doable to have patch_typeguard(name_check=True), or would it be too complicated to implement? (I think I basically want nominal typing instead of structural typing.)

    Thanks for your work!

    opened by tombosc 3
  • pycharm warning of Unresolved reference


    Hi, I really like this project! It's mind blowing for me!!

    I have a question. When coding in pycharm using TensorType with a string dimension name, I got a warning about unresolved reference

    [screenshot of the "Unresolved reference" warning]

    I am not sure if this is a limitation of PyCharm. I am using PyCharm 2021.1.1, Python 3.7.10 and torchtyping 0.1.4.

    Thanks!

    opened by chris-tkinter 3
  • Is it possible to use torchtyping with pydantic?


    Hi! Is it an option to use the torchtyping annotations with other runtime type checkers instead of typeguard?

    It is not directly related, but the question came up from the following case: for batches I use a dataclass, and typeguard doesn't check the constructor of a dataclass at runtime. Maybe there is a cleverer choice of what to inherit the batch from?

    opened by neer201 2
  • Arbitrary number of dimensions - but check they are same over the argument tensors


    Consider this function

    import torch
    from torchtyping import TensorType, patch_typeguard
    from typeguard import typechecked

    patch_typeguard()
    @typechecked
    def mean_squared_error(input: TensorType["batch"], target: TensorType["batch"]):
        d = input - target
        d = d * d
        return torch.mean(d)
    

    The above only allows batches to contain 1-element values (i.e. scalars).

    I would like to ensure that the shape of items in the input batch is the same as the shape of items in the target batch, i.e. input.shape[1:] == target.shape[1:].

    I don't want to hardcode the number of dimensions like for example a batch containing images: input: TensorType["batch", "c", "h", "w"].

    Is this currently possible?

    opened by rksht 2
  • Tensor duck


    As promised, here is the PR to upgrade the library to define a 'torch-like' protocol and use that for the base type, rather than using torch.Tensor directly. This lets users perform dimension checking on classes that support a Tensor interface but do not directly inherit from torch.Tensor. I think the change is fairly clear-cut; I have added a test case to demonstrate and verify that dimensions are actually checked. The only question I have is about the change to line 304 in typechecker.py (the last change below). Is this test really necessary? I had to change it to use default construction because protocols don't support isinstance if they have properties.

    opened by corwinjoy 3
  • Generalizing the Library


    Hello and thanks for publishing this library! I've really enjoyed reading the design and discussion documents you have posted. However, I am now trying to apply this library in a somewhat broader context. Essentially, I am hoping to use it to improve the linear operator library, linear_operator. The idea of that library is to abstract how tensors are stored, so as to perform matrix operations much more efficiently. I'd really like to use torchtyping to add dimension and storage-type checks to help squash bugs in this code. Unfortunately, torchtyping is configured to run exactly on torch.Tensor objects. My first attempt was just to hack the library to pull out a few class checks. But, doing more reading, I feel like torchtyping could be cleanly improved by using protocols (PEP 544 – Protocols: Structural subtyping (static duck typing)). The idea would be to have the library use an abstract tensor protocol rather than Tensor directly. This would make the library much more general, and I think it could help clean up the code by making explicit which tensor fields are being used. What do you think / do you have any suggestions on how to add this? @dannyfriar @m4rs-mt

    opened by corwinjoy 3
  • Pyright reports an error with named axis


    Setup

    • pyright: 1.1.263
    • pytorch: 1.12.0+cu113
    • torchtyping: 0.1.4

    Code Example

    from torchtyping import TensorType
    
    def example(foo: TensorType["batch"]):
        pass
    

    Problem

    Pyright reports the following error: "batch" is not defined

    Related issue

    The same error is reported by mypy when -1 is omitted: https://github.com/patrick-kidger/torchtyping/issues/35

    opened by Luceurre 2
  • mypy not compatible with any named axes?


    When I specify a type like TensorType["batch_size", "num_channels", "x", "y"], I get a mypy error like error: Name "batch_size" is not defined for each of the named axes. Is this expected? Am I doing something wrong? This is with the most recent mypy, 0.950.

    opened by zplizzi 4
  • Support checks via docstrings instead of type annotations


    First off - I've been thinking of almost the same idea as this library for a while because I see runtime errors from tensor shape/dtype mismatches all the time, so I'm glad that there's already something in place!

    My initial approach was going to be parsing docstrings of various formats (with an existing library like docstring_parser) and performing validation on those, rather than on type annotations. Is that a feature you'd consider accepting into your library? I'd be interested in writing a PR for it with some guidance.

    For example, I currently write Sphinx style docstrings like this

    def forward(self, imgs, tokens):
        """Combine multimodal input features
    
        :param torch.FloatTensor[N, C, H, W] imgs: Batch of image pixels
            normalized in range of 0-1
        :param torch.LongTensor[N, L] tokens: Vocabulary tokens in sequence
            of length ``L``
        :return torch.FloatTensor[N, C] pred: Predicted probabilities for
            each class
        """
    

    I realize the [N, C, H, W] notation is not quite as rigid as what this project proposes, and that's one reason I've been looking for a more structured approach. But regardless, I do find it nice sometimes to have this information in the docstrings instead of type annotations, particularly for functions with many parameters

    opened by addisonklinke 2
  • Support for an or condition, or other way to accomplish this pattern?


    n00b to this very cool project, looking to enforce a broadcast-ability pattern where a dimension in one tensor either matches or can be broadcast to (i.e. equals 1) a dimension in another tensor.

    import torchtyping
    import typeguard

    @typeguard.typechecked
    def mwe(
        x: torchtyping.TensorType[
            ...,
            "foo",
            "bar",  # How do we make this "match bar from arg_b or equal 1"?
        ],
        y: torchtyping.TensorType[
            "bar",
        ]) -> torchtyping.TensorType[..., "foo", "bar"]:
        return x * y
    

    Am I missing an existing way to do this in torchtyping out of the box? Would this need an extension?

    opened by SeanEaster 1