Open Source Differentiable Computer Vision Library for PyTorch


Kornia is a differentiable computer vision library for PyTorch.

It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses PyTorch as its main backend both for efficiency and to take advantage of the reverse-mode auto-differentiation to define and compute the gradient of complex functions.
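
For example, a minimal sketch (illustrative only) of how gradients flow through a Kornia operator:

import torch
import kornia

# a differentiable edge map: gradients flow back to the input image
img = torch.rand(1, 3, 32, 32, requires_grad=True)
edges = kornia.filters.sobel(img)  # B,C,H,W edge magnitude
edges.sum().backward()
print(img.grad.shape)  # torch.Size([1, 3, 32, 32])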

Overview

Inspired by existing packages, this library is composed of a set of packages containing operators that can be inserted within neural networks to train models to perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection, all operating directly on tensors.

At a granular level, Kornia is a library that consists of the following components:

Component Description
kornia a Differentiable Computer Vision library, with strong GPU support
kornia.augmentation a module to perform data augmentation on the GPU
kornia.color a set of routines to perform color space conversions
kornia.contrib a compilation of user contributions and experimental operators
kornia.enhance a module to perform normalization and intensity transformation
kornia.feature a module to perform feature detection
kornia.filters a module to perform image filtering and edge detection
kornia.geometry a geometric computer vision library to perform image transformations, 3D linear algebra and conversions using different camera models
kornia.losses a stack of loss functions to solve different vision tasks
kornia.morphology a module to perform morphological operations
kornia.utils image to tensor utilities and metrics for vision problems

Installation

From pip:

pip install kornia
Other installation options

From source:

python setup.py install

From source with symbolic links:

pip install -e .

From source using pip:

pip install git+https://github.com/kornia/kornia

Examples

Run our Jupyter notebook tutorials to learn how to use the library.

Cite

If you are using kornia in your research-related documents, it is recommended that you cite the paper. See more in CITATION.

@inproceedings{eriba2019kornia,
  author    = {E. Riba, D. Mishkin, D. Ponsa, E. Rublee and G. Bradski},
  title     = {Kornia: an Open Source Differentiable Computer Vision Library for PyTorch},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2020},
  url       = {https://arxiv.org/pdf/1910.02190.pdf}
}

Contributing

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us. Please consider reading the CONTRIBUTING notes. Participation in this open source project is subject to the Code of Conduct.

Community

  • forums: discuss implementations, research, etc. GitHub Forums
  • GitHub issues: bug reports, feature requests, install issues, RFCs, thoughts, etc. OPEN
  • Slack: Join our workspace to keep in touch with our core contributors and be part of our community. JOIN HERE
  • For general information, please visit our website at www.kornia.org
Comments
  • adopt torch.testing.assert_close

    adopt torch.testing.assert_close

    Closes #1029. torch.testing.assert_close will only be shipped with torch==1.9. Thus, this PR should not be merged before.

    All files except for test/utils.py are 1-to-1 ports and probably do not need special review, but I let you be the judge of that.

    enhancement :rocket: 1 Priority 1 🚨 code health :pill:
    opened by pmeier 23
  • Add boxes V2 as discussed in #1142 (supersedes #1177)

    Add boxes V2 as discussed in #1142 (supersedes #1177)

    Changes

    This PR implements #1142 (issue) and supersedes #1177 (PR). It's the same code with the following differences:

    • It takes the object class approach discussed in #1142.
    • It creates two new classes: Boxes and Boxes3D. They are TorchScript classes so that they can be used in traced / scripted code. Without the torch.jit.script decorator, I faced several strange behaviors when scripting.
    • Fixes gradient check bugs

    Check #1177 for differences with current kornia.geometry.bbox module.

    ~~#### TODO~~ ~~- Update docs~~ ~~- As suggested by shijianjian in #1177, better explanation and name for *_plus_1 in to_tensor and from_tensor.~~ ~~- Fix gradient check test fail for Boxes3D.to_tensor. I don't understand why it's happening as it's almost an exact copy of Boxes.to_tensor ~~ ~~- ¿Update CHANGELOG? Is this something I should do?~~

    TODO (to do in future PRs):

    • Port kornia containers to use them. At least, expose bbox_v2. ~~- Check transforms. Care should be taken since transforms assume the +1 convention. At least, it happens with random flips. Check this article.~~ See #1398. As a temporary fix, we use the "+1" convention internally. The whole temporary fix is done in commit "07795f9c78468b42ecd96f4d5d2b8df562deecf7".

    Type of change

    • [x] 🔬 New feature (non-breaking change which adds functionality)

    Checklist

    • [x] My code follows the style guidelines of this project
    • [x] I have performed a self-review of my own code
    • [x] I have commented my code, particularly in hard-to-understand areas
    • [ ] I have made corresponding changes to the documentation
    • [x] My changes generate no new warnings
    • [ ] Did you update CHANGELOG in case of a major change?
    enhancement :rocket: 1 Priority 1 🚨 module: geometry 
    opened by hal-314 22
  • Make kornia.augmentation.random_crop_generator safer

    Make kornia.augmentation.random_crop_generator safer

    🚀 Feature

    Make kornia.augmentation.random_crop_generator safer.

    Motivation

    Within kornia.augmentation.random_crop_generator _adapted_uniform is used to generate random starting points for the crop. Since it returns a torch.float, the upper delimiter (high) is increased by one (high = x_diff + 1) and cast to torch.long, which is basically a floor division. While this generates a uniform distribution of the integers, it opens up an edge-case: what if Uniform.rsample (used from _adapted_uniform) returns exactly (or within the precision) high?

    from unittest import mock
    import torch
    from kornia.augmentation.random_generator import _adapted_uniform
    
    batch_size = 1
    input_size = (None, 2)
    size = (None, 1)
    same_on_batch = True
    
    x_diff = input_size[1] - size[1]
    
    
    with mock.patch(
        "kornia.augmentation.utils.Uniform.rsample",
        new=lambda self, shape: torch.ones(shape) * self.high,
    ):
        x_start = _adapted_uniform((batch_size,), 0, x_diff + 1, same_on_batch).long()
    
    print(f"x_start {'<=' if x_start <= x_diff else '>'} x_diff")
    

    x_start should be less than or equal to x_diff, but as you can see:

    x_start > x_diff
    

    Pitch

    Although this edge case is unlikely, I think we should implement this properly. Especially since the fix is trivial:

    from typing import Tuple, Union
    
    
    def _adapted_uniform_int(
        shape: Union[Tuple, torch.Size],
        low: Union[float, torch.Tensor],
        high: Union[float, torch.Tensor],
        same_on_batch: bool = False,
    ) -> torch.Tensor:
        return _adapted_uniform(shape, low, high + 1 - 1e-6, same_on_batch).int()
    
    
    with mock.patch(
        "kornia.augmentation.utils.Uniform.rsample",
        new=lambda self, shape: torch.ones(shape) * self.high,
    ):
        x_start = _adapted_uniform_int((batch_size,), 0, x_diff, same_on_batch).long()
    
    print(f"x_start {'<=' if x_start <= x_diff else '>'} x_diff")
    
    x_start <= x_diff
    

    _adapted_uniform_int has the same signature as _adapted_uniform. The only difference to the above call is that we subtract a small constant from high + 1 to prevent it from being drawn exactly. As long as this constant 1e-6 is larger than torch.finfo(torch.float).eps ~= 1.19e-7, this should be fine.

    Additional Context

    I've encountered this while working with kornia.augmentation.random_crop_generator. If this is used in other places throughout the code base, the fix should be applicable everywhere.

    bug :bug: wontfix 3 Priority 3 :palm_tree: 
    opened by pmeier 22
  • Simple draw rectangle implementation for feat #607 https://github.com…

    Simple draw rectangle implementation for feat #607 https://github.com…

    …/kornia/kornia/issues/607

    Description

    As part of feat #607, this is a basic implementation of a non-differentiable draw rectangle.

    Status

    WIP

    Types of changes

    • [x] New tests added to cover the changes
    • [x] New Feature

    PR Checklist

    PR Implementer

    This is a small checklist for the implementation details of this PR.

    If there are any questions regarding code style or other conventions check out our summary.

    • [x] Did you discuss the functionality or any breaking changes before ?
    • [x] Pass all tests: did you test in local ? make test
    • [x] Unittests: did you add tests for your new functionality ?
    • [x] Documentations: did you build documentation ? make build-docs
    • [x] Implementation: is your code well commented and follow conventions ? make lint
    • [x] Docstrings & Typing: has your code documentation and typing ? make mypy
    • [x] Update notebooks & documentation if necessary -not necessary yet

    KorniaTeam

    KorniaTeam workflow
    • [ ] Assign correct label
    • [ ] Assign PR to a reviewer
    • [ ] Does this PR close an Issue? (add closes #IssueNumber at the bottom if not already in description)

    Reviewer

    Reviewer workflow
    • [ ] Do all tests pass? (Unittests, Typing, Linting, Documentation, Environment)
    • [ ] Does the implementation follow kornia design conventions?
    • [ ] Is the documentation complete enough ?
    • [ ] Are the tests covering simple and corner cases ?
    PR: Good to Merge :ok_hand: 1 Priority 1 🚨 
    opened by mmathew23 21
  • [Question] Significant difference in performance between Kornia and Torchvision image augmentations

    [Question] Significant difference in performance between Kornia and Torchvision image augmentations

    I have trained two models that use the same sequence of image augmentations but in Torchvision and Kornia and I’m observing a significant difference in the performance of these models. I understand that despite fixing random seeds, these augmentations might still be different which might cause some difference in the test accuracies, but on average, I assume that both of these models should end with similar accuracies, especially when these values are averaged over multiple seeds. However, this is not the case.

    # PyTorch transformation
    train_orig_transform = transforms.Compose([
        transforms.RandomResizedCrop(32),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
        transforms.RandomGrayscale(p=0.2),
        transforms.ToTensor(),
        transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])
        ])
    

    This is the Kornia version of the above PyTorch transformation

    
    class KorniaAugmentationModule(nn.Module):
        def __init__(self, batch_size=512):
            super().__init__()
            # These are standard values for CIFAR10
            self.mu = torch.Tensor([0.4914, 0.4822, 0.4465])
            self.sigma = torch.Tensor([0.2023, 0.1994, 0.2010])
    
            self.hor_flip_prob = 0.5
            self.jit_prob = 0.8
            self.gs_prob = 0.2
    
            self.crop = K.RandomResizedCrop(size=(32, 32), same_on_batch=False)
            self.hor_flip = K.RandomHorizontalFlip(p=self.hor_flip_prob, same_on_batch=False)
            self.jit = K.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=self.jit_prob, same_on_batch=False)
            self.rand_grayscale =  K.RandomGrayscale(p=self.gs_prob, same_on_batch=False)
            
            self.normalize = K.Normalize(self.mu, self.sigma)
    
        # Note that I should only normalize in test mode; no other type of augmentation should be performed
        
        def forward(self, x, params=None, mode='train'):
            B = x.shape[0]
            if mode == 'train':
                    x = self.crop(x, params['crop_params'])
                    x = self.hor_flip(x, params['hor_flip_params'])
                    x[params['jit_batch_probs']] = self.jit(x[params['jit_batch_probs']], params['jit_params'])
                    x = self.rand_grayscale(x, params['grayscale_params'])
    
            x = self.normalize(x)
            return x
    
    

    The rest of the code for training and testing these models is shared between the two setups.

    These are the training loss and testing accuracy curves for kornia (orange) and torchvision (green)


    The difference in test accuracies between the two models is nearly 11%, which is very significant.

    I have posted this question on the PyTorch discussion forum as well. Could you please give pointers on why this behavior is being observed, whether it is expected, and if not, what could be ways to debug it?

    question :question: 
    opened by ashwinipokle 21
  • NeRF

    NeRF

    Changes

    This pull request concerns a draft version of the NeRF algorithm for view synthesis and 3D rendering.

    In this PR I included an API class - CameraCalibration, which drives NeRF, and is designed based on the discussion in #1384.

    I took a brute force approach and copied the relevant core algorithms from the paper: Wang et al. (2021) - https://arxiv.org/abs/2102.07064. Most algorithm parameters are hard-coded, whereas some of those can be defined in CameraCalibration API.

    I tested the full functionality of CameraCalibration in Colab - https://colab.research.google.com/drive/1xiX1Jf572HAN_TN4p-8IPU3CZRRkfhox#scrollTo=yVT7jsvk4ViS. Please let me know if you can run this notebook, or are at least able to copy it and modify the relevant paths to make it work with the NeRF version in this PR.

    The next steps I foresee in this PR (please comment on each point):

    1. Parameterize the algorithm and export more control to CameraCalibration API (including, e.g., CPU/GPU control, rendering parameters, etc.)

    2. Refactor the code I took from https://arxiv.org/abs/2102.07064 to follow Kornia's standards

    3. Replace core engine components with off-the-shelf algorithms. Here I mainly refer to the rendering part, which is also the most compute intensive. I'm thinking PyTorch3D may have some good alternatives we may want to explore

    Yaniv

    Type of change

    • [ ] 📚 Documentation Update
    • [ ] 🧪 Tests Cases
    • [ ] 🐞 Bug fix (non-breaking change which fixes an issue)
    • [x] 🔬 New feature (non-breaking change which adds functionality)
    • [ ] 🚨 Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] 📝 This change requires a documentation update

    Checklist

    • [x] My code follows the style guidelines of this project
    • [x] I have performed a self-review of my own code
    • [ ] I have commented my code, particularly in hard-to-understand areas
    • [ ] I have made corresponding changes to the documentation
    • [ ] My changes generate no new warnings
    • [ ] Did you update CHANGELOG in case of a major change?
    wip 🛠️ 
    opened by YanivHollander 20
  • [Bug] warp_perspective does not return the input if used with identity matrix

    [Bug] warp_perspective does not return the input if used with identity matrix

    🐛 Bug

    The output of kornia.geometry.warp_perspective does not equal the input if the identity matrix is used.

    To Reproduce

    Steps to reproduce the behavior:

    import torch
    from kornia.geometry import warp_perspective
    
    torch.manual_seed(0)
    
    dsize = (32, 16)
    src = torch.rand(1, 3, *dsize)
    M = torch.eye(3).unsqueeze(0)
    
    dst = warp_perspective(src, M, dsize)
    
    mae = torch.mean(torch.abs(dst - src))
    print(mae.item())
    
    0.14952071011066437
    

    Expected behavior

    0.0
    

    Environment

    kornia==0.4.1

    bug :bug: help wanted 1 Priority 1 🚨 module: geometry 
    opened by pmeier 20
  • refactor homogeneous transforms module

    refactor homogeneous transforms module

    This PR tries to fulfill the feature request in #74

    Implements:

    • relative_transformation
    • inverse_transformation
    • compose_transformations.

    Moves transform_points from module conversions to transformations.

    PR: Good to Merge :ok_hand: 
    opened by edgarriba 20
  • Supporting arbitrary number of leading dimensions

    Supporting arbitrary number of leading dimensions

    Description

    This is the PR for #1068. These first commits include:

    1. Added perform_keep_shape_video decorator. Utilizing _to_bchw and _to_bcdhw in the decorator.
    2. Fixed some bugs in functions that were supposed to support any number of leading dimensions but did not, due to hard-coded shapes.
    3. Added support for kornia.enhance.adjust.

    Further, before moving on, I would like to discuss the following matters:

    1. How can I write tests for arbitrary dimensions? Could you suggest some samples?
    2. JIT tests break due to *args and **kwargs in the decorators (currently I skip these tests).
    3. Some users' code may break when inputs do not have shape (b, c, h, w), because some functions use _to_bchw and _to_bcdhw.

    Please let me know your comments. Thanks!

    Status

    Work in progress

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [x] Breaking change (fix or new feature that would cause existing functionality to change)
    • [ ] New tests added to cover the changes
    • [ ] Docstrings/Documentation updated

    PR Checklist

    PR Implementer

    This is a small checklist for the implementation details of this PR.

    If there are any questions regarding code style or other conventions check out our summary.

    • [ ] Did you discuss the functionality or any breaking changes before ?
    • [ ] Pass all tests: did you test in local ? make test
    • [ ] Unittests: did you add tests for your new functionality ?
    • [ ] Documentations: did you build documentation ? make build-docs
    • [ ] Implementation: is your code well commented and follow conventions ? make lint
    • [ ] Docstrings & Typing: has your code documentation and typing ? make mypy
    • [ ] Update notebooks & documentation if necessary

    KorniaTeam

    KorniaTeam workflow
    • [ ] Assign correct label
    • [ ] Assign PR to a reviewer
    • [ ] Does this PR close an Issue? (add closes #IssueNumber at the bottom if not already in description)

    Reviewer

    Reviewer workflow
    • [ ] Do all tests pass? (Unittests, Typing, Linting, Documentation, Environment)
    • [ ] Does the implementation follow kornia design conventions?
    • [ ] Is the documentation complete enough ?
    • [ ] Are the tests covering simple and corner cases ?
    enhancement :rocket: 1 Priority 1 🚨 code health :pill:
    opened by justanhduc 19
  • add option for mean and std to be tuples

    add option for mean and std to be tuples

    Description

    I propose to allow the user to instantiate Normalize object with mean and std as tuples.

    Status

    Ready

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] Breaking change (fix or new feature that would cause existing functionality to change)
    • [x] New tests added to cover the changes
    • [ ] Docstrings/Documentation updated
    enhancement :rocket: 1 Priority 1 🚨 module: enhance 
    opened by JoanFM 19
  • [Bug] convert_points_from_homogeneous - NaN gradients in backward pass

    [Bug] convert_points_from_homogeneous - NaN gradients in backward pass

    I just experienced a NaN-gradient problem while doing a backward pass here: https://github.com/kornia/kornia/blob/4b0ae70f7f806c5eff5ab87f8b1f2d9ab4ff1e45/kornia/geometry/conversions.py#L99

    torch.where works absolutely fine, but if you have zero divisions you find yourself with NaN-gradients for sure 💩

    Here is a toy example:

    eps = 1e-8
    
    z_vec: torch.Tensor = torch.tensor([4., 6., 0., -3., 1e-9], requires_grad=True)
    
    scale: torch.Tensor = torch.where(
        torch.abs(z_vec) > eps,
        torch.tensor(1.) / z_vec,
        torch.ones_like(z_vec)
    )
    scale.backward(torch.ones_like(scale))
    

    And these are z_vec gradients: tensor([-0.0625, -0.0278, nan, -0.1111, -0.0000])

    For now my little hack is:

    ...
        # we check for points at infinity
        z_vec: torch.Tensor = points[..., -1:]
        if z_vec.requires_grad:
            def z_vec_backward_hook(grad: torch.Tensor) -> torch.Tensor:
                grad[grad != grad] = 0.
                return grad
            z_vec.register_hook(z_vec_backward_hook)
    ...
    

    But I'm not sure if it's good enough.
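
    A common alternative (a minimal sketch below, not necessarily what kornia ended up using) is to mask the denominator before the division, so the unselected branch never produces NaN gradients:

    import torch

    eps = 1e-8
    z_vec = torch.tensor([4., 6., 0., -3., 1e-9], requires_grad=True)

    # replace near-zero entries by 1.0 *before* dividing, so neither branch of
    # torch.where ever sees a zero denominator
    mask = torch.abs(z_vec) > eps
    safe_z = torch.where(mask, z_vec, torch.ones_like(z_vec))
    scale = torch.where(mask, 1.0 / safe_z, torch.ones_like(z_vec))
    scale.backward(torch.ones_like(scale))
    # z_vec.grad is now finite everywhere (0.0 where the mask is False)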

    bug :bug: 
    opened by poxyu 19
  • Round trip `torch.save` / `torch.load` not working for `Hyperplane`

    Round trip `torch.save` / `torch.load` not working for `Hyperplane`

    Describe the bug

    Crashing when trying to save and reload a Hyperplane using torch:

    Traceback (most recent call last):
      File "save_hyperplane.py", line 16, in <module>
        plane = torch.load("./saved_plane.pt")
      File "/home/kyle/venv/lib/python3.8/site-packages/torch/serialization.py", line 789, in load
        return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
      File "/home/kyle/venv/lib/python3.8/site-packages/torch/serialization.py", line 1131, in _load
        result = unpickler.load()
      File "/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py", line 48, in __getattr__
        self.used_attrs.add(name)
      File "/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py", line 48, in __getattr__
        self.used_attrs.add(name)
      File "/home/kyle/venv/lib/python3.8/site-packages/kornia/core/tensor_wrapper.py", line 48, in __getattr__
        self.used_attrs.add(name)
      [Previous line repeated 993 more times]
    RecursionError: maximum recursion depth exceeded
    

    Reproduction steps

    $ python3 -m venv venv
    $ source venv/bin/activate
    (venv) $ pip install kornia
    (venv) $ pip install numpy
    (venv) $ python save_hyperplane.py
    

    Content of save_hyperplane.py

    from kornia.geometry.plane import Hyperplane
    from kornia.geometry.vector import Vec3, Scalar
    import torch
    
    plane = Hyperplane(
        Vec3(torch.tensor([0,0,1])), Scalar(torch.tensor(0.5))
    )
    
    torch.save(plane, "./saved_plane.pt")
    plane = torch.load("./saved_plane.pt")
    

    Expected behavior

    Hyperplane is able to be saved and reloaded with torch.save & torch.load

    Environment

    Collecting environment information...
    PyTorch version: 1.13.1+cu117
    Is debug build: False
    CUDA used to build PyTorch: 11.7
    ROCM used to build PyTorch: N/A
    
    OS: Ubuntu 20.04.5 LTS (x86_64)
    GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
    Clang version: Could not collect
    CMake version: version 3.16.3
    Libc version: glibc-2.31
    
    Python version: 3.8.10 (default, Nov 14 2022, 12:59:47)  [GCC 9.4.0] (64-bit runtime)
    Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.29
    Is CUDA available: True
    CUDA runtime version: Could not collect
    CUDA_MODULE_LOADING set to: LAZY
    GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU
    Nvidia driver version: 470.141.03
    cuDNN version: Could not collect
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    Is XNNPACK available: True
    
    Versions of relevant libraries:
    [pip3] numpy==1.24.1
    [pip3] torch==1.13.1
    [conda] Could not collect
    

    Additional context

    No response

    help wanted 
    opened by Hackerman342 0
  • Fix numerical stability for binary focal loss

    Fix numerical stability for binary focal loss

    Changes

    Adds numerical stability test for focal loss. Loss is expected to be 0:

    • when logit is a large positive value and label is 1
    • when logit is a large negative value and label is 0

    Current master does not pass the test (the focal loss is numerically unstable) because log(sigmoid(x)) is used. This PR simply replaces log(sigmoid(x)) with torch.nn.functional.logsigmoid, which seems to be quite stable. The PR passes the test.

    Fixes # (issue) https://github.com/kornia/kornia/issues/2115
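
    For illustration, a tiny sketch of the instability (not part of the PR itself):

    import torch
    import torch.nn.functional as F

    x = torch.tensor([-200.0, 200.0])
    print(torch.log(torch.sigmoid(x)))  # tensor([-inf, 0.]) -- sigmoid(-200) underflows to 0
    print(F.logsigmoid(x))              # finite for all inputs, approximately [-200, 0]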

    Type of change

    • [x] 🧪 Tests Cases
    • [x] 🐞 Bug fix (non-breaking change which fixes an issue)

    Checklist

    • [x] My code follows the style guidelines of this project
    • [x] I have performed a self-review of my own code
    • [ ] ~I have commented my code, particularly in hard-to-understand areas~ Irrelevant, only minor and trivial changes
    • [ ] ~I have made corresponding changes to the documentation~ Irrelevant, docs changes are not needed
    • [x] My changes generate no new warnings
    • [ ] ~Did you update CHANGELOG in case of a major change?~ Irrelevant
    opened by zimka 0
  • Weird behavior of LongestMaxSize

    Weird behavior of LongestMaxSize

    Describe the bug

    Hello me again,

    I might be doing something wrong with the way I use kornia augmentations, please let me know if it is the case.

    I was expecting LongestMaxSize in kornia to behave similarly to the albumentations implementation, meaning that I can pass images with different shapes to the transformation function and get images of different shapes but with the aspect ratio preserved, the largest side being equal to the value given to LongestMaxSize.

    See below a small code sample that illustrates the issue.

    Reproduction steps

    import torch
    import kornia.augmentation as K
    a = torch.ones((512, 256))
    b = torch.ones((512, 756))
    
    print("first try")
    transfo = K.LongestMaxSize(max_size=256, p=1.)
    
    print(transfo(a).shape)
    print(transfo(b).shape)
    
    print("second try")
    
    a = torch.ones((512, 256))
    b = torch.ones((512, 756))
    
    transfo = K.LongestMaxSize(max_size=256, p=1.)
    print(transfo(b).shape)
    print(transfo(a).shape)
    
    Outputs:
    first try
    torch.Size([1, 1, 256, 128])
    torch.Size([1, 1, 256, 128])
    second try
    torch.Size([1, 1, 173, 256])
    torch.Size([1, 1, 173, 256])
    

    Expected behavior

    I would expect to have the same values for the transformations no matter the order of the elements.

    ie transfo(a).shape == torch.Size([1, 1, 256, 128]) and transfo(b).shape ==torch.Size([1, 1, 173, 256])

    Am I missing something here?

    Environment

    kornia='0.6.9'
    torch='1.12.1+cu113'
    

    Additional context

    No response

    help wanted 
    opened by Optimox 3
  • Weird Behavior of PadTo and poor documentation

    Weird Behavior of PadTo and poor documentation

    Describe the bug

    Hi,

    First thank you for this great library, I'm currently trying to switch all my pipeline to kornia instead of albumentation.

    I was actually looking for a PadIfNeeded function in Kornia. I finally found that there exists a PadTo function (not available in the documentation website by the way).

    However, the behavior of the function is unclear when you pass a larger image "inside a smaller PadTo" (see below).

    There is also a small typo in the docstring, saying: p: probability of the image being flipped.

    Reproduction steps

    import kornia.augmentation as K
    import torch
    a = torch.meshgrid(torch.arange(10), torch.arange(10))[0].float()
    
    transform = K.PadTo((5, 5), pad_value=0)
    
    transform(a)
    

    Expected behavior

    It seems that the transformation returns the upper left corner of the image, without any padding happening of course.

    I guess I would expect two possibilities:

    • raise an error: since the augmentation does not perform anything related to what it's supposed to do -> Error: PadSize should be larger than the input size.
    • return the original image: this would be equivalent to a pad-if-needed function (see the sketch below).
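
    For illustration only, here is a minimal pad-if-needed helper along the lines of the second option (pad_if_needed is a hypothetical name, not a Kornia API):

    import torch
    import torch.nn.functional as F

    def pad_if_needed(img: torch.Tensor, size: tuple, pad_value: float = 0.0) -> torch.Tensor:
        # pad the last two dims up to `size` = (height, width); no-op if the image is already larger
        h, w = img.shape[-2:]
        pad_h, pad_w = max(size[0] - h, 0), max(size[1] - w, 0)
        return F.pad(img, (0, pad_w, 0, pad_h), value=pad_value)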

    Environment

    kornia='0.6.9'
    torch='1.12.1+cu113'
    

    Additional context

    No response

    help wanted module: augmentations 
    opened by Optimox 4
  • AugmentationSequential: accept sample dict as input

    AugmentationSequential: accept sample dict as input

    🚀 Feature

    I would like AugmentationSequential to support a dictionary as input.

    Motivation

    I'm a TorchGeo developer. In TorchGeo, every dataset returns a sample dictionary like so:

    sample = {
        "input": torch.tensor(...),
        "mask": torch.tensor(...),
        "bbox": torch.tensor(...),
        ...
    }
    

    (the exact key names don't match at the moment, but we're working on standardizing those)

    Pitch

    With the feature I'm envisioning, the following would work:

    augs = AugmentationSequential(...)
    sample = augs(sample)
    

    The exact implementation details would still need to be worked out, but *args would go from Tensor to Union[Tensor, Dict[str, Tensor]]. The dictionary may contain keys that Kornia doesn't know how to support, and these should be ignored. If a sample dictionary contains a known key that the user doesn't want to transform, they can simply pass data_keys to override the default detection. If the input is a dict, the output should also be a dict. If implemented correctly, this feature will be backwards compatible with the old behavior so people can still pass these inputs in manually if they want to.

    Alternatives

    At the moment, to use Kornia augmentations, we have to use:

    augs = AugmentationSequential(..., data_keys=["input", "mask", "bbox", ...])
    sample["input"], sample["mask"], sample["bbox"], ... = augs(sample["input"], sample["mask"], sample["bbox"], ...)
    

    As you can see, this is much more verbose than necessary. There's no reason we need to duplicate the list of keys so many times.
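
    A minimal wrapper illustrating the proposed behavior could look like the sketch below (DictAugmentationSequential and the key names are hypothetical, not part of Kornia; it assumes the dict values map directly to Kornia data keys):

    import torch
    from torch import nn
    import kornia.augmentation as K

    class DictAugmentationSequential(nn.Module):
        # hypothetical wrapper: applies the augmentations to the known keys and passes the rest through
        def __init__(self, *augs, data_keys=("input", "mask")):
            super().__init__()
            self.data_keys = list(data_keys)
            self.augs = K.AugmentationSequential(*augs, data_keys=self.data_keys)

        def forward(self, sample):
            outs = self.augs(*[sample[k] for k in self.data_keys])
            if not isinstance(outs, (list, tuple)):  # a single data key returns a bare tensor
                outs = [outs]
            return {**sample, **dict(zip(self.data_keys, outs))}

    augs = DictAugmentationSequential(K.RandomHorizontalFlip(p=1.0), data_keys=["input", "mask"])
    sample = {"input": torch.rand(2, 3, 32, 32), "mask": torch.rand(2, 1, 32, 32), "meta": torch.zeros(2)}
    sample = augs(sample)  # "meta" is passed through unchanged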

    Additional context

    If this is something you would be interested in, I would be happy to submit a PR to support this. Just wanted to gauge interest first before working on it.

    @isaaccorley

    help wanted 
    opened by adamjstewart 5
Releases(v0.6.9)
  • v0.6.9(Dec 21, 2022)

    What's Changed

    • Quaternion pow bug fix (div by zero) by @cjpurackal in https://github.com/kornia/kornia/pull/1946
    • fix cuda init by @ducha-aiki in https://github.com/kornia/kornia/pull/1953
    • Bump accelerate from 0.13.1 to 0.13.2 by @dependabot in https://github.com/kornia/kornia/pull/1957
    • add kornia.testing api in docs by @edgarriba in https://github.com/kornia/kornia/pull/1954
    • Fix line numbers of included examples. by @colllin in https://github.com/kornia/kornia/pull/1950
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1949
    • Feat/randombrightness contrast saturation hue by @duc12111 in https://github.com/kornia/kornia/pull/1955
    • Normalize with intrinsics by @ducha-aiki in https://github.com/kornia/kornia/pull/1727
    • Liegroups by @edgarriba in https://github.com/kornia/kornia/pull/1887
    • Add sepia by @johnnv1 in https://github.com/kornia/kornia/pull/1947
    • fix doctest in kornia.geometry.liegroup by @edgarriba in https://github.com/kornia/kornia/pull/1960
    • minor improvements to So3 by @cjpurackal in https://github.com/kornia/kornia/pull/1966
    • Documentation: proper Sørensen–Dice coefficient by @sergiev in https://github.com/kornia/kornia/pull/1961
    • use torch lts in doctest ci by @edgarriba in https://github.com/kornia/kornia/pull/1968
    • Add Hyperplane and Ray API by @edgarriba in https://github.com/kornia/kornia/pull/1963
    • Bump pytest from 7.1.3 to 7.2.0 by @dependabot in https://github.com/kornia/kornia/pull/1972
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1975
    • drop python 3.6 by @johnnv1 in https://github.com/kornia/kornia/pull/1971
    • Add some ortho tests for so3 by @stevenlovegrove in https://github.com/kornia/kornia/pull/1970
    • fix some typing annotations by @johnnv1 in https://github.com/kornia/kornia/pull/1967
    • ZCA Whiteing demo by @marianna13 in https://github.com/kornia/kornia/pull/1932
    • doctest to minimal python 3.8 by @edgarriba in https://github.com/kornia/kornia/pull/1974
    • fix import in assert_close helper by @pmeier in https://github.com/kornia/kornia/pull/1982
    • Remove unnecessary configs by @johnnv1 in https://github.com/kornia/kornia/pull/1984
    • Remove mypy from running on tests by @johnnv1 in https://github.com/kornia/kornia/pull/1983
    • Remove some # type: ignore from kornia.feature by @johnnv1 in https://github.com/kornia/kornia/pull/1995
    • add quaternion to euler conversion by @edgarriba in https://github.com/kornia/kornia/pull/1994
    • Update google analytics is for G4 property by @edgarriba in https://github.com/kornia/kornia/pull/1999
    • implement kornia.geometry.linalg.euclidean_distance by @edgarriba in https://github.com/kornia/kornia/pull/2000
    • quaternion, so3 and se3 as non batched by @edgarriba in https://github.com/kornia/kornia/pull/1997
    • Bump accelerate from 0.13.2 to 0.14.0 by @dependabot in https://github.com/kornia/kornia/pull/2004
    • Remove unused type: ignore by @johnnv1 in https://github.com/kornia/kornia/pull/1998
    • Bump pytest-mypy from 0.10.0 to 0.10.1 by @dependabot in https://github.com/kornia/kornia/pull/2005
    • Join the gh-actions for docs by @johnnv1 in https://github.com/kornia/kornia/pull/2003
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/2010
    • [feat] liegroup so2 by @cjpurackal in https://github.com/kornia/kornia/pull/1973
    • add rotation and translation classmethods in se3 and so3 by @edgarriba in https://github.com/kornia/kornia/pull/2001
    • [feat] implement adjoint for liegroups by @cjpurackal in https://github.com/kornia/kornia/pull/2007
    • Fix typing errors by @johnnv1 in https://github.com/kornia/kornia/pull/2012
    • remove unused deepsource by @johnnv1 in https://github.com/kornia/kornia/pull/2016
    • So2 bug fix by @cjpurackal in https://github.com/kornia/kornia/pull/2015
    • use resample instead of mode argument in RandomElasticTransform per default by @JanSellner in https://github.com/kornia/kornia/pull/2017
    • Add reusable workflow to env setup and update CI's by @johnnv1 in https://github.com/kornia/kornia/pull/2009
    • Remove redudant casts by @johnnv1 in https://github.com/kornia/kornia/pull/2022
    • Fix type annotation for torch 1.13.0 by @johnnv1 in https://github.com/kornia/kornia/pull/2023
    • Drop pytorch 1.8 (LTS) support by @johnnv1 in https://github.com/kornia/kornia/pull/2024
    • Fix an error in match_smnn by @anstadnik in https://github.com/kornia/kornia/pull/2020
    • Remove deprecated code in kornia.augmentation by @johnnv1 in https://github.com/kornia/kornia/pull/2028
    • so2 tests update and cleanup by @cjpurackal in https://github.com/kornia/kornia/pull/2029
    • Fix PR action trigger by @johnnv1 in https://github.com/kornia/kornia/pull/2026
    • Set equal_nan to False in assert_close by @edgarriba in https://github.com/kornia/kornia/pull/1986
    • drop flake8 dependency by @johnnv1 in https://github.com/kornia/kornia/pull/2032
    • Improves performance of the slowest CPU tests by @johnnv1 in https://github.com/kornia/kornia/pull/2036
    • add default python and update pre-commit hooks by @johnnv1 in https://github.com/kornia/kornia/pull/2040
    • facedetector now returns a list of tensors containing the boxes x image by @lferraz in https://github.com/kornia/kornia/pull/2034
    • add random for liegroups by @cjpurackal in https://github.com/kornia/kornia/pull/2041
    • Add/ensure support for pytorch 1.13.0 by @johnnv1 in https://github.com/kornia/kornia/pull/2035
    • Update constants to be able to inherit types from get method by @johnnv1 in https://github.com/kornia/kornia/pull/2047
    • Pass along data_keys or extra_args in *ApplyInverse with containers by @miquelmarti in https://github.com/kornia/kornia/pull/2046
    • update mul for so2 by @cjpurackal in https://github.com/kornia/kornia/pull/2051
    • None for align_corners arg of resize op with nearest mode by @miquelmarti in https://github.com/kornia/kornia/pull/2049
    • Remove type ignore from the codebase by @johnnv1 in https://github.com/kornia/kornia/pull/2030
    • making RandomGaussianNoise play nicely on GPU by @nitaifingerhut in https://github.com/kornia/kornia/pull/2050
    • fix pep561 and remove deprecated license_file by @johnnv1 in https://github.com/kornia/kornia/pull/2057
    • Set padding mode to zeros for inverse of resize aug via crop by @miquelmarti in https://github.com/kornia/kornia/pull/2054
    • replacing .repeat(...) with .expand(...) by @nitaifingerhut in https://github.com/kornia/kornia/pull/2059
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/2065
    • Bump accelerate from 0.14.0 to 0.15.0 by @dependabot in https://github.com/kornia/kornia/pull/2058
    • Fix GHA macos queued and coverage upload by @johnnv1 in https://github.com/kornia/kornia/pull/2038
    • Fix F401 by @johnnv1 in https://github.com/kornia/kornia/pull/2067
    • enable fast_mode on grandchecks by @johnnv1 in https://github.com/kornia/kornia/pull/2069
    • BUGFIX: RandomMotionBlur is not deterministic when using self._params by @nitaifingerhut in https://github.com/kornia/kornia/pull/2068
    • Motion blur by @nitaifingerhut in https://github.com/kornia/kornia/pull/2075
    • bugfix: KORNIA_CHECK_SHAPE by @nitaifingerhut in https://github.com/kornia/kornia/pull/2076
    • [feat] Implement se2 by @cjpurackal in https://github.com/kornia/kornia/pull/2019
    • Fix f401 by @johnnv1 in https://github.com/kornia/kornia/pull/2077
    • remove type ignore by @johnnv1 in https://github.com/kornia/kornia/pull/2078
    • bug fix for getitem in liegroups and quaternion by @cjpurackal in https://github.com/kornia/kornia/pull/2079
    • Remove deprecated code in kornia.augmentation by @johnnv1 in https://github.com/kornia/kornia/pull/2052
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/2081
    • Disable fail-fast on CI by @johnnv1 in https://github.com/kornia/kornia/pull/2085
    • enable check_untyped_defs on mypy by @johnnv1 in https://github.com/kornia/kornia/pull/2086
    • enable disallow_any_generics on mypy by @johnnv1 in https://github.com/kornia/kornia/pull/2092
    • [feat] add vee to so2, se2 by @cjpurackal in https://github.com/kornia/kornia/pull/2091
    • Fix padding for random crops by @miquelmarti in https://github.com/kornia/kornia/pull/2087
    • fix failing tests related to solve_cast on torch 1.9 by @johnnv1 in https://github.com/kornia/kornia/pull/2066
    • Add TensorWrapper, Vector3, Scalar and improvements in fit_plane by @edgarriba in https://github.com/kornia/kornia/pull/1987
    • fix parameters generator to be reproducible (3D) by @johnnv1 in https://github.com/kornia/kornia/pull/2088
    • Fix tests on CUDA by @johnnv1 in https://github.com/kornia/kornia/pull/2098
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/2099
    • [feat] adjoint for se2, so2 by @cjpurackal in https://github.com/kornia/kornia/pull/2101
    • add trans, trans_x, trans_y + minor changes se2 by @cjpurackal in https://github.com/kornia/kornia/pull/2103

    New Contributors

    • @colllin made their first contribution in https://github.com/kornia/kornia/pull/1950
    • @sergiev made their first contribution in https://github.com/kornia/kornia/pull/1961
    • @stevenlovegrove made their first contribution in https://github.com/kornia/kornia/pull/1970
    • @JanSellner made their first contribution in https://github.com/kornia/kornia/pull/2017
    • @anstadnik made their first contribution in https://github.com/kornia/kornia/pull/2020

    Full Changelog: https://github.com/kornia/kornia/compare/v0.6.8...v0.6.9

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.9-py2.py3-none-any.whl(555.78 KB)
    kornia-0.6.9.tar.gz(386.35 KB)
  • v0.6.8(Oct 13, 2022)

    Highlights

    NeRF API

    In this release we include an experimental kornia.nerf submodule with a high level API that implements a vanilla Neural Radiance Field (NeRF). Read more about the roadmap of this project: https://github.com/kornia/kornia/issues/1936 // contribution done by @YanivHollander

    from kornia.nerf import NerfSolver
    from kornia.geometry.camera import PinholeCamera

    camera: PinholeCamera = create_one_camera(5, 9, device, dtype)
    img = create_red_images_for_cameras(camera, device)

    nerf_obj = NerfSolver(device=device, dtype=dtype)
    num_img_rays = 15
    nerf_obj.init_training(camera, 1.0, 3.0, False, img, num_img_rays, batch_size=5, num_ray_points=10, lr=1e-2)
    nerf_obj.run(num_epochs=10)

    img_rendered = nerf_obj.render_views(camera)[0].permute(2, 0, 1)
    


    Improvements, docs and tutorials soon!

    Edge Detection

    Added the kornia.contrib.EdgeDetector API that implements DexiNed: https://github.com/xavysp/DexiNed

    import kornia as K
    from kornia.contrib import EdgeDetector
    
    edge_detection = EdgeDetector().to(device)
    
    # preprocess
    img = K.image_to_tensor(frame, keepdim=False).to(device)
    img = K.color.bgr_to_rgb(img.float())
    
    # detect !
    with torch.no_grad():
        edges = edge_detection(img)
    
    img_vis = K.tensor_to_image(edges.byte())
    


    Image matching bugfixes:

    After testing kornia LoFTR and AdaLAM under big load, our users and we have experienced some bugs in corner cases, such as big images or no input correspondences, which caused the pipeline to crash. Not anymore!

    • Fixes typo bug that influences LoFTR training by @georg-bn in https://github.com/kornia/kornia/pull/1854
    • Enlargen LoFTR positional encoding map if large images are input by @georg-bn in https://github.com/kornia/kornia/pull/1853
    • Make AdaLAM output match confidence by @ducha-aiki in https://github.com/kornia/kornia/pull/1862
    • fix AdaLAM crash by @ducha-aiki in https://github.com/kornia/kornia/pull/1881
    • Adalam fix2 by @ducha-aiki in https://github.com/kornia/kornia/pull/1888
    • No crash in local feature matching if empty tensor output by @ducha-aiki in https://github.com/kornia/kornia/pull/1890
    • Fix warning in AdaLAM by @Skydes in https://github.com/kornia/kornia/pull/1925

    Various kornia demos in gradio by community:

    See demos in our HuggingFace space: https://huggingface.co/kornia

    • Added gradio Image Stitching demo link by @kadirnar in https://github.com/kornia/kornia/pull/1871
    • edge detection demo by @p-mishra1 in https://github.com/kornia/kornia/pull/1876
    • Added Hugging Face edge detection demo link by @ramon-rd in https://github.com/kornia/kornia/pull/1874
    • [docs] add gradio app html and embeddings in filters by @lappemic in https://github.com/kornia/kornia/pull/1883
    • Geometry image transform demo by @dvando in https://github.com/kornia/kornia/pull/1922
    • add spaces demo by @johko in https://github.com/kornia/kornia/pull/1905
    • add space demo for homography warping by @johko in https://github.com/kornia/kornia/pull/1924
    • created resize_antialias.html file by @gauthamk28 in https://github.com/kornia/kornia/pull/1877
    • I added html file for module Line Fitting by @kadirnar in https://github.com/kornia/kornia/pull/1886
    • Add edge detector and morphological operator demos in the rst docs files by @ramon-rd in https://github.com/kornia/kornia/pull/1884
    • Image registration demo by @marianna13 in https://github.com/kornia/kornia/pull/1897
    • [docs] Add total_variation_denoising gradio by @gagan3012 in https://github.com/kornia/kornia/pull/1880
    • [Docs] Refactor the embedded Gradio demos by @NimaBoscarino in https://github.com/kornia/kornia/pull/1901

    RANSAC improvements

    We have added a homography-from-line-segments solver, as well as various speed-ups. We are not yet at the OpenCV RANSAC quality level; more improvements to come :) But the line solver is pretty unique! We also have an example in our tutorials: https://kornia-tutorials.readthedocs.io/en/latest/line_detection_and_matching_sold2.html

    • Added homography from line segment correspondences by @ducha-aiki in https://github.com/kornia/kornia/pull/1851
    • RANSAC improvements by @ducha-aiki in https://github.com/kornia/kornia/pull/1435
    • Add get_perpendicular and get_closest_point_on_epipolar_line by @ducha-aiki in https://github.com/kornia/kornia/pull/1915
    • Fix svdvals usage by @ducha-aiki in https://github.com/kornia/kornia/pull/1926

    Apple Silicon M1 support is closer, CI improvements

    We are slowly working on being able to run kornia on M1. So far we have added the possibility to test locally on M1 and mostly report PyTorch MPS backend crashes in various use cases. Once this work is finished, we may provide some workarounds to have kornia on M1.

    • remove conv3d from spatial_gradient by @ducha-aiki in https://github.com/kornia/kornia/pull/1898
    • Added possibility to run tests for mps locally by @ducha-aiki in https://github.com/kornia/kornia/pull/1716
    • CI update pytorch-->-1.12.1 by @ducha-aiki in https://github.com/kornia/kornia/pull/1892
    • lts is not supported on mac-os, separate it by @ducha-aiki in https://github.com/kornia/kornia/pull/1904
    • Update setup to declarative metadata by @johnnv1 in https://github.com/kornia/kornia/pull/1885
    • Bump pytest from 7.1.2 to 7.1.3 by @dependabot in https://github.com/kornia/kornia/pull/1860
    • add concurrency cancel-in-progress in cpu workflow by @edgarriba in https://github.com/kornia/kornia/pull/1865
    • Use --no-implicit-optional for type checking by @hauntsaninja in https://github.com/kornia/kornia/pull/1910

    Quaternion improvements

    Implemented Quaternion.slerp to interpolate between quaternions using quaternion arithmetic -- contributed by @cjpurackal

    import torch
    from kornia.geometry.quaternion import Quaternion
    
    q0 = Quaternion.identity(batch_size=1)
    q1 = Quaternion(torch.tensor([[1., .5, 0., 0.]]))
    q2 = q0.slerp(q1, .3)
    
    

    More augmentations!

    • [feat] Added Jigsaw Augmentation by @shijianjian in https://github.com/kornia/kornia/pull/1852
    • [Feat] Added AugmentationDispatcher by @shijianjian in https://github.com/kornia/kornia/pull/1914

    What's Changed

    • Added homography from line segment correspondences by @ducha-aiki in https://github.com/kornia/kornia/pull/1851
    • Fixes typo bug that influences LoFTR training by @georg-bn in https://github.com/kornia/kornia/pull/1854
    • Enlargen LoFTR positional encoding map if large images are input by @georg-bn in https://github.com/kornia/kornia/pull/1853
    • docs: clarify the relation of color_jitter and color_jiggle by @kunaltyagi in https://github.com/kornia/kornia/pull/1858
    • Bump pytest from 7.1.2 to 7.1.3 by @dependabot in https://github.com/kornia/kornia/pull/1860
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1833
    • Binary focal loss: Use pre-computed probs and increase readability by @klieret in https://github.com/kornia/kornia/pull/1848
    • add concurrency cancel-in-progress in cpu workflow by @edgarriba in https://github.com/kornia/kornia/pull/1865
    • modifying add_weighted to accept Tensors for alpha/beta/gamma by @nitaifingerhut in https://github.com/kornia/kornia/pull/1868
    • Make AdaLAM output match confidence by @ducha-aiki in https://github.com/kornia/kornia/pull/1862
    • fix bugs shift_rgb by @duc12111 in https://github.com/kornia/kornia/pull/1861
    • Added gradio Image Stitching demo link by @kadirnar in https://github.com/kornia/kornia/pull/1871
    • edge detection demo by @p-mishra1 in https://github.com/kornia/kornia/pull/1876
    • Added Hugging Face edge detection demo link by @ramon-rd in https://github.com/kornia/kornia/pull/1874
    • fix AdaLAM crash by @ducha-aiki in https://github.com/kornia/kornia/pull/1881
    • [feat] Added Jigsaw Augmentation by @shijianjian in https://github.com/kornia/kornia/pull/1852
    • [docs] add gradio app html and embeddings in filters by @lappemic in https://github.com/kornia/kornia/pull/1883
    • created resize_antialias.html file by @gauthamk28 in https://github.com/kornia/kornia/pull/1877
    • Adalam fix2 by @ducha-aiki in https://github.com/kornia/kornia/pull/1888
    • Add edge detector and morphological operator demos in the rst docs files by @ramon-rd in https://github.com/kornia/kornia/pull/1884
    • I added html file for module Line Fitting by @kadirnar in https://github.com/kornia/kornia/pull/1886
    • No crash in local feature matching if empty tensor output by @ducha-aiki in https://github.com/kornia/kornia/pull/1890
    • CI update pytorch-->-1.12.1 by @ducha-aiki in https://github.com/kornia/kornia/pull/1892
    • Fix fail test by @ducha-aiki in https://github.com/kornia/kornia/pull/1896
    • Added possibility to run tests for mps locally by @ducha-aiki in https://github.com/kornia/kornia/pull/1716
    • remove conv3d from spatial_gradient by @ducha-aiki in https://github.com/kornia/kornia/pull/1898
    • lts is not supported on mac-os, separate it by @ducha-aiki in https://github.com/kornia/kornia/pull/1904
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1900
    • quaternion index bug fix by @cjpurackal in https://github.com/kornia/kornia/pull/1903
    • Update setup to declarative metadata by @johnnv1 in https://github.com/kornia/kornia/pull/1885
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1912
    • Image registration demo by @marianna13 in https://github.com/kornia/kornia/pull/1897
    • [docs] Add total_variation_denoising gradio by @gagan3012 in https://github.com/kornia/kornia/pull/1880
    • [Docs] Refactor the embedded Gradio demos by @NimaBoscarino in https://github.com/kornia/kornia/pull/1901
    • Add get_perpendicular and get_closest_point_on_epipolar_line by @ducha-aiki in https://github.com/kornia/kornia/pull/1915
    • Use --no-implicit-optional for type checking by @hauntsaninja in https://github.com/kornia/kornia/pull/1910
    • Add Random Gamma and test by @duc12111 in https://github.com/kornia/kornia/pull/1837
    • Add device query to Pinhole class by @YanivHollander in https://github.com/kornia/kornia/pull/1760
    • Quaternion from axis angle representation by @cjpurackal in https://github.com/kornia/kornia/pull/1917
    • NeRF Implementation by @YanivHollander in https://github.com/kornia/kornia/pull/1911
    • Bump pytest-mypy from 0.9.1 to 0.10.0 by @dependabot in https://github.com/kornia/kornia/pull/1919
    • Geometry image transform demo by @dvando in https://github.com/kornia/kornia/pull/1922
    • add spaces demo by @johko in https://github.com/kornia/kornia/pull/1905
    • add space demo for homography warping by @johko in https://github.com/kornia/kornia/pull/1924
    • RANSAC improvements by @ducha-aiki in https://github.com/kornia/kornia/pull/1435
    • Fix warning in AdaLAM by @Skydes in https://github.com/kornia/kornia/pull/1925
    • Fix svdvals usage by @ducha-aiki in https://github.com/kornia/kornia/pull/1926
    • Bump pytest-cov from 3.0.0 to 4.0.0 by @dependabot in https://github.com/kornia/kornia/pull/1918
    • fix shift_rgb stack dimension by @nmichlo in https://github.com/kornia/kornia/pull/1930
    • Bump accelerate from 0.12.0 to 0.13.1 by @dependabot in https://github.com/kornia/kornia/pull/1937
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1939
    • [Fix] Fixed mypy warnings by @shijianjian in https://github.com/kornia/kornia/pull/1920
    • Update kernels.py by @farhankhot in https://github.com/kornia/kornia/pull/1940
    • Quaternion.norm bug fix by @cjpurackal in https://github.com/kornia/kornia/pull/1935
    • [Feat] Added AugmentationDispatcher by @shijianjian in https://github.com/kornia/kornia/pull/1914
    • Add EdgeDetection api by @edgarriba in https://github.com/kornia/kornia/pull/1483
    • [feat] slerp implementation for Quaternion by @cjpurackal in https://github.com/kornia/kornia/pull/1931
    • add laplacian pyramid by @lafith in https://github.com/kornia/kornia/pull/1816
    • Fix quaternion doctests by @edgarriba in https://github.com/kornia/kornia/pull/1943
    • Remove unnecessary CI jobs by @johnnv1 in https://github.com/kornia/kornia/pull/1933
    • fix cuda tests failing by @ducha-aiki in https://github.com/kornia/kornia/pull/1941

    New Contributors

    • @georg-bn made their first contribution in https://github.com/kornia/kornia/pull/1854
    • @kunaltyagi made their first contribution in https://github.com/kornia/kornia/pull/1858
    • @klieret made their first contribution in https://github.com/kornia/kornia/pull/1848
    • @duc12111 made their first contribution in https://github.com/kornia/kornia/pull/1861
    • @kadirnar made their first contribution in https://github.com/kornia/kornia/pull/1871
    • @p-mishra1 made their first contribution in https://github.com/kornia/kornia/pull/1876
    • @ramon-rd made their first contribution in https://github.com/kornia/kornia/pull/1874
    • @lappemic made their first contribution in https://github.com/kornia/kornia/pull/1883
    • @gauthamk28 made their first contribution in https://github.com/kornia/kornia/pull/1877
    • @cjpurackal made their first contribution in https://github.com/kornia/kornia/pull/1903
    • @johnnv1 made their first contribution in https://github.com/kornia/kornia/pull/1885
    • @marianna13 made their first contribution in https://github.com/kornia/kornia/pull/1897
    • @gagan3012 made their first contribution in https://github.com/kornia/kornia/pull/1880
    • @hauntsaninja made their first contribution in https://github.com/kornia/kornia/pull/1910
    • @dvando made their first contribution in https://github.com/kornia/kornia/pull/1922
    • @johko made their first contribution in https://github.com/kornia/kornia/pull/1905
    • @Skydes made their first contribution in https://github.com/kornia/kornia/pull/1925
    • @nmichlo made their first contribution in https://github.com/kornia/kornia/pull/1930
    • @farhankhot made their first contribution in https://github.com/kornia/kornia/pull/1940
    • @lafith made their first contribution in https://github.com/kornia/kornia/pull/1816

    Full Changelog: https://github.com/kornia/kornia/compare/v0.6.7...v0.6.8

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.8-py2.py3-none-any.whl(538.19 KB)
    kornia-0.6.8.tar.gz(376.42 KB)
  • v0.6.7(Aug 30, 2022)

    Highlights

    SOLD2 line segment detector & descriptor

    Contributed by SOLD2 original authors

    Geometry-aware matchers: AdaLAM & FGINN

    Good old Lowe ratio-test is good for descriptor matching (implemented as `match_snn`, `match_smnn` in kornia), but it is often not enough: it does not take into account keypoint positions. With this version we started to add geometry-aware descriptor matchers, starting with [FGINN](https://arxiv.org/abs/1503.02619) and [AdaLAM](https://arxiv.org/abs/2006.04250). Later we plan to add something like SuperGlue (but a free version, of course).

    AdaLAM works particularly well with kornia.feature.KeyNetAffNetHardNet. AdaLAM is adapted from the original author's implementation.

    import matplotlib.pyplot as plt
    import cv2
    import kornia as K
    import kornia.feature as KF
    import numpy as np
    import torch
    from kornia_moons.feature import *
    
    def load_torch_image(fname):
        img = K.image_to_tensor(cv2.imread(fname), False).float() /255.
        img = K.color.bgr_to_rgb(img)
        return img
    
    device = K.utils.get_cuda_device_if_available()
    
    fname1 = 'kn_church-2.jpg'
    fname2 = 'kn_church-8.jpg'
    
    img1 = load_torch_image(fname1)
    img2 = load_torch_image(fname2)
    
    
    feature = KF.KeyNetAffNetHardNet(5000, True).eval().to(device)
    
    hw1 = torch.tensor(img1.shape[2:])
    hw2 = torch.tensor(img2.shape[2:])
    
    adalam_config = {"device": device}
    
    with torch.inference_mode():
        lafs1, resps1, descs1 = feature(K.color.rgb_to_grayscale(img1))
        lafs2, resps2, descs2 = feature(K.color.rgb_to_grayscale(img2))
        dists, idxs = KF.match_adalam(descs1.squeeze(0), descs2.squeeze(0),
                                      lafs1, lafs2, # Adalam takes into account also geometric information
                                      config=adalam_config,
                                      hw1=hw1, hw2=hw2) # Adalam also benefits from knowing image size
    

    More examples can be found in our Tutorials section.

    Geometry conversions

    Converting a camera pose from (R, t) to the actual pose in world coordinates can be a pain. We are relieving you from it by implementing various conversion functions, such as camtoworld_to_worldtocam_Rt, worldtocam_to_camtoworld_Rt, camtoworld_graphics_to_vision_4x4, etc. The conversions come in two variants: for an (R, t) tensor tuple, or for a single 4x4 extrinsics matrix.
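
    As a quick illustration, here is a minimal sketch (the Bx3x3 / Bx3x1 shapes and the values are illustrative assumptions):

    import torch
    import kornia.geometry as KG
    
    # minimal sketch: flip a camera-to-world pose (R, t) into world-to-camera extrinsics and back
    R = torch.eye(3)[None]                     # Bx3x3 rotation
    t = torch.tensor([[[1.0], [2.0], [3.0]]])  # Bx3x1 translation
    
    R_wc, t_wc = KG.camtoworld_to_worldtocam_Rt(R, t)
    R_cw, t_cw = KG.worldtocam_to_camtoworld_Rt(R_wc, t_wc)  # round-trips back to (R, t)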

    Quaternion API

    More geometry-related stuff! We have added a Quaternion API to make working with rotation representations easy. Check out the PR.

    >>> q = Quaternion.identity(batch_size=4)
    >>> q.data
    Parameter containing:
    tensor([[1., 0., 0., 0.],
            [1., 0., 0., 0.],
            [1., 0., 0., 0.],
            [1., 0., 0., 0.]], requires_grad=True)
    >>> q.real
    tensor([[1.],
            [1.],
            [1.],
            [1.]], grad_fn=<SliceBackward0>)
    >>> q.vec
    tensor([[0., 0., 0.],
            [0., 0., 0.],
            [0., 0., 0.],
            [0., 0., 0.]], grad_fn=<SliceBackward0>)
    

    Mosaic Augmentation

    We recently included RandomMosaic to apply mosaic image transforms and combine them into one output image. The output image is composed of parts from each sub-image.

    The mosaic transform steps are as follows:

    • Concatenate the selected images into a super-image.
    • Crop out the output image according to the top-left corner and crop size.
    >>> import torch
    >>> from kornia.augmentation import RandomMosaic
    >>> mosaic = RandomMosaic((300, 300), data_keys=["input", "bbox_xyxy"])
    >>> boxes = torch.tensor([[
    ...     [70, 5, 150, 100],
    ...     [60, 180, 175, 220],
    ... ]]).repeat(8, 1, 1)
    >>> input = torch.randn(8, 3, 224, 224)
    >>> out = mosaic(input, boxes)
    >>> out[0].shape, out[1].shape
    (torch.Size([8, 3, 300, 300]), torch.Size([8, 8, 4]))
    

    Edge-aware blurring

    Thanks to @nitaifingerhut

    !wget https://github.com/kornia/data/raw/main/drslump.jpg
    
    import cv2
    import kornia
    import matplotlib.pyplot as plt
    import numpy as np
    import torch
    
    # read the image with OpenCV (BGR) and convert to RGB
    img: np.ndarray = cv2.imread('./drslump.jpg')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    
    # convert to torch tensor and add some noise
    data: torch.Tensor = kornia.image_to_tensor(img, keepdim=False) / 255.  # BxCxHxW
    data -= 0.2 * torch.rand_like(data).abs()
    
    # blur while preserving strong edges and show input and output side by side
    plt.figure(figsize=(12, 8))
    edge_blurred = kornia.filters.edge_aware_blur_pool2d(data, 19)
    plt.imshow(kornia.tensor_to_image(torch.cat([data, edge_blurred], dim=3)))
    
    

    What's Changed

    • 0.8 is too strict for smnn matching, 0.95 is much better default by @ducha-aiki in https://github.com/kornia/kornia/pull/1807
    • fix bug and add scale coef by @ducha-aiki in https://github.com/kornia/kornia/pull/1808
    • No crash matching by @ducha-aiki in https://github.com/kornia/kornia/pull/1810
    • Added FGINN matching by @ducha-aiki in https://github.com/kornia/kornia/pull/1813
    • Added SOLD2 by @rpautrat https://github.com/kornia/kornia/pull/1507 https://github.com/kornia/kornia/pull/1844
    • disable accelerate for macos and pytorch 1.10.2 by @edgarriba in https://github.com/kornia/kornia/pull/1811
    • update flake8 to 5.0.4 and fixes by @edgarriba in https://github.com/kornia/kornia/pull/1818
    • Bump accelerate from 0.10.0 to 0.11.0 by @dependabot in https://github.com/kornia/kornia/pull/1802
    • Bump accelerate from 0.11.0 to 0.12.0 by @dependabot in https://github.com/kornia/kornia/pull/1820
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1821
    • Allowing more than 3/4 dims for total_variation + adding reduction by @nitaifingerhut in https://github.com/kornia/kornia/pull/1815
    • edge aware blur2d by @nitaifingerhut in https://github.com/kornia/kornia/pull/1822
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1828
    • Adds conversions between graphics and vision coordinate frames by @ducha-aiki in https://github.com/kornia/kornia/pull/1823
    • Add Quaternion API by @edgarriba in https://github.com/kornia/kornia/pull/1801
    • fix tests float16 module losses by @MrShevan in https://github.com/kornia/kornia/pull/1809
    • AdaLAM match filtering (clean) by @ducha-aiki in https://github.com/kornia/kornia/pull/1831
    • [Feat] Init Mosaic Augmentation by @shijianjian in https://github.com/kornia/kornia/pull/1713
    • Change embedded Gradio demo to use web component instead of iframe by @NimaBoscarino in https://github.com/kornia/kornia/pull/1835
    • fix tests and warnings by @MrShevan in https://github.com/kornia/kornia/pull/1834

    Full Changelog: https://github.com/kornia/kornia/compare/v0.6.6...v0.6.7

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.7-py2.py3-none-any.whl(551.83 KB)
    kornia-0.6.7.tar.gz(387.79 KB)
  • v0.6.6(Jul 16, 2022)

    Highlights

    ParametrizedLine API

    The first of several integrations to revamp kornia.geometry and align it with Eigen and Sophus. Docs: https://kornia.readthedocs.io/en/latest/geometry.line.html?#kornia.geometry.line.ParametrizedLine Example: https://github.com/kornia/kornia/blob/master/examples/geometry/fit_line2.py
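
    A minimal sketch of the new API (the points and the point_at query are illustrative):

    import torch
    from kornia.geometry.line import ParametrizedLine
    
    # minimal sketch: a 2D line through the origin along the x-axis
    origin = torch.tensor([0.0, 0.0])
    direction = torch.tensor([1.0, 0.0])
    line = ParametrizedLine(origin, direction)
    
    p = line.point_at(2.0)  # point at parameter t=2 along the line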


    Support for macOS and Windows in load_image

    Automated the packaging infra in kornia_rs to handle multi architecture builds. Arm64 soon :) See: https://github.com/kornia/kornia-rs

        # load the image using the rust backend          
        img: Tensor = K.io.load_image(file_name, K.io.ImageLoadType.RGB32)
        img = img[None]  # 1xCxHxW / fp32 / [0, 1]
    

    HuggingFace integration

    We created the Kornia AI org under the HuggingFace platform and are starting to port the tutorials to the HuggingFace kornia org to rapidly show live docs and build community. Link: https://huggingface.co/kornia

    Demos:

    • kornia enhance: https://kornia.readthedocs.io/en/latest/enhance.html#interactive-demo
    • augmentations playground: https://huggingface.co/spaces/kornia/kornia-augmentations-tester

    What's new ?

    • update slack link by @edgarriba in https://github.com/kornia/kornia/pull/1719
    • fixes EarlyStoppping condition by @edgarriba in https://github.com/kornia/kornia/pull/1718
    • Fix warning: meshgrid need indexing argument by @FavorMylikes in https://github.com/kornia/kornia/pull/1629
    • Bump accelerate from 0.8.0 to 0.9.0 by @dependabot in https://github.com/kornia/kornia/pull/1720
    • fixes for half precision in imgwarp by @edgarriba in https://github.com/kornia/kornia/pull/1723
    • Fix transforms for empty boxes and keypoints inputs by @hal-314 in https://github.com/kornia/kornia/pull/1741
    • few mypy fixes by @edgarriba in https://github.com/kornia/kornia/pull/1724
    • Implement project and unproject in PinholeCamera by @YanivHollander in https://github.com/kornia/kornia/pull/1729
    • deprecate filter2D filter3D api by @edgarriba in https://github.com/kornia/kornia/pull/1725
    • fixing doctest in pinhole by @edgarriba in https://github.com/kornia/kornia/pull/1743
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1742
    • Fix/crop transforms by @hal-314 in https://github.com/kornia/kornia/pull/1739
    • Fix Boxes.from_tensor(boxes, mode="vertices") by @hal-314 in https://github.com/kornia/kornia/pull/1740
    • adding rgb_to_y by @nitaifingerhut in https://github.com/kornia/kornia/pull/1734
    • fix typing callable in load storage by @edgarriba in https://github.com/kornia/kornia/pull/1768
    • Add rgb_to_y to all by @ashnair1 in https://github.com/kornia/kornia/pull/1762
    • Fix bug preventing sample wise augmentations by @ashnair1 in https://github.com/kornia/kornia/pull/1761
    • update pytorch ci matrix 1.10.2 and 1.11.0 by @edgarriba in https://github.com/kornia/kornia/pull/1771
    • docs: Fix a few typos by @timgates42 in https://github.com/kornia/kornia/pull/1774
    • Refactor and add tests in get_perspective_transform by @edgarriba in https://github.com/kornia/kornia/pull/1767
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1776
    • update libfacedetection url path by @edgarriba in https://github.com/kornia/kornia/pull/1780
    • enable black in the precommit by @edgarriba in https://github.com/kornia/kornia/pull/1777
    • Bump kornia-rs from 0.0.2 to 0.0.5 by @dependabot in https://github.com/kornia/kornia/pull/1784
    • kornia io support for macos and win by @edgarriba in https://github.com/kornia/kornia/pull/1785
    • deploy docs to gh-pages by @edgarriba in https://github.com/kornia/kornia/pull/1787
    • update pytest 7.1.2; pytest-flake8 1.1.1; flake8 4.0.1 by @edgarriba in https://github.com/kornia/kornia/pull/1786
    • adding weights to positive examples by @MrShevan in https://github.com/kornia/kornia/pull/1765
    • [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in https://github.com/kornia/kornia/pull/1789
    • add KORNIA_CHECK_SAME_DEVICES by @MrShevan in https://github.com/kornia/kornia/pull/1788
    • Bump accelerate from 0.9.0 to 0.10.0 by @dependabot in https://github.com/kornia/kornia/pull/1748
    • Add sphinxcontrib.gtagjs to track docs by @edgarriba in https://github.com/kornia/kornia/pull/1790
    • Add an interactive demo to the kornia.enhance docs by @NimaBoscarino in https://github.com/kornia/kornia/pull/1793
    • Update the Gradio demo URL to point to Kornia HF org by @NimaBoscarino in https://github.com/kornia/kornia/pull/1795
    • Add ParametrizedLine and fit_line by @edgarriba in https://github.com/kornia/kornia/pull/1794
    • add link to interactive augmentations demo by @cceyda in https://github.com/kornia/kornia/pull/1797

    New Contributors

    • @FavorMylikes made their first contribution in https://github.com/kornia/kornia/pull/1629
    • @MrShevan made their first contribution in https://github.com/kornia/kornia/pull/1765
    • @NimaBoscarino made their first contribution in https://github.com/kornia/kornia/pull/1793

    Full Changelog: https://github.com/kornia/kornia/compare/v0.6.5...v0.6.6

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.6-py2.py3-none-any.whl(505.29 KB)
    kornia-0.6.6.tar.gz(351.61 KB)
  • v0.6.5(May 17, 2022)

    :rocket: [0.6.5] - 2022-05-16

    :new: New Features

    • Create kornia.io and implement load_image with rust (#1701)
    • Implement diamond_square and plasma augmentations: RandomPlasmaBrightness, RandomPlasmaContrast, RandomPlasmaShadow (#1700)
    • Added RandomRGBShift augmentation (#1694)
    • Added STE gradient estimator (#1666)
    • More epipolar geometry metrics (+linalg utility) (#1674)
    • Add Lovasz-Hinge/Softmax losses (#1682)
    • Add adjust_sigmoid and adjust_log initial implementation (#1685)
    • Added distribution mapper (#1667)

    :lady_beetle: Bug fixes

    • Fixes filter2d's output shape shrink when padding='same' (#1661)
    • fix: added eps in geometry/rotmat_to_quaternion (#1665)
    • [fix] receive num_features as an arg to KeyNetDetector constructor (#1686)

    :zap: Improvements

    • Add reduction option to MS_SSIMLoss (#1655)
    • Making epipolar metrics work with volumetric tensors (#1656)
    • Add get_safe_device util (#1662)
    • Added antialiasing option to Resize augmentation (#1687)
    • Use nearest neighbour interpolation for masks (#1630)
    • grayscale to rgb for torch.uint8 (#1705)

    :woman_technologist: :man_technologist: We would like to thank all contributors for this new release ! @Jonas1312 @nitaifingerhut @qwertyforce @ashnair1 @ducha-aiki @z0gSh1u @simon-schaefer @shijianjian @edgarriba @HJoonKwon @ChristophReich1996 @Tanmay06 @dobosevych @miquelmarti @Oleksandra2020

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.5-py2.py3-none-any.whl(500.79 KB)
    kornia-0.6.5.tar.gz(348.46 KB)
  • v0.6.4(Mar 21, 2022)

    :rocket: [0.6.4] - 2022-03-21

    :new: New Features

    • Adds MS-SSIMLoss reconstruction loss function (#1551)
    • Added HyNet descriptor (#1573)
    • Add KeyNet detector (#1574)
    • Add RandomPlanckianJitter in color augmentations (#1607)
    • Add Jina AI QAbot to Kornia documentation (#1628)
    • Add draw_convex_polygon (#1636)

    :lady_beetle: Bug fixes

    • RandomCrop fix and improvement (#1571)
    • Fix draw_line produce wrong output for coordinates larger than uint8
    • Fix mask bug for loftr (#1580)
    • Fix gradient bug for distance_transform (#1584)
    • Fix translation sampling in AffineGenerator3D (#1581)
    • Fix AugmentationSequential bbox keypoints transformation fix (#1570)
    • Fix CombineTensorPatches (#1558)
    • Fix overblur in AA (#1612)

    :exclamation: Changes

    • Deprecated return_transform, enabled 3D augmentations in AugmentationSequential (#1590)

    :zap: Improvements

    • Making compute_correspond_epilines work with fundamental and point of volumetric tensor (#1585)
    • Update batch shape when augmentations change size of image (#1609)
    • Remap accepts arbitrary grid size (#1617)
    • Rename variables named 'input' to 'sample' (in tests). (#1614)
    • Remove half log2 in extract_patches (#1616)
    • Add orientation-preserving option for AffNet and make it default (#1620)
    • Add option for sampling_method in 2d perspective transform generation (#1591) (#1592)
    • Fix adjust brightness (#1586)
    • Added default params for laf construction from xy and new tensor shape check (#1633)
    • Make nms2d jittable (#1637)
    • Add fn to automatically compute padding (#1634)
    • Add pillow_like option for ColorJitter to match torchvision. (#1611)

    :woman_technologist: :man_technologist: We would like to thank all contributors for this new release ! @ducha-aiki @edgarriba @shijianjian @juliendenize @ashnair1 @KhaledSharif @Parskatt @shazhou2015 @JoanFM @nrupatunga @kristijanbartol @miquelmarti @riegerfr @nitaifingerhut @dichen-cd @lamhoangtung @hasibzunair @wendy-xiaozong @rsomani95 @huuquan1994 @twsl

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.4-py2.py3-none-any.whl(481.81 KB)
    kornia-0.6.4.tar.gz(335.72 KB)
  • v0.6.3(Jan 31, 2022)

    :rocket: [0.6.3] - 2022-01-30

    :new: New Features

    • Update CI to pytorch 1.10.1 (#1518)
    • Added Hanning kernel, prepare for KCF tracking (#1519)
    • Add distance transform implementation (#1490)
    • Add Resize augmentation module (#1545)

    :lady_beetle: Bug fixes

    • Precompute padding parameters when RandomCrop aug in container (#1494)
    • Padding error with RandomCrop #1520
    • Fix correct shape after cropping when forwarding parameters (#1533)
    • Fixed #1534 nested augmentation sequential bug (#1536)
    • Fixes to device in augmentations (#1546)
    • Bugfix for larger MotionBlur kernel size ranges (#1543)
    • Fix RandomErasing applied to mask keys (#1541)

    :exclamation: Changes

    • Restructure augmentation package (#1515)

    :zap: Improvements

    • Add missing keepdims with fixed type (#1488)
    • Allow to pass a second K to distort and undistort points (#1506)
    • Augmentation Sequential with a list of bboxes as a batch (#1497)
    • Added Devcontainer for development (#1515)
    • Improve the histogram_matching function (#1532)

    :woman_technologist: :man_technologist: We would like to thank all contributors for this new release ! @ducha-aiki @edgarriba @shijianjian @julien-blanchon @lferraz @miquelmarti @twsl @nitaifingerhut @eungbean @aaroswings @huuquan1994 @rsomani95

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.3-py2.py3-none-any.whl(463.62 KB)
    kornia-0.6.3.tar.gz(321.18 KB)
  • v0.6.2(Dec 3, 2021)

    :rocket: [0.6.2] - 2021-12-03

    :new: New Features

    • Add face detection API (#1469)
    • Add ObjectDetectorTrainer (#1414)
    • Add container operation weights and OneOf documentation (#1443)
    • Add oriented contraint check to Homography RANSAC (#1453)
    • Add background color selection in warp_perspective (#1452)
    • Add draw_line image utility (#1456)
    • Add Bounding Boxes API (#1304)
    • Add histogram_matching functionality (#1395)

    :lady_beetle: Bug fixes

    • fix catch type for torch.svd error (#1431)
    • Fix for nested AugmentationSequential containers (#1467)
    • Use common bbox format xywh (#1472)

    :exclamation: Changes

    • Add padding_mode for RandomElasticTransform augmentation (#1439)
    • Expose inliers sum to HomographyTracker (#1463)

    :zap: Improvements

    • Switch to one-way error RANSAC for speed-up (#1454)
    • Few improvements on homography tracking (#1434)
    • Enable all bandit tests, add separate hook for tests (#1437)
    • Merge homography_warp to warp_perspective (#1438)
    • Random generator refactor (#1459)

    :woman_technologist: :man_technologist: We would like to thank all contributors for this new release ! @ducha-aiki @edgarriba @chinhsuanwu @dobosevych @shijianjian @rvorias @fmiotello @hal-314 @trysomeway @miquelmarti @calmdown13 @twsl @Abdelrhman-Hosny

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.2-py2.py3-none-any.whl(391.70 KB)
    kornia-0.6.2.tar.gz(310.50 KB)
  • v0.6.1(Oct 22, 2021)

  • v0.6.0(Oct 22, 2021)

    :rocket: Release Note (0.6.0)

    Release time: 2021-10-22

    :new: New Features

    • Add Training API (#1307)
    • Added combine patches (#1309)
    • Add semantic segmentation trainer (#1323)
    • Add vanilla LO-RANSAC (#1335)
    • Add Lambda function module (#1346)
    • Add support for YUV420 and YUV422 to complement current YUV444 (#1360)
    • Add raw to rgb color conversion (#1380)
    • Implement separable_filter2d (#1385)
    • Add MobileViT to contrib (#1388)
    • Add solve_pnp_dlt (#1349)
    • Add function image_list_to_tensor to utils (#1393)
    • Add undistort_image function (#1303)
    • Create kornia.metrics submodule (#1325)
    • Add Image Stitching API (#1358)
    • Add Homography Tracker API (#1389)

    :exclamation: Changes

    • Refactor library namespaces [pre-release][0.6-rc1] (#1412)
    • Deprecate PyTorch 1.6/1.7 and add 1.9.1 (#1399)

    :zap: Improvements

    • Improve bbox_to_mask (#1351)
    • Refactor unfold->conv for morphology backbone (#1107)
    • Improve focal loss for numerical stability (#1362)
    • Add more border_type options for filter2D (#1375)
    • Replace deprecated torch.qr (#1376)
    • Add special case hardcoded implementation for local features speed up (#1387)
    • Enable non/batched connected components (#1193)
    • Remove warnings during testing (#1401)

    :lady_beetle: Bug fixes

    • Fix binary focal loss (#1313)
    • Fix kornia.geometry.subpix.spatial_soft_argmax imports (#1318)
    • Fixed a simple typo in init.py (#1319)
    • Fix path to dev requirements file in a setup_dev_env.sh (#1324)
    • Fix bug in create_meshgrid3d along depth (#1330)
    • Fix anisotropic scale error (#1340)
    • Fix rgb_to_hsv for onnx (#1329)
    • Fixed useless return in ransac.py (#1352)
    • Fixed classificationhead typo and leave out some of the guesswork (#1354)
    • Fix clahe differentiability and tests (#1356)
    • Fixes singular matrix inverse/solve for RANSAC and ConvQuad3d (#1408)
    • Change intermediate datatype to fix imgwarp (#1413)

    :woman_technologist: :man_technologist: We would like to thank all contributors for this new release ! @AK391 @cclauss @edgarriba @ducha-aiki @isaaccorley @justanhduc @jatentaki @shijianjian @shiyangc-intusurg @SravanChittupalli @thatbrguy @nvshubhsharma @PWhiddy @oskarflordal @tacoelho @YanivHollander @jhacsonmeza

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.6.0-py2.py3-none-any.whl(358.50 KB)
    kornia-0.6.0.tar.gz(277.86 KB)
  • 0.5.11(Sep 19, 2021)

    :rocket: Release Note (0.5.11)

    Release time: 2021-09-19

    :new: New Features

    • Add Vision Transformer (ViT) (#1296)
    • Add ImageRegistrator API (#1253)
    • Add LoFTR inference (#1218)
    • Added differentiable Hausdorff Distance (HD) loss (#1254)
    • Add PadTo to kornia.augmentation (#1286)

    :zap: Code refactor

    • Return all learned modules by default in eval() mode (#1266)
    • Enable ImageSequential and VideoSequential to AugmentationSequential (#1231)
    • Specify that angles are in radians (#1287)
    • Removed deprecated codes for v6.0 (#1281)

    :lady_beetle: Bug fixes

    • Fix save_pointcloud_ply fn counting point with inf coordinates (#1263)
    • Fixes torch version parse and add temporal packaging dependency (#1284)
    • Fix issue of image_histogram2d (#1295)

    :woman_technologist: :man_technologist: We would like to thank all contributors for this new release ! @Abdelrhman-Hosny @ducha-aiki @edgarriba @EStorm21 @lyhyl @shijianjian @thatbrguy

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.5.11-py2.py3-none-any.whl(328.17 KB)
    kornia-0.5.11.tar.gz(254.24 KB)
  • 0.5.8(Aug 6, 2021)

    Kornia release

    [0.5.8] - 2021-08-06

    Added

    • Add the connected components labeling algorithm (#1184)

    Fixed

    • Partial fix for horizontal and vertical flips (#1166)
    • Fix even kernel and add test (#1183)
    • Fix wrong source points for RandomThinPlateSpline (#1187)
    • Fix RandomElasticTransform ignores same_on_batch (#1189)
    • Fixed bugs in patchsequential. Remove fill_diagonal operation for better ONNX support (#1178)

    Changed

    • Differentiable image histogram using kernel density estimation (#1172)

    Contributors

    @bkntr @bsuleymanov @ducha-aiki @edgarriba @hal-314 @kingsj0405 @shijianjian

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.5.8-py2.py3-none-any.whl(296.83 KB)
    kornia-0.5.8.tar.gz(233.13 KB)
  • 0.5.7(Jul 26, 2021)

  • 0.5.6(Jul 12, 2021)

    Kornia 0.5.6 release

    [0.5.6] - 2021-07-12

    Added

    • Added mix augmentations in containers (#1139)

    Fixed

    • Fixed non-4-dim input error for sequential (#1146)

    Changed

    • Moving bbox-related functionality to bbox module (#1103)
    • Optimized version of hls_to_rgb and rgb_to_hls (#1154)

    Removed

    • Remove numpy dependency (#1136)

    Contributors

    @dkoguciuk @edgarriba @lferraz @shijianjian

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.5.6-py2.py3-none-any.whl(294.10 KB)
    kornia-0.5.6.tar.gz(230.52 KB)
  • 0.5.5(Jun 27, 2021)

    Kornia 0.5.5 release

    [0.5.5] - 2021-06-27

    Added

    Changed

    • Change GaussianBlur to RandomGaussianBlur (#1118)
    • Update ci with pytorch 1.9.0 (#1120)
    • Changed option for mean and std to be tuples in normalization (#987)
    • Adopt torch.testing.assert_close (#1031)

    Removed

    • Remove numpy import (#1116)

    Contributors

    @copaah @ducha-aiki @edgarriba @eugene87222 @JoanFM @justanhduc @pmeier @shijianjian

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.5.5-py2.py3-none-any.whl(285.53 KB)
    kornia-0.5.5.tar.gz(223.74 KB)
  • 0.5.4(Jun 11, 2021)

    Kornia 0.5.4 release

    [0.5.4] - 2021-06-11

    Added

    • Add Canny edge detection (#1020)
    • Added Batched forward function (#1058)
    • Added denormalize homography function (#1061)
    • Added more augmentations containers (#1014)
    • Added calibration module and Undistort 2D points function (#1026)
    • Added patch augmentation container (#1095)

    Fixed

    Changed

    • Resize regardless of number of dims, considering the last two dims as image (#1047)
    • Raise error if converting a uint8 image to gray with float weights (#1057)
    • Filter 2D->2d, 3D->3d (#1069)
    • Removed augmentation functional module. (#1067)
    • Make Morphology compatible with both OpenCV and Scipy (#1084)

    Contributors

    @asottile @Borda @ducha-aiki @edgarriba @jhacsonmeza @justanhduc @Manza12 @priba @shijianjian

    Special thanks to @Borda @carmocca @asottile for the help to improve the code health of the package.

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.5.4-py2.py3-none-any.whl(278.97 KB)
    kornia-0.5.4.tar.gz(218.61 KB)
  • 0.5.3(May 30, 2021)

    Kornia 0.5.3 release

    [0.5.3] - 2021-05-29

    Added

    • Added inverse for augmentations (#1013)
    • Add advanced augmentations: RandomFisheye, RandomElasticTransform, RandomThinPlateSpline, RandomBloxBlur (#1015)

    Fixed

    • Correct Sobel test_noncontiguous. Nothing was tested before. (#1018)
    • Fixing #795: find_homography_dlt_iterated sometimes fails (#1022)

    Changed

    • Refactorization of the morphology package (#1034)
    • Optimised clipping in clahe and some other minor optimisation (#1035)

    Contributors

    @Borda @dkoguciuk @edgarriba @Manza12 @lferraz @priba @shijianjian

    If we forgot someone let us know :sunglasses:

    Source code(tar.gz)
    Source code(zip)
    kornia-0.5.3-py2.py3-none-any.whl(274.51 KB)
    kornia-0.5.3.tar.gz(215.01 KB)
  • 0.5.2(May 14, 2021)

    Kornia 0.5.2 release

    [0.5.2] - 2021-05-14

    Added

    • Added unsharp mask filtering (#1004)

    Fixed

    • Fixed angle axis to quaternion order bug (#926)
    • Fixed type error for lab_to_rgb conversion when using coremltools. (#1002)

    Changed

    • Mask with unbatched motion from essential choose solution (#998)

    thanks to all your contributions @amonszpart @AnimeshMaheshwari22 @askaradeniz @edgarriba @jatentaki

    The Kornia Team :nerd_face:

    Source code(tar.gz)
    Source code(zip)
  • 0.5.1(Apr 30, 2021)

    Kornia 0.5.1 release

    Highlights

    In this patch release we include the following features

    • Fast version of RandomCropResize, RandomCrop
    • Add antialias support in kornia.geometry.resize
    • Added HardNet8 deep features
    • Experimental support for torch.float16
    • Added the following modules for augmentations (a short usage sketch follows this list):
      • ImageToTensor
      • RandomInvert
      • RandomChannelShuffle
      • RandomGaussianNoise
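
    A minimal usage sketch of one of the new modules (the noise parameters below are illustrative):

    import torch
    import kornia
    
    # minimal sketch: add Gaussian noise to a batch of images
    x = torch.rand(2, 3, 32, 32)
    aug = kornia.augmentation.RandomGaussianNoise(mean=0.0, std=0.05, p=1.0)
    out = aug(x)  # same shape as the input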

    [0.5.1] - 2021-04-30

    Added

    • Added dtype for create_mesh (#919)
    • Added Hardnet8 (#955)
    • Added normalize boolean for remap (#921)
    • Added custom weights option for rgb2gray (#944)
    • Added fp16 support (#963)
    • Added ImageToTensor module and resize for non-batched images (#978)
    • Add more augmentations (#960)
    • Anti alias resize (#989)

    Changed

    • Improve kornia morphology (#965)
    • Improve cuda ci workflow speed (#975)
    • Refactor augmentation module (#948)
    • Implement fast version of crop function in augmentations (#967)
    • Implement missing jit ops in kornia.geometry.transform (#981)

    Fixed

    • Fixed RandomAffine translation range check (#917)
    • Fixed the issue of NaN gradients by adding epsilon in focal loss (#924)
    • Allow crop size greater than input size. (#957)
    • Fixed RandomCrop bug (#951)

    Removed

    • Deprecate some augmentation functionals (#943)
    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Mar 17, 2021)

    Kornia 0.5.0 release

    In this release we have focused on bringing more classic computer vision functionalities to the PyTorch ecosystem, like morphological operators and more diversity in deep local descriptors, color conversions and drawing functions. In addition, we have worked towards improving the integration with TPU and better support for TorchScript.

    Highlights

    Morphological Operators

    As a highlight we include kornia.morphology, a module that implements several functionalities to work with morphological operators on high-dimensional tensors, with differentiability. Contributed by @Juclique

    Morphology implements the following methods: dilation, erosion, open, close, gradient, top_hat and black_hat.

    from kornia import morphology as morph
    
    dilated_image = morph.dilation(tensor, kernel) # Dilation
    plot_morph_image(dilated_image) # Plot
    


    See a full tutorial here: https://github.com/kornia/tutorials/blob/master/source/morphology_101.ipynb

    Deep Descriptors

    We have added a set of local feature-related models: MKDDescriptor (#841), implemented and ported to kornia by @manyids2; we also ported TFeat, AffNet and OriNet from the authors' repos (#846).

    Here is a notebook showing the usage and benefits of the new features. We also show how to seamlessly integrate kornia and OpenCV code via the new conversion library kornia_moons.

    Also: exposed set_laf_orientation function #869
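
    For reference, a minimal sketch of describing local patches with the new MKD descriptor (the patch shapes are illustrative; the descriptor dimensionality depends on the configuration):

    import torch
    import kornia.feature as KF
    
    # minimal sketch: describe a batch of 32x32 grayscale patches with MKD
    patches = torch.rand(16, 1, 32, 32)  # Bx1x32x32 local patches
    mkd = KF.MKDDescriptor(patch_size=32)
    descs = mkd(patches)                 # BxD descriptors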


    Video Augmentations

    We include a new operator to perform augmentations on videos: VideoSequential. The module is based on nn.Sequential and can chain our existing kornia.augmentation operators over multi-dimensional video tensors. Contributed by @shijianjian

    import kornia
    import torchvision
    
    # read a video clip and reshape it to BxCxTxHxW in [0, 1]
    clip, _, _ = torchvision.io.read_video("drop.avi")
    clip = clip.permute(3, 0, 1, 2)[None] / 255.  # To BCTHW
    
    aug_list = kornia.augmentation.VideoSequential(
        kornia.augmentation.ColorJitter(0.1, 0.1, 0.1, 0.1, p=1.0),
        kornia.augmentation.RandomAffine(360, p=1.0),
        data_format="BCTHW",
        same_on_frame=False,
    )
    
    out = aug_list(clip)
    


    See a full example in the following Colab: https://colab.research.google.com/drive/12dmHNkvEQrG-PHElbCXT9FgCr_aAGQSI?usp=sharing

    Draw functions

    We include an experimental draw_rectangle functionality implemented with pure torch.Tensor operations. Contributed by @mmathew23

    # x_rgb: input image tensor of shape BxCxHxW
    rects = torch.tensor([[[110., 50., 310., 275.], [325., 100., 435., 275.]]])
    color = torch.tensor([255., 0., 0.])
    
    x_out = K.utils.draw_rectangle(x_rgb, rects, color)
    


    See full example here: https://colab.research.google.com/drive/1me_DxgMvsHIheLh-Pao7rmrsafKO5Lg3?usp=sharing

    More user contrib

    Infrastructure

    • Update CI to pytorch 1.7.x and 1.8.0 @edgarriba
    • Improve testing matrix with different versions
    • TPU support @edgarriba @shijianjian
    • Better JIT support @edgarriba @shijianjian @ducha-aiki
    • Improved and tested docs @shijianjian @edgarriba

    Deprecations

    • Deprecated kornia.geometry.warp module.
      • DepthWarper is now in kornia.geometry.depth
      • HomographyWarper and related functions are now inside kornia.geometry.transform.
    • Deprecated kornia.contrib module.
      • max_blur_pool2d is now in kornia.filters
    • Dropped support of Pytorch 1.5.1 #854

    Warp and Crop

    We refactored the interface of the functions warp_perspective, warp_affine, center_crop, crop_and_resize and crop_by_boxes in order to expose to the user the parameters needed by grid_sample (mode, padding_mode, align_corners). #896

    The param align_corners now defaults to None, which maps to True in case the user does not specify it. This comes from the motivation to match the behavior of the warping functions with OpenCV.

    Example of warp_perspective:

    def warp_perspective(src: torch.Tensor, M: torch.Tensor, dsize: Tuple[int, int],
                         mode: str = 'bilinear', padding_mode: str = 'zeros',
                         align_corners: Optional[bool] = None) -> torch.Tensor:
    

    Please review the full release notes here: https://github.com/kornia/kornia/blob/master/CHANGELOG.md

    Thanks to all our contributors !!! :tada: :sunglasses:

    Source code(tar.gz)
    Source code(zip)
  • v0.4.1(Oct 20, 2020)

    Kornia 0.4.1 release

    Highlights

    We include new features for 3D augmentations:

    • RandomCrop3D
    • CenterCrop3D
    • RandomMotionBlur3D
    • RandomEqualize3D

    A few more core functionalities to work on 3D volumetric tensors (see the sketch after this list):

    • warp_affine3d
    • warp_perspective3d
    • get_perspective_transform3d
    • crop_by_boxes3d
    • motion_blur3d
    • equalize3d
    • warp_grid3d
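
    A minimal sketch of one of these (the volume shape and crop size are illustrative):

    import torch
    import kornia
    
    # minimal sketch: randomly crop a 16x24x24 sub-volume from a BxCxDxHxW tensor
    volume = torch.rand(2, 1, 32, 48, 48)
    aug = kornia.augmentation.RandomCrop3D((16, 24, 24), p=1.0)
    out = aug(volume)  # 2x1x16x24x24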

    Details changes

    Added

    • Update docs for get_affine_matrix2d and get_affine_matrix3d (#618)
    • Added docs for solarize, posterize, sharpness, equalize (#623)
    • Added tensor device conversion for solarize params (#624)
    • Added rescale functional and transformation (#631)
    • Added Mixup data augmentation (#609)
    • Added equalize3d (#639)
    • Added decompose 3x4 projection matrix (#650)
    • Added normalize_min_max functionality (#684)
    • Added random equalize3d (#653)
    • Added 3D motion blur (#713)
    • Added 3D volumetric crop implementation (#689)
      • warp_affine3d
      • warp_perspective3d
      • get_perspective_transform3d
      • crop_by_boxes3d
      • warp_grid3d

    Changed

    • Replace convolution with unfold in contrib.extract_tensor_patches (#626)
    • Updates Affine scale with non-isotropic values (#646)
    • Enabled param p for each augmentation (#664)
    • Enabled RandomResizedCrop batch mode when same_on_batch=False (#683)
    • Increase speed of transform_points (#687)
    • Improves find_homography_dlt performance; weights params made optional (#690)
    • Enable variable side resizing in kornia.resize (#628)
    • Added Affine transformation as nn.Module (#630)
    • Accelerate augmentations (#708)

    Fixed

    • Fixed error in normal_transform_pixel3d (#621)
    • Fixed pipelining multiple augmentations return wrong transformation matrix (#645)
    • Fixed flipping returns wrong transformation matrices (#648)
    • Fixed 3d augmentations return wrong transformation matrix (#665)
    • Fix the SOSNet loading bug (#668)
    • Fix/random perspective returns wrong transformation matrix (#667)
    • Fixes Zca inverse transform (#695)
    • Fixes Affine scale bug (#714)

    Removed

    • Removed warp_projective (#689)

    Contributors

    @gaurav104 @shijianjian @mshalvagal @pmeier @ducha-aiki @qxcv @FGeri @vribeiro1 @ChetanPatil28 @alopezgit @jatentaki @dkoguciuk @ceroytres @ag14774

    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Aug 6, 2020)

    Kornia 0.4.0 release

    In this release we are including the following main features:

    • Support to PyTorch v1.6.0.
    • Local descriptors matching, homography and epipolar geometry API.
    • 3D augmentations and low level API to work with volumetric data.


    Highlights

    Local features matching

    We include a kornia.feature.matching API to perform local descriptor matching, such as the classical nearest neighbour (NN) and its derived versions.

    import torch
    import kornia as K
    
    desc1 = torch.rand(2500, 128)
    desc2 = torch.rand(2500, 128)
    
    dists, idxs = K.feature.matching.match_nn(desc1, desc2)  # 2500 / 2500x2
    

    Homography and epipolar geometry

    We also introduce kornia.geometry.homography, which includes different functionalities to work with homographies and differentiable estimators based on the DLT formulation and iteratively-reweighted least squares (IRWLS).

    
    import torch
    import kornia as K
    
    pts1 = torch.rand(1, 8, 2)
    pts2 = torch.rand(1, 8, 2)
    H = K.find_homography_dlt(pts1, pts2, weights=torch.rand(1, 8))  # 1x3x3
    

    In addition, we have ported some of the existing algorithms from opencv.sfm to PyTorch under kornia.geometry.epipolar, which includes different functionalities to work with fundamental, essential or projection matrices, and triangulation methods useful for Structure from Motion problems.
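
    For example, a minimal sketch of estimating a fundamental matrix and scoring correspondences (the point sets here are random placeholders):

    import torch
    import kornia.geometry.epipolar as epi
    
    # minimal sketch: estimate F from 8 correspondences and compute the per-point Sampson error
    pts1 = torch.rand(1, 8, 2)
    pts2 = torch.rand(1, 8, 2)
    weights = torch.ones(1, 8)
    
    F = epi.find_fundamental(pts1, pts2, weights)       # 1x3x3
    err = epi.sampson_epipolar_distance(pts1, pts2, F)  # 1x8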

    3D augmentations and volumetric

    We expand kornia.augmentation with a series of operators to perform 3D augmentations for volumetric data BxCxDxHxW. In this release, we include the following first set of geometric 3D augmentation methods:

    • RandomDepthicalFlip3D (along depth axis)
    • RandomVerticalFlip3D (along height axis)
    • RandomHorizontalFlip3D (along width axis)
    • RandomRotation3D
    • RandomAffine3D

    The API for 3D augmentations works the same as for 2D image augmentations:

    import torch
    import kornia as K
    
    x = torch.eye(3).repeat(3, 1, 1)
    aug = K.augmentation.RandomVerticalFlip3D(p=1.0)
    
    print(aug(x))
    
    tensor([[[[[0., 0., 1.],
                [0., 1., 0.],
                [1., 0., 0.]],
    
                [[0., 0., 1.],
                [0., 1., 0.],
                [1., 0., 0.]],
    
                [[0., 0., 1.],
                [0., 1., 0.],
                [1., 0., 0.]]]]])
    

    Finally, we also introduce a low-level API to perform 4D feature transformations with kornia.warp_projective, and we extend the filtering operators to support 3D kernels with kornia.filter3D.
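
    A minimal sketch of the new 3D filtering (the box kernel and tensor shapes are illustrative):

    import torch
    import kornia
    
    # minimal sketch: apply a 3x3x3 box kernel to a volumetric tensor
    volume = torch.rand(1, 1, 8, 16, 16)    # BxCxDxHxW
    kernel = torch.ones(1, 3, 3, 3) / 27.0  # 1 x kD x kH x kW
    out = kornia.filter3D(volume, kernel)   # same shape as the input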

    More 2d operators

    We also expand the list of 2D image augmentations based on the paper AutoAugment: Learning Augmentation Policies from Data (a short usage sketch follows this list).

    • Solarize
    • Posterize
    • Sharpness
    • Equalize
    • RandomSolarize
    • RandomPosterize
    • RandomSharpness
    • RandomEqualize
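
    A minimal sketch of one of these operators (the threshold values are illustrative):

    import torch
    import kornia
    
    # minimal sketch: randomly solarize a batch of images
    x = torch.rand(2, 3, 32, 32)
    aug = kornia.augmentation.RandomSolarize(thresholds=0.1, additions=0.1, p=1.0)
    out = aug(x)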

    Improvements

    • add zca whitening (#458)
    • add epipolar geometry package (#569)
    • Jit warp perspective (#574)
    • Autoaugment functions. (#571)
    • Dog and fix features (#591)
    • implement filter3D (#575)
    • Implement warp_projective (#587)
    • Feature matching and H/F/E estimation for SFM (#552)
    • 3D augmentations (#592)

    Breaking changes

    • Create kornia.enhance submodule (#614); see details here

    Bugs/Fixes

    • fixed affine 2d shearing matrix translations (#612)
    • SIFTdesc now throws an exception when the input parameters are incompatible (#598)
    • back to group conv backend for filter2d (#600)
    • updates sosnet git paths (#606)

    Docs

    • Updated doc & example for augmentation (#583)
    • fix Tversky equation (#579)
    • clean docs warnings (#604)
    • add kornia.geometry.homography docs (#608)
    • create kornia.geometry.subpix (#610)

    Dev

    • improve conftest fixtures and remove device, dtype imports (#568)
    • pin versions for pytest plugins and fix flake8 issues (#580)
    • made kornia versions explicit to pytorch version (#597)
    Source code(tar.gz)
    Source code(zip)
  • v0.3.2(Aug 6, 2020)

  • v0.3.1(May 10, 2020)

    Kornia 0.3.1 release

    This release mainly introduces the following items:

    • Add support to Python 3.8

    • Exposes and fixes issues around align_corners.

    • Improve the testing infrastructure by adding parametrization for different devices and dtypes, and flake8/mypy support through pytest, caching intermediate results. Test usage example:

      pytest -v --device cpu,cuda --dtype float16,float32,float64 --flake8 --mypy

    Improvements

    • Update to python 3.8 (#550)
    • Improve testing framework (#560)
    • Local feature fixes and nms improvements (#545)
    • Random motion blur improvements (#562)

    Fixes

    • Expose align_corners everywhere, where interpolation occurs (#546)
    • Soft-argmax test fixes, renaming and enables jit (#553)

    Bugs

    • Fix tests in TestSpatialSoftArgmax2d (#544)

    Docs

    • Updated docstring for augmentation module (#554)
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Apr 27, 2020)

    Kornia 0.3.0 release

    Today we released 0.3.0, which aligns with the PyTorch release cycle and includes:

    • Full support to PyTorch v1.5.
    • Semi-automated GPU tests coverage.
    • Documentation has been reorganized [docs]
    • Data augmentation API compatible with torchvision v0.6.0.
    • Good integration with the ecosystem, e.g. PyTorch Lightning.

    For more detailed changes check out v0.2.1 and v0.2.2.

    Highlights

    Data Augmentation

    We provide kornia.augmentation, a high-level framework that implements kornia core functionalities and is fully compatible with torchvision, supporting batched mode, multiple devices (CPU, GPU, and XLA/TPU coming), auto-differentiation, and the ability to retrieve (and chain) the applied geometric transforms. To check how to reproduce torchvision results in kornia, refer to this Colab: Kornia vs. Torchvision @shijianjian

    import torch
    import kornia as K
    import torchvision as T
    
    # kornia
    
    transform_fcn = torch.nn.Sequential(
      K.augmentation.RandomAffine(
        [-45., 45.], [0., 0.5], [0.5, 1.5], [0., 0.5], return_transform=True),
      K.color.Normalize(0.1307, 0.3081),
    )
    
    # torchvision
    
    transform_fcn = T.transforms.Compose([
      T.transforms.RandomAffine(
        [-45., 45.], [0., 0.5], [0.5, 1.5], [0., 0.5]),
      T.transforms.ToTensor(),
      T.transforms.Normalize((0.1307,), (0.3081,)),
    ])
    

    Ecosystem compatibility

    Kornia has been designed to be very flexible in order to be integrated into other existing frameworks. See the example below of how easily you can define a custom data augmentation pipeline to later be integrated into any training framework such as PyTorch Lightning. We provide examples in [here] and [here].

    import torch
    import torch.nn as nn
    import kornia as K
    
    class DataAugmentationPipeline(nn.Module):
        """Module to perform data augmentation using Kornia on torch tensors."""
        def __init__(self, apply_color_jitter: bool = False) -> None:
            super().__init__()
            self._apply_color_jitter = apply_color_jitter
    
            self._max_val: float = 1024.
    
            self.transforms = nn.Sequential(
                K.augmentation.Normalize(0., self._max_val),
                K.augmentation.RandomHorizontalFlip(p=0.5)
            )
    
            self.jitter = K.augmentation.ColorJitter(0.5, 0.5, 0.5, 0.5)
    
        @torch.no_grad()  # disable gradients for efficiency
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x_out = self.transforms(x)
            if self._apply_color_jitter:
                x_out = self.jitter(x_out)
            return x_out
    

    GPU tests

    Now easy to run GPU tests with pytest --typetest cuda

    Source code(tar.gz)
    Source code(zip)
  • v0.2.2(Apr 26, 2020)

    Kornia 0.2.2 Release Notes

    This release is a checkpoint with minimal data augmentation API stability, plus fixes to some GPU tests, before kornia upgrades to PyTorch v1.5.0.

    • API changes
    • Improvements
    • Bug Fixes
    • Documentation

    API changes

    • Decoupled return_transform from apply_* function (#534)

    Improvements

    • improve setup packaging and build manywheel script (#543)

    Bug Fixes

    • fix broken gpu tests (#538)
    • update sosnet urls (#541)

    Documentation

    • reorganises color docs and adds ycbcr (#540)
    • reorganise documentation in subsections (#542)
    Source code(tar.gz)
    Source code(zip)
  • v0.2.1(Apr 21, 2020)

    Kornia 0.2.1 Release Notes

    • Highlights
    • API changes
    • New Features
    • Improvements
    • Bug Fixes
    • Performance

    Highlights

    In this release we support compatibility between kornia.augmentation and torchvision.transforms.

    We now support all the same existing operations with torch.Tensor on the GPU, with extra features such as returning, for each operator, the transformation matrix generated to produce that transformation.

    import torch
    import kornia as K
    import torchvision as T
    
    # kornia
    
    transform_fcn = torch.nn.Sequential(
      K.augmentation.RandomAffine(
        [-45., 45.], [0., 0.5], [0.5, 1.5], [0., 0.5], return_transform=True),
      K.color.Normalize(0.1307, 0.3081),
    )
    
    # torchvision
    
    transform_fcn = T.transforms.Compose([
      T.transforms.RandomAffine(
        [-45., 45.], [0., 0.5], [0.5, 1.5], [0., 0.5]),
      T.transforms.ToTensor(),
      T.transforms.Normalize((0.1307,), (0.3081,)),
    ])
    

    Check the online documentations with the updated API [DOCS]

    Check this Google Colab to see how to reproduce same results [Colab]

    kornia.augmentation as a framework

    In addition, we have re-designed kornia.augmentation in such a way that users can easily contribute more operators, or just use it as a framework to create their own custom operators.

    Each of the kornia.augmentation modules inherit from AugmentationBase and one can easily define a new operator by creating a subclass and overriding a couple of methods.

    Let's take a look at a custom MyRandomRotation. The class inherits from AugmentationBase, making it an nn.Module so that it can be stacked in an nn.Sequential to compute chained transformations.

    To implement a new functionality, two things are needed: override get_params and apply.

    The get_params receives the shape of the input tensor and returns a dictionary with the parameters to use in the apply function.

    The apply function receives as input a tensor and the dictionary defined in get_params, and returns a tuple with the transformed input and the transformation applied to it.

    from typing import Dict
    
    import torch
    import kornia as K
    from kornia.augmentation import AugmentationBase  # base class described above
    
    class MyRandomRotation(AugmentationBase):
        def __init__(self, angle: float, return_transform: bool = True) -> None:
            super(MyRandomRotation, self).__init__(self.apply, return_transform)
            self.angle = angle
    
        def get_params(self, batch_shape: torch.Size) -> Dict[str, torch.Tensor]:
            angles_rad: torch.Tensor = torch.rand(batch_shape) * K.pi
            angles_deg = K.rad2deg(angles_rad) * self.angle
            return dict(angles=angles_deg)
    
        def apply(self, input: torch.Tensor, params: Dict[str, torch.Tensor]):
            # compute transformation
            H, W = input.shape[-2:]
            angles: torch.Tensor = params['angles'].type_as(input)
            center = torch.tensor([[W / 2, H / 2]]).type_as(input)
            transform = K.get_rotation_matrix2d(
                center, angles, torch.ones_like(angles))
    
            # apply transformation
            output = K.warp_affine(input, transform, (H, W))
    
            return (output, transform)
    
    # how to use it
    
    # load an image and cast to tensor
    img1: torch.Tensor = imread(...)  # BxCxHxW
    
    # instantiate and apply the transform
    aug = MyRandomRotation(45., return_transform=True)
    
    img2, transform = aug(img1)  # BxCxHxW - Bx3x3
    

    New Features

    kornia.color

    • Implement RGB to XYZ (#436)
    • Implement RGB to LUV (#442)
    • Implement histogramd2 (#530)

    kornia.feature

    • Implement hardnet descriptor (#498)
    • Implement deep descriptor sosnet (#521)

    kornia.jit

    • Create kornia.jit module and exposes rgb_to_grayscale (#261)

    API Changes

    • Remove PIL dependency (#512)
    • Remove float casting in image_to_tensor (#497)

    Improvements

    • Adds gradcheck for RandomCrop and RandomResizedCrop (#439)
    • Update spatial_soft_argmax.py (#496)
    • Add epsilon value to make hessian matrix robust (#504)
    • Add normalize_points flag in depth to 3d (#511)
    • Functional augmentation performance test against Torchvision (#482)
    • AffineTransformation alignment and other fixes (#514)

    Performance

    • Filter speed up conditional (#433)
      • Improves by far the time performance for filtering.
    • Speed-up warp_affine and fix bugs in RandomAffine (#474)
    • Improve homography warper (#528)

    Docs

    • Make link work for PSNRLoss (#449)
    • Change psnr to psnr_loss in docs (#450)
    • Fix import problem and fix docs for LuvToRgb and PSNR (#447)
    • Fix outdated example (#465)
    • Update color_adjust.py (#479)
    • Missing commas in bibtex (#500)

    Bug fixes

    • Fix device problem in test (#456)
    • Bug fixed in device tests (#475)
    • Add epsilon value to sobel to improve backprop stability (#513)
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Jan 27, 2020)

    Kornia 0.2.0 Release Notes

    • Highlights
    • New Features
      • kornia.color
      • kornia.feature
      • kornia.geometry
      • kornia.losses
    • Improvements
    • Bug Fixes

    Kornia v0.2.0 release is now available.

    The release contains over 50 commits and updates support to PyTorch 1.4. This is the result of a huge effort in the design of the new data augmentation module, improvements to the set of color space conversion algorithms, and a refactor of the testing framework that allows testing the library using the CUDA backend.

    Highlights

    Data Augmentation API

    From this point forward, we will give support to the new data augmentation API. The kornia.augmentation module mimics the best of the existing data augmentation frameworks such as torchvision or albumentations, all re-implemented assuming torch.Tensor data structures as input, which allows running the standard transformations (geometric and color) in batch mode on the GPU and backpropagating through them.

    In addition, a very interesting feature we are proud to include is the ability to return the transformation matrix for each transform, which makes it easier to concatenate and optimize the transform process.

    A quick overview of its usage:

    
    import torch
    import kornia
    
    input: torch.Tensor = load_tensor_data(....)  # BxCxHxW
    
    transforms = torch.nn.Sequential(
        kornia.augmentation.RandomGrayscale(),
        kornia.augmentation.RandomAffine(degrees=(-15, 15)),
    )
    
    out: torch.Tensor = transforms(input)         # CPU
    out: torch.Tensor = transforms(input.cuda())  # GPU
    
    # same returning the transformation matrix
    
    transforms = torch.nn.Sequential(
        kornia.augmentation.RandomGrayscale(return_transformation=True),
        kornia.augmentation.RandomAffine(degrees=(-15, 15), return_transformation=True),
    )
    
    out, transform = transforms(input) # BxCxHxW , Bx3x3
    

    These are the features we introduce in the module:

    • BaseAugmentation (#407)
    • ColorJitter (#329)
    • RandomHorizontalFlip (#309)
    • MotionBlur (#328)
    • RandomVerticalFlip (#375)
    • RandomErasing (#344)
    • RandomGrayscale (#384)
    • Resize (#394)
    • CenterCrop (#409)
    • RandomAffine (#403)
    • RandomPerspective (#403)
    • RandomRotation (#397, #418)
    • RandomCrop (#408)
    • RandomResizedCrop (#408)
    • Grayscale

    GPU Test

    We have refactored our testing framework and we can now easily integrate GPU tests within our library. At the moment, this feature is only available to run locally, but very soon we will integrate it with CircleCI and AWS infrastructure so that we can automate the process.

    From the repository root one just has to run: make test-gpu

    Tests look like this:

    import torch
    import kornia
    from test.common import device
    
    def test_rgb_to_grayscale(device):
        channels, height, width = 3, 4, 5
        img = torch.ones(channels, height, width).to(device)
        assert kornia.rgb_to_grayscale(img).shape == (1, height, width)
    

    Ref PR:

    • parametrize test functions to accept torch.device cpu/cuda @edgarriba @ducha-aiki da793cd 0fcb85e

    New Features

    kornia.color

    We have added few more algorithms for color space conversion:

    • rgb_to_hsv (#299)
    • rgb_to_hls (#342)
    • rgb_to_ycbcr (#345)
    • ycbcr_to_rgb (#345)
    • rgb_to_yuv (#337)
    • yuv_to_rgb (#337)
    • rgb_to_rgba (#401)
    • rgba_to_rgb (#401)
    • bgr_to_bgra (#401)
    • bgra_to_bgr (#401)
    • bgr_to_gray (#266)
    • add_weighted (#295)

    kornia.geometry

    • Implement kornia.hflip, kornia.vflip and kornia.rot180 (#268)
    • Implement kornia.transform_boxes (#368)

    kornia.losses

    • Implements to total_variation loss (#250)
    • Implement PSNR loss (#272)

    kornia.feature

    • Added convenience functions for work with LAF: get keypoint, orientation (#340)

    Improvements

    • Fixed conv_argmax2d/3d behaviour for even-size kernel and added test (#227)
    • Normalize accepts floats and allows broadcast over channel dimension (#236)
    • Single value support for normalize function (#301)
    • Added boundary check function to local features detector (#254)
    • Correct crop_and_resize on aspect ratio changes. (#305)
    • Correct adjust brightness and contrast (#304)
    • Add tensor support to Hue, Saturation and Gamma (#324)
    • Double image option for scale pyramid (#351)
    • Filter2d speedup for older GPUs (#356)
    • Fix meshgrid3d function (#357)
    • Added support for even-sized filters in filter2d (#374)
    • Use latest version of CircleCI (#373)
    • Infer border and padding mode to homography warper (#379)
    • Apply normalization trick to conv_softmax (#383)
    • Better nms (#371)
      • added spatial gradient 3d
      • added hardnms3d and tests for hardnms 2d
      • quadratic nms interp
      • update the tests because of changed gaussian blur kernel size in scale pyramid calculation
      • no grad for spatial grad
    • Focal loss flat (#393)
    • Add optional mask parameter in scale space (#389)
    • Update to PyTorch 1.4 (#402)

    Bug fixes

    • Add from homogeneous zero grad test and fix it (#369)
    • Filter2d failed with noncontiguous input (view --> reshape) (#377)
    • Add ceil_mode to maxblur pool to be able to be used in resnets (#395)

    Breaking Changes

    • crop_and_resize before: "The tensor must have the shape of Bx4x2, where each box is defined in the following order: top-left, top-right, bottom-left and bottom-right. The coordinates order must be in y, x respectively" after: "The tensor must have the shape of Bx4x2, where each box is defined in the following (clockwise) order: top-left, top-right, bottom-right and bottom-left. The coordinates must be in the x, y order."

    As usual, thanks to the community to keep this project growing. Happy coding ! :sunrise_over_mountains:

    Source code(tar.gz)
    Source code(zip)
  • v0.1.4(Oct 5, 2019)

    Table of Contents

    We have just released Kornia: a differentiable computer vision library for PyTorch.

    It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses PyTorch as its main backend both for efficiency and to take advantage of the reverse-mode auto-differentiation to define and compute the gradient of complex functions.

    Inspired by OpenCV, this library is composed by a subset of packages containing operators that can be inserted within neural networks to train models to perform image transformations, epipolar geometry, depth estimation, and low level image processing such as filtering and edge detection that operate directly on tensors.

    It has over 300 commits and majorly refactors the whole library, including more than 100 functions to solve generic Computer Vision problems.

    Highlights

    Version 0.1.4 includes a reorganization of the internal API grouping functionalities that consists of the following components:

    • kornia | a Differentiable Computer Vision library like OpenCV, with strong GPU support.
    • kornia.color | a set of routines to perform color space conversions.
    • kornia.contrib | a compilation of user contrib and experimental operators.
    • kornia.feature | a module to perform local feature detection.
    • kornia.filters | a module to perform image filtering and edge detection.
    • kornia.geometry | a geometric computer vision library to perform image transformations, 3D linear algebra and conversions using different camera models.
    • kornia.losses | a stack of loss functions to solve different vision tasks.
    • kornia.utils | image to tensor utilities and metrics for vision problems.

    Big contribution in kornia.feature (a short usage sketch follows this list):

    • Implemented anti-aliased local patch extraction.
    • Implemented classical local features cornerness functions: Harris, Hessian, Good Features To Track.
    • Implemented basic functions for working with local affine features and their patches.
    • Implemented convolutional soft argmax 2d and 3d operators for differentiable non-maxima suppression.
    • Implemented second moment matrix affine shape estimation, dominant gradient orientation estimation and the SIFT patch descriptor.
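
    As an illustration of the new feature module, the SIFT descriptor can be run directly on batches of patches. A minimal sketch (argument names follow the current kornia.feature.SIFTDescriptor API):

    >>> import torch
    >>> import kornia
    >>> patches = torch.rand(16, 1, 32, 32)  # Bx1xPSxPS grayscale patches around keypoints
    >>> sift = kornia.feature.SIFTDescriptor(patch_size=32, rootsift=True)
    >>> descs = sift(patches)  # one 128-dimensional descriptor per patch
    >>> descs.shape
    torch.Size([16, 128])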

    Infrastructure

    • Migrated to CircleCI to enable GPU testing 30099b4b0481f4151b893f77d7bf297ee47d268b
    • Added Code of Conduct file 5e0848a2d41780d632632afb81e0371e9dca6a33
    • Redefined the CONTRIBUTING notes
    • Enforced Python 3.6 as the minimal supported version 34e21e5f81a0376fc8bb45da52003f20a101d591
    • Creation of an external repo to host the website www.kornia.org.

    Breaking Changes

    • Removed nms and normalization from Harris response function 2209807ce8db8fe82eabbe6ca51e6370beea2934
    • Renamed GaussianBlur -> GaussianBlur2d and added a parameter to specify the padding b0c522e60ef4c82a3d1881dd5901a25d7a4a02c5
    • Changed batch support for tensor2img and img2tensor 705a82f1ca308087b7d62d8ce452cde1aeeaabae
    • Fixed torch.clamp for homogeneous division 506b0c98ed245373544732dd49fc4612d7075501

    New Features

    • Several functionalities for Local Affine Frame (LAF): extract_patches_from_pyramid, extract_patches_simple, normalize_laf, ellipse_to_laf, make_upright, scale_laf, get_laf_scale 0a3cbb02850ac78059e0615da93144b5a64d3330
    • Differentiable SIFT descriptor 7f0eb809f1509c452d85000fd002b12c22e358ca
    • Added implementation of the differentiable spatial to numerical (DSNT) layer and related operations. abb4afabe1a37082e8938cfe7f227e57042d9803
    • Spatial gradient 1d 362adfc1af06e0abd945e84bf00c5b8a437f3aa3
    • Scale pyramid for local features detection 413051eb4c1b36fe3548a65cee9ab2d8ba45086f
    • Added the geometry.depth submodule, including depth_to_3d, depth_to_normals and warp_frame_depth; see the sketch after this list. d1dedb8d37f99b752467ed4acaf4f767afbbad49
    • Implement Gaussian Pyramid bc586cb4bf8454d33fed721bc6f045767191374e
    • Implement Filter2D to apply arbitrary depthwise 2d kernels 94b56f2d43ed87a259aca3e6313d0f7a1222baf5
    • Implement save/load for pointclouds 4f32351c0dfd2d0d1779e9eb1d0028e3d3b904ab
    • Implement project_points 636f4f5338e4fc1b6d32140c6f1febae3b64eb96
    • Implement unproject_points b02f403feaf1fdeb574fb87e1d70157ec0b4dbff
    • Implement denormalize_coordinates b07ec45410f45469bad2067ce03b83dddcabb7c0
    • Implement harris_corner detector 977a1f6a8c7beef9c339fdd695032dec2705c7d3
    • Implement non_maxima_suppression_2d 84cc1287fcd9df2a437a2d25a61f171097047a76
    • Implement median_filter 6b6cf0543028dcf3bfb25a0ae9104e6ade26037e
    • Implement blur_filter d4c8df933570fa95546e84517a6d676e302e6e7d
    • Implement sobel_filter operator 9abe4c5afdbe486baadf07b427ad5468d57da603
    • Implement max_blur_pool_2d 621be3b59055f000896c45fe33a28fa3ca680841
    • Implement pyrup and pyrdown a4e110cd47dd6c7792751fb7294d068b7655486a
    • Implement crop_and_resize 41b4fed573c37c7310e4d7e03b73a54bce1eb2ab
    • Implement center_crop b1188d50f7ecae001832e05e606ca55d0d630ae6
    • Implement inverse_affine_matrix 6e10fb9a0859ef35f82b6e2dfd58af828bda7a8c
    • Implement opencv like remap function b0401deac4b54e201095705ec8c18eabe943cd2b
    • Implement affine ceb3faf3b89596ba23bdc7e0f616b218edf997df
    • Implement shear 81c5a2798f00663ee64ff74db87340daa6edb08d
    • Implement scale 75a84a373e9ce142fb4a1ac0d7fde8f3790b861c
    • Implement translate 11af4dde591258e057d1973bb00529f49fa6d63f
    • Implement rotate 89c6d964c5a18254adf73a5f8da00d8a5068e7bc
    • Implement Laplacian filter 5e3a89a2e630ae9d0199ff388217e2d5a11c4f86
    • Implement rgb_to_gray 9a2bea6057f4cf99eb6c16c96d5a1c952d95b4b2
    • Implement vectorised confusion_matrix f30606209f20f9f2d879b2eaec80215cc274a80a
    • Implement normalization on tensors 4c3f8fa52d3b9d86843716a99d5c833e80929212
    • Implement rgb_to_bgr e25f6a4900ede8786a0eee38f58f4ffd0908535a
    • Implement hsv_to_rgb 9726872019d71c3b9a3e7cabaf51e77a96220a45
    • Implement adjust_brightness b8fd8b6bce1707ea8a0b2fd5ba9498fe10d586b8
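
    A minimal sketch of the geometry.depth additions mentioned above (function names follow the current kornia.geometry.depth API; the identity intrinsics are purely illustrative):

    >>> import torch
    >>> import kornia
    >>> depth = torch.rand(1, 1, 48, 64)  # Bx1xHxW depth map
    >>> K = torch.eye(3)[None]            # Bx3x3 camera intrinsics (illustrative values)
    >>> points3d = kornia.geometry.depth.depth_to_3d(depth, K)      # Bx3xHxW point cloud
    >>> normals = kornia.geometry.depth.depth_to_normals(depth, K)  # Bx3xHxW surface normals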

    Bug Fixes

    • Normalize the kernels in the filtering functions a301e3cf6192aff4cbbadda979cc48b17504684f
    • Fix the bug in spatial gradient with padding 5635e45830461d9f40da68a3318755d50a425b17
    • Disable JIT tests 464931720d7b2609ca25f95f29b7a47ba5af2e2f
    • Fix pyrdown with avg_pool2d b83514302232cf8bc31c30f3981a168dd7b55e39
    • Fix formulation issue in rotation_matrix_to_quaternion 58c6e8e7038ad1ca4d9051e04b54b6a42fd72a74
    • Switch torch.gesv -> torch.solve in get_perspective_transform c347a41e85eae78d73ea821b06623383d7a142a4
    • Fix and refactor test_warp_perspective d19121effb69d4c17d53f6bea010941cb7730f32
    • Fixed and updated the quaternion-related docs 0161f65831ab9f975575586c4c1b1aec6e8a6b11
    • Remove some unused test functions a64a8fb80e5e666caefc806325a2c338ee50f81f

    Contributors

    • @anibali
    • @carlosb1
    • @ducha-aiki
    • @dvd42
    • @edgarriba
    • @jiangwei221
    • @priba
    • @varunagrawal
  • v0.1.2(Mar 14, 2019)

    Package

    • Migrated the project to Arraiy Open Source Organization: https://github.com/arraiyopensource/torchgeometry. 48ad11f39f69be95fe35c164414ad58e0034f5d4
    • Updated with support for PyTorch v1.0.1; we also test against nightly builds. 5c9d9ae1ccf13fc2381d62be6f5b4c81c265608b
    • Fix issue with pip package PyTorch minimal version. Now we require at least v1.0.0. 6e16734d68074f22fb67a8b1c4418e4917e5b1f1
    • The package version file is now auto-generated and keeps track of the commit sha. f337b3c131b2482a961f5aa895a9b344814bff5a
    • Added codecov support to keep track of test coverage. e609b2112f25806d02364681d57a29b977950f59

    Breaking Changes

    • Refactored the DepthWarper API, which now accepts PinholeCamera objects as parameters:
    >>> import torch
    >>> import torchgeometry as tgm
    >>> # pinhole camera models
    >>> pinhole_dst = tgm.PinholeCamera(...)
    >>> pinhole_src = tgm.PinholeCamera(...)
    >>> # create the depth warper and compute the projection matrix
    >>> height, width = 32, 32
    >>> warper = tgm.DepthWarper(pinhole_dst, height, width)
    >>> warper.compute_projection_matrix(pinhole_src)
    >>> # warp the destination frame to the reference frame using the depth
    >>> depth_src = torch.ones(1, 1, 32, 32)  # Nx1xHxW
    >>> image_dst = torch.rand(1, 3, 32, 32)  # NxCxHxW
    >>> image_src = warper(depth_src, image_dst)  # NxCxHxW
    

    New Features

    • Added new PinholeCamera API to represent pinhole camera models. b6ec592bc4d00ba942d0f3d2085534acdefd783f pinhole_model
    • Refactored and moved code from conversions.py into a dedicated module for linear transforms, transformations.py. a1c25b1c5e3a5ac4e5109ea0e1b7256ba8e4ee56
      • boxplus_transformation, boxminus_transformation, inverse_transformation, transform_points.
    • Added a collection of losses (see the sketch after this list):
      • Image: SSIM f08812168984174d6054c5b21298963cdf421cd8
      • Depth: InverseDepthSmoothnessLoss 42a1d22df0691444664c182eae7fc10acaa428cc
      • Semantic segmentation:
        • DiceLoss 9b0fddf9055cb9a948856d087b52073551c44129
        • TverskyLoss 89246d269739f89ed0731f52ff543863882efa48
        • FocalLoss ffe4cb1b74ecb81baef05f97fe6c62f176336fd7
    • Added SpatialSoftArgmax2d operator to extract 2D coordinates from probability maps. cf7bb292dbe19242e0b207a8747da601a27e4cf3
    • Added extract_tensor_patches routine similar to tf.extract_image_patches but for multidimensional tensors instead of images. f60fa57b4dcf9462e443ee71bf571cc6e31a7939
    • Added boxplus_transform and boxminus_transform to compose or compute relative pose functions. e0882ea32bb13e62275b678ddb60915058397d35
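
    A minimal sketch of one of the new losses (FocalLoss; the constructor arguments follow the current kornia.losses API and may differ slightly from the torchgeometry release this note describes):

    >>> import torch
    >>> import kornia
    >>> logits = torch.randn(2, 5, 32, 32, requires_grad=True)  # BxCxHxW raw scores
    >>> labels = torch.randint(0, 5, (2, 32, 32))               # BxHxW class indices
    >>> criterion = kornia.losses.FocalLoss(alpha=0.5, gamma=2.0, reduction='mean')
    >>> loss = criterion(logits, labels)
    >>> loss.backward()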

    Bug Fixes

    • Fixed DepthWarper in order to accept mini-batch computation. 7175b4f93f5cb855eb8ab4011c33e79cb32bf3fa
    • Added missing tests for warp_affine. 57cbd29aa291dc0cf60e6cff6b0665c91db39330
    • Fixed and refactored quaternion_to_axis_angle and axis_angle_to_quaternion to avoid nans. 4aa0bca9cd2ab95b3edb7d043ff16d473b2e04b7

    Test

    • Updated the code with Python typing (https://docs.python.org/3/library/typing.html) to perform static analysis using MyPy (http://mypy-lang.org). 3c02b58e1d4de3e7583020c332a7b982c9e97d74
    • Added pytest fixtures to split between CPU/CUDA tests. Additionally, we added Makefile commands to launch the tests. 1aff7f65d6535e88abc0d5383846be75d37e4af9
    • Improved InversePose tests. d6a508c7600e5e464d6910028f7771f8d18fe722
    • Improved HomographyWarper tests. 43bd8c2ea669d89d57a063998b491be18d2ab39a

    Contributors

    • @edgarriba
    • @carlosb1
    • @Wizaron
    • @prlz77
    • @kajal-puri