Tree Nested PyTorch Tensor Lib

Overview

DI-treetensor


treetensor is a generalized tree-based tensor structure, mainly developed by OpenDILab Contributors.

Almost all torch operations are supported in tree form, which makes structured processing convenient when the computation itself is tree-based.

Installation

You can install it with pip from the official PyPI site:

pip install di-treetensor

For more information about installation, you can refer to the Installation section of the documentation.

Documentation

The detailed documentation is hosted at https://opendilab.github.io/DI-treetensor.

Only the English version is provided for now; the Chinese documentation is still under development.

Quick Start

You can easily create a tree tensor object (based on treevalue's FastTreeValue):

import builtins
import os
from functools import partial

import treetensor.torch as torch

print = partial(builtins.print, sep=os.linesep)

if __name__ == '__main__':
    # create a tree tensor
    t = torch.randn({'a': (2, 3), 'b': {'x': (3, 4)}})
    print(t)
    print(torch.randn(4, 5))  # create a normal tensor
    print()

    # structure of tree
    print('Structure of tree')
    print('t.a:', t.a)  # t.a is a native tensor
    print('t.b:', t.b)  # t.b is a tree tensor
    print('t.b.x', t.b.x)  # t.b.x is a native tensor
    print()

    # math calculations
    print('Math calculation')
    print('t ** 2:', t ** 2)
    print('torch.sin(t).cos()', torch.sin(t).cos())
    print()

    # backward calculation
    print('Backward calculation')
    t.requires_grad_(True)
    t.std().arctan().backward()
    print('grad of t:', t.grad)
    print()

    # native operation
    # all the ops can still be used exactly as with native `torch`
    print('Native operation')
    print('torch.sin(t.a)', torch.sin(t.a))  # sin of native tensor

The result should be:

<Tensor 0x7f0dae602760>
├── a --> tensor([[-1.2672, -1.5817, -0.3141],
│                 [ 1.8107, -0.1023,  0.0940]])
└── b --> <Tensor 0x7f0dae602820>
    └── x --> tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
                      [ 1.5956,  0.8825, -0.5702, -0.2247],
                      [ 0.9235,  0.4538,  0.8775, -0.2642]])

tensor([[-0.9559,  0.7684,  0.2682, -0.6419,  0.8637],
        [ 0.9526,  0.2927, -0.0591,  1.2804, -0.2455],
        [ 0.4699, -0.9998,  0.6324, -0.6885,  1.1488],
        [ 0.8920,  0.4401, -0.7785,  0.5931,  0.0435]])

Structure of tree
t.a:
tensor([[-1.2672, -1.5817, -0.3141],
        [ 1.8107, -0.1023,  0.0940]])
t.b:
<Tensor 0x7f0dae602820>
└── x --> tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
                  [ 1.5956,  0.8825, -0.5702, -0.2247],
                  [ 0.9235,  0.4538,  0.8775, -0.2642]])

t.b.x
tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
        [ 1.5956,  0.8825, -0.5702, -0.2247],
        [ 0.9235,  0.4538,  0.8775, -0.2642]])

Math calculation
t ** 2:
<Tensor 0x7f0dae602eb0>
├── a --> tensor([[1.6057, 2.5018, 0.0986],
│                 [3.2786, 0.0105, 0.0088]])
└── b --> <Tensor 0x7f0dae60c040>
    └── x --> tensor([[1.4943, 0.1187, 0.9960, 0.1669],
                      [2.5458, 0.7789, 0.3252, 0.0505],
                      [0.8528, 0.2059, 0.7699, 0.0698]])

torch.sin(t).cos()
<Tensor 0x7f0dae621910>
├── a --> tensor([[0.5782, 0.5404, 0.9527],
│                 [0.5642, 0.9948, 0.9956]])
└── b --> <Tensor 0x7f0dae6216a0>
    └── x --> tensor([[0.5898, 0.9435, 0.6672, 0.9221],
                      [0.5406, 0.7163, 0.8578, 0.9753],
                      [0.6983, 0.9054, 0.7185, 0.9661]])


Backward calculation
grad of t:
<Tensor 0x7f0dae60c400>
├── a --> tensor([[-0.0435, -0.0535, -0.0131],
│                 [ 0.0545, -0.0064, -0.0002]])
└── b --> <Tensor 0x7f0dae60cbe0>
    └── x --> tensor([[ 0.0357, -0.0141, -0.0349, -0.0162],
                      [ 0.0476,  0.0249, -0.0213, -0.0103],
                      [ 0.0262,  0.0113,  0.0248, -0.0116]])


Native operation
torch.sin(t.a)
tensor([[-0.9543, -0.9999, -0.3089],
        [ 0.9714, -0.1021,  0.0939]], grad_fn=<SinBackward>)

For more quick-start explanations and further usage, take a look at the documentation.

Extension

If you need to translate treevalue objects into runnable source code, you can use the potc-treevalue plugin, installed with the command below:

pip install DI-treetensor[potc]

With potc, you can translate objects into runnable Python source code, which can later be loaded back into objects by the Python interpreter, as shown in the following diagram:

(Figure: the potc translation workflow)
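
As a rough illustration, here is a minimal sketch of what such a translation can look like, assuming potc's `transvars` entry point; treat the exact API as an assumption and check the potc documentation for the authoritative usage:

import treetensor.torch as torch
from potc import transvars  # assumption: provided by the potc extra

# Build a small tree tensor to export.
t = torch.randn({'a': (2, 3), 'b': {'x': (3, 4)}})

# Translate the object into runnable Python source code; executing the
# generated code should rebuild an equivalent object.
code = transvars(t=t)
print(code)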

For more information, you can refer to the potc project.

Contribution

We appreciate all contributions to improve DI-treetensor, in both logic and system design. Please refer to CONTRIBUTING.md for more guidance.

Users can also join our Slack communication channel, or contact the core developer HansBug for more detailed discussion.

License

DI-treetensor is released under the Apache 2.0 license.


Comments
  • PyTorch OP List(P0)

    reference: https://pytorch.org/docs/1.8.0/torch.html

    Common

    • [x] numel
    • [x] cpu
    • [x] cuda
    • [x] to

    Creation Ops

    • [x] torch.zeros_like
    • [x] torch.randn_like
    • [x] torch.randint_like
    • [x] torch.ones_like
    • [x] torch.full_like
    • [x] torch.empty_like
    • [x] torch.zeros
    • [x] torch.randn
    • [x] torch.randint
    • [x] torch.ones
    • [x] torch.full
    • [x] torch.empty

    Indexing, Slicing, Joining, Mutating Ops

    • [x] cat
    • [x] chunk
    • [ ] gather
    • [x] index_select
    • [x] masked_select
    • [x] reshape
    • [ ] scatter
    • [x] split
    • [x] squeeze
    • [x] stack
    • [ ] tile
    • [ ] unbind
    • [x] unsqueeze
    • [x] where

    Math Ops

    Pointwise Ops
    • [x] add
    • [x] sub
    • [x] mul
    • [x] div
    • [x] pow
    • [x] neg
    • [x] abs
    • [x] sign
    • [x] floor
    • [x] ceil
    • [x] round
    • [x] sigmoid
    • [x] clamp
    • [x] exp
    • [x] exp2
    • [x] sqrt
    • [x] log
    • [x] log10
    • [x] log2
    Reduction Ops
    • [ ] argmax
    • [ ] argmin
    • [x] all
    • [x] any
    • [x] max
    • [x] min
    • [x] dist
    • [ ] logsumexp
    • [x] mean
    • [ ] median
    • [x] norm
    • [ ] prod
    • [x] std
    • [x] sum
    • [ ] unique
    Comparison Ops
    • [ ] argsort
    • [x] eq
    • [x] ge
    • [x] gt
    • [x] isfinite
    • [x] isinf
    • [x] isnan
    • [x] le
    • [x] lt
    • [x] ne
    • [ ] sort
    • [ ] topk
    Other Ops
    • [ ] cdist
    • [x] clone
    • [ ] flip

    BLAS and LAPACK Ops

    • [ ] addbmm
    • [ ] addmm
    • [ ] bmm
    • [x] dot
    • [x] matmul
    • [x] mm
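
    For reference, here is a minimal sketch of how several of the checked ops above behave on tree tensors, assuming the `treetensor.torch` namespace mirrors `torch` as in the Quick Start:

    import treetensor.torch as ttorch

    # Build a small tree tensor, as in the Quick Start example.
    t = ttorch.randn({'a': (2, 3), 'b': {'x': (3, 4)}})

    print(ttorch.abs(t))         # pointwise op, applied leaf by leaf
    print(t.clamp(-1, 1))        # method form works as well
    print(ttorch.stack([t, t]))  # joining op over matching tree structures
    print(t.mean())              # reductions reduce over the whole tree by default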
    enhancement · opened by PaParaZz1 · 3 comments
  • PyTorch OP Doc List

    P0

    • [x] cpu
    • [x] cuda
    • [x] to
    • [x] torch.zeros_like
    • [x] torch.randn_like
    • [x] torch.ones_like
    • [x] torch.zeros
    • [x] torch.randn
    • [x] torch.randint
    • [x] torch.ones
    • [x] cat
    • [x] reshape
    • [x] split
    • [x] squeeze
    • [x] stack
    • [x] unsqueeze
    • [x] where
    • [x] abs
    • [x] add
    • [x] clamp
    • [x] div
    • [x] exp
    • [x] log
    • [x] sqrt
    • [x] sub
    • [x] sigmoid
    • [x] pow
    • [x] mul
    • [ ] argmax
    • [ ] argmin
    • [x] all
    • [x] any
    • [x] max
    • [x] min
    • [x] dist
    • [x] mean
    • [x] std
    • [x] sum
    • [x] eq
    • [x] ge
    • [x] gt
    • [x] le
    • [x] lt
    • [x] ne
    • [x] clone
    • [x] dot
    • [x] matmul
    • [x] mm

    P1

    • [x] numel
    • [x] torch.randint_like
    • [x] torch.full_like
    • [x] torch.empty_like
    • [x] torch.full
    • [x] torch.empty
    • [x] chunk
    • [ ] gather
    • [x] index_select
    • [x] masked_select
    • [ ] scatter
    • [ ] tile
    • [ ] unbind
    • [x] ceil
    • [x] exp2
    • [x] floor
    • [x] log10
    • [x] log2
    • [x] neg
    • [x] round
    • [x] sign
    • [ ] bmm

    P2

    • [ ] logsumexp
    • [ ] median
    • [x] norm
    • [ ] prod
    • [ ] unique
    • [ ] argsort
    • [x] isfinite
    • [x] isinf
    • [x] isnan
    • [ ] sort
    • [ ] topk
    • [ ] cdist
    • [ ] flip
    • [ ] addbmm
    • [ ] addmm
    opened by PaParaZz1 · 2 comments
  • dev(hansbug): add stream support for paralleling the calculations in tree

    Here is an example:

    import time
    
    import numpy as np
    import torch
    
    import treetensor.torch as ttorch
    
    N, M, T = 200, 2, 50
    S1, S2, S3 = 512, 1024, 2048
    
    
    def test_min():
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N // M)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N // M)}, device='cuda')
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_native():
        a = {f'a{i}': torch.randn(S1, S2, device='cuda') for i in range(N)}
        b = {f'a{i}': torch.randn(S2, S3, device='cuda') for i in range(N)}
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            for key in a.keys():
                _ = torch.matmul(a[key], b[key])
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_linear():
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N)}, device='cuda')
    
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def test_stream():
        a = ttorch.randn({f'a{i}': (S1, S2) for i in range(N)}, device='cuda')
        b = ttorch.randn({f'a{i}': (S2, S3) for i in range(N)}, device='cuda')
    
        ttorch.stream(M)
        result = []
        for i in range(T):
            _start_time = time.time()
    
            _ = ttorch.matmul(a, b)
            torch.cuda.synchronize()
    
            _end_time = time.time()
            result.append(_end_time - _start_time)
    
        print('time cost: mean({}) std({})'.format(np.mean(result), np.std(result)))
    
    
    def warmup():
        # warm up
        a = torch.randn(1024, 1024).cuda()
        b = torch.randn(1024, 1024).cuda()
        for _ in range(20):
            c = torch.matmul(a, b)
    
    
    if __name__ == '__main__':
        warmup()
        test_min()
        test_native()
        test_linear()
        test_stream()
    
    

    To be honest, though, the real-world behavior of this stream feature is quite fragile: it is very sensitive to tensor size (both too large and too small fail), it also fails when GPU performance is insufficient, and when mishandled it can easily become a pessimization. In short, it is hard to get right; making this practical will require further study.

    enhancement · opened by HansBug · 1 comment
  • Failure when trying to convert between numpy and torch on Windows Python 3.10

    See here: https://github.com/opendilab/DI-treetensor/runs/7820313811?check_suite_focus=true

    The bug looks like this:

        @method_treelize(return_type=_get_tensor_class)
        def tensor(self: numpy.ndarray, *args, **kwargs):
    >       tensor_: torch.Tensor = torch.from_numpy(self)
    E       RuntimeError: Numpy is not available
    

    The only way I found to 'solve' this is to downgrade Python to version 3.9 or lower, so these tests will be skipped temporarily.

    bug · opened by HansBug · 0 comments
Releases(v0.4.0)
  • v0.4.0(Aug 14, 2022)

    What's Changed

    • dev(hansbug): remove support for py3.6 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/12
    • pytorch upgrade to 1.12 by @zjowowen in https://github.com/opendilab/DI-treetensor/pull/11
    • dev(hansbug): add test for torch1.12.0 and python3.10 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/13
    • dev(hansbug): add stream support for paralleling the calculations in tree by @HansBug in https://github.com/opendilab/DI-treetensor/pull/10

    New Contributors

    • @zjowowen made their first contribution in https://github.com/opendilab/DI-treetensor/pull/11

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.3.0...v0.4.0

  • v0.3.0(Jul 15, 2022)

    What's Changed

    • dev(hansbug): use newer version of treevalue 1.4.1 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/9

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.2.1...v0.3.0

  • v0.2.1(Mar 22, 2022)

    What's Changed

    • fix(hansbug): fix uncompitable problem with walk by @HansBug in https://github.com/opendilab/DI-treetensor/pull/5
    • dev(hansbug): add tensor method for treetensor.numpy.ndarray by @HansBug in https://github.com/opendilab/DI-treetensor/pull/6
    • fix(hansbug): add subside support to all the functions. by @HansBug in https://github.com/opendilab/DI-treetensor/pull/7
    • doc(hansbug): add documentation for np.stack, np.split and other 3 functions. by @HansBug in https://github.com/opendilab/DI-treetensor/pull/8
    • release(hansbug): use version 0.2.1 by @HansBug in https://github.com/opendilab/DI-treetensor/pull/4

    New Contributors

    • @HansBug made their first contribution in https://github.com/opendilab/DI-treetensor/pull/5

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.2.0...v0.2.1

  • v0.2.0(Jan 4, 2022)

    • Use newer version of treevalue>=1.2.0
    • Add support of torch 1.10.0
    • Add support of potc

    Full Changelog: https://github.com/opendilab/DI-treetensor/compare/v0.1.0...v0.2.0

  • v0.1.0(Dec 26, 2021)

  • v0.0.1(Sep 30, 2021)
