The goal of this library is to generate more helpful exception messages for NumPy/PyTorch matrix algebra expressions.


# Tensor Sensor

One of the biggest challenges when writing code to implement deep learning networks, particularly for us newbies, is getting all of the tensor (matrix and vector) dimensions to line up properly. It's really easy to lose track of tensor dimensionality in complicated expressions involving multiple tensors and tensor operations. Even when just feeding data into predefined Tensorflow network layers, we still need to get the dimensions right. When you ask for improper computations, you're going to run into some less than helpful exception messages.

To help myself and other programmers debug tensor code, I built this library. TensorSensor clarifies exceptions by augmenting messages and visualizing Python code to indicate the shape of tensor variables (see figure to the right for a teaser). It works with Tensorflow, PyTorch, JAX, and Numpy, as well as higher-level libraries like Keras and fastai.

TensorSensor is currently at 0.1 (Dec 2020), so I'm happy to receive issues created at this repo or via direct email.

## Visualizations

For more, see examples.ipynb.

```python
import torch
import tsensor

n, d, n_neurons = 200, 764, 100   # values consistent with the shapes in the error below

W = torch.rand(d, n_neurons)
b = torch.rand(n_neurons, 1)
X = torch.rand(n, d)
with tsensor.clarify():
    Y = W @ X.T + b
```

Displays this in a Jupyter notebook or separate window:

Instead of the following default exception message:

```
RuntimeError: size mismatch, m1: [764 x 100], m2: [764 x 200] at /tmp/pip-req-build-as628lz5/aten/src/TH/generic/THTensorMath.cpp:41
```


TensorSensor augments the message with more information about which operator caused the problem and includes the shape of the operands:

```
Cause: @ on tensor operand W w/shape [764, 100] and operand X.T w/shape [764, 200]
```
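You can check this kind of mismatch by hand: `@` requires the inner dimensions to agree. A quick NumPy sketch using the shapes from the message above (the variables here are stand-ins, and transposing `W` is just one illustrative way to make the dimensions line up, not necessarily the intended fix):

```python
import numpy as np

n, d, n_neurons = 200, 764, 100

W = np.random.rand(d, n_neurons)   # shape (764, 100)
X = np.random.rand(n, d)           # shape (200, 764)

# W @ X.T needs W.shape[1] == X.T.shape[0], i.e. 100 == 764 -- it doesn't hold:
assert W.shape[1] != X.T.shape[0]

# Transposing W makes the inner dimensions agree: (100, 764) @ (764, 200)
Y = W.T @ X.T
print(Y.shape)  # (100, 200)
```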


You can also get the full computation graph for an expression, including the shapes of all subexpression results.

```python
tsensor.astviz("b = W@b + (h+3).dot(h) + torch.abs(torch.tensor(34))", sys._getframe())
```

yields the following abstract syntax tree with shapes:

## Install

```
pip install tensor-sensor             # This will only install the library for you
pip install tensor-sensor[torch]      # install pytorch related dependency
pip install tensor-sensor[tensorflow] # install tensorflow related dependency
pip install tensor-sensor[jax]        # install jax, jaxlib
pip install tensor-sensor[all]        # install tensorflow, pytorch, jax
```


which gives you module `tsensor`. I developed and tested with the following versions:

```
$ pip list | grep -i flow
tensorflow                         2.3.0
tensorflow-estimator               2.3.0
$ pip list | grep -i numpy
numpy                              1.18.5
numpydoc                           1.1.0
$ pip list | grep -i torch
torch                              1.6.0
$ pip list | grep -i jax
jax                                0.2.6
jaxlib                             0.1.57
```


### Graphviz for tsensor.astviz()

To display abstract syntax trees (ASTs) with tsensor.astviz(...), you need the dot executable from graphviz, not just the Python library.

On Mac, do this before or after tensor-sensor install:

```
brew install graphviz
```


On Windows, apparently you need:

```
conda install python-graphviz  # Do this first; gets the dot executable and the Python lib
pip install tensor-sensor      # Or one of the other installs
```
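Either way, you can confirm that the `dot` binary is actually visible from Python before trying astviz (a quick sanity check, not part of the library):

```python
import shutil

# shutil.which searches PATH the same way the shell does
dot_path = shutil.which("dot")
if dot_path is None:
    print("graphviz 'dot' executable not found on PATH; astviz() will not be able to render")
else:
    print(f"found dot at {dot_path}")
```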


## Limitations

I rely on parsing lines that are assignments or expressions only, so the clarify and explain routines do not handle methods expressed like:

```python
def bar(): b + x * 3
```

or

```python
def bar():
    b + x * 3
```

Watch out for side effects! I don't perform assignments, but any functions you call that have side effects will be executed again while I re-evaluate statements.
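To see why this matters, consider a hypothetical function with a side effect: if a failing statement is evaluated once normally and once more to collect diagnostics, the side effect happens twice (an illustrative sketch, not tsensor's actual mechanics):

```python
calls = []

def noisy(x):
    calls.append(x)      # side effect: records every invocation
    return x

expr = "noisy(3) + 4"

# First (normal) evaluation ...
eval(expr)
# ... then a hypothetical re-evaluation while diagnosing the statement:
eval(expr)

print(len(calls))  # 2 -- the side effect ran twice
```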

I can't handle backslash (`\`) line continuations.

With Python's threading package, don't call clarify() from multiple threads; the multiprocessing package should be fine.
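A simple defensive pattern is to restrict clarify() to the main thread (a sketch of a user-side guard; tsensor itself doesn't provide this helper):

```python
import threading

def safe_to_clarify() -> bool:
    # Only wrap statements with tsensor.clarify() on the main thread;
    # worker threads skip the wrapper and run the statement bare.
    return threading.current_thread() is threading.main_thread()

print(safe_to_clarify())
```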

Also note: I've built my own parser that handles just the assignments and expressions tsensor supports.

## Deploy (parrt's use)

```
$ python setup.py sdist upload
```

Or download and install locally:

```
$ cd ~/github/tensor-sensor
$ pip install .
```

### TODO

• Can I call pyviz in the debugger?
• #### Optional dependencies not working properly

• Issue: For some reason, both `pip install tensor-sensor` and `pip install tensor-sensor[torch]` attempt to install Tensorflow too.

• Environment:

• win10 latest (10.10.2020)
• conda 4.8.3 virtual env
• pytorch 1.6.0 installed via conda (the official way)
• no tensorflow
• Workaround: `pip install tensor-sensor --no-deps`, then `pip install graphviz`

build
opened by ColdTeapot273K 10
• #### Supporting JAX

Hi,

Thanks for the awesome library! This has really made my debugging life much easier.

Just a question. Is there any plan to support JAX? I think this can be similarly supported since the API of JAX almost looks identical to NumPy.

compatibility
opened by ethanluoyc 8
• #### Remove hard torch dependencies for keras/tensorflow users

Currently, the _shape method in analysis.py always tries to check whether torch.Size exists. So for a Keras user who doesn't have torch installed, it throws an error, since analysis.py imports it:

```
  File "/home/shawley/Downloads/tensor-sensor/tsensor/analysis.py", line 27, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
```
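A common fix pattern for this kind of hard dependency is a guarded import plus a runtime check (a sketch of the general idiom, not the actual patch that landed in tsensor):

```python
try:
    import torch
except ImportError:
    torch = None   # torch is optional; degrade gracefully without it

def _shape(t):
    """Return a tensor/array shape as a list, or None if t has no shape."""
    # Only consult torch.Size when torch is actually installed.
    if torch is not None and isinstance(t, torch.Size):
        return list(t)
    if hasattr(t, "shape"):
        return list(t.shape)
    return None

import numpy as np
print(_shape(np.zeros((2, 3))))  # [2, 3]
print(_shape(42))                # None
```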


Related #8

enhancement compatibility
opened by noklam 5
• #### executing and pure_eval

Hi! I stumbled across this library and noticed I could help. I've written a couple of libraries that are great for this stuff:

• https://github.com/alexmojaki/executing
• https://github.com/alexmojaki/pure_eval

Here is a demo of how you could use it for this kind of project:

```python
import ast
import sys

import executing
import pure_eval

def explain_error():
    ex = executing.Source.executing(sys.exc_info()[2])
    if not (ex.node and isinstance(ex.node, ast.BinOp)):
        return

    evaluator = pure_eval.Evaluator.from_frame(ex.frame)
    atok = ex.source.asttokens()

    try:
        print(f"{atok.get_text(ex.node.left)} = {evaluator[ex.node.left]!r} and "
              f"{atok.get_text(ex.node.right)} = {evaluator[ex.node.right]!r}")
    except pure_eval.CannotEval:
        print(f"Cannot safely evaluate operands of {ex.text()}. Extract them into variables.")

a = ["abc", 3]

try:
    print(a[0] + a[1])
except:
    explain_error()

try:
    print("only print once") + 3
except:
    explain_error()
```


To run this you will need to `pip install executing pure_eval asttokens`.

This should improve the parsing and such significantly. For example this will handle line continuations just fine. pure_eval will only evaluate simple expressions to avoid accidentally triggering side effects.

This uses the ast module from the standard library. Is there a reason you wrote your own parser? The best place to learn about ast is here: https://greentreesnakes.readthedocs.io/en/latest/
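For comparison, here is roughly what the standard-library `ast` module gives you for free when dissecting the kind of expression tsensor cares about (a minimal sketch; `ast.unparse` requires Python 3.9+):

```python
import ast

# Parse a tensor expression into an AST without executing it
tree = ast.parse("W @ X.T + b", mode="eval")
top = tree.body                      # the outermost operation: BinOp(+)

assert isinstance(top, ast.BinOp)
print(type(top.op).__name__)         # Add
print(type(top.left.op).__name__)    # MatMult -- the @ in W @ X.T
print(ast.unparse(top.left))         # W @ X.T
```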

I'll let you integrate it into your code yourself, but let me know if you have questions.

suggestion
opened by alexmojaki 5
• #### pip install submodule to avoid installing all dependencies

Very often, people coming to this library already have the tensor library they are using (keras, torch, tensorflow). Currently, the package tries to install all dependencies. For example, if I am using PyTorch, I don't really need to install the big TensorFlow library in the environment.

```
pip install tensor-sensor[all]
pip install tensor-sensor[torch]
pip install tensor-sensor[tensorflow]
```
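The extras shown above map onto setuptools' `extras_require` mechanism; roughly like this (a sketch of the mechanism only, not the project's actual setup.py):

```python
# Sketch of extras_require as passed to a setuptools setup() call.
# Only base requirements install by default; heavy frameworks become opt-in.
extras = {
    "torch": ["torch"],
    "tensorflow": ["tensorflow"],
    "jax": ["jax", "jaxlib"],
}
# "all" is just the union of the individual extras:
extras["all"] = sorted({pkg for deps in extras.values() for pkg in deps})
print(extras["all"])  # ['jax', 'jaxlib', 'tensorflow', 'torch']
```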

build compatibility hacktoberfest-accepted
opened by noklam 4
• #### feat: explainer support pdf

Enable PDF (and other supported formats) savefig in explainer by removing hardcoded extension

```python
import torch
import tsensor as ts

W = torch.rand(d, n_neurons)
b = torch.rand(n_neurons, 1)
X = torch.rand(n, d)
with ts.explain(savefig="my_inspection.pdf"):
    Y = W @ X.T + b
```

enhancement
opened by sbrugman 3
• #### Add tensor element type info

From @sbrugman:

Our concrete issue was with Pytorch (unexpectedly) converting tensors with only integers to float, which later in the program resulted in an error because it could not be used as an index. Another issue was changing size from 32 to 64 bit floats.

It's indeed the element type of the matrix.

There are multiple somewhat related issues:

https://discuss.pytorch.org/t/problems-with-target-arrays-of-int-int32-types-in-loss-functions/140/2
https://discuss.pytorch.org/t/why-pytorch-is-giving-me-hard-time-with-float-long-double-tensor/14678/6

The common denominator with dimensionality debugging is that both type and dimensionality are practically hidden from the user:

```python
import numpy as np
import tsensor as ts

x = np.arange(6, dtype=np.float32)

with ts.explain(savefig="types.pdf"):
    print(x.dtype)
    print((x*x).dtype)
    print((np.sin(x)).dtype)
    print((x + np.arange(6)).dtype)
    print((np.multiply.outer(x, np.arange(2.0))).dtype)
    print((np.outer(x, np.arange(2.0))).dtype)
```
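The index error mentioned in this issue is easy to reproduce with plain NumPy: indexing with a float raises, which is why a silent int-to-float promotion elsewhere in the program surfaces as a confusing error much later (a minimal illustration, not from the issue itself):

```python
import numpy as np

a = np.arange(6)
idx = np.float64(2.0)   # a "numeric" index that is secretly a float

try:
    print(a[idx])
except IndexError as e:
    # NumPy only accepts integer (or slice/ellipsis) indices
    print("IndexError:", e)
```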

enhancement
opened by parrt 3
• #### Suppress visualisation of () as operator in tree

Hello.

I am using tensor-sensor to visualise DAGs of the domain specific language Slate, which is used in a code generation framework for Finite element methods called Firedrake. Slate expresses linear algebra operations on tensors. I am using tensor-sensor to visualise the DAG before and after an optimisation pass. An example would be the following:

Before optimisation: triplemul_beforeopt.pdf After optimisation: tripleopt_afteropt.pdf

While the visualisation of the tree is correct in both cases, I would quite like to suppress the node for the brackets (i.e. for the SubExpr node) to avoid confusion about the amount of temporaries generated. Is there already a way of controlling this as a user and if not would there be interest in supporting it?

Best wishes and thanks in advance, Sophia

enhancement
opened by sv2518 3
• #### Seem a problem with np.ones() function

Hi! Thank you for your brilliant work on tsensor, which helps me debug more effectively.

But recently, when I run this code in Jupyter or PyCharm, it always leads to a KeyError:

```python
with ts.explain():
    a = np.ones(3)
```

KeyError report:

```
KeyError                                  Traceback (most recent call last)
in ()
      1 with ts.explain():
----> 2     a = np.ones(3)
      3

F:\anaconda_file2\envs\test\lib\site-packages\numpy\core\numeric.py in ones(shape, dtype, order)
    206     """
    207     a = empty(shape, dtype, order)
--> 208     multiarray.copyto(a, 1, casting='unsafe')
    209     return a
    210

<__array_function__ internals> in copyto(*args, **kwargs)

F:\anaconda_file2\envs\test\lib\site-packages\tsensor\analysis.py in listener(self, frame, event, arg)
    266
    267     def listener(self, frame, event, arg):
--> 268         module = frame.f_globals['__name__']
    269         info = inspect.getframeinfo(frame)
    270         filename, line = info.filename, info.lineno

KeyError: '__name__'
```

Is there anything I can do to fix this problem? Grateful to gain any feedback!
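The crash happens because some frames (here, inside NumPy's array-function dispatch) have no `__name__` in their globals, so the direct `frame.f_globals['__name__']` lookup in tsensor's listener raises. A defensive lookup avoids the KeyError (a sketch of the idea, not the shipped patch):

```python
import sys

def module_name_of(frame):
    # .get() tolerates frames whose globals lack '__name__'
    return frame.f_globals.get("__name__", "<unknown>")

print(module_name_of(sys._getframe()))
```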

bug
opened by DemonsHunter 3
• #### Unhandled statements cause exceptions (Was: Nested calls to clarify can raise stacked Exceptions)

Hello,

I created a decorator to call clarify around the forward function of my custom Pytorch models (derived from torch.nn.Module).

Said decorator looks like this:

```python
def clarify(function: callable) -> callable:
    """Clarify decorator."""
    def call_clarify(*args, **kwargs):
        with tsensor.clarify(fontname="DejaVu Sans"):
            return function(*args, **kwargs)
    return call_clarify
```


When doing machine learning using Pytorch, models (derived from torch.nn.Module) can sometimes be "stacked". In a translation task, an EncoderDecoder's forward will call its Decoder's forward, itself calling the forward of an Attention module, for example.

In such a case, this results in nested clarify calls, which raise a succession of Exceptions, because some of the topmost clarify functions do not exit correctly. To be more specific, at l.124 of analysis.py, self.view can be None, which then raises an Exception on self.view.show().

A quick fix (that I did locally) was adding a check at line 131:

```python
if self.view:
    if self.show == 'viz':
        self.view.show()
    augment_exception(exc_value, self.view.offending_expr)
```


However, I am not sure this would be the best fix possible, as I am not sure whether that is a common problem or not and how/if this is intended to be fixed. What do you think?

enhancement
opened by clefourrier 3
• #### Contribution Guidelines

I've a feeling that at some point many (including me) would like to contribute to this library, and it would be great if it had some contribution guidelines.

suggestion
opened by skat00sh 3
• #### Boxes for operands packed too tightly

There is some overlap with these boxes:

```python
import torch
import tsensor

n = 200         # number of instances
d = 764         # number of instance features
nhidden = 256

Whh = torch.eye(nhidden, nhidden)   # Identity matrix
Uxh = torch.randn(nhidden, d)
bh  = torch.zeros(nhidden, 1)
h = torch.randn(nhidden, 1)         # fake previous hidden state h
r = torch.randn(nhidden, 3)         # fake this computation
X = torch.rand(n,d)                 # fake input

with tsensor.explain(savefig):
    r*h
```

bug
opened by parrt 0
• #### Showing too many matrices for complicated operands

The following code generates an exception, but instead of showing the result of the operand subexpressions, it shows all of their constituent pieces:

```python
import torch
import tsensor

n = 200         # number of instances
d = 764         # number of instance features
nhidden = 256

Whh = torch.eye(nhidden, nhidden)   # Identity matrix
Uxh = torch.randn(nhidden, d)
bh  = torch.zeros(nhidden, 1)
h = torch.randn(nhidden, 1)         # fake previous hidden state h
# r = torch.randn(nhidden, 1)         # fake this computation
r = torch.randn(nhidden, 3)         # fake this computation
X = torch.rand(n,d)                 # fake input

# Following code raises an exception
with tsensor.clarify():
    h = torch.tanh(Whh @ (r*h) + Uxh @ X.T + bh)  # state vector update equation
```

bug enhancement
opened by parrt 0
• #### Improvement: See into nn.Sequential models

The following exception not only generates a huge stack trace, but TensorSensor's error-message augmentation also indicates that Y = model(X) is the issue, because it does not descend into tensor library code. It would be better to let it see inside the model pipeline so that it can notice that the error is actually here:

```python
nn.Linear(10, n_neurons)
```


which should be

```python
nn.Linear(n_neurons, 10)
```


Here's the full example:

```python
import torch
from torch import nn
import tsensor

n = 20
n_neurons = 50
model = nn.Sequential(
    nn.Linear(784, n_neurons), # 28x28 flattened image
    nn.ReLU(),
    nn.Linear(10, n_neurons),  # 10 output classes (0-9) <---- ooops! reverse those
    nn.Softmax(dim=1)
)
X = torch.rand(n, 784) # n instances of feature vectors with 784 pixels
with tsensor.clarify():
    Y = model(X)
```


The error message we get is here:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
      1 with tsensor.clarify():
----> 2     Y = model(X)

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
    137     def forward(self, input):
    138         for module in self:
--> 139             input = module(input)
    140         return input
    141

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
     94
     95     def forward(self, input: Tensor) -> Tensor:
---> 96         return F.linear(input, self.weight, self.bias)
     97
     98     def extra_repr(self) -> str:

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
   1846         return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
   1848
   1849

RuntimeError: mat1 and mat2 shapes cannot be multiplied (20x50 and 10x50)
Cause: model(X) tensor arg X w/shape [20, 784]
```

enhancement
opened by parrt 0
• #### various enhancements

Great work! Debugging these math-expression lines is a very big problem.

Not only scalar, vector, and matrix: support higher-rank tensors too.

### Visualizing 3D tensors and beyond

A tensor's dimensions read left to right, [N, C, H, W]: N → C → H → W. Dimensions could be drawn like a flattened onion or cabbage, a box within a box for each dim, with each dimension carrying its real semantic label (N, C, H, W, etc.). Other ideas:

• namedtensor support
• plaidml DSL types: Tensor, TensorDim, TensorIndex
• represent the actual data: audio or text as a 1D plot (vector), images as a 2D plot (matrix), etc.

### Expression graph

• different colors for input variables and leaf parameters/variables vs. temporary variables/activations
• "road width" for elementwise leaves; elementwise, slice, or other connections shown as edges
• print the AST in forward & backward mode

### Animation

• step through expression computation (debug mode)
• animate slice, reshape, .T(), and other N-d array manipulations
• TensorDim/TensorIndex operators, e.g. matmul: m@v = (v.squeeze(0).expand_as(w) * w).sum(1, keepdim=True).unsqueeze(1); more: conv2d, etc.

### Interactive

• interactive building blocks (visual programming)
• reverse interaction (debugging): select tensor elements and follow them through the expression graph
• multiple views of a selected element, like convolution-visualizer: Input (Input grad), Weight (Weight grad), Output (Output grad)

### NN support

• NN module visualization (conv2d)
• bigger computation graphs: pytorchrec
• multi-computation-graph visualization and live debugging
• einops

enhancement
opened by koke2c95 2
• #### 1.0 (Dec 11, 2021)

The library seems to work and is stable so this is the 1.0 release. Thanks to @sbrugman, we now display tensor element type information using both color and text. The color indicates the type (int, float, complex, ...) and the transparency level indicates the precision; the more saturated the color, the higher the precision. Github doesn't seem to render notebooks with pictures very well anymore so take a look at the colab version.

See PR Include dtype information by @sbrugman and then my subsequent changes.


• #### 0.1a1 (Sep 2, 2020)

###### Terence Parr
Creator of the ANTLR parser generator. Professor at Univ of San Francisco, computer science and data science. Working mostly on machine learning stuff now.