A 3D visualizer for ML models and internal tensors

Overview


The free Zetane Viewer is a tool for understanding and accelerating discovery in machine learning and artificial neural networks. It helps open the AI black box by visualizing a model's architecture and internal data (feature maps, weights, biases, and layer output tensors). Think of it as neuroimaging, or brain imaging, for artificial neural networks and machine learning algorithms.

You can also launch your own Zetane workspace directly from your existing scripts or notebooks via a few commands using the Zetane Python API.
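For example, here is a minimal sketch of such a script. The Context object follows the Zetane getting-started docs, but the model()/onnx()/update() call chain is illustrative and may differ in your installed version; check docs.zetane.com for the exact calls.

    # Minimal sketch, assuming the zetane package exposes a Context object as in
    # the getting-started docs; model(), onnx(), and update() are illustrative
    # names, not confirmed API.
    import zetane

    zcontext = zetane.Context()    # launch (or connect to) the Zetane engine
    zmodel = zcontext.model()      # hypothetical: create a model entity
    zmodel.onnx("my_model.onnx")   # hypothetical: attach an ONNX file
    zmodel.update()                # hypothetical: render it in the 3D workspace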

[Screenshot: model nodes and tensors in the 3D workspace]


Zetane Viewer

Installation

You can install the free Zetane Viewer for Windows, Linux, and Mac, and explore ZTN and ONNX files.

Download for Windows

Download for Linux

Download for Mac

Tutorial

In this video, we show how to load a Zetane or ONNX model, navigate the model, and view different tensors.

Below are step-by-step instructions for loading and inspecting a model in the Zetane Viewer:

  • How to load a model

The viewer supports both .onnx and .ztn files. The ZTN files were generated from the Keras and PyTorch scripts shared in this Git repository. After launching the viewer, to load a Zetane model, simply click “Load Zetane Model” in the DATA I/O menu. To load an ONNX model, click “Import ONNX Model” in the same menu. Below you can access ZTN files for a few models to load; you can also get ONNX files from the ONNX Model Zoo.
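If you download an ONNX file, a quick sanity check with the onnx Python package can rule out a corrupt or invalid file before you import it into the viewer; the filename below is illustrative:

    # Validate an ONNX file and print its expected input before loading it.
    import onnx

    model = onnx.load("emotion-ferplus.onnx")  # illustrative filename
    onnx.checker.check_model(model)            # raises if the file is invalid
    print(model.graph.input[0])                # expected input name and dims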

[Screenshot: loading a model]

When a model is displayed in the Zetane engine, any component of the model can be accessed in a few clicks.

At the highest level, we have the model architecture, which is composed of interconnected nodes and tensors. Each node represents an operator in the computational graph. An input tensor is passed to the model and, as it passes through the nodes, is transformed into intermediate tensors until it reaches the model's output tensor. In the Zetane engine, data flows from left to right.
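The same graph can be walked programmatically with the onnx package, which makes the node/tensor structure concrete; each line printed below is one operator node of the kind the viewer draws:

    # List every operator node in an ONNX graph with its input/output tensors.
    import onnx

    model = onnx.load("my_model.onnx")  # illustrative filename
    for node in model.graph.node:
        print(node.op_type, list(node.input), "->", list(node.output))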

[Screenshot: model architecture]

  • How to navigate

You can navigate the model viewer window by right-clicking and dragging to explore the space, and by using the scroll wheel to zoom in and out. Here is the complete list of navigation instructions. You can change the behavior of the mouse wheel (zoom or navigate) via the Mouse Zoom toggle in the top menu.

[Screenshot: zoom navigation]

  • Loading custom model inputs

After loading a model, you may want to send your own inputs to it for inference. Zetane supports loading .npy, .npz, .png, .jpg, .pb (protobuf), .tiff, and .hdr files that match the input dimensions of the model. The Zetane engine will attempt to resize the loaded file intelligently (when possible) so the data can be sent to the model. After loading and running the input, you can explore in detail how your model interpreted the input data.
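As a sketch of preparing such an input outside the viewer, the following assumes a model expecting a 1x1x64x64 grayscale image (the input shape of the emotion-ferplus model in the crash dump further down); adjust the shape and filenames to your model:

    # Build a .npy input matching a 1x1x64x64 (NCHW) grayscale model input.
    import numpy as np
    from PIL import Image

    img = Image.open("face.jpg").convert("L").resize((64, 64))
    tensor = np.asarray(img, dtype=np.float32)[None, None, :, :]
    np.save("input.npy", tensor)  # load this file via the viewer's input node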

[Screenshots: input node and tensors]

  • How to inspect different layers and feature maps

For each layer, you can view all the feature maps and filters by clicking “Show Feature Maps” on the node. You can inspect the inputs, outputs, weights, and biases using the tensor view bar.

[Screenshot: feature maps]

  • Tensor view bar

By clicking the associated button, you can visualize the inputs, outputs, weights, and biases (if applicable) of each individual layer. You can also investigate the shape, type, mean, and standard deviation of each tensor.

[Screenshot: tensor view bar]

Statistics about the tensor's values and their distribution are given in the histogram in the top panel, along with the tensor's name and shape. The tensor and its values are represented in the middle panel, and the bottom section contains tensor-visualization parameters and a refresh button that refreshes the tensor. This is useful when the input or the weights are changing in real time.
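These are the standard summary statistics; for reference, they can be reproduced for any saved tensor with numpy:

    # Recompute the tensor panel's summary statistics with numpy.
    import numpy as np

    t = np.load("input.npy")  # illustrative filename
    print("shape:", t.shape, "dtype:", t.dtype)
    print("mean:", t.mean(), "std:", t.std())
    counts, edges = np.histogram(t, bins=20)  # the top-panel distribution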

[Screenshot: tensor panel]

  • Styles of tensor visualization

Tensors can be inspected in different ways, including 3D and 2D views, with or without the actual values.
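The feature-maps view for a rank-3 tensor amounts to slicing it along the channel axis and rendering each slice as an image; here is a minimal matplotlib sketch of the same idea, using a random placeholder tensor:

    # Render a (C, H, W) tensor as one grayscale image per channel.
    import numpy as np
    import matplotlib.pyplot as plt

    fmap = np.random.rand(8, 32, 32)  # placeholder activation tensor
    fig, axes = plt.subplots(1, 8, figsize=(16, 2))
    for ax, channel in zip(axes, fmap):
        ax.imshow(channel, cmap="gray")
        ax.axis("off")
    plt.show()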

[Screenshot: tensor view styles]



Tensor view | Screenshot
N-dimensional tensor projected into 3D space | tensor_viz_3d
N-dimensional tensor projected into 2D space | tensor_viz_2d
Tensor values, with each value colored according to the gradient shown on the x-axis of the distribution histogram | tensor_viz_color-values
Tensor values only | tensor_viz_values
Feature-maps view, when the tensor has three dimensions | tensor_viz_values

Models

We have generated ZTN models for a few networks so you can inspect their architecture and internal tensors in the viewer, and we have provided the code used to generate them. A sketch for exporting your own model to ONNX follows the list below.

Image Classification

Object Detection

Image Segmentation

Body, Face and Gesture Analysis

Image Manipulation

XAI

Classic Machine Learning
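
To inspect a model of your own, a common route is exporting it to ONNX and opening the result via “Import ONNX Model”; here is a minimal PyTorch sketch, where the model and input shape are placeholders:

    # Export a PyTorch model to ONNX for inspection in the viewer.
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights=None).eval()
    dummy = torch.randn(1, 3, 224, 224)  # must match the model's input shape
    torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=11)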




Comments
  • BUG: Viewer crashes when loading any model

    I've tried loading multiple models, including emotion-ferplus (both ONNX and ZTN formats), but they always immediately crash the viewer.

    OS: Ubuntu 20.04, Zetane 1.3.2. Dump:

    LoadUniverse(): ZTN_REQUIRE_LOGIN = 1 
    online = 0 
    ================== ExposeIRnodes: ================== 
    @@@ ExposeIRnodes() n_IR_outputs = 51. 
     <- [Parameter1367_reshape1. 
     <- [Minus340_Output_0. 
     <- [Block352_Output_0. 
     <- [Convolution362_Output_0. 
     <- [Plus364_Output_0. 
     <- [ReLU366_Output_0. 
     <- [Convolution380_Output_0. 
     <- [Plus382_Output_0. 
     <- [ReLU384_Output_0. 
     <- [Pooling398_Output_0. 
     <- [Dropout408_Output_0. 
     <- [Convolution418_Output_0. 
     <- [Plus420_Output_0. 
     <- [ReLU422_Output_0. 
     <- [Convolution436_Output_0. 
     <- [Plus438_Output_0. 
     <- [ReLU440_Output_0. 
     <- [Pooling454_Output_0. 
     <- [Dropout464_Output_0. 
     <- [Convolution474_Output_0. 
     <- [Plus476_Output_0. 
     <- [ReLU478_Output_0. 
     <- [Convolution492_Output_0. 
     <- [Plus494_Output_0. 
     <- [ReLU496_Output_0. 
     <- [Convolution510_Output_0. 
     <- [Plus512_Output_0. 
     <- [ReLU514_Output_0. 
     <- [Pooling528_Output_0. 
     <- [Dropout538_Output_0. 
     <- [Convolution548_Output_0. 
     <- [Plus550_Output_0. 
     <- [ReLU552_Output_0. 
     <- [Convolution566_Output_0. 
     <- [Plus568_Output_0. 
     <- [ReLU570_Output_0. 
     <- [Convolution584_Output_0. 
     <- [Plus586_Output_0. 
     <- [ReLU588_Output_0. 
     <- [Pooling602_Output_0. 
     <- [Dropout612_Output_0. 
     <- [Dropout612_Output_0_reshape0. 
     <- [Times622_Output_0. 
     <- [Plus624_Output_0. 
     <- [ReLU636_Output_0. 
     <- [Dropout646_Output_0. 
     <- [Times656_Output_0. 
     <- [Plus658_Output_0. 
     <- [ReLU670_Output_0. 
     <- [Dropout680_Output_0. 
     <- [Times690_Output_0. 
    node_name = Node_0000000000_Times622_reshape1_Reshape. 
     -> [Parameter1367_reshape1. 
    node_name = Node_0000000001_Minus340_Sub. 
     -> [Minus340_Output_0. 
    node_name = Node_0000000002_Block352_Div. 
     -> [Block352_Output_0. 
    node_name = Node_0000000003_Convolution362_Conv. 
     -> [Convolution362_Output_0. 
    node_name = Node_0000000004_Plus364_Add. 
     -> [Plus364_Output_0. 
    node_name = Node_0000000005_ReLU366_Relu. 
     -> [ReLU366_Output_0. 
    node_name = Node_0000000006_Convolution380_Conv. 
     -> [Convolution380_Output_0. 
    node_name = Node_0000000007_Plus382_Add. 
     -> [Plus382_Output_0. 
    node_name = Node_0000000008_ReLU384_Relu. 
     -> [ReLU384_Output_0. 
    node_name = Node_0000000009_Pooling398_MaxPool. 
     -> [Pooling398_Output_0. 
    node_name = Node_0000000010_Dropout408_Dropout. 
     -> [Dropout408_Output_0. 
    node_name = Node_0000000011_Convolution418_Conv. 
     -> [Convolution418_Output_0. 
    node_name = Node_0000000012_Plus420_Add. 
     -> [Plus420_Output_0. 
    node_name = Node_0000000013_ReLU422_Relu. 
     -> [ReLU422_Output_0. 
    node_name = Node_0000000014_Convolution436_Conv. 
     -> [Convolution436_Output_0. 
    node_name = Node_0000000015_Plus438_Add. 
     -> [Plus438_Output_0. 
    node_name = Node_0000000016_ReLU440_Relu. 
     -> [ReLU440_Output_0. 
    node_name = Node_0000000017_Pooling454_MaxPool. 
     -> [Pooling454_Output_0. 
    node_name = Node_0000000018_Dropout464_Dropout. 
     -> [Dropout464_Output_0. 
    node_name = Node_0000000019_Convolution474_Conv. 
     -> [Convolution474_Output_0. 
    node_name = Node_0000000020_Plus476_Add. 
     -> [Plus476_Output_0. 
    node_name = Node_0000000021_ReLU478_Relu. 
     -> [ReLU478_Output_0. 
    node_name = Node_0000000022_Convolution492_Conv. 
     -> [Convolution492_Output_0. 
    node_name = Node_0000000023_Plus494_Add. 
     -> [Plus494_Output_0. 
    node_name = Node_0000000024_ReLU496_Relu. 
     -> [ReLU496_Output_0. 
    node_name = Node_0000000025_Convolution510_Conv. 
     -> [Convolution510_Output_0. 
    node_name = Node_0000000026_Plus512_Add. 
     -> [Plus512_Output_0. 
    node_name = Node_0000000027_ReLU514_Relu. 
     -> [ReLU514_Output_0. 
    node_name = Node_0000000028_Pooling528_MaxPool. 
     -> [Pooling528_Output_0. 
    node_name = Node_0000000029_Dropout538_Dropout. 
     -> [Dropout538_Output_0. 
    node_name = Node_0000000030_Convolution548_Conv. 
     -> [Convolution548_Output_0. 
    node_name = Node_0000000031_Plus550_Add. 
     -> [Plus550_Output_0. 
    node_name = Node_0000000032_ReLU552_Relu. 
     -> [ReLU552_Output_0. 
    node_name = Node_0000000033_Convolution566_Conv. 
     -> [Convolution566_Output_0. 
    node_name = Node_0000000034_Plus568_Add. 
     -> [Plus568_Output_0. 
    node_name = Node_0000000035_ReLU570_Relu. 
     -> [ReLU570_Output_0. 
    node_name = Node_0000000036_Convolution584_Conv. 
     -> [Convolution584_Output_0. 
    node_name = Node_0000000037_Plus586_Add. 
     -> [Plus586_Output_0. 
    node_name = Node_0000000038_ReLU588_Relu. 
     -> [ReLU588_Output_0. 
    node_name = Node_0000000039_Pooling602_MaxPool. 
     -> [Pooling602_Output_0. 
    node_name = Node_0000000040_Dropout612_Dropout. 
     -> [Dropout612_Output_0. 
    node_name = Node_0000000041_Times622_reshape0_Reshape. 
     -> [Dropout612_Output_0_reshape0. 
    node_name = Node_0000000042_Times622_MatMul. 
     -> [Times622_Output_0. 
    node_name = Node_0000000043_Plus624_Add. 
     -> [Plus624_Output_0. 
    node_name = Node_0000000044_ReLU636_Relu. 
     -> [ReLU636_Output_0. 
    node_name = Node_0000000045_Dropout646_Dropout. 
     -> [Dropout646_Output_0. 
    node_name = Node_0000000046_Times656_MatMul. 
     -> [Times656_Output_0. 
    node_name = Node_0000000047_Plus658_Add. 
     -> [Plus658_Output_0. 
    node_name = Node_0000000048_ReLU670_Relu. 
     -> [ReLU670_Output_0. 
    node_name = Node_0000000049_Dropout680_Dropout. 
     -> [Dropout680_Output_0. 
    node_name = Node_0000000050_Times690_MatMul. 
     -> [Times690_Output_0. 
    node_name = Node_0000000051_Plus692_Add. 
    @@@ ExposeIRnodes() [Outputs] = 1 --> 52. 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ***************** ValidateIRnodes: ***************** 
    ====================================  
    input_dims = [ 1, 1, 64, 64, ]. 
    --> input 0[Input3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 1[Constant339]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 2[Constant343]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 3, 3, ]. 
    --> input 3[Parameter3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 4[Parameter4]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 64, 64, 3, 3, ]. 
    --> input 5[Parameter23]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 6[Parameter24]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 64, 3, 3, ]. 
    --> input 7[Parameter63]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 8[Parameter64]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 128, 3, 3, ]. 
    --> input 9[Parameter83]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 10[Parameter84]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 128, 3, 3, ]. 
    --> input 11[Parameter575]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 12[Parameter576]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 13[Parameter595]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 14[Parameter596]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 15[Parameter615]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 16[Parameter616]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 17[Parameter655]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 18[Parameter656]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 19[Parameter675]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 20[Parameter676]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 21[Parameter695]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 22[Parameter696]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 23[Dropout612_Output_0_reshape0_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 256, 4, 4, 1024, ]. 
    --> input 24[Parameter1367]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 25[Parameter1367_reshape1_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 26[Parameter1368]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 1024, ]. 
    --> input 27[Parameter1403]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 28[Parameter1404]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 8, ]. 
    --> input 29[Parameter1693]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 8, ]. 
    --> input 30[Parameter1694]: Type 1; [1 dims] tensor  
    --------------- 
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
    output_dims = [ 1, 8, ]. 
    --> output 0/1[Plus692_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 4096, 1024, ]. 
    --> output 1/1[Parameter1367_reshape1]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 2/1[Minus340_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 3/1[Block352_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 4/1[Convolution362_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 5/1[Plus364_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 6/1[ReLU366_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 7/1[Convolution380_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 8/1[Plus382_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 9/1[ReLU384_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 10/1[Pooling398_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 11/1[Dropout408_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 12/1[Convolution418_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 13/1[Plus420_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 14/1[ReLU422_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 15/1[Convolution436_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 16/1[Plus438_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 17/1[ReLU440_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 18/1[Pooling454_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 19/1[Dropout464_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 20/1[Convolution474_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 21/1[Plus476_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 22/1[ReLU478_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 23/1[Convolution492_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 24/1[Plus494_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 25/1[ReLU496_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 26/1[Convolution510_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 27/1[Plus512_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 28/1[ReLU514_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 29/1[Pooling528_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 30/1[Dropout538_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 31/1[Convolution548_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 32/1[Plus550_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 33/1[ReLU552_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 34/1[Convolution566_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 35/1[Plus568_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 36/1[ReLU570_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 37/1[Convolution584_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 38/1[Plus586_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 39/1[ReLU588_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 40/1[Pooling602_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 41/1[Dropout612_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 4096, ]. 
    --> output 42/1[Dropout612_Output_0_reshape0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 43/1[Times622_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 44/1[Plus624_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 45/1[ReLU636_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 46/1[Dropout646_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 47/1[Times656_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 48/1[Plus658_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 49/1[ReLU670_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 50/1[Dropout680_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 8, ]. 
    --> output 51/1[Times690_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 
    ValidateIRnodes() 52 --> 52=52=52 valid output tensors  
    --------------- 
    ----------------- ValidateIRnodes. ----------------- 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    *** ExposeIRnodes: 1 --> 52 Outputs. 
    *** type: [FLOAT] ~?= STRING 
    TVZ10()  input_dims = [ 1, 1, 64, 64, ]. 
    --> input [0]: Type FLOAT; [4 dims] tensor  
    --------------- 
    Warning: Could not load "/opt/zetane/lib/graphviz/libgvplugin_pango.so.6" - file not found
    terminate called after throwing an instance of 'std::invalid_argument'
      what():  stod
    /usr/bin/zetane: line 26: 37102 Aborted                 (core dumped) ./Zetane --server
    
    
    opened by paulgavrikov 4
  • Free Trial not Available

    After clicking "upgrade 2 pro", I arrive at your pricing page. Clicking on "free trial" redirects me to the documentation, which instructs me to click the button "upgrade 2 pro". Now I'm stuck in an infinite loop and unhappy about it.

    I'd like to successfully exit this loop and try your product. Any tips?

    opened by Whadup 3
  • Sorry, I installed the deb on Ubuntu 20.04, but when I use it to load an input JPG it crashes. How can I get the log to find the cause?

    (base) chenxin@chenxin-Nitro-AN515-52:~/下载$ sudo dpkg -i Zetane-1.7.0.deb
    (Reading database ... 330200 files and directories currently installed.)
    Preparing to unpack Zetane-1.7.0.deb ...
    Unpacking zetane (1.7.0) over (1.7.0) ...
    Setting up zetane (1.7.0) ...
    Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
    Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
    Processing triggers for mime-support (3.64ubuntu1) ...
    Processing triggers for hicolor-icon-theme (0.17-2) ...

    opened by mathpopo 2
  • Engine is not launched after running example 'hello world' code

    Following the guide at https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine launched but did not show anything.

    OS: Windows 10.0, Zetane 1.7.0

    Console output:

    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane... Did not connect!
    Dialing Zetane... Did not connect!
    Dialing Zetane... Connected to Zetane Engine!

    opened by wftubby 0
  • Engine is not launched after running example 'hello world' code

    Following the guide at https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine did not launch and kept printing "Dialing Zetane".

    OS: Ubuntu 18.04, Zetane 1.7.0

    Console output:

    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    
    
    opened by akzing-hz 6
Releases (v1.7.4)
  • v1.7.4 (Jun 1, 2022)

    Viewer Engine

    • Added support for ONNX 1.10.2
    • Added support for ONNX Runtime 1.10.0
    • Added support for Keras/TensorFlow 2.9.1
    • Improved progress notifications when loading Keras models
    • Fixed a crash caused by nested Keras models
    • Reduced Tensor viewer memory usage
    • Dropped support for Ubuntu 16.04 LTS. See the up-to-date Minimum Requirements.
    • Deprecated support for macOS 10.14 Mojave

    API

    • Added the Zetane API context manager to automate view updates and cleanup, resulting in less verbose code.
    • Added support for Python 3.9
    • Dropped support for Python 3.6
    • Fixed protobuf dependency versioning
    Source code (tar.gz)
    Source code (zip)
    Zetane-1.7.4.deb (273.45 MB)
    Zetane-1.7.4.dmg (312.91 MB)
    Zetane-1.7.4.msi (300.01 MB)
  • 1.7.0 (Nov 15, 2021)

  • 1.6.2 (Sep 22, 2021)

    • Added output blocks for models to prevent navigation to the end of the model graph
    • Added a Top-K output view for tensors that match certain shapes, e.g. (1, N); classification models now have a more human-understandable output
    • Updated to ONNX Runtime 1.8.1 to support the latest ONNX opset
    • Improved autodetection of input shapes to allow more inputs to pass inference without shape errors
    • Fixed RAM overuse
    • Fixed Mesh API issues
    Source code (tar.gz)
    Source code (zip)
    Zetane-1.6.2.deb (347.23 MB)
    Zetane-1.6.2.dmg (326.94 MB)
    Zetane-1.6.2.msi (301.44 MB)
  • 1.5.0 (Jun 16, 2021)

  • 1.4.0 (May 26, 2021)

  • 1.3.0 (Apr 21, 2021)

    • When ONNX models are loaded, an inference pass with sample data is run by default. That means all tensors, feature maps, weights, and biases should be viewable immediately after the input loads. Please let us know if there are models that don't succeed at this initial pass so we can fix them!
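    For reference, this default pass can be reproduced outside the engine with onnxruntime; the filename below is illustrative, and symbolic batch dimensions are replaced with 1:

        # Run a sample-data inference pass like the viewer's default one.
        import numpy as np
        import onnxruntime as ort

        sess = ort.InferenceSession("emotion-ferplus.onnx")
        inp = sess.get_inputs()[0]  # input name, shape, and type
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        x = np.random.rand(*shape).astype(np.float32)
        print([o.shape for o in sess.run(None, {inp.name: x})])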

    (PRO) User input nodes are now attached to the model architecture diagram. When using Zetane Viewer Pro ($15/month), you can load custom inputs and send them through the model. Currently supported formats are .npy, .npz, .pb, and the majority of image formats (jpg, png, tiff, hdr, pic).

    (PRO) When user inputs are misshapen, the engine will display an error describing the shape the model expects. Note that this feature is also usable by free users, without the error popup: the input node will load the user input and show its dimensions before attempting to run inference with the model.

    (PRO) Any errors during model inference will also appear in the UI; the shape error above is one example. Individual graph operations may fail at any point during the inference pass, and the engine will attempt to populate the graph outputs up to the point of the error, giving a stack trace of the model run.

    As always, we welcome feedback, bug reports, and any suggestions you might have.

    Source code(tar.gz)
    Source code(zip)
    Zetane-1.3.0.deb(395.53 MB)
    Zetane-1.3.0.dmg(451.85 MB)
    Zetane-1.3.0.msi(452.15 MB)
  • 1.2.0 (Apr 5, 2021)

    • Shape mismatch errors encountered when running model inference are shown in the UI, describing the expected input and the given input. (PRO)
    • Changed the default UI interaction: the mouse wheel now zooms by default; right-click and drag to pan the UI.
    • Panels now scroll or move on hover, not just after being selected.
    • Tensor viewer displays the original shape from file or API, without reordering the dimensions to fit the view panel.
    • User notification for version upgrade now appears in the UI.
    • Mac / Linux now run in API mode by default.
    • Added a new ZTN snapshot for XAI features.
    • User inputs now show above the Model Explorer panel's input node.
    • A number of bug fixes and performance improvements
    Source code (tar.gz)
    Source code (zip)
    Zetane-1.2.0.deb (373.68 MB)
    Zetane-1.2.0.dmg (451.16 MB)
    Zetane-1.2.0.msi (438.49 MB)
  • 1.1.4 (Feb 22, 2021)

Owner: Zetane Systems