Visualizer for neural network, deep learning, and machine learning models

Overview

Netron is a viewer for neural network, deep learning, and machine learning models.

Netron supports ONNX (.onnx, .pb, .pbtxt), Keras (.h5, .keras), TensorFlow Lite (.tflite), Caffe (.caffemodel, .prototxt), Darknet (.cfg), Core ML (.mlmodel), MNN (.mnn), MXNet (.model, -symbol.json), ncnn (.param), PaddlePaddle (.zip, __model__), Caffe2 (predict_net.pb), Barracuda (.nn), Tengine (.tmfile), TNN (.tnnproto), RKNN (.rknn), MindSpore Lite (.ms), UFF (.uff).

Netron has experimental support for TensorFlow (.pb, .meta, .pbtxt, .ckpt, .index), PyTorch (.pt, .pth), TorchScript (.pt, .pth), OpenVINO (.xml), Torch (.t7), Arm NN (.armnn), BigDL (.bigdl, .model), Chainer (.npz, .h5), CNTK (.model, .cntk), Deeplearning4j (.zip), MediaPipe (.pbtxt), ML.NET (.zip), scikit-learn (.pkl), TensorFlow.js (model.json, .pb).

Install

macOS: Download the .dmg file or run brew install netron

Linux: Download the .AppImage file or run snap install netron

Windows: Download the .exe installer or run winget install netron

Browser: Start the browser version.

Python Server: Run pip install netron and netron [FILE] or netron.start('[FILE]').
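
As a minimal sketch (the file name model.onnx is only a placeholder), the Python server can also be started from a script:

    import netron

    # Serve the model file locally and open the viewer in the default browser.
    netron.start('model.onnx')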

Models

Sample model files are available to download or open using the browser version.

Comments
  • Windows app not closing properly

    After the latest update, Netron remains open and keeps using memory and CPU after I close the program. I must close it through Task Manager each time. I am on Windows 10.

    no repro 
    opened by idenc 22
  • TorchScript: ValueError: not enough values to unpack

    • Netron app and version: web app 5.5.9?
    • OS and browser version: Manjaro GNOME on firefox 97.0.1

    Steps to Reproduce:

    1. Use torch.broadcast_tensors.
    2. Export with torch.jit.trace(...).save().
    3. Open the file in netron.app.

    I have also gotten an Unsupported function 'torch.broadcast_tensors' error, but I have been unable to reproduce it due to this current error. Most likely, the fix for the following repro will cover both bugs.

    Please attach or link model files to reproduce the issue if necessary.


    Repro:

    import torch
    
    class Test(torch.nn.Module):
        def forward(self, a, b):
            a, b = torch.broadcast_tensors(a, b)
            assert a.shape == b.shape == (3, 5)
            return a + b
    
    torch.jit.trace(
        Test(),
        (torch.ones(3, 1), torch.ones(1, 5)),
    ).save("foobar.pt")
    

    Zipped foobar.pt: foobar.zip

    help wanted bug 
    opened by pbsds 15
  • OpenVINO support

    • [x] 1. Opening rm_lstm4f.xml results in TypeError (#192)
    • [x] 2. dot files are not opened any more - need to fix it (#192)
    • [x] 3. add preflight check for invalid xml and dot content
    • [x] 6. Add test files to ./test/models.json (#195) (#211)
    • [x] 9. Add support for the version 3 of IR (#196)
    • [x] 10. Category color support (#203)
    • [x] 11. -metadata.json for coloring, documentation and attribute default filtering (#203).
    • [x] 5. Filter attribute defaults based on -metadata.json to show fewer attributes in the graph
    • [ ] 7. Show weight tensors
    • [x] 8. Graph inputs and outputs should be exposed as Graph.inputs and Graph.outputs
    • [x] 12. Move to DOMParser
    • [x] 13. Remove dot support
    feature 
    opened by lutzroeder 15
  • RangeError: Maximum call stack size exceeded

    • Netron app and version: 4.4.8 App and Browser
    • OS and browser version: Windows 10 + Chrome Version 84.0.4147.135

    Steps to Reproduce:

    EfficientDet-d0.zip

    Please attach or link model files to reproduce the issue if necessary.

    help wanted no repro bug 
    opened by ryusaeba 14
  • Debugging Tensorflow Lite Model

    Hi there,

    First off, just wanted to say thanks for creating such a great tool - Netron is very useful.

    I'm having an issue that likely stems from Tensorflow, rather than from Netron, but thought you might have some insights. In my flow, I use TF 1.15 to go from .ckpt --> frozen .pb --> .tflite. Normally it works reasonably smoothly, but a recent run shows an issue with the .tflite file: it is created without errors, it runs, but it performs poorly. Opening it with Netron shows that the activation functions (relu6 in this case) have been removed for every layer. Opening the equivalent .pb file in Netron shows the relu6 functions are present.

    Have you seen any cases in which Netron struggled with a TF Lite model (perhaps it can open, but isn't displaying correctly)? Also, how did you figure out the format for .tflite files (perhaps knowing this would allow me to debug it more deeply)?
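
    For reference, a minimal sketch (assuming TensorFlow is installed; model.tflite is a placeholder name) that lists the tensors inside a .tflite file via the TF Lite interpreter, which can be cross-checked against what Netron displays:

    import tensorflow as tf

    # Load the .tflite flatbuffer and enumerate its tensors.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    for detail in interpreter.get_tensor_details():
        print(detail["index"], detail["name"], detail["shape"], detail["dtype"])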

    Thanks in advance.

    no repro 
    opened by mm7721 12
  • add armnn serialized format support

    Here's a patch to support the Arm NN format (experimental).

    armnn-schema.js is compiled from ArmnnSchema.fbs included in the Arm NN serializer.

    see also:

    armnn: https://github.com/ARM-software/armnn

    As mentioned in #363, I will check the items below:

    • [x] Add sample files to test/models.json and run node test/test.js armnn
    • [x] Add tools/armnn script and sync, schema to automate regenerating armnn-schema.js
    • [x] Add tools/armnn script to run as part of ./Makefile
    • [x] Run make lint
    opened by Tee0125 12
  • TorchScript: Argument names to match runtime

    Hi, I have a question about node names in a .pt model saved with TorchScript. I use Netron to view my .pt model exported by torch.jit.save(), but the node names don't match the real names resolved through the TorchScript interface. It looks like the names in the .pt file are arranged numerically from smallest to largest, but this is clearly not the case when they are parsed from TorchScript's interface. I wonder how this situation can be resolved, thanks a lot! Looking forward to your reply.
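
    A minimal sketch (assuming a scripted or traced model saved as model.pt, a placeholder name) of printing the debug names resolved by the TorchScript interface, for comparison with what Netron shows:

    import torch

    # Load the saved module and walk its graph; outputs carry the debug
    # names assigned by the TorchScript runtime.
    module = torch.jit.load("model.pt")
    print(module.graph)
    for node in module.graph.nodes():
        print(node.kind(), [value.debugName() for value in node.outputs()])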

    help wanted 
    opened by daodaoawaker 11
  • Support torch.fx IR visualization using netron

    torch.fx is a library in PyTorch 1.8 that allows Python-to-Python model transformations. It works by symbolically tracing the PyTorch model into a graph (fx.GraphModule), which can be transformed and finally exported back to code, or used as an nn.Module directly. Currently there is no mechanism to import this graph IR into Netron. An indirect path is to export to ONNX for visualization, which is not as useful when debugging transformations that potentially break ONNX exportability. It seems valuable to visualize the traced graph directly in Netron.
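
    A minimal sketch (the module and its code are illustrative only) of producing the fx.GraphModule IR that this request is about:

    import torch
    import torch.fx

    class Net(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x) + 1

    # Symbolic tracing yields an fx.GraphModule whose .graph is the IR
    # that currently has no direct import path into Netron.
    gm = torch.fx.symbolic_trace(Net())
    print(gm.graph)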

    feature help wanted no repro 
    opened by sjain-stanford 11
  • TorchScript: unsupported functions after update

    I have a lot of basic model files saved in TorchScript, and they could be opened a few weeks ago. However, I cannot open many of them after updating Netron to v3.9.1. Many common functions are no longer supported, e.g. torch.constant_pad_nd, torch.bmm, torch.avg_pool3d.
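
    A hypothetical minimal repro (not from the original report) that saves a TorchScript file exercising some of the listed ops:

    import torch

    class M(torch.nn.Module):
        def forward(self, x):
            # Constant-mode padding typically lowers to torch.constant_pad_nd.
            x = torch.nn.functional.pad(x, (1, 1), value=0.0)
            return torch.bmm(x, x.transpose(1, 2))

    torch.jit.script(M()).save("repro.pt")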

    opened by lujq96 11
  • OpenVINO IR v10 LSTM support

    • Netron app and version: 4.4.4
    • OS and browser version: Windows 10 64bit

    Steps to Reproduce:

    1. Open OpenVINO IR XML file in netron

    Please attach or link model files to reproduce the issue if necessary.

    I cannot share the proprietary model that shows dozens of disconnected nodes, but the one linked below does show disconnected subgraphs after conversion to OpenVINO IR. Note that the IR generated using the --generate_deprecated_IR_V7 option displays correctly.

    https://github.com/ARM-software/ML-KWS-for-MCU/blob/master/Pretrained_models/Basic_LSTM/Basic_LSTM_S.pb

    Convert using:

    python 'C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo.py' --input_model .\Basic_LSTM_S.pb --input=Reshape:0 --input_shape=[1,490] --output=Output-Layer/add

    This results in the following disconnected graph display:

    [Screenshot: disconnected graph display]

    no repro external bug 
    opened by mdeisher 10
  • Full support for scikit-learn (joblib)

    For recoverable estimator persistence, scikit-learn recommends joblib instead of pickle. Side note: it is possible to export trained models to ONNX or PMML, but those estimators are not recoverable. For more information, refer to the scikit-learn documentation on model persistence.
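
    A minimal sketch (the estimator and file name are illustrative) of the joblib persistence flow this request refers to:

    import joblib
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=200).fit(X, y)

    # joblib keeps the estimator fully recoverable, unlike ONNX/PMML export.
    joblib.dump(model, "model.joblib")
    restored = joblib.load("model.joblib")
    print(restored.predict(X[:3]))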

    bug 
    opened by fkromer 9
  • Export full size image

    I have an ONNX file successfully exported from mmsegmentation (Swin Transformer), a huge model (975.4 MB). I managed to open it in Netron; however, when I try to export it and preview it at full size, it is blurred.

    Is there any way I can fix this? Thanks.

    no repro bug 
    opened by adrianodac 0
  • TorchScript: torch.jit.mobile.serialization support

    Export PyTorch model to FlatBuffers file:

    import torch
    import torchvision
    model = torchvision.models.resnet34(weights=torchvision.models.ResNet34_Weights.DEFAULT)
    torch.jit.save_jit_module_to_flatbuffer(torch.jit.script(model), 'resnet34.ff')
    

    Sample files: scriptmodule.ff.zip squeezenet1_1_traced.ff.zip

    feature 
    opened by lutzroeder 0
  • MegEngine: fix some bugs

    Fix some bugs in MegEngine C++ model (.mge) visualization:

    1. show the shapes of intermediate tensors;
    2. fix matching of the model identifier (mgv2) when leading information may be present;

    Please help review, thanks!

    opened by Ysllllll 0
  • TorchScript server

    import torch
    import torchvision
    import torch.utils.tensorboard
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
    script = torch.jit.script(model)
    script.save('fasterrcnn_resnet50_fpn.pt')
    with torch.utils.tensorboard.SummaryWriter('log') as writer:
        writer.add_graph(script, ())
    

    fasterrcnn_resnet50_fpn.pt.zip

    feature 
    opened by lutzroeder 0
Owner
Lutz Roeder