ONNX Command-Line Toolbox

Overview

  • Aims to improve your experience of investigating ONNX models.
  • Use it like onnx infershape /path/to/model.onnx. (See the Usage section for more.)

Installation

Installing via the GitHub repo is recommended for the latest functionality.

pip install git+https://github.com/jackwish/onnxcli.git

Two alternative ways are:

  1. Install the PyPI package: pip install onnxcli
  2. Download the code and add the tree to your $PYTHONPATH. This is for development purposes, since the command line entry point is different.
    git clone https://github.com/jackwish/onnxcli.git
    export PYTHONPATH=$(pwd)/onnxcli:${PYTHONPATH}
    python onnxcli/cli/dispatcher.py <more args>
    

The onnx draw command requires the dot command (graphviz) to be available on your machine, which can be installed on Ubuntu/Debian with the command below.

sudo apt install -y graphviz

Usage

Once installed, the onnx and onnxcli commands are available on your machine. You can play with commands such as onnx infershape /path/to/model.onnx. The general format is onnx <sub command> <dedicated arguments ...>. The subcommands are described in the sections below.

Check the online help with onnx --help and onnx <subcmd> --help for the latest usage.

infershape

onnx infershape performs shape inference on the ONNX model. It's a CLI wrapper of onnx.shape_inference. You will find it useful for generating shape information for models extracted by onnx extract.
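
For reference, a rough sketch of what this wraps, using the official onnx.shape_inference API (the model paths here are placeholders):

import onnx
from onnx import shape_inference

# Load a model, run shape inference, and save the result. This populates
# GraphProto.value_info with shapes for the activation tensors.
model = onnx.load("/path/to/model.onnx")
inferred = shape_inference.infer_shapes(model)
onnx.save(inferred, "/path/to/model.inferred.onnx")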

extract

onnx extract extracts the sub-model that is determined by the names of the input and output tensors of the subgraph from the original model. It's a CLI wrapper of onnx.utils.extract_model (which I authored in the ONNX repo).
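
For reference, a rough sketch of the underlying onnx.utils.extract_model call; the paths and tensor names here are placeholders:

import onnx

# Cut the subgraph between the named input and output tensors and save it
# as a standalone model (the names must exist in the original graph).
onnx.utils.extract_model(
    "/path/to/model.onnx",     # original model
    "/path/to/submodel.onnx",  # extracted sub-model
    input_names=["input"],     # subgraph input tensor names
    output_names=["output"],   # subgraph output tensor names
)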

inspect

onnx inspect gives you a quick view of the information in the given model. It's inspired by the tf-onnx tool.

When working on deep learning, you may like to take a look at what's inside the model. Netron is powerful but doesn't provide a fine-grained view.

With onnx inspect, you no longer need to scroll the Netron window to look for nodes or tensors. Instead, you can dump the node attributes and tensor values with a single command.

A node example:

$ onnx inspect ./assets/tests/conv.float32.onnx --node --indices 0 --detail

Inpect of model ./assets/tests/conv.float32.onnx
Graph name: 9
Graph inputs: 1
Graph outputs: 1
Nodes in total: 1
ValueInfo in total: 2
Initializers in total: 2
Sparse Initializers in total: 0
Quantization in total: 0

Node information:
  Node "output": type "Conv", inputs "['input', 'Variable/read', 'Conv2D_bias']", outputs "['output']"
  attributes: [name: "dilations" ints: 1 ints: 1 type: INTS, name: "group" i: 1 type: INT, name: "kernel_shape" ints: 3 ints: 3 type: INTS, name: "pads" ints: 1 ints: 1 ints: 1 ints: 1 type: INTS, name: "strides" ints: 1 ints: 1 type: INTS]

A tensor example:

$ onnx inspect ./assets/tests/conv.float32.onnx --tensor --names Conv2D_bias --detail

Inpect of model ./assets/tests/conv.float32.onnx
Graph name: 9
Graph inputs: 1
Graph outputs: 1
Nodes in total: 1
ValueInfo in total: 2
Initializers in total: 2
Sparse Initializers in total: 0
Quantization in total: 0

Tensor information:
  Initializer "Conv2D_bias": type FLOAT, shape [16]
  float data: [0.4517577290534973, -0.014192663133144379, 0.2946248948574066, -0.9742919206619263, -1.2975586652755737, 0.7223454117774963, 0.7835700511932373, 1.7674627304077148, 1.7242872714996338, 1.1230682134628296, -0.2902531623840332, 0.2627834975719452, 1.0175092220306396, 0.5643373131752014, -0.8244842290878296, 1.2169424295425415]
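
If you'd rather script it, here is a rough sketch of reading the same initializer with the official onnx Python API (model path and tensor name taken from the example above):

import onnx
from onnx import numpy_helper

model = onnx.load("./assets/tests/conv.float32.onnx")
for init in model.graph.initializer:
    if init.name == "Conv2D_bias":
        # Convert the raw TensorProto to a NumPy array for easy viewing.
        print(init.name, numpy_helper.to_array(init))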

draw

onnx draw draws the graph in dot, svg, or png format. It gives you a quick view of the type and shape of the tensors that are fed to a specific node. You can view the model topology in an image viewer or browser without waiting for the model to load, which I find really helpful for large models.

If you are viewing the svg in a browser, you can even quickly search for nodes and tensors. Together with onnx inspect, this makes it very efficient to understand the issue you are looking into.

Nodes are drawn as ellipses and tensors as rectangles, where the rounded rectangles are initializers. The type of each node, and the data type and shape of the tensors, are also rendered. Here is a Convolution node example.

[Image: a Convolution node rendered by onnx draw]
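
To illustrate the idea (not the tool's actual implementation), here is a rough sketch that emits a similar dot graph in plain Python; the styling is simplified and the model path is a placeholder:

import onnx

model = onnx.load("/path/to/model.onnx")
lines = ["digraph onnx {"]
for i, node in enumerate(model.graph.node):
    nid = node.name or f"{node.op_type}_{i}"  # fall back to a generated id
    lines.append(f'  "{nid}" [shape=ellipse, label="{node.op_type}"];')
    for tensor in node.input:
        lines.append(f'  "{tensor}" [shape=box];')
        lines.append(f'  "{tensor}" -> "{nid}";')
    for tensor in node.output:
        lines.append(f'  "{tensor}" [shape=box];')
        lines.append(f'  "{nid}" -> "{tensor}";')
lines.append("}")
with open("graph.dot", "w") as f:
    f.write("\n".join(lines))
# Render with: dot -Tsvg graph.dot -o graph.svg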

Contributing

Contributions of new commands and enhancements are welcome. Let's make our lives easier together.

The workflow is pretty simple:

  1. Start with GitHub Codespaces or clone locally.
  • Run make setup to configure the dependencies (or pip install -r ./requirements.txt if you prefer).
  2. Create a new subcommand.
  • Start by copying and modifying infershape.
  • Register the command in the dispatcher.
  • Create a new command line test.
  • Run make test to build and test.
  • Run make check and make format to fix any code style issues.
  3. Try out, debug, commit, push, and open a pull request.
  • The code is protected by CI. You need a passing run before merging.
  • Ask if you have any questions.

License

Apache License Version 2.0.

Comments
  • Some ONNX models don't list activation tensors in GraphProto.value_info

    They should, but they don't. I am not sure why such models behave like this - they cannot pass the ONNX model checker.

    There is probably something wrong with the exporter. I can try to figure out which exporters have such issues.

    For onnxcli, any functionality that depends on walking GraphProto.value_info may not reflect the real model. This is not a defect of onnxcli, but of the models. As a workaround, you can first run shape inference on the model, which fixes the GraphProto.value_info listing issue. A quick check for the issue is sketched below the command.

    onnx infershape /path/to/input/model /path/to/output/model
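
    As an illustration (a rough sketch with placeholder paths, not an onnxcli feature), you can check which activation tensors are missing from GraphProto.value_info:

    import onnx

    model = onnx.load("/path/to/model.onnx")
    listed = {vi.name for vi in model.graph.value_info}
    graph_io = {t.name for t in model.graph.input} | {t.name for t in model.graph.output}
    # Activation tensors are node outputs that are not graph inputs/outputs.
    activations = {out for node in model.graph.node for out in node.output} - graph_io
    print("activations missing from value_info:", sorted(activations - listed))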
    
  • Integrate the onnx dumper

    src: https://github.com/onnx/tensorflow-onnx/blob/master/tools/dump-onnx.py

    Most of them need to be renamed.

    • [x] inspect to check the model
    • [x] dump dot has high priority
    • [ ] print to stdout if no file is specified

  • Optimizer reports "Unresolved value references" since v0.3.0

    Seen in the pipeline https://github.com/zhenhuaw-me/onnxcli/actions/runs/3453474851/jobs/5764096907.

    A simple model works without issue up to optimizer v0.2.7 (verified locally), but starts to fail with optimizer v0.3.0 (verified locally) and still fails with v0.3.2 (the pipeline).

    The failing command is onnx optimize ./assets/tests/conv.float32.onnx optimized.onnx.

  • Overwrite weights (initializers) with fixed data or random data

    BERT-series ONNX models are very large (x GB) and thus not easy to share. We can improve this process by overwriting the weights (initializers):

    • It can be fixed data (e.g. all 0.1 or another specified value), so the model compresses well.
    • After sharing, we can regenerate the weights with NumPy-style random numbers.

    This can only be used as a sharing method; the generated model is not useful for evaluating accuracy. A rough sketch follows the list below.

    For better usage:

    • An annotation will be added when writing fixed data, so re-randomization can be detected automatically.
    • The tensors can be specified by name or size.
    • Only works for FP32/FP16.
    • 0 removed.
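
    A rough sketch of the fixed-data overwrite under these assumptions (placeholder paths; the annotation and name/size selection are omitted):

    import numpy as np
    import onnx
    from onnx import numpy_helper

    model = onnx.load("/path/to/large/model.onnx")
    for init in model.graph.initializer:
        # FP32 only here; the real feature would cover FP16 as well.
        if init.data_type == onnx.TensorProto.FLOAT:
            arr = numpy_helper.to_array(init)
            fixed = np.full_like(arr, 0.1)  # constant data compresses very well
            init.CopyFrom(numpy_helper.from_array(fixed, name=init.name))
    onnx.save(model, "/path/to/shareable.onnx")
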
  • [draw] show tensor information on the edges

    We currently draw tensors as boxes and operators as circles.

    [Image: current draw output with tensors as boxes]

    The graph becomes complex for large models. We could instead draw the tensor information on the edges and keep only operators as nodes.

  • [infershape] should be able to set tensor shapes - inputs and others

    infershape is not very useful if the input shapes are symbolic (dynamic shapes). If the user can set the input shapes, it becomes more powerful:

    • If set to static shapes, the shapes in the model will be known.
    • Even for symbolic shapes, the user can update the input shapes.

    The setting should be optional, and could extend to all the tensors in the model (excluding shape-op-related tensors).

    The interface should be something like below.

    onnx infershape path/to/input/model.onnx path/to/output/model.onnx --tensor-shape t1:[d0,d1] t2:[d0,d1,d3]
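
    A rough sketch of how the shape override could work under the hood (the tensor name t1 and its shape are hypothetical):

    import onnx
    from onnx import shape_inference

    model = onnx.load("path/to/input/model.onnx")
    overrides = {"t1": [1, 3, 224, 224]}  # hypothetical static shape for "t1"
    for inp in model.graph.input:
        if inp.name in overrides:
            dims = inp.type.tensor_type.shape.dim
            for dim, value in zip(dims, overrides[inp.name]):
                dim.dim_value = value  # setting dim_value clears any symbolic dim_param
    onnx.save(shape_inference.infer_shapes(model), "path/to/output/model.onnx")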
    
  • Extract should be able to skip the input tensor names

    We should be able to walk the graph starting from the output tensor names and auto-infer the input names if they are not given.

    It would also be interesting to detect when the user-provided input and output tensor names don't cut a valid subgraph.
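
    A rough sketch of inferring the subgraph inputs by walking backward from the outputs (placeholder path and output name):

    import onnx

    model = onnx.load("/path/to/model.onnx")
    producers = {out: node for node in model.graph.node for out in node.output}
    initializers = {init.name for init in model.graph.initializer}

    needed, stack, seen = set(), ["output"], set()  # start from the subgraph outputs
    while stack:
        name = stack.pop()
        if not name or name in seen or name in initializers:
            continue  # skip optional (empty) inputs, visited tensors, and weights
        seen.add(name)
        node = producers.get(name)
        if node is None:
            needed.add(name)  # produced by no node: it must be a subgraph input
        else:
            stack.extend(node.input)
    print("inferred subgraph inputs:", sorted(needed))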

Releases
  • v0.2.1 (Nov 13, 2022)

    What's Changed

    • Pin onnxoptimizer to 0.2.7 due to the "Unresolved value references" issue. See more in https://github.com/zhenhuaw-me/onnxcli/issues/28
    • convert: enable onnx to json by @zhenhuaw-me in https://github.com/zhenhuaw-me/onnxcli/pull/10
    • inspect: print input and output tensor too by @zhenhuaw-me in https://github.com/zhenhuaw-me/onnxcli/pull/12
    • inspect: dump input output tensor by @zhenhuaw-me in https://github.com/zhenhuaw-me/onnxcli/pull/14
    • inspect: show dimension name instead of value if has any by @zhenhuaw-me in https://github.com/zhenhuaw-me/onnxcli/pull/17
    • draw: gen tensor info for tensors that only have name by @zhenhuaw-me in https://github.com/zhenhuaw-me/onnxcli/pull/18
    • setup: install the dependent python packages by @zhenhuaw-me in https://github.com/zhenhuaw-me/onnxcli/pull/19
    • Check command by @zhenhuaw-me in https://github.com/zhenhuaw-me/onnxcli/pull/21

    Full Changelog: https://github.com/zhenhuaw-me/onnxcli/compare/v0.2.0...v0.2.1

  • v0.2.0 (Jan 8, 2022)

  • v0.1.0 (Dec 24, 2021)
