tf2onnx - Convert TensorFlow, Keras and Tflite models to ONNX.

Overview

tf2onnx converts TensorFlow (tf-1.x or tf-2.x), tf.keras and tflite models to ONNX via the command line or the Python API.

Note: after tf2onnx-1.8.3 we made a change that impacts the output names for the ONNX model. Instead of taking the output names from the TensorFlow graph (for Keras models this is frequently Identity:0), we now use the structured output names of the model, so the output names are identical to the names in the Keras or saved model.

TensorFlow has many more ops than ONNX and occasionally mapping a model to ONNX creates issues.

You can find a list of supported TensorFlow ops and their mapping to ONNX here.

We document the common issues we run into in the Troubleshooting Guide.


Build Type        | OS                      | Python        | TensorFlow         | ONNX opset | Status
Unit Test - Basic | Linux, MacOS*, Windows* | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.5 | 7-13       | Build Status
Unit Test - Full  | Linux, MacOS, Windows   | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.5 | 7-13       | Build Status

Supported Versions

ONNX

tf2onnx will use the ONNX version installed on your system, and will install the latest ONNX version if none is found.

We support ONNX opset-6 to opset-13. By default we use opset-9 for the resulting ONNX graph since most runtimes will support opset-9.

If you want the graph to be generated with a specific opset, use --opset in the command line, for example --opset 13.

TensorFlow

We support tf-1.x graphs and tf-2.x. To keep our test matrix manageable we test tf2onnx running on top of tf-1.12 and up.

When running under tf-2.x, tf2onnx will use the TensorFlow V2 control flow.

You can install tf2onnx on top of tf-1.x or tf-2.x.

Python

We support Python 3.6, 3.7 and 3.8.

Prerequisites

Install TensorFlow

If you don't have TensorFlow installed already, install the desired TensorFlow build, for example:

pip install tensorflow

(Optional) Install runtime

If you want to run tests, install a runtime that can run ONNX models. For example:

ONNX Runtime (available for Linux, Windows, and Mac):

pip install onnxruntime
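
After converting a model, a quick way to sanity-check the result is to load it with onnxruntime and inspect its inputs; a minimal sketch ("model.onnx" stands in for your converted file):

import onnxruntime as ort

# Load the converted model and list its input names, shapes, and types.
sess = ort.InferenceSession("model.onnx")
print([(i.name, i.shape, i.type) for i in sess.get_inputs()])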

Installation

Install from pypi

pip install -U tf2onnx

Install latest from github

pip install git+https://github.com/onnx/tensorflow-onnx

Build and install latest from source (for development)

git clone https://github.com/onnx/tensorflow-onnx

Once dependencies are installed, from the tensorflow-onnx folder call:

python setup.py install

or

python setup.py develop

tensorflow-onnx requires onnx-1.5 or better and will install/upgrade onnx if needed.

To create a wheel for distribution:

python setup.py bdist_wheel

Getting started

To get started with tensorflow-onnx, run the tf2onnx.convert command, providing:

  • the path to your TensorFlow model (where the model is in saved model format)
  • a name for the ONNX output file:

python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx

The above command uses a default of 9 for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset, you can provide the --opset argument to the command. If you are unsure about which opset to use, refer to the ONNX operator documentation.

python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 13 --output model.onnx

If your TensorFlow model is in a format other than saved model, then you need to provide the inputs and outputs of the model graph.

For checkpoint format:

python -m tf2onnx.convert --checkpoint tensorflow-model-meta-file-path --output model.onnx --inputs input0:0,input1:0 --outputs output0:0

For graphdef format:

python -m tf2onnx.convert --graphdef tensorflow-model-graphdef-file --output model.onnx --inputs input0:0,input1:0 --outputs output0:0

If your model is in checkpoint or graphdef format and you do not know the input and output nodes of the model, you can use the summarize_graph TensorFlow utility. The summarize_graph tool does need to be downloaded and built from source. If you have the option of going to your model provider and obtaining the model in saved model format, then we recommend doing so.

You can find an end-to-end tutorial for ssd-mobilenet here.

We recently added support for tflite. You can convert tflite models via the command line, for example:

python -m tf2onnx.convert --opset 13 --tflite tflite-model-path --output model.onnx

CLI reference

python -m tf2onnx.convert
    --saved-model SOURCE_SAVED_MODEL_PATH |
    --checkpoint SOURCE_CHECKPOINT_METAFILE_PATH |
    --tflite SOURCE_TFLITE_PATH |
    --input | --graphdef SOURCE_GRAPHDEF_PB
    --output TARGET_ONNX_MODEL
    [--inputs GRAPH_INPUTS]
    [--outputs GRAPH_OUTPUTS]
    [--inputs-as-nchw inputs_provided_as_nchw]
    [--opset OPSET]
    [--dequantize]
    [--tag TAG]
    [--signature_def SIGNATURE_DEF]
    [--concrete_function CONCRETE_FUNCTION]
    [--target TARGET]
    [--custom-ops list-of-custom-ops]
    [--fold_const]
    [--large_model]
    [--continue_on_error]
    [--verbose]
    [--output_frozen_graph]

Parameters

--saved-model

TensorFlow model as saved_model. We expect the path to the saved_model directory.

--checkpoint

TensorFlow model as checkpoint. We expect the path to the .meta file.

--tflite

Convert a tflite model by providing a path to the .tflite file. Inputs/outputs do not need to be specified.

--input or --graphdef

TensorFlow model as graphdef file.

--output

The target onnx file path.

--inputs, --outputs

TensorFlow model's input/output names, which can be found with the summarize_graph tool. Those names typically end with :0, for example --inputs input0:0,input1:0. Inputs and outputs are not needed for models in saved-model format. Some models specify placeholders with unknown ranks and dims which cannot be mapped to ONNX. In those cases one can add the shape after the input name inside [], for example --inputs X:0[1,28,28,3]. Use -1 to indicate unknown dimensions.

--inputs-as-nchw

By default we preserve the image format of inputs (nchw or nhwc) as given in the TensorFlow model. If your host's native format is nchw (for example on Windows) and the model is written for nhwc, pass --inputs-as-nchw and tensorflow-onnx will transpose the input. Doing so is convenient for the application, and in many cases the converter can optimize the transpose away. For example --inputs input0:0,input1:0 --inputs-as-nchw input0:0 assumes that images are passed into input0:0 as nchw while the given TensorFlow model uses nhwc.

--ignore_default, --use_default

ONNX requires default values for graph inputs to be constant, while Tensorflow's PlaceholderWithDefault op accepts computed defaults. To convert such models, pass a comma-separated list of node names to the ignore_default and/or use_default flags. PlaceholderWithDefault nodes with matching names will be replaced with Placeholder or Identity ops, respectively.
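
For example (the node names here are hypothetical), combining both flags on documented CLI syntax:

python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx --use_default keep_prob --ignore_default training_flag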

--opset

By default we use opset 9 to generate the graph. By specifying --opset the user can override the default to generate a graph with the desired opset. For example --opset 13 would create an ONNX graph that uses only ops available in opset 13. Because older opsets have in most cases fewer ops, some models might not convert with an older opset.

--dequantize

(This is experimental, only supported for tflite)

Produces a float32 model from a quantized tflite model. Detects ReLU and ReLU6 ops from quantization bounds.

--tag

Only valid with parameter --saved_model. Specifies the tag in the saved_model to be used. Typical value is 'serve'.

--signature_def

Only valid with parameter --saved_model. Specifies which signature to use within the specified --tag value. Typical value is 'serving_default'.
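
For example, to convert a specific tag and signature from a saved model, using the typical values mentioned above:

python -m tf2onnx.convert --saved-model tensorflow-model-path --tag serve --signature_def serving_default --output model.onnx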

--concrete_function

(This is experimental, valid only for TF2.x models)

Only valid with parameter --saved_model. If a model contains a list of concrete functions, under the function name __call__ (as can be viewed using the command saved_model_cli show --all), this parameter is a 0-based integer specifying which function in that list should be converted. This parameter takes priority over --signature_def, which will be ignored.

--large_model

(Can be used only for TF2.x models)

Only valid with parameter --saved_model. When set, creates a zip file containing the ONNX protobuf model and large tensor values stored externally. This allows for converting models that exceed the 2 GB protobuf limit.
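
For example (note the output is a zip file rather than a plain .onnx file):

python -m tf2onnx.convert --saved-model tensorflow-model-path --large_model --output model.zip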

--output_frozen_graph

Saves the frozen and optimized TensorFlow graph to a file.

--custom-ops

If a model contains ops not recognized by onnx runtime, you can tag these ops with a custom op domain so that the runtime can still open the model. The format is a comma-separated map of tf op names to domains in the format OpName:domain. If only an op name is provided (no colon), the default domain of ai.onnx.converters.tensorflow will be used.
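
For example, mapping a hypothetical unsupported op MyOp to a custom domain, and Print to the default domain:

python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx --custom-ops MyOp:com.example.ops,Print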

--target

Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with --target TARGET. Currently supported values are listed on this wiki. If your model will be run on Windows ML, you should specify the appropriate target value.

--fold_const

Deprecated.

Tool to get Graph Inputs & Outputs

The model developer usually knows the inputs and outputs of the TensorFlow graph; if not, you can consult TensorFlow's summarize_graph tool, for example:

summarize_graph --in_graph=tests/models/fc-layers/frozen.pb

Testing

There are 2 types of tests.

Unit test

python setup.py test

Validate pre-trained TensorFlow models

python tests/run_pretrained_models.py
usage: run_pretrained_models.py [-h] [--cache CACHE] [--tests TESTS] [--backend BACKEND] [--verbose] [--debug] [--config yaml-config]

optional arguments:
  -h, --help         show this help message and exit
  --cache CACHE      pre-trained models cache dir
  --tests TESTS      tests to run
  --backend BACKEND  backend to use
  --config           yaml config file
  --verbose          verbose output, option is additive
  --opset OPSET      target opset to use
  --perf csv-file    capture performance numbers for tensorflow and onnx runtime
  --debug            dump generated graph with shape info
  --fold_const       when set, TensorFlow fold_constants transformation will be applied before conversion. This benefits features including Transpose optimization (e.g. Transpose ops introduced during tf-graph-to-onnx-graph conversion will be removed) and RNN unit conversion (for example LSTM).

run_pretrained_models.py will run the TensorFlow model, capture the TensorFlow output, and run the same test against the specified ONNX backend after converting the model.

If the option --perf csv-file is specified, we'll capture the timing for inference of tensorflow and onnxruntime and write the result into the given csv file.

For example, you can call it with:

python tests/run_pretrained_models.py --backend onnxruntime --config tests/run_pretrained_models.yaml --perf perf.csv

Tool to save pre-trained model

We provide a utility to save a pre-trained model along with its config. Put save_pretrained_model(sess, outputs, feed_inputs, save_dir, model_name) in your last testing epoch, and the pre-trained model and config will be saved under save_dir/to_onnx. Please refer to the example in tools/save_pretrained_model.py for more information. Note that the minimum required TensorFlow version is r1.6.
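
A minimal sketch of the call, assuming a tf-1.x session, hypothetical tensor names, and the tensorflow-onnx repo root on sys.path:

from tools.save_pretrained_model import save_pretrained_model

# Inside your last testing epoch; sess, feed values and tensor names are your own.
feed_dict = {"input:0": x_value}   # hypothetical input tensor name and value
outputs = ["output:0"]             # hypothetical output tensor name
save_pretrained_model(sess, outputs, feed_dict, "./save_dir", "my_model")
# The pre-trained model and config end up under ./save_dir/to_onnx.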

Python API Reference

With tf2onnx-1.8.4 we updated our API. Our old API still works - you can find the documentation here.

from_keras (tf-2.0 and newer)

import tf2onnx

model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model,
                input_signature=None, opset=None, custom_ops=None,
                custom_op_handlers=None, custom_rewriter=None,
                inputs_as_nchw=None, extra_opset=None, shape_override=None,
                target=None, large_model=False, output_path=None)

    Args:
        model: the tf.keras model we want to convert
        input_signature: a tf.TensorSpec or a numpy array defining the shape/dtype of the input
        opset: the opset to be used for the ONNX model, default is the latest
        custom_ops: if a model contains ops not recognized by onnx runtime,
            you can tag these ops with a custom op domain so that the
            runtime can still open the model. Type is a dictionary `{op name: domain}`.
        target: list of workarounds applied to help certain platforms
        custom_op_handlers: dictionary of custom ops handlers
        custom_rewriter: list of custom graph rewriters
        extra_opset: list of extra opsets, for example the opsets used by custom ops
        shape_override: dict with inputs that override the shapes given by tensorflow
        inputs_as_nchw: transpose inputs in list from nchw to nhwc
        large_model: use the ONNX external tensor storage format
        output_path: save model to output_path

    Returns:
        An ONNX model_proto and an external_tensor_storage dict.

See tutorials/keras-resnet50.ipynb for an end-to-end example.
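
A minimal sketch of converting a Keras model (the MobileNetV2 model, input shape, and file name here are just an example):

import tensorflow as tf
import tf2onnx

# Any tf.keras model works; MobileNetV2 is used purely for illustration.
model = tf.keras.applications.MobileNetV2()
spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="model.onnx")
# The outputs now carry the structured Keras output names (see the note above).
print([n.name for n in model_proto.graph.output])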

from_function (tf-2.0 and newer)

import tf2onnx

model_proto, external_tensor_storage = tf2onnx.convert.from_function(function,
                input_signature=None, opset=None, custom_ops=None,
                custom_op_handlers=None, custom_rewriter=None,
                inputs_as_nchw=None, extra_opset=None, shape_override=None,
                target=None, large_model=False, output_path=None)

    Args:
        function: the tf.function we want to convert
        input_signature: a tf.TensorSpec or a numpy array defining the shape/dtype of the input
        opset: the opset to be used for the ONNX model, default is the latest
        custom_ops: if a model contains ops not recognized by onnx runtime,
            you can tag these ops with a custom op domain so that the
            runtime can still open the model. Type is a dictionary `{op name: domain}`.
        target: list of workarounds applied to help certain platforms
        custom_op_handlers: dictionary of custom ops handlers
        custom_rewriter: list of custom graph rewriters
        extra_opset: list of extra opsets, for example the opsets used by custom ops
        shape_override: dict with inputs that override the shapes given by tensorflow
        inputs_as_nchw: transpose inputs in list from nchw to nhwc
        large_model: use the ONNX external tensor storage format
        output_path: save model to output_path

    Returns:
        An ONNX model_proto and an external_tensor_storage dict.
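
A minimal sketch of converting a tf.function (the function body, shapes and file name here are just an example):

import tensorflow as tf
import tf2onnx

@tf.function
def f(x):
    # A trivial computation purely for illustration.
    return tf.nn.relu(x + 1.0)

spec = (tf.TensorSpec((None, 4), tf.float32, name="x"),)
model_proto, _ = tf2onnx.convert.from_function(
    f, input_signature=spec, opset=13, output_path="model.onnx")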

from_graph_def

import tf2onnx

model_proto, external_tensor_storage = tf2onnx.convert.from_graph_def(graph_def,
                name=None, input_names=None, output_names=None, opset=None,
                custom_ops=None, custom_op_handlers=None, custom_rewriter=None, 
                inputs_as_nchw=None, extra_opset=None,
                shape_override=None, target=None, large_model=False,
                output_path=None)

    Args:
        graph_def: the graph_def we want to convert
        input_names: list of input names
        output_names: list of output names
        name: A name for the graph
        opset: the opset to be used for the ONNX model, default is the latest
        target: list of workarounds applied to help certain platforms
        custom_op_handlers: dictionary of custom ops handlers
        custom_rewriter: list of custom graph rewriters
        extra_opset: list of extra opsets, for example the opsets used by custom ops
        shape_override: dict with inputs that override the shapes given by tensorflow
        inputs_as_nchw: transpose inputs in list from nchw to nhwc
        large_model: use the ONNX external tensor storage format
        output_path: save model to output_path

    Returns:
        An ONNX model_proto and an external_tensor_storage dict.
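
A minimal sketch, assuming a frozen graph at "frozen.pb" and hypothetical input/output tensor names:

import tensorflow as tf
import tf2onnx

# Load the frozen graphdef from disk.
with tf.io.gfile.GFile("frozen.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def, input_names=["input:0"], output_names=["output:0"],
    opset=13, output_path="model.onnx")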

Creating custom op mappings from python

For complex custom ops that require graph rewrites or input/attribute rewrites, using the Python interface to insert a custom op is the easiest way to accomplish the task. A dictionary of name->custom_op_handler can be passed to tf2onnx.tfonnx.process_tf_graph. If the op name is found in the graph, the handler will have access to all internal structures and can rewrite what is needed. For example examples/custom_op_via_python.py:

import tensorflow as tf
import tf2onnx
from onnx import helper

_TENSORFLOW_DOMAIN = "ai.onnx.converters.tensorflow"


def print_handler(ctx, node, name, args):
    # replace tf.Print() with Identity
    #   T output = Print(T input, data, @list(type) U, @string message, @int first_n, @int summarize)
    # becomes:
    #   T output = Identity(T Input)
    node.domain = _TENSORFLOW_DOMAIN
    del node.input[1:]
    return node


with tf.Session() as sess:
    x = tf.placeholder(tf.float32, [2, 3], name="input")
    x_ = tf.add(x, x)
    x_ = tf.Print(x_, [x_], "hello")
    _ = tf.identity(x_, name="output")
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(sess.graph,
                                                 custom_op_handlers={"Print": (print_handler, ["Identity", "mode"])},
                                                 extra_opset=[helper.make_opsetid(_TENSORFLOW_DOMAIN, 1)],
                                                 input_names=["input:0"],
                                                 output_names=["output:0"])
    model_proto = onnx_graph.make_model("test")
    with open("/tmp/model.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())

How tf2onnx works

The converter needs to take care of a few things:

  1. Convert the protobuf format. Since the formats are similar, this step is straightforward.
  2. TensorFlow types need to be mapped to their ONNX equivalent.
  3. For many ops TensorFlow passes parameters like shapes as inputs where ONNX wants to see them as attributes. Since we use a frozen graph, the converter fetches the input as a constant, converts it to an attribute, and removes the original input.
  4. TensorFlow in many cases composes ops out of multiple simpler ops. The converter will need to identify the subgraph for such ops, slice the subgraph out and replace it with the ONNX equivalent. This can become fairly complex so we use a graph matching library for it. A good example of this is the tensorflow transpose op.
  5. TensorFlow's default data format is NHWC where ONNX requires NCHW. The converter will insert transpose ops to deal with this.
  6. There are some ops like relu6 that are not supported in ONNX but that the converter can compose out of other ONNX ops.
  7. ONNX backends are new and their implementations are not complete yet. For some ops the converter generates ops that deal with issues in existing backends.

Step 1 - start with a frozen graph

tf2onnx starts with a frozen graph. This is because of item 3 above.

Step 2 - 1:1 conversion of the protobuf from tensorflow to onnx

tf2onnx first does a simple conversion from the TensorFlow protobuf format to the ONNX protobuf format without looking at individual ops. We do this so we can use the ONNX graph as the internal representation and write helper functions around it. The code that does the conversion is in tensorflow_to_onnx(). tensorflow_to_onnx() returns the ONNX graph and a dictionary with shape information from TensorFlow. The shape information is helpful in some cases when processing individual ops. The ONNX graph is wrapped in a Graph object, and nodes in the graph are wrapped in a Node object, to allow easier graph manipulation. All code that deals with nodes and graphs is in graph.py.

Step 3 - rewrite subgraphs

In the next step we apply graph matching code on the graph to rewrite subgraphs for ops like transpose and lstm. For an example, look at rewrite_transpose().

Step 4 - process individual ops

In the fourth step we look at individual ops that need attention. The dictionary _OPS_MAPPING will map tensorflow op types to a method that is used to process the op. The simplest case is direct_op() where the op can be taken as is. Whenever possible we try to group ops into common processing, for example all ops that require dealing with broadcasting are mapped to broadcast_op(). For an op that composes the tensorflow op from multiple onnx ops, see relu6_op().
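
As an illustration of composing an op (a sketch of the idea, not the converter's actual relu6_op() implementation), relu6(x) can be expressed as min(relu(x), 6) using plain ONNX nodes:

from onnx import helper, TensorProto

# relu6(x) = min(relu(x), 6), built from standard ONNX ops.
# "X" and "Y" are hypothetical tensor names for the op's input and output.
six = helper.make_node(
    "Constant", inputs=[], outputs=["six"],
    value=helper.make_tensor("six_value", TensorProto.FLOAT, [], [6.0]))
relu = helper.make_node("Relu", inputs=["X"], outputs=["relu_x"])
relu6 = helper.make_node("Min", inputs=["relu_x", "six"], outputs=["Y"])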

Step 5 - optimize the functional ONNX graph

We then try to optimize the functional ONNX graph. For example, we remove ops that are not needed, remove transposes as much as possible, de-dupe constants, fuse ops whenever possible, and so on.

Step 6 - final processing

Once all ops are converted and optimized, we need to do a topological sort since ONNX requires it. process_tf_graph() is the method that takes care of all the above steps.

Extending tf2onnx

If you'd like to contribute and add new conversions to tf2onnx, the process looks something like this:

  1. See if the op fits into one of the existing mappings. If so, adding it to _OPS_MAPPING is all that is needed.
  2. If the new op needs extra processing, start a new mapping function.
  3. If the tensorflow op is composed of multiple ops, consider using a graph rewrite. While this might be a little harder initially, it works better for complex patterns.
  4. Add a unit test in tests/test_backend.py. The unit tests mostly create the tensorflow graph, run it and capture the output, then convert it to ONNX, run it against an ONNX backend, and compare the tensorflow and ONNX results.
  5. If there are pre-trained models that use the new op, consider adding those to tests/run_pretrained_models.py.

License

Apache License v2.0

node.inputs[1].get_tensor_value() File "D:\work\cv\others\tensorflow-onnx\tf2onnx\graph.py", line 266, in get_tensor_value raise ValueError("get tensor value: {} must be Const".format(self.name)) ValueError: get tensor value: StatefulPartitionedCall/yolov3/yolo_conv_2/up_sampling2d_1/mul must be Const 2020-03-16 13:35:26,724 - ERROR - tf2onnx.tfonnx: Tensorflow op [StatefulPartitionedCall/yolov3/yolo_nms/combined_non_max_suppression/CombinedNonMaxSuppression: CombinedNonMaxSuppression] is not supported 2020-03-16 13:35:26,725 - ERROR - tf2onnx.tfonnx: Unsupported ops: Counter({'CombinedNonMaxSuppression': 1}) Traceback (most recent call last): File "D:\software\conda_envs\yolov3-tf2-gpu\lib\runpy.py", line 193, in _run_module_as_main "main", mod_spec) File "D:\software\conda_envs\yolov3-tf2-gpu\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\work\cv\others\tensorflow-onnx\tf2onnx\convert.py", line 169, in main() File "D:\work\cv\others\tensorflow-onnx\tf2onnx\convert.py", line 153, in main inputs_as_nchw=args.inputs_as_nchw) File "D:\work\cv\others\tensorflow-onnx\tf2onnx\tfonnx.py", line 485, in process_tf_graph raise exceptions[0] File "D:\work\cv\others\tensorflow-onnx\tf2onnx\tfonnx.py", line 266, in tensorflow_onnx_mapping func(g, node, **kwargs) File "D:\work\cv\others\tensorflow-onnx\tf2onnx\onnx_opset\nn.py", line 658, in version_7 target_shape = node.inputs[1].get_tensor_value() File "D:\work\cv\others\tensorflow-onnx\tf2onnx\graph.py", line 266, in get_tensor_value raise ValueError("get tensor value: {} must be Const".format(self.name)) ValueError: get tensor value: StatefulPartitionedCall/yolov3/yolo_conv_1/up_sampling2d/mul must be Const
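    The "must be Const" failures come from the Upsample handler needing a constant target shape, which the graph only provides when the spatial input size is fixed; CombinedNonMaxSuppression was simply unsupported at the time. One workaround sketch, assuming the model comes from the zzh8829/yolov3-tf2 code base the environment name suggests (checkpoint path and size are hypothetical), is to re-export with a concrete input size before converting:

    import tensorflow as tf
    from yolov3_tf2.models import YoloV3  # assumption: the yolov3-tf2 package implied by the env name

    # Rebuild with a fixed spatial size so the UpSampling2D target shapes fold to constants.
    model = YoloV3(size=416)
    model.load_weights("./checkpoints/yolov3.tf")  # hypothetical checkpoint path
    tf.saved_model.save(model, "./yolov3_fixed")

    The fixed-shape SavedModel can then be converted as usual; the CombinedNonMaxSuppression stage would still need to be cut out of the graph or handled separately.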

    pending on user response 
    opened by jackyvr 49
  • tags 'serve' could not be found

    tags 'serve' could not be found

    HELP: Error of MetaGraphDef RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli available_tags: []

    The 'serve' tag (and seemingly any other tag) is not in the pb file. What did I do wrong?

    System information

    • OS: Linux Ubuntu 18.04:
    • Tensorflow Version: 1.15
    • Python version: 3.7.6

    To Reproduce: the pb file (saved with as_text=False) is here: https://github.com/IAMAl/onnx

    tf2pb.py in the same directory is the conversion script.
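    When saved_model_cli reports available_tags: [], the .pb is usually a plain frozen GraphDef rather than a SavedModel, so there is no tag-set to load. In that case the tag flags do not apply, and the file can instead be converted with --graphdef plus explicit tensor names; a sketch with hypothetical input/output names:

    python -m tf2onnx.convert --graphdef model.pb --inputs input:0 --outputs output:0 --output model.onnx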

    opened by IAMAl 45
  • Moving to opset 11 is causing issues

    Moving to opset 11 is causing issues

    Trying to build a model for opset 11. With version 1.5.1, I am getting an error about inferring shapes and dtypes. With version 1.5.6, the optimizer doesn't seem to be working and is failing in deepcopy. Any help would be greatly appreciated.

    Trace version 1.5.1:

    2020-04-15 05:32:20,772 - WARNING - From C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\site-packages\tf2onnx\verbose_logging.py:71: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

    2020-04-15 05:32:37,748 - INFO - Using tensorflow=1.14.0, onnx=1.6.0, tf2onnx=1.5.1/0c735a
    2020-04-15 05:32:37,756 - INFO - Using opset <onnx, 11>
    2020-04-15 05:32:42,706 - WARNING - ONNX Failed to infer shapes and dtypes for [Resize__219, type: Resize]
    Traceback (most recent call last):
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\site-packages\tf2onnx\schemas.py", line 157, in infer_onnx_shape_dtype
        inferred_model = shape_inference.infer_shapes(model_proto)
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\site-packages\onnx\shape_inference.py", line 35, in infer_shapes
        inferred_model_str = C.infer_shapes(model_str)
    RuntimeError: input 2 is out of bounds
    ...... (the same warning and traceback repeat for Resize__242, Resize__247, Resize__264, Resize__269, Resize__280 and Resize__285)
    2020-04-15 05:32:44,187 - WARNING - Failed to optimize model proto
    Traceback (most recent call last):
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\site-packages\tf2onnx\graph.py", line 1167, in optimize_model_proto
        graph = GraphUtil.create_graph_from_onnx_model(onnx_model_proto)
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\site-packages\tf2onnx\graph.py", line 1206, in create_graph_from_onnx_model
        inferred_model = shape_inference.infer_shapes(onnx_model_proto)
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\site-packages\onnx\shape_inference.py", line 35, in infer_shapes
        inferred_model_str = C.infer_shapes(model_str)
    RuntimeError: input 1 is out of bounds
    2020-04-15 05:32:44,218 - INFO - Successfully converted TensorFlow model C:/Users/tmp/net.pb to ONNX
    2020-04-15 05:32:45,539 - INFO - ONNX model is saved at C:/Users/tmp/net.onnx

    Trace version 1.5.6:

    2020-04-15 05:33:53,665 - WARNING - From C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\site-packages\tf2onnx\verbose_logging.py:72: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

    2020-04-15 05:34:08,464 - INFO - Using tensorflow=1.14.0, onnx=1.6.0, tf2onnx=1.5.6/80edd7
    2020-04-15 05:34:08,464 - INFO - Using opset <onnx, 11>
    2020-04-15 05:34:11,773 - INFO - Optimizing ONNX model
    2020-04-15 05:34:11,873 - WARNING - Failed to apply optimize_transpose
    Traceback (most recent call last):
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\site-packages\tf2onnx\optimizer\__init__.py", line 50, in optimize_graph
        current = copy.deepcopy(graph)
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\copy.py", line 180, in deepcopy
        y = _reconstruct(x, memo, *rv)
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\copy.py", line 280, in _reconstruct
        state = deepcopy(state, memo)
      ...... (recursive _deepcopy_dict / _deepcopy_tuple frames elided)
      File "C:\Users\tmp.conda\envs\tensorflow_gpu2\lib\copy.py", line 159, in deepcopy
        copier = getattr(x, "__deepcopy__", None)
    ReferenceError: weakly-referenced object no longer exists
    ...... (the same warning and traceback repeat for fold_constants, loop_optimizer, merge_duplication, remove_identity and remove_back_to_back)
    2020-04-15 05:34:12,689 - INFO - After optimization: no change
    2020-04-15 05:34:12,827 - INFO - Successfully converted TensorFlow model C:/Users/tmp/net.pb to ONNX
    2020-04-15 05:34:14,159 - INFO - ONNX model is saved at C:/Users/tmp/net.onnx
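    Since both traces still end with "Successfully converted", the quickest check on the written file is to re-run ONNX's own validation directly; a minimal sketch, assuming the output is the net.onnx mentioned in the log:

    import onnx
    from onnx import shape_inference

    model = onnx.load("net.onnx")
    onnx.checker.check_model(model)                 # structural validity
    inferred = shape_inference.infer_shapes(model)  # re-triggers the "out of bounds" error if the Resize nodes are malformed
    print(len(inferred.graph.value_info), "value_info entries inferred")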

    opened by ttdd11 36
  • import frozen graph with error

    import frozen graph with error "Input 0 of node X was passed float from Y:0 incompatible with expected float_ref."

    Note: I created this issue for anybody who might run into a similar problem in the future.

    When I tried to convert a frozen DCGAN inference model (trained with https://github.com/carpedm20/DCGAN-tensorflow), the following error was thrown:

    ------------------- start handling dcgan.pb ----------------------------
    change working directory to /home/pengwang/community/tensorflow
    ------ summarize the frozen graph, to get the inputs and outputs name
    bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/tmp/frozen/dcgan.pb
    Found 2 possible inputs: (name=y, type=float(1), shape=[64,10]) (name=z, type=float(1), shape=[?,100])
    No variables spotted.
    Found 1 possible outputs: (name=generator/Sigmoid, op=Sigmoid)
    Found 7080115 (7.08M) const parameters, 0 (0) variable parameters, and 6 control_edges
    Op types used: 50 Const, 23 Identity, 8 Mul, 8 Reshape, 6 AssignSub, 6 Sub, 4 ConcatV2, 3 FusedBatchNorm, 3 Relu, 2 Add, 2 BiasAdd, 2 Conv2DBackpropInput, 2 Fill, 2 MatMul, 2 Placeholder, 1 Sigmoid
    To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
    bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/tmp/frozen/dcgan.pb --show_flops --input_layer=y,z --input_layer_type=float,float --input_layer_shape=64,10:-1,100 --output_layer=generator/Sigmoid
    ------ update the inputs and outputs name to format like input_name:index
    python3 /home/pengwang/community/learning/onnx/update_name_with_index.py y,z
    updated input names is y:0,z:1, output names is generator/Sigmoid:0
    ------ start conversion; tensorflow requires that the calling program not run from the tensorflow root folder, so switch to the user directory with cd
    using tensorflow=1.9.0-rc0, onnx=1.2.1
    2018-07-17 16:10:59.166646: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_batch_norms
    2018-07-17 16:10:59.194705: I tensorflow/tools/graph_transforms/transform_graph.cc:318] Applying fold_old_batch_norms
    Traceback (most recent call last):
     File "/home/pengwang/community/tensorflow/_python_build/tensorflow/python/framework/importer.py", line 418, in import_graph_def
       graph._c_graph, serialized, options)  # pylint: disable=protected-access
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node generator/g_bn0/AssignMovingAvg was passed float from generator/g_bn0/moving_mean:0 incompatible with expected float_ref.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
     File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
       "__main__", mod_spec)
     File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
       exec(code, run_globals)
     File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 100, in <module>
       main()
     File "/home/pengwang/community/tensorflow-onnx/tf2onnx/convert.py", line 80, in main
       tf.import_graph_def(graph_def, name='')
     File "/home/pengwang/community/tensorflow/_python_build/tensorflow/python/util/deprecation.py", line 432, in new_func
       return func(*args, **kwargs)
     File "/home/pengwang/community/tensorflow/_python_build/tensorflow/python/framework/importer.py", line 422, **in import_graph_def
       raise ValueError(str(e))
    ValueError: Input 0 of node generator/g_bn0/AssignMovingAvg was passed float from generator/g_bn0/moving_mean:0 incompatible with expected float_ref.
    

    This is caused by the AssignSub node's first input being expected to be a float_ref while, after freeze_graph.py processing, it is actually a plain float. There is a discussion at https://github.com/davidsandberg/facenet/issues/161 and https://www.bountysource.com/issues/36614355-unable-to-import-frozen-graph-with-batchnorm.

    To fix this, we need to do some extra work on the frozen graph; at a minimum, we change AssignSub to Sub in the graph. Look at the code below as an example:

    import tensorflow as tf
    from tensorflow.python.platform import gfile

    model_path = "/tmp/frozen/dcgan.pb"

    # read the frozen graph definition
    f = gfile.FastGFile(model_path, "rb")
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

    # fix nodes: freezing leaves Ref ops behind that still expect float_ref inputs
    for node in graph_def.node:
        if node.op == 'RefSwitch':
            node.op = 'Switch'
            # point the moving-average inputs at the variable's read tensor
            for index in range(len(node.input)):  # range, not the Python-2-only xrange
                if 'moving_' in node.input[index]:
                    node.input[index] = node.input[index] + '/read'
        elif node.op == 'AssignSub':
            node.op = 'Sub'
            if 'use_locking' in node.attr:
                del node.attr['use_locking']

    # import the repaired graph (sanity check) and write it back out
    tf.import_graph_def(graph_def, name='')
    tf.train.write_graph(graph_def, './', 'good_frozen.pb', as_text=False)
    tf.train.write_graph(graph_def, './', 'good_frozen.pbtxt', as_text=True)
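
    With the repaired graph written out, the conversion can be retried against good_frozen.pb with a recent tf2onnx; a sketch using the names reported by summarize_graph above (the :0 output indices are assumptions):

    python -m tf2onnx.convert --graphdef good_frozen.pb --inputs y:0,z:0 --outputs generator/Sigmoid:0 --output dcgan.onnx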
    
    opened by pengwa 34
  • cannot convert tf savedmodel to onnx

    cannot convert tf savedmodel to onnx

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
    • TensorFlow installed from (source or binary): binary
    • TensorFlow version (use command below): tf-nightly-gpu 2.5.0.dev20210119
    • Python version: 3.6 (Anaconda)
    • Tensorflow-onnx version: 1.8.0, built from source

    my command line :

    python -m tf2onnx.convert --saved-model ./model.savedmodel --output fea.onnx --custom-ops Bucketize,AsString,StringToHashBucketFast --signature_def serving_default --tag serve --opset 12 
    

    But I got the following error:

    ......
    2021-01-21 11:29:41,413 - ERROR - Could not find table resource to replace placeholder unknown_172
    2021-01-21 11:29:41,415 - ERROR - Could not find table resource to replace placeholder unknown_174
    2021-01-21 11:29:41,416 - ERROR - Could not find table resource to replace placeholder unknown_176
    2021-01-21 11:29:41,417 - ERROR - Could not find table resource to replace placeholder unknown_178
    2021-01-21 11:29:41,418 - ERROR - Could not find table resource to replace placeholder unknown_180
    2021-01-21 11:29:41,418 - ERROR - Could not find table resource to replace placeholder unknown_183
    2021-01-21 11:29:41,418 - ERROR - Could not find table resource to replace placeholder unknown_185
    2021-01-21 11:29:41,418 - ERROR - Could not find table resource to replace placeholder unknown_187
    2021-01-21 11:29:41,418 - ERROR - Could not find table resource to replace placeholder unknown_189
    2021-01-21 11:29:41,418 - ERROR - Could not find table resource to replace placeholder unknown_193
    2021-01-21 11:29:41,418 - ERROR - Could not find table resource to replace placeholder unknown_195
    2021-01-21 11:29:41,419 - ERROR - Could not find table resource to replace placeholder unknown_197
    ......
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 'func' argument to TF_GraphCopyFunction cannot be null
    Exception ignored in: <bound method CapturableResourceDeleter.__del__ of <tensorflow.python.training.tracking.tracking.CapturableResourceDeleter object at 0x7f70486cbcf8>>
    Traceback (most recent call last):
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py", line 208, in __del__
        self._destroy_resource()
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 797, in __call__
        result = self._call(*args, **kwds)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 841, in _call
        self._initialize(args, kwds, add_initializers_to=initializers)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 695, in _initialize
        *args, **kwds))
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2981, in _get_concrete_function_internal_garbage_collected
        graph_function, _ = self._maybe_define_function(args, kwargs)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3373, in _maybe_define_function
        graph_function = self._create_graph_function(args, kwargs)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3218, in _create_graph_function
        capture_by_value=self._capture_by_value),
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 998, in func_graph_from_py_func
        func_outputs = python_func(*func_args, **func_kwargs)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 603, in wrapped_fn
        out = weak_wrapped_fn().__wrapped__(*args, **kwds)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/saved_model/function_deserialization.py", line 257, in restored_function_body
        return _call_concrete_function(function, inputs)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/saved_model/function_deserialization.py", line 75, in _call_concrete_function
        result = function._call_flat(tensor_inputs, function._captured_inputs)  # pylint: disable=protected-access
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 116, in _call_flat
        cancellation_manager)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1944, in _call_flat
        flat_outputs = forward_function.call(ctx, args_with_tangents)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 590, in call
        executor_type=executor_type)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py", line 1206, in partitioned_call
        f.add_to_graph(graph)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 506, in add_to_graph
        g._add_function(self)
      File "/usr/local/anaconda3/envs/tf2.2-n/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3403, in _add_function
        gradient)
    
    

    I want to get the ONNX model and am desperate for some advice!

    Thank you very much.
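    For what it's worth, ops passed to --custom-ops are inserted into the model under a custom domain, and the runtime has to supply kernels for them. If the conversion does produce a model, a minimal sketch to list the node types that ended up outside the standard domain, assuming the output file fea.onnx exists:

    import onnx

    model = onnx.load("fea.onnx")
    # Nodes in a non-default domain are the ones the runtime must implement itself.
    custom_ops = {node.op_type for node in model.graph.node if node.domain not in ("", "ai.onnx")}
    print(custom_ops)  # e.g. Bucketize, AsString, StringToHashBucketFast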

    pending on user response 
    opened by zhaohb 32
  • Failed to apply optimize_transpose - converting faster_rcnn_inception_v2 to ONNX

    Failed to apply optimize_transpose - converting faster_rcnn_inception_v2 to ONNX

    Describe the bug: The process crashed even though the script ends by stating that the model was successfully converted and saved as ONNX.

    Urgency End of January

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
    • Tensorflow Version: 1.15.2
    • Python version: 3.6
    • Onnx=1.7.0
    • tf2onnx=1.9.0/72fb20
    • INFO - Using tensorflow=1.15.2, onnx=1.7.0, tf2onnx=1.9.0/72fb20

    To Reproduce: model faster_rcnn_inception_v2_coco_2018_01_28, downloaded from the TensorFlow 1 Detection Model Zoo

    Script used:

    python3 -m tf2onnx.convert --graphdef faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb \
        --output model.onnx \
        --inputs image_tensor:0 \
        --outputs num_detections:0,detection_boxes:0,detection_scores:0,detection_classes:0 \
        --tag server \
        --fold_const \
        --opset 12
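
    The report notes the script still ends by stating the model was converted; a quick sanity check on whether the written model.onnx is at least loadable, assuming onnxruntime is installed:

    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx")  # CPU provider by default
    print([(i.name, i.shape) for i in sess.get_inputs()])
    print([(o.name, o.shape) for o in sess.get_outputs()])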
    

    Expected behavior: graphdef converted to onnx, but I got a bunch of warnings and errors instead:

    2021-01-19 01:42:14.297029: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
    
    2021-01-19 01:42:15,433 - WARNING - From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
    
    2021-01-19 01:42:15.434277: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
    2021-01-19 01:42:15.469844: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:03:00.0
    2021-01-19 01:42:15.470901: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 1 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:41:00.0
    2021-01-19 01:42:15.471953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 2 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:85:00.0
    2021-01-19 01:42:15.473003: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 3 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:c4:00.0
    2021-01-19 01:42:15.473020: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
    2021-01-19 01:42:15.474194: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
    2021-01-19 01:42:15.475527: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
    2021-01-19 01:42:15.475704: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
    2021-01-19 01:42:15.476892: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
    2021-01-19 01:42:15.477570: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
    2021-01-19 01:42:15.480103: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
    2021-01-19 01:42:15.488505: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0, 1, 2, 3
    2021-01-19 01:42:15.516496: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1996110000 Hz
    2021-01-19 01:42:15.539353: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4e9e800 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2021-01-19 01:42:15.539400: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2021-01-19 01:42:15.866169: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4755230 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2021-01-19 01:42:15.866219: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Tesla T4, Compute Capability 7.5
    2021-01-19 01:42:15.866230: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (1): Tesla T4, Compute Capability 7.5
    2021-01-19 01:42:15.866238: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (2): Tesla T4, Compute Capability 7.5
    2021-01-19 01:42:15.866247: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (3): Tesla T4, Compute Capability 7.5
    2021-01-19 01:42:15.877192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:03:00.0
    2021-01-19 01:42:15.878243: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 1 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:41:00.0
    2021-01-19 01:42:15.879228: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 2 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:85:00.0
    2021-01-19 01:42:15.880210: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 3 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:c4:00.0
    2021-01-19 01:42:15.880238: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
    2021-01-19 01:42:15.880260: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
    2021-01-19 01:42:15.880271: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
    2021-01-19 01:42:15.880305: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
    2021-01-19 01:42:15.880316: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
    2021-01-19 01:42:15.880327: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
    2021-01-19 01:42:15.880338: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
    2021-01-19 01:42:15.888019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0, 1, 2, 3
    2021-01-19 01:42:15.888052: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
    2021-01-19 01:42:16.800602: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
    2021-01-19 01:42:16.800649: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 1 2 3
    2021-01-19 01:42:16.800659: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N Y Y Y
    2021-01-19 01:42:16.800665: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 1:   Y N Y Y
    2021-01-19 01:42:16.800672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 2:   Y Y N Y
    2021-01-19 01:42:16.800691: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 3:   Y Y Y N
    2021-01-19 01:42:16.806318: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14968 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:03:00.0, compute capability: 7.5)
    2021-01-19 01:42:16.808145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 14968 MB memory) -> physical GPU (device: 1, name: Tesla T4, pci bus id: 0000:41:00.0, compute capability: 7.5)
    2021-01-19 01:42:16.809607: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 14968 MB memory) -> physical GPU (device: 2, name: Tesla T4, pci bus id: 0000:85:00.0, compute capability: 7.5)
    2021-01-19 01:42:16.811190: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 14968 MB memory) -> physical GPU (device: 3, name: Tesla T4, pci bus id: 0000:c4:00.0, compute capability: 7.5)
    2021-01-19 01:42:19.030786: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:03:00.0
    2021-01-19 01:42:19.031797: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 1 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:41:00.0
    2021-01-19 01:42:19.032783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 2 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:85:00.0
    2021-01-19 01:42:19.033772: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 3 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:c4:00.0
    2021-01-19 01:42:19.033816: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
    2021-01-19 01:42:19.033837: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
    2021-01-19 01:42:19.033851: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
    2021-01-19 01:42:19.033878: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
    2021-01-19 01:42:19.033888: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
    2021-01-19 01:42:19.033897: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
    2021-01-19 01:42:19.033907: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
    2021-01-19 01:42:19.041620: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0, 1, 2, 3
    2021-01-19 01:42:19.041682: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
    2021-01-19 01:42:19.041691: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 1 2 3
    2021-01-19 01:42:19.041697: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N Y Y Y
    2021-01-19 01:42:19.041701: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 1:   Y N Y Y
    2021-01-19 01:42:19.041704: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 2:   Y Y N Y
    2021-01-19 01:42:19.041712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 3:   Y Y Y N
    2021-01-19 01:42:19.046639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14968 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:03:00.0, compute capability: 7.5)
    2021-01-19 01:42:19.047637: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 14968 MB memory) -> physical GPU (device: 1, name: Tesla T4, pci bus id: 0000:41:00.0, compute capability: 7.5)
    2021-01-19 01:42:19.048631: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 14968 MB memory) -> physical GPU (device: 2, name: Tesla T4, pci bus id: 0000:85:00.0, compute capability: 7.5)
    2021-01-19 01:42:19.049629: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 14968 MB memory) -> physical GPU (device: 3, name: Tesla T4, pci bus id: 0000:c4:00.0, compute capability: 7.5)
    2021-01-19 01:42:19.420452: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 4
    2021-01-19 01:42:19.420751: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
    2021-01-19 01:42:19.423098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:03:00.0
    2021-01-19 01:42:19.424086: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 1 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:41:00.0
    2021-01-19 01:42:19.425078: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 2 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:85:00.0
    2021-01-19 01:42:19.426065: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 3 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:c4:00.0
    2021-01-19 01:42:19.426093: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
    2021-01-19 01:42:19.426112: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
    2021-01-19 01:42:19.426122: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
    2021-01-19 01:42:19.426157: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
    2021-01-19 01:42:19.426167: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
    2021-01-19 01:42:19.426176: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
    2021-01-19 01:42:19.426185: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
    2021-01-19 01:42:19.433835: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0, 1, 2, 3
    2021-01-19 01:42:19.433884: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
    2021-01-19 01:42:19.433891: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 1 2 3
    2021-01-19 01:42:19.433897: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N Y Y Y
    2021-01-19 01:42:19.433903: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 1:   Y N Y Y
    2021-01-19 01:42:19.433910: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 2:   Y Y N Y
    2021-01-19 01:42:19.433915: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 3:   Y Y Y N
    2021-01-19 01:42:19.438834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14968 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:03:00.0, compute capability: 7.5)
    2021-01-19 01:42:19.439833: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 14968 MB memory) -> physical GPU (device: 1, name: Tesla T4, pci bus id: 0000:41:00.0, compute capability: 7.5)
    2021-01-19 01:42:19.440826: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 14968 MB memory) -> physical GPU (device: 2, name: Tesla T4, pci bus id: 0000:85:00.0, compute capability: 7.5)
    2021-01-19 01:42:19.441813: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 14968 MB memory) -> physical GPU (device: 3, name: Tesla T4, pci bus id: 0000:c4:00.0, compute capability: 7.5)
    2021-01-19 01:42:20.879654: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:839] Optimization results for grappler item: graph_to_optimize
    2021-01-19 01:42:20.879705: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841]   constant_folding: Graph size after: 10994 nodes (0), 19062 edges (1), time = 751.645ms.
    2021-01-19 01:42:20.879712: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841]   function_optimizer: function_optimizer did nothing. time = 9.376ms.
    2021-01-19 01:42:20.879718: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841]   constant_folding: Graph size after: 10994 nodes (0), 19062 edges (0), time = 281.425ms.
    2021-01-19 01:42:20.879724: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841]   function_optimizer: function_optimizer did nothing. time = 12.249ms.
    2021-01-19 01:42:22.250120: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:03:00.0
    2021-01-19 01:42:22.251145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 1 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:41:00.0
    2021-01-19 01:42:22.252133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 2 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:85:00.0
    2021-01-19 01:42:22.253120: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 3 with properties:
    name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
    pciBusID: 0000:c4:00.0
    2021-01-19 01:42:22.253147: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
    2021-01-19 01:42:22.253166: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
    2021-01-19 01:42:22.253176: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
    2021-01-19 01:42:22.253186: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
    2021-01-19 01:42:22.253195: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
    2021-01-19 01:42:22.253206: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
    2021-01-19 01:42:22.253214: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
    2021-01-19 01:42:22.260867: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0, 1, 2, 3
    2021-01-19 01:42:22.260925: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
    2021-01-19 01:42:22.260933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 1 2 3
    2021-01-19 01:42:22.260939: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N Y Y Y
    2021-01-19 01:42:22.260944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 1:   Y N Y Y
    2021-01-19 01:42:22.260949: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 2:   Y Y N Y
    2021-01-19 01:42:22.260956: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 3:   Y Y Y N
    2021-01-19 01:42:22.265876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14968 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:03:00.0, compute capability: 7.5)
    2021-01-19 01:42:22.266864: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 14968 MB memory) -> physical GPU (device: 1, name: Tesla T4, pci bus id: 0000:41:00.0, compute capability: 7.5)
    2021-01-19 01:42:22.267845: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 14968 MB memory) -> physical GPU (device: 2, name: Tesla T4, pci bus id: 0000:85:00.0, compute capability: 7.5)
    2021-01-19 01:42:22.268830: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 14968 MB memory) -> physical GPU (device: 3, name: Tesla T4, pci bus id: 0000:c4:00.0, compute capability: 7.5)
    2021-01-19 01:42:22,269 - INFO - Using tensorflow=1.15.2, onnx=1.7.0, tf2onnx=1.9.0/72fb20
    2021-01-19 01:42:22,269 - INFO - Using opset <onnx, 12>
    2021-01-19 01:42:29,502 - WARNING - Cannot infer shape for BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/zeros: BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/zeros:0
    2021-01-19 01:42:29,502 - WARNING - Cannot infer shape for SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/zeros: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/zeros:0
    2021-01-19 01:42:30,397 - INFO - Computed 1 values for constant folding
    2021-01-19 01:42:34,002 - INFO - folding node using tf type=StridedSlice, name=Preprocessor/map/while/ResizeToRange/strided_slice_2
    2021-01-19 01:42:45,401 - INFO - Optimizing ONNX model
    2021-01-19 01:42:48,306 - WARNING - Failed to apply optimize_transpose
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/__init__.py", line 55, in optimize_graph
        graph = opt.optimize(current) or graph
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/optimizer_base.py", line 41, in optimize
        graph = self._optimize(graph)
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/transpose_optimizer.py", line 139, in _optimize
        return self._apply_optimization(graph, self._optimize_at_current_graph_level)
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/optimizer_base.py", line 62, in _apply_optimization
        graph = optimize_func(graph)
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/transpose_optimizer.py", line 172, in _optimize_at_current_graph_level
        self.post_optimize_action()
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/transpose_optimizer.py", line 115, in post_optimize_action
        self._g.topological_sort(self._g.get_nodes())
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/graph.py", line 1009, in topological_sort
        utils.make_sure(j is not None, "Cannot find node with output %r", inp)
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/utils.py", line 218, in make_sure
        raise ValueError("make_sure failure: " + error_msg % args)
    ValueError: make_sure failure: Cannot find node with output 'Transpose__9517:0'
    2021-01-19 01:44:14,190 - WARNING - Failed to apply optimize_transpose
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/__init__.py", line 55, in optimize_graph
        graph = opt.optimize(current) or graph
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/optimizer/optimizer_base.py", line 42, in optimize
        graph.update_proto()
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/graph.py", line 814, in update_proto
        node.update_proto(external_tensor_storage)
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/graph.py", line 388, in update_proto
        external_tensor_storage=external_tensor_storage)
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/graph.py", line 1049, in make_graph
        self.topological_sort(self.get_nodes())
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/graph.py", line 1009, in topological_sort
        utils.make_sure(j is not None, "Cannot find node with output %r", inp)
      File "/usr/local/lib/python3.6/dist-packages/tf2onnx/utils.py", line 218, in make_sure
        raise ValueError("make_sure failure: " + error_msg % args)
    ValueError: make_sure failure: Cannot find node with output 'Transpose__10314:0'
    2021-01-19 01:44:29,473 - INFO - After optimization: Cast -318 (1149->831), Const -3850 (7696->3846), Identity -84 (87->3), Mul -9 (693->684), Shape -7 (133->126), Slice -4 (1066->1062), Squeeze -379 (1537->1158), Transpose -53 (365->312), Unsqueeze -492 (819->327)
    2021-01-19 01:44:30,978 - INFO -
    2021-01-19 01:44:30,978 - INFO - Successfully converted TensorFlow model /workspace/triton_blog/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb to ONNX
    2021-01-19 01:44:31,075 - INFO - ONNX model is saved at /worspace/triton_blog/model.onnx
    

    Screenshots: a screenshot is attached in the original issue.

    Additional context: I have tried opset numbers from 9 to 12 and onnx=1.8 without success. I also used the saved ONNX model with TensorRT anyway, but got another error, "Unsupported ONNX data type: UINT8 (2)", reported in https://github.com/NVIDIA/TensorRT/issues/1022.
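
    Not part of the original report, but a workaround sometimes used for the linked TensorRT limitation is to retype uint8 graph inputs to float32 with the onnx Python API. This is a hedged sketch: it assumes the graph immediately casts the input to float (as the TF object-detection models do), so retyping the input leaves the rest of the graph valid.

    import onnx

    model = onnx.load("model.onnx")
    for inp in model.graph.input:
        if inp.type.tensor_type.elem_type == onnx.TensorProto.UINT8:
            # Assumes a downstream Cast-to-float, which becomes a float->float no-op.
            inp.type.tensor_type.elem_type = onnx.TensorProto.FLOAT
    onnx.save(model, "model_float_input.onnx")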

    opened by vilmara 31
  • Please verify ONNX v1.10.0 Release Candidate

    Please verify ONNX v1.10.0 Release Candidate

    Hello ONNX partner, we have published (to TestPyPI) a Release Candidate package for the upcoming ONNX v1.10.0 release. Please help validate it and let us know if you encounter any issues. Note that the tentative release date for ONNX v1.10.0 is July 31, 2021. Thank you for your assistance!

    pip install numpy protobuf 'typing-extensions>=3.6.2.1'
    pip install -i https://test.pypi.org/simple/ onnx==1.9.101
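
    Once installed, a minimal smoke test (a sketch, not part of the announcement) is to build and check a one-node model with the candidate package:

    import onnx
    from onnx import helper, TensorProto

    print(onnx.__version__)  # should report the release-candidate version

    # Build and validate a trivial one-node graph to exercise the new package.
    node = helper.make_node("Relu", ["x"], ["y"])
    graph = helper.make_graph(
        [node], "smoke_test",
        [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1])],
        [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1])],
    )
    onnx.checker.check_model(helper.make_model(graph))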
    

    cc @guschmue @TomWildenhain-Microsoft

    opened by rajeevsrao 28
  • Unsupported op ReverseV2

    Unsupported op ReverseV2

    Converting a graph to ONNX results in

    Traceback (most recent call last):
      File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/fuzzybatman/.local/lib/python3.7/site-packages/tf2onnx/convert.py", line 145, in <module>
        main()
      File "/home/fuzzybatman/.local/lib/python3.7/site-packages/tf2onnx/convert.py", line 127, in main
        inputs_as_nchw=args.inputs_as_nchw)
      File "/home/fuzzybatman/.local/lib/python3.7/site-packages/tf2onnx/tfonnx.py", line 786, in process_tf_graph
        mapped_op, unmapped_op = tensorflow_onnx_mapping(g, continue_on_error, ops_mapping)
      File "/home/fuzzybatman/.local/lib/python3.7/site-packages/tf2onnx/tfonnx.py", line 552, in tensorflow_onnx_mapping
        raise ValueError("tensorflow op " + op + " is not supported")
    ValueError: tensorflow op ReverseV2 is not supported
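
    For context, ReverseV2 flips a tensor along the given axes. As an illustration only (not tf2onnx's eventual mapping), the same effect is expressible as a negative-step strided slice, which ONNX Slice supports from opset 10:

    import numpy as np

    x = np.arange(6, dtype=np.float32).reshape(2, 3)
    # tf.reverse(x, axis=[1]) is a [-1]-strided slice along axis 1, i.e. roughly
    # ONNX Slice with starts=[-1], ends=[-2**31], axes=[1], steps=[-1].
    assert (np.flip(x, axis=1) == x[:, ::-1]).all()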
    
    enhancement contribution welcome 
    opened by fuzzyBatman 28
  • The output of the model before and after the conversion is inconsistent.

    The output of the model before and after the conversion is inconsistent.

    Describe the bug

    I wrapped the single operator AdjustContrastv2 in TensorFlow 2.11 as a model and saved it as a frozen .pb model file. I then converted the TF model with tf2onnx to get an ONNX model, and for the same inputs the two models produce inconsistent results with large errors.

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 18.04*): Ubuntu 20.04.5 LTS
    • TensorFlow Version: 2.11.0.dev
    • Python version: 3.8.10
    • ONNX version (if applicable, e.g. 1.11*): 1.12.0
    • ONNXRuntime version (if applicable, e.g. 1.11*): 1.12.1

    To Reproduce

    The link contains the two models before (tf.raw_ops.AdjustContrastv2_frozen_graph.pb) and after (tf.raw_ops.AdjustContrastv2_model.onnx) the conversion, as well as the two input parameters of the model (images.npy and contrast_factor.npy). The tf_save_model dir is the model saved using tf.saved_model.save.

    The following code runs the two models separately, feeding them the same input, but their results are inconsistent.

    import numpy as np
    import tensorflow as tf
    import onnxruntime as rt
    
    images = np.load("images.npy")
    contrast_factor = np.load("contrast_factor.npy")
    onnx_model_path = "tf.raw_ops.AdjustContrastv2_model.onnx"
    tf_model_path = "tf.raw_ops.AdjustContrastv2_frozen_graph.pb"
    
    class OnnxModel():
        def __init__(self, onnx_path):
            self.onnx_session = rt.InferenceSession(onnx_path)
            self.input_name = self.get_input_name(self.onnx_session)
            self.output_name = self.get_output_name(self.onnx_session)
    
        def get_output_name(self, onnx_session):
            output_name = []
            for node in onnx_session.get_outputs():
                output_name.append(node.name)
            return output_name
     
        def get_input_name(self, onnx_session):
            input_name = []
            for node in onnx_session.get_inputs():
                input_name.append(node.name)
            return input_name
     
        def get_input_feed(self, input_name, image_numpy):
            i = 0
            input_feed = {}
            for name in input_name:
                input_feed[name] = image_numpy[i]
                i += 1
            return input_feed
     
        def forward(self, numpy_list):
     
            input_feed = self.get_input_feed(self.input_name, numpy_list)
            output = self.onnx_session.run(self.output_name, input_feed=input_feed)
            return output
        
    def onnx_model_test(model_path, test_args):
        model = OnnxModel(model_path)
        return model.forward(test_args)[0]
    
    def wrap_frozen_graph(graph_def, inputs, outputs, print_graph=False):
        def _imports_graph_def():
            tf.compat.v1.import_graph_def(graph_def, name="")
    
        wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
        import_graph = wrapped_import.graph
    
        return wrapped_import.prune(
            tf.nest.map_structure(import_graph.as_graph_element, inputs),
            tf.nest.map_structure(import_graph.as_graph_element, outputs))
    
    def tf_model_test(model_path, test_args):
        with tf.io.gfile.GFile(model_path, "rb") as f:
            graph_def = tf.compat.v1.GraphDef()
            loaded = graph_def.ParseFromString(f.read())
    
        # Wrap frozen graph to ConcreteFunctions
        frozen_func = wrap_frozen_graph(graph_def=graph_def,
                                        inputs=["x:0", "x_1:0"],
                                        outputs=["PartitionedCall/AdjustContrastv2:0"],
                                        print_graph=True)
        print("-" * 50)
        print("Frozen model inputs: ")
        print(frozen_func.inputs)
        print("Frozen model outputs: ")
        print(frozen_func.outputs)
        predictions = frozen_func(x=tf.convert_to_tensor(test_args[0]), x_1=tf.convert_to_tensor(test_args[1]))
        return predictions[0]
    
    res1 = onnx_model_test(onnx_model_path, (images, contrast_factor))
    
    res2 = tf_model_test(tf_model_path, (images, contrast_factor))
    
    np.testing.assert_allclose(res1, res2.numpy(), rtol=1e-4, atol=1e-4)
    

    Here are the results:

    --------------------------------------------------
    Frozen model inputs: 
    [<tf.Tensor 'x:0' shape=(3, 3, 3, 2) dtype=float32>, <tf.Tensor 'x_1:0' shape=() dtype=float32>]
    Frozen model outputs: 
    [<tf.Tensor 'PartitionedCall/AdjustContrastv2:0' shape=(3, 3, 3, 2) dtype=float32>]
    Traceback (most recent call last):
      File "debug-cross-framework.py", line 81, in <module>
        np.testing.assert_allclose(res1, res2.numpy(), rtol=1e-4, atol=1e-4)
      File "/lib/python3.8/site-packages/numpy/testing/_private/utils.py", line 1527, in assert_allclose
        assert_array_compare(compare, actual, desired, err_msg=str(err_msg),
      File "/lib/python3.8/site-packages/numpy/testing/_private/utils.py", line 844, in assert_array_compare
        raise AssertionError(msg)
    AssertionError: 
    Not equal to tolerance rtol=0.0001, atol=0.0001
    
    Mismatched elements: 54 / 54 (100%)
    Max absolute difference: 0.3644538
    Max relative difference: 0.28935865
     x: array([[[[1.789271, 2.70442 ],
             [3.019392, 1.692135],
             [2.755761, 3.188043]],...
     y: array([[[[1.91538 , 2.339967],
             [3.145502, 1.327681],
             [2.88187 , 2.823589]],...
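
    For reference, here is a minimal NumPy sketch of what tf.raw_ops.AdjustContrastv2 is documented to compute, assuming NHWC input: each value is pulled toward, or pushed away from, its per-channel mean over height and width. Comparing both backends against this reference can show which one diverges:

    import numpy as np

    def adjust_contrast_reference(images, contrast_factor):
        # Per-channel mean over the height and width dimensions (NHWC layout).
        mean = images.mean(axis=(1, 2), keepdims=True)
        return (images - mean) * contrast_factor + mean

    ref = adjust_contrast_reference(np.load("images.npy"), np.load("contrast_factor.npy"))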
    

    The command for model conversion is:

    python -m tf2onnx.convert --saved-model {saved_tf_model_dir} --output {saved_onnx_model_path} --opset 17

    The conversion log is:

    2023-01-08 23:20:23.909106: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
    2023-01-08 23:20:23,919 - INFO - Using tensorflow=2.11.0.dev20220905, onnx=1.12.0, tf2onnx=1.12.1/b6d590
    2023-01-08 23:20:23,920 - INFO - Using opset <onnx, 17>
    2023-01-08 23:20:23,921 - INFO - Computed 0 values for constant folding
    2023-01-08 23:20:23,925 - INFO - Optimizing ONNX model
    2023-01-08 23:20:23,937 - INFO - After optimization: Identity -2 (2->0)
    2023-01-08 23:20:23,938 - INFO - 
    2023-01-08 23:20:23,938 - INFO - Successfully converted TensorFlow model onnx_test/tf_model/ to ONNX
    2023-01-08 23:20:23,938 - INFO - Model inputs: ['args_0', 'args_1']
    2023-01-08 23:20:23,938 - INFO - Model outputs: ['output_0']
    2023-01-08 23:20:23,938 - INFO - ONNX model is saved at xxxx/tf.raw_ops.AdjustContrastv2_model.onnx
    

    Screenshots

    Graph screenshots of the TensorFlow model (tf.raw_ops.AdjustContrastv2_frozen_graph.pb) and the ONNX model (tf.raw_ops.AdjustContrastv2_model.onnx) are attached in the original issue.

    bug 
    opened by enderdzz 0
  • onnxruntime.InferenceSession for String Tensors

    onnxruntime.InferenceSession for String Tensors

    Ask a Question

    Question

    I want to know how to check whether my ONNX model is correct. My data is a dictionary with features as strings or floats, and I used an embedding lookup to convert strings to int tensors. onnxruntime.InferenceSession fails to load the ONNX model, since the input dtype needs to be float or int. Can anyone help me load the ONNX model and get outputs? Thank you so much!!

    Metal device set to: Apple M1 Pro
    
    systemMemory: 32.00 GB
    maxCacheSize: 10.67 GB
    
    2023-01-06 12:52:24.730929: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
    2023-01-06 12:52:24.731065: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
    Traceback (most recent call last):
      File "onnx_2_tf.py", line 36, in <module>
        sess = onnxruntime.InferenceSession('/Users/root/onnxModel1.onnx')
      File "/Users/root/miniconda3/envs/tf_to_onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "/Users/root/miniconda3/envs/tf_to_onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 384, in _create_inference_session
        sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from /Users/onnxModel1.onnx failed:This is an invalid model. Type Error: Type 'tensor(string)' of input parameter (area) of operator (Equal) in node (StatefulPartitionedCall/model_3/vocab_layer_9/NotEqual) is invalid.
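
    The failure above is a graph-validity problem (the Equal node is given string inputs, which its ONNX type constraints reject here), not an input-feeding one. For what it's worth, when a graph does accept string tensors, onnxruntime takes them as NumPy object arrays; a sketch with a hypothetical model path and input shape (the input name "area" is taken from the error message):

    import numpy as np
    import onnxruntime as rt

    sess = rt.InferenceSession("string_input_model.onnx")  # hypothetical model
    feed = {"area": np.array([["kitchen"]], dtype=object)}  # hypothetical value/shape
    outputs = sess.run(None, feed)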
    
    question 
    opened by jasonxcx 0
  • TF Conv2D Grouped Unsupported ops: Counter({'PartitionedCall': 1})

    TF Conv2D Grouped Unsupported ops: Counter({'PartitionedCall': 1})

    Describe the bug: an error occurs when a SavedModel from TF 2.10.0/2.11.0 that contains a grouped TF Conv2D is converted to ONNX with the API tf2onnx.convert.from_keras().

    [01-05-2023_15:50:43][WARNING][load.py:load()::177] No training configuration found in save file, so the model was *not* compiled. Compile it manually.
    [01-05-2023_15:50:43][INFO][custom_load.py:load_saved_model()::50] Load the model SaveModel (.pb, variable, assets)
    [01-05-2023_15:50:43][INFO][tf_to_onnx.py:convert_savemodel_to_onnx()::75] Convert SavemMdel to ONNX
    [01-05-2023_15:50:43][INFO][tfonnx.py:process_tf_graph()::437] Using tensorflow=2.11.0, onnx=1.13.0, tf2onnx=1.13.0/2c1db5
    [01-05-2023_15:50:43][INFO][tfonnx.py:process_tf_graph()::439] Using opset <onnx, 17>
    [01-05-2023_15:50:43][INFO][tf_utils.py:compute_const_folding_using_tf()::281] Computed 0 values for constant folding
    [01-05-2023_15:50:43][INFO][tf_utils.py:compute_const_folding_using_tf()::281] Computed 0 values for constant folding
    [01-05-2023_15:50:43][ERROR][tfonnx.py:tensorflow_onnx_mapping()::263] Tensorflow op [dummymodel/dummymodel/conv_1/PartitionedCall: PartitionedCall] is not supported
    [01-05-2023_15:50:43][ERROR][tfonnx.py:process_parsed_graph()::625] Unsupported ops: Counter({'PartitionedCall': 1})
    [01-05-2023_15:50:43][INFO][__init__.py:optimize_graph()::48] Optimizing ONNX model
    [01-05-2023_15:50:43][INFO][__init__.py:optimize_graph()::83] After optimization: Cast -3 (3->0), Const -16 (31->15), Identity -2 (2->0), Reshape -3 (3->0), Transpose -21 (25->4)
    [01-05-2023_15:50:43][INFO][tf_to_onnx.py:main()::119] Conversion ONNX succeeded in /tmp/dummymodel/model/inference/dummymodel.onnx 
    Traceback (most recent call last):
      File "tf_to_onnx.py", line 136, in <module>
        main(sys.argv[1:])
      File "tf_to_onnx.py", line 122, in main
        onnx.checker.check_model(model)
      File "/usr/local/lib/python3.8/dist-packages/onnx/checker.py", line 119, in check_model
        C.check_model(protobuf_string, full_check)
    onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for PartitionedCall with domain_version of 17
    
    ==> Context: Bad node spec for node. Name: dummymodel/dummymodel/conv_1/PartitionedCall OpType: PartitionedCall
    
    

    System information

    • OS Platform and Distribution : Ubuntu 20.04
    • TensorFlow Version: 2.10.1/2.11.0
    • Python version: 3.8
    • ONNX version : 1.13.0
    • ONNXRuntime version : 1.11.0
    • ONNX OPSET: 15/16/17

    To Reproduce: the dummy model below reproduces the error; it appears when "conv_1" has groups != 1:

    import tensorflow as tf
    
    def dummymodel(input_shape, num_classes:int, depth_multiplier:int=1, is_dropout:float=0.0):
        input_tensor = tf.keras.layers.Input(shape=input_shape, name="input")
        input_shape = input_tensor.get_shape().as_list()[1:3]
        x = tf.keras.layers.SeparableConv2D(
            filters=16,
            depth_multiplier=depth_multiplier,
            kernel_size=3,
            strides=2,
            name="convSep1",
        )(input_tensor)
        x = tf.keras.layers.BatchNormalization(name="batch1")(x)
        x_add_1 = tf.keras.layers.Activation("relu", name="relu1")(x)
        x = tf.keras.layers.DepthwiseConv2D(
            kernel_size=3,
            strides=(3, 3),
            depth_multiplier=depth_multiplier,
            name="convDepth1",
        )(x_add_1)
        x = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), name="avgPooling")(x)
        x_concat_1 = tf.keras.layers.UpSampling2D(size=(2, 2), interpolation="nearest", name="output")(x)
        x_concat_2 = tf.keras.layers.Conv2D(
            filters=16,
            kernel_size=(3, 3),
            strides=3,
            data_format=None,
            groups=4,
            activation=None,
            use_bias=True,
            name="conv_1",
        )(x_add_1)
        x = tf.keras.layers.Concatenate(axis=-1, name="concat")([x_concat_1, x_concat_2])
        x = tf.keras.layers.Lambda(
            lambda img: tf.image.resize(
                images=img, size=(input_shape[0], input_shape[1]), method="bilinear"
            ),
            name="resizeBilinear",
        )(x)
        x_add_2 = tf.keras.layers.SeparableConv2D(
            filters=16,
            depth_multiplier=depth_multiplier,
            kernel_size=3,
            strides=2,
            name="convSep2",
        )(x)
        x = tf.keras.layers.Add(name="add")([x_add_1, x_add_2])
        if is_dropout:
            x = tf.keras.layers.Dropout(rate=0.2, name="dropout")(x)
        x = tf.keras.layers.Conv2D(filters=num_classes, kernel_size=3, strides=2, name="conv2")(x)
        output = tf.keras.layers.UpSampling2D(size=(4, 4), interpolation="bilinear", name="output_")(x)
        return tf.keras.Model(
            inputs=input_tensor, outputs={"output_": output}, name="dummymodel"
        )
    
    if __name__ == "__main__":
        import os
    
        model = dummymodel(
            input_shape=(1024, 1024, 3),
            num_classes=3,
            depth_multiplier=1,
            is_dropout=0.0,
        )
        model.summary(line_length=250)
        tf.keras.models.save_model(
            model, os.path.join("/tmp", "dummymodel"), include_optimizer=False
        )
    

    The conversion script:

    import onnx
    import tensorflow as tf
    import tf2onnx
    
    
    if __name__ == "__main__":
        savedmodel_path="/tmp/dummymodel/"
        onnx_saved_path = "/tmp/dummymodel.onnx"
        model = tf.keras.models.load_model(savedmodel_path, compile=False)
        shape = model.input.get_shape()
        input_image = tf.keras.layers.Input(shape=(shape[1], shape[2], shape[3]), name="input_")
        spec = (tf.TensorSpec(shape, tf.float32, name="input_"),)
        print("Convert")
        _, _ = tf2onnx.convert.from_keras(
            model,
            input_signature=spec,
            opset=17,
            output_path=onnx_saved_path,
        )
        print("Check")
        model = onnx.load(onnx_saved_path)
        onnx.checker.check_model(model)
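
    A possible workaround, not from the report: express the grouped convolution as per-group Conv2D layers joined by split/concat, which is numerically equivalent to Conv2D(groups=g) and sidesteps the unconverted PartitionedCall. A sketch:

    import tensorflow as tf

    def grouped_conv2d(x, filters, kernel_size, strides, groups, name):
        # Split the channels into `groups` slices, convolve each slice with its
        # own kernels, then concatenate the results along the channel axis.
        slices = tf.split(x, num_or_size_splits=groups, axis=-1)
        outs = [
            tf.keras.layers.Conv2D(filters // groups, kernel_size, strides=strides,
                                   name=f"{name}_g{i}")(s)
            for i, s in enumerate(slices)
        ]
        return tf.keras.layers.Concatenate(axis=-1, name=name)(outs)

    In the dummy model above, conv_1 (filters=16, kernel_size=(3, 3), strides=3, groups=4) would become grouped_conv2d(x_add_1, filters=16, kernel_size=(3, 3), strides=3, groups=4, name="conv_1").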
    

    Screenshot: the conversion fails for the layer dummymodel/dummymodel/conv_1/PartitionedCall, which is the grouped Conv2D (screenshot attached in the original issue).

    bug 
    opened by RicoOscar 0
  • Error while execution of the file

    Error while execution of the file

    I am getting the following error; please help me resolve it:

    python -m tf2onnx.convert --tflite /home/sdr/Documents/sdr.h5 --output model.onnx

    /home/sdr/miniconda3/envs/tfonnx/lib/python3.10/runpy.py:126: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour
      warn(RuntimeWarning(msg))
    2022-12-25 18:20:29,461 - INFO - Using tensorflow=2.9.2, onnx=1.13.0, tf2onnx=1.13.0/434b4a
    2022-12-25 18:20:29,461 - INFO - Using opset <onnx, 13>
    Traceback (most recent call last):
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/runpy.py", line 196, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/tf2onnx/convert.py", line 706, in <module>
        main()
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/tf2onnx/convert.py", line 269, in main
        model_proto, _ = _convert_common(
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/tf2onnx/convert.py", line 164, in _convert_common
        g = process_tf_graph(tf_graph, const_node_values=const_node_values,
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/tf2onnx/tfonnx.py", line 453, in process_tf_graph
        main_g, subgraphs = graphs_from_tflite(tflite_path, input_names, output_names)
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/tf2onnx/tflite_utils.py", line 143, in graphs_from_tflite
        tflite_graphs, opcodes, model, tensor_shapes = read_tflite_model(tflite_path)
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/tf2onnx/tflite_utils.py", line 184, in read_tflite_model
        for i in range(model.OperatorCodesLength()):
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/tf2onnx/tflite/Model.py", line 55, in OperatorCodesLength
        o = flatbuffers.number_types.UOffsetTFlags.py_type(self._tab.Offset(6))
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/flatbuffers/table.py", line 37, in Offset
        vtable = self.Pos - self.Get(N.SOffsetTFlags, self.Pos)
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/flatbuffers/table.py", line 93, in Get
        return flags.py_type(encode.Get(flags.packer_type, self.Bytes, off))
      File "/home/sdr/miniconda3/envs/tfonnx/lib/python3.10/site-packages/flatbuffers/encode.py", line 26, in Get
        return packer_type.unpack_from(memoryview_type(buf), head)[0]
    struct.error: unpack_from requires a buffer of at least 1178880141 bytes for unpacking 4 bytes at offset 1178880137 (actual buffer size is 1668256)
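
    A guess at the root cause, not confirmed in the issue: sdr.h5 looks like a Keras model file, while --tflite expects a .tflite flatbuffer, so the flatbuffer parser reads garbage offsets and fails exactly like this. If the model is indeed Keras, converting it directly may work:

    import tensorflow as tf
    import tf2onnx

    # Load the Keras .h5 model and convert it with the Keras API instead of --tflite.
    model = tf.keras.models.load_model("/home/sdr/Documents/sdr.h5")
    tf2onnx.convert.from_keras(model, opset=13, output_path="model.onnx")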

    opened by nishant2019 0
  • Different results between TensorFlow model and ONNX model.

    Different results between TensorFlow model and ONNX model.

    Hi,

    I converted a TensorFlow model to an ONNX model:

    spec = (tf.TensorSpec((None, 256), tf.int32, name="input_ids"),)
    tf2onnx.convert.from_keras(model, output_path='model_biomarker.onnx', input_signature=spec)
    

    However, when I make an inference on the ONNX model, the output is different from what I get from the TensorFlow model.

    Could anyone help me why there is a difference between the TensorFlow model and ONNX model output?

    Thanks!
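
    One way to quantify the divergence (a sketch, not from the issue; the vocabulary size and tolerances are assumptions, and `model` is the Keras model from the snippet above):

    import numpy as np
    import onnxruntime as rt

    ids = np.random.randint(0, 30522, size=(1, 256), dtype=np.int32)  # assumed vocab size
    sess = rt.InferenceSession("model_biomarker.onnx")
    onnx_out = sess.run(None, {"input_ids": ids})[0]
    tf_out = model(ids).numpy()
    np.testing.assert_allclose(onnx_out, tf_out, rtol=1e-3, atol=1e-4)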

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 18.04*): macOS Montery 12.3
    • TensorFlow Version: 2.4.1
    • Python version: 3.9.7
    • ONNX version (if applicable, e.g. 1.11*): 1.13.0
    • ONNXRuntime version (if applicable, e.g. 1.11*): 1.13.1
    bug 
    opened by chielingyueh 0
Releases(v1.13.0)
Owner
Open Neural Network Exchange
ONNX is an open ecosystem for interoperable AI models. It's a community project: we welcome your contributions!