Convert ONNX model graph to Keras model format.

Overview

onnx2keras

ONNX to Keras deep neural network converter.

Requirements

TensorFlow 2.0

API

onnx_to_keras(onnx_model, input_names, input_shapes=None, name_policy=None, verbose=True, change_ordering=False) -> {Keras model}

onnx_model: ONNX model to convert

input_names: list with graph input names

input_shapes: override input shapes (experimental)

name_policy: ['renumerate', 'short', 'default'] override layer names (experimental)

verbose: detailed output

change_ordering: change tensor ordering from channels-first (NCHW) to channels-last / HWC (experimental); see the example below
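
A minimal sketch of a call that combines the experimental options above (the input name 'input' and the shape (3, 224, 224) are placeholders; use whatever your graph actually declares):

import onnx
from onnx2keras import onnx_to_keras

onnx_model = onnx.load('resnet18.onnx')

# Override the input shape, shorten the generated layer names
# and convert the graph to channels-last ordering.
k_model = onnx_to_keras(
    onnx_model,
    input_names=['input'],
    input_shapes=[(3, 224, 224)],
    name_policy='short',
    verbose=True,
    change_ordering=True,
)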

Getting started

ONNX model

import onnx
from onnx2keras import onnx_to_keras

# Load ONNX model
onnx_model = onnx.load('resnet18.onnx')

# Call the converter ('input' is the main model input name; it can be different for your model)
k_model = onnx_to_keras(onnx_model, ['input'])

The Keras model will be stored in the k_model variable. So simple, isn't it?
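
The converted model is a regular tf.keras model, so the usual Keras persistence calls apply. A minimal sketch (file names are placeholders, and it assumes the converted graph contains only serializable layers):

import tensorflow as tf

# Save as an HDF5 file or as a TensorFlow SavedModel directory.
k_model.save('resnet18_converted.h5')
tf.saved_model.save(k_model, './resnet18_saved_model')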

PyTorch model

Using ONNX as an intermediate format, you can convert PyTorch models as well.

import numpy as np
import torch
from torch.autograd import Variable
from pytorch2keras.converter import pytorch_to_keras
import torchvision.models as models

if __name__ == '__main__':
    input_np = np.random.uniform(0, 1, (1, 3, 224, 224))
    input_var = Variable(torch.FloatTensor(input_np))
    model = models.resnet18()
    model.eval()
    k_model = \
        pytorch_to_keras(model, input_var, [(3, 224, 224,)], verbose=True, change_ordering=True)

    for i in range(3):
        input_np = np.random.uniform(0, 1, (1, 3, 224, 224))
        input_var = Variable(torch.FloatTensor(input_np))
        output = model(input_var)
        pytorch_output = output.data.numpy()
        keras_output = k_model.predict(np.transpose(input_np, [0, 2, 3, 1]))
        error = np.max(pytorch_output - keras_output)
        print('error -- ', error)  # Around zero :)
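
If you prefer not to depend on pytorch2keras, the same round trip can be done by hand: export the PyTorch model with torch.onnx.export and feed the result to onnx2keras. A minimal sketch (the file name, input name and change_ordering choice below are assumptions, not part of the original example):

import onnx
import torch
import torchvision.models as models
from onnx2keras import onnx_to_keras

model = models.resnet18()
model.eval()

# Export the PyTorch model to an ONNX file with a fixed input name.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, 'resnet18.onnx',
                  input_names=['input'], output_names=['output'])

# Convert the exported ONNX graph to a Keras model (channels-last here).
onnx_model = onnx.load('resnet18.onnx')
k_model = onnx_to_keras(onnx_model, ['input'], change_ordering=True)
k_model.summary()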

Deploying a model as a frozen graph

You can try the snippet below to convert your ONNX / PyTorch model to a frozen graph. It may be useful for deployment with TensorFlow.js, TensorFlow for Android, or the TensorFlow C API.

import numpy as np
import torch
from pytorch2keras.converter import pytorch_to_keras
from torch.autograd import Variable
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2


# Create and load the model
# (`Model` is a placeholder for your own torch.nn.Module subclass;
#  replace it and the checkpoint path with your actual architecture and weights)
model = Model()
model.load_state_dict(torch.load('model-checkpoint.pth'))
model.eval()

# Make a dummy input (and check that the model runs)
input_np = np.random.uniform(0, 1, (1, 3, 224, 224))
input_var = Variable(torch.FloatTensor(input_np))
output = model(input_var)

# Convert the model!
k_model = \
    pytorch_to_keras(model, input_var, [(3, 224, 224)],
                     verbose=True, name_policy='short',
                     change_ordering=True)

# Save model to SavedModel format
tf.saved_model.save(k_model, "./models")

# Convert Keras model to ConcreteFunction
full_model = tf.function(lambda x: k_model(x))
full_model = full_model.get_concrete_function(
    tf.TensorSpec(k_model.inputs[0].shape, k_model.inputs[0].dtype))

# Get frozen ConcreteFunction
frozen_func = convert_variables_to_constants_v2(full_model)
frozen_func.graph.as_graph_def()

print("-" * 50)
print("Frozen model layers: ")
for layer in [op.name for op in frozen_func.graph.get_operations()]:
    print(layer)

print("-" * 50)
print("Frozen model inputs: ")
print(frozen_func.inputs)
print("Frozen model outputs: ")
print(frozen_func.outputs)

# Save frozen graph from frozen ConcreteFunction to hard drive
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir="./frozen_models",
                  name="frozen_graph.pb",
                  as_text=False)
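
To check the frozen graph later (or to use it from plain TensorFlow without Keras), it can be loaded back and wrapped into a callable. A minimal sketch, assuming the file written above and a 224x224 RGB input; the tensor names 'x:0' and 'Identity:0' are typical defaults, so verify them against the frozen model inputs/outputs printed by the script above:

import numpy as np
import tensorflow as tf

def wrap_frozen_graph(graph_def, inputs, outputs):
    # Import the GraphDef into a fresh graph and prune it to a ConcreteFunction.
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped = tf.compat.v1.wrap_function(_imports_graph_def, [])
    return wrapped.prune(
        tf.nest.map_structure(wrapped.graph.as_graph_element, inputs),
        tf.nest.map_structure(wrapped.graph.as_graph_element, outputs))

graph_def = tf.compat.v1.GraphDef()
with open('./frozen_models/frozen_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

infer = wrap_frozen_graph(graph_def, inputs='x:0', outputs='Identity:0')
dummy = tf.constant(np.random.rand(1, 224, 224, 3).astype(np.float32))
print(infer(dummy).shape)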

License

This software is covered by MIT License.

Comments
  • change_ordering flag not working on a VGG19 pre-trained model

    change_ordering flag not working on a VGG19 pre-trained model

    I am trying to convert a pre-trained VGG19 model from ONNX to Keras and then run it on my CPU (run = just predict, not train).

    I managed to convert it with onnx2keras but then ran into issues with NCHW channel_first (3, 224, 224) v. NHWC channel_last (224, 224, 3). Here is the error I get when running the converted model with Keras (TensorFlow backend):

    tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
    

    I went back to onnx2keras and realized there is an experimental flag change_ordering that seems to do what I needed!

    Unfortunately, change_ordering does not seem to work with my model:

    ValueError: Operands could not be broadcast together with shapes (224, 224, 3) (3, 224, 224)
    

    Here is the full stack trace:

    ValueError                                Traceback (most recent call last)
    <ipython-input-1-3a0e0ae82f31> in <module>()
          8 
          9 
    ---> 10 keras_m = onnx_to_keras(onnx_m, ['input'], verbose=True, change_ordering=True)
    
    ~/Documents/dev/venv/lib/python3.7/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
        212
        213 
    --> 214         model_tf_ordering = keras.models.Model.from_config(conf)
        215 
        216         for dst_layer, src_layer in zip(model_tf_ordering.layers,
    
    ~/Documents/dev/venv/lib/python3.7/site-packages/keras/engine/network.py in from_config(cls, config, custom_objects)
       1030                 if layer in unprocessed_nodes:
       1031                     for node_data in unprocessed_nodes.pop(layer):
    -> 1032                         process_node(layer, node_data)
       1033 
       1034         name = config.get('name')
    
    ~/Documents/dev/venv/lib/python3.7/site-packages/keras/engine/network.py in process_node(layer, node_data)
        989             # and building the layer if needed.
        990             if input_tensors:
    --> 991                 layer(unpack_singleton(input_tensors), **kwargs)
        992 
        993         def process_layer(layer_data):
    
    ~/Documents/dev/venv/lib/python3.7/site-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
        429                                          'You can build it manually via: '
        430                                          '`layer.build(batch_input_shape)`')
    --> 431                 self.build(unpack_singleton(input_shapes))
        432                 self.built = True
        433 
    
    ~/Documents/dev/venv/lib/python3.7/site-packages/keras/layers/merge.py in build(self, input_shape)
        254 
        255     def build(self, input_shape):
    --> 256         super(Subtract, self).build(input_shape)
        257         if len(input_shape) != 2:
        258             raise ValueError('A `Subtract` layer should be called '
    
    ~/Documents/dev/venv/lib/python3.7/site-packages/keras/layers/merge.py in build(self, input_shape)
         89                 shape = input_shape[i][1:]
         90             output_shape = self._compute_elemwise_op_output_shape(output_shape,
    ---> 91                                                                   shape)
         92         # If the inputs have different ranks, we have to reshape them
         93         # to make them broadcastable.
    
    ~/Documents/dev/venv/lib/python3.7/site-packages/keras/layers/merge.py in _compute_elemwise_op_output_shape(self, shape1, shape2)
         59                     raise ValueError('Operands could not be broadcast '
         60                                      'together with shapes ' +
    ---> 61                                      str(shape1) + ' ' + str(shape2))
         62                 output_shape.append(i)
         63         return tuple(output_shape)
    
    ValueError: Operands could not be broadcast together with shapes (224, 224, 3) (3, 224, 224)
    

    I am using python3.7 and (all from pip): onnx==1.5.0, onnx2keras==0.0.4, Keras==2.2.4, and tensorflow==1.14.0.

    Here is the code that gets me the stack trace above (on a Jupyter Notebook):

    import onnx
    from onnx2keras import onnx_to_keras
    onnx_m = onnx.load('VGG19_PRETRAINED.onnx')
    keras_m = onnx_to_keras(onnx_m, ['input'], verbose=True, change_ordering=True)
    

    @nerox8664, thanks a lot for this (very useful!) library. I do appreciate any hints you might have about my issue.

    opened by lucasrla 10
  • InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU. Why does this happen after onnx2keras?

    InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU. Why does this happen after onnx2keras?

    Hi again,

    I have done these steps:

    onnx_model = onnx.load(FILE_PATH+"mnist_test.onnx")
    k_model_onnx = onnx_to_keras(onnx_model, ['input_1'], name_policy="short")
    k_model_onnx.summary()
    
    Model: "model"
    __________________________________________________________________________________________________
    Layer (type)                    Output Shape         Param #     Connected to                     
    ==================================================================================================
    input_1 (InputLayer)            [(None, 28, 28, 1)]  0                                            
    __________________________________________________________________________________________________
    adjusted (Permute)              (None, 1, 28, 28)    0           input_1[0][0]                    
    __________________________________________________________________________________________________
    convolut (Conv2D)               (None, 32, 26, 26)   320         adjusted[0][0]                   
    __________________________________________________________________________________________________
    conv2d/I (Activation)           (None, 32, 26, 26)   0           convolut[0][0]                   
    __________________________________________________________________________________________________
    convolut_1 (Conv2D)             (None, 64, 24, 24)   18496       conv2d/I[0][0]                   
    __________________________________________________________________________________________________
    conv2d_1 (Activation)           (None, 64, 24, 24)   0           convolut_1[0][0]                 
    __________________________________________________________________________________________________
    conv2d_1_1_pad (ZeroPadding2D)  (None, 64, 24, 24)   0           conv2d_1[0][0]                   
    __________________________________________________________________________________________________
    conv2d_1_1 (MaxPooling2D)       (None, 64, 12, 12)   0           conv2d_1_1_pad[0][0]             
    __________________________________________________________________________________________________
    conv2d_1_2 (Permute)            (None, 12, 12, 64)   0           conv2d_1_1[0][0]                 
    __________________________________________________________________________________________________
    flatten/ (Reshape)              (None, None)         0           conv2d_1_2[0][0]                 
    __________________________________________________________________________________________________
    transfor_reshape (Reshape)      (None, 9216)         0           flatten/[0][0]                   
    __________________________________________________________________________________________________
    transfor (Dense)                (None, 128)          1179648     transfor_reshape[0][0]           
    __________________________________________________________________________________________________
    biased_t_const2 (Lambda)        (128,)               0           input_1[0][0]                    
    __________________________________________________________________________________________________
    biased_t (Lambda)               (None, 128)          0           transfor[0][0]                   
                                                                     biased_t_const2[0][0]            
    __________________________________________________________________________________________________
    dense/Id (Activation)           (None, 128)          0           biased_t[0][0]                   
    __________________________________________________________________________________________________
    transfor_1 (Dense)              (None, 10)           1280        dense/Id[0][0]                   
    __________________________________________________________________________________________________
    biased_t_1_const2 (Lambda)      (10,)                0           input_1[0][0]                    
    __________________________________________________________________________________________________
    biased_t_1 (Lambda)             (None, 10)           0           transfor_1[0][0]                 
                                                                     biased_t_1_const2[0][0]          
    __________________________________________________________________________________________________
    dense_1/ (Activation)           (None, 10)           0           biased_t_1[0][0]                 
    ==================================================================================================
    Total params: 1,199,744
    Trainable params: 1,199,744
    Non-trainable params: 0
    __________________________________________________________________________________________________
    
    y_pred_onnx = k_model_onnx.predict(x_test)
    

    Result :

    Tensor("model/transfor/MatMul:0", shape=(None, 128), dtype=float32) Tensor("model/biased_t_const2/Const:0", shape=(128,), dtype=float32)
    Tensor("model/transfor_1/MatMul:0", shape=(None, 10), dtype=float32) Tensor("model/biased_t_1_const2/Const:0", shape=(10,), dtype=float32)
    ---------------------------------------------------------------------------
    InvalidArgumentError                      Traceback (most recent call last)
    <ipython-input-16-c199c87d14d2> in <module>()
    ----> 1 y_pred_onnx = k_model_onnx.predict(x_test)
    
    7 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
         58     ctx.ensure_initialized()
         59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
    ---> 60                                         inputs, attrs, num_outputs)
         61   except core._NotOkStatusException as e:
         62     if name is not None:
    
    InvalidArgumentError:  Default MaxPoolingOp only supports NHWC on device type CPU
    	 [[node model/conv2d_1_1/MaxPool (defined at <ipython-input-16-c199c87d14d2>:1) ]] [Op:__inference_predict_function_2104]
    
    Function call stack:
    predict_function
    

    I have no idea what this means or why it happened. Thanks and sorry!

    opened by M-Tonin 8
  • Mobilenetv2 conversion from pytorch doesn't work with change_ordering=True

    Mobilenetv2 conversion from pytorch doesn't work with change_ordering=True

    @nerox8664 Getting exception 'list' object has no attribute 'shape'

    At https://github.com/nerox8664/onnx2keras/blob/master/onnx2keras/converter.py#L220

    Any suggestions on how to fix this?

    opened by jkparuchuri 6
  • TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'

    TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'

    I am trying to convert an ONNX model to Keras, but when I call the conversion function I receive the following error message: "TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'"

    You can see the ONNX Model here: https://ibb.co/sKnbxWY

    import onnx2keras
    from onnx2keras import onnx_to_keras
    import keras
    import onnx
    
    onnx_model = onnx.load('onnxModel.onnx')
    k_model = onnx_to_keras(onnx_model, ['input_1'])
    
    keras.models.save_model(k_model,'kerasModel.h5',overwrite=True,include_optimizer=True)
    
    File "C:/../onnx2Keras.py", line 7, in <module>
        k_model = onnx_to_keras(onnx_model, ['input_1'])
      File "..\site-packages\onnx2keras\converter.py", line 80, in onnx_to_keras
        weights[onnx_extracted_weights_name] = numpy_helper.to_array(onnx_w)
    TypeError: unhashable type: 'google.protobuf.pyext._message.RepeatedScalarContainer'
    
    opened by emanuelcovaci 6
  • DETR KeyError: 'axes' and KeyError: '_outputs'

    DETR KeyError: 'axes' and KeyError: '_outputs'

    I am trying to convert a DETR model from PyTorch to Keras. The process has two parts:

    1. From Pytorch to onnx
    2. From onnx to keras

    The first part works fine and for the second part (using your onnx2keras), I get errors like KeyError: 'axes' and KeyError: '_outputs'. The part responsible for generating the params of the node is this:

    node_params = onnx_node_attributes_to_dict(node.attribute)

    I checked the node attributes, and they are not there. I was wondering if you could shed some light on why this is happening?

    Thank you.

    opened by ktobah 4
  • Missing converter for LRN

    Missing converter for LRN

    After trying to convert the AlexNet model from onnx (found here), I got the following error:

    KeyError                                  Traceback (most recent call last)
     in 
          1 onnx_model = onnx.load('bvlcalexnet-9.onnx')
          2 print("onnx model loaded")
    ----> 3 model = onnx_to_keras(onnx_model, ['data_0'])
    
    ~/.local/lib/python3.8/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
        172             logger.debug('... found all, continue')
        173 
    --> 174         AVAILABLE_CONVERTERS[node_type](
        175             node,
        176             node_params,
    
    KeyError: 'LRN'
    

    Any ideas on how to get this to work?

    opened by igormp 4
  • Add partial support for 3D related conv/pooling/padding

    Add partial support for 3D related conv/pooling/padding

    Hi, nice work, and I found it quite useful. I would like to push some changes for 3D conv/padding/pooling. Not all functions are supported, but it is sufficient for my purpose so far.

    Hope you can accept the request. Cheers,

    opened by jiayiliu 4
  • The transferred Keras file has an extra padding layer

    The transferred Keras file has an extra padding layer

    The original ONNX file didn't have a padding layer, but the converted Keras model ends up with an extra padding layer. The ONNX file itself was exported from PyTorch.

    Hope for help~

    opened by pawopawo 4
  • AttributeError: 'ParsedRequirement' object has no attribute 'req'

    AttributeError: 'ParsedRequirement' object has no attribute 'req'

    I'm trying to install the onnx2keras module, but while installing I get an error. I tried installing both with pip and from the source directory; the error is always the same. Is there any dependency I have to install?

    Traceback (most recent call last):
      File "setup.py", line 16, in <module>
        reqs = [str(ir.req) for ir in install_reqs]
      File "setup.py", line 16, in <listcomp>
        reqs = [str(ir.req) for ir in install_reqs]
    AttributeError: 'ParsedRequirement' object has no attribute 'req'

    opened by pranv12 3
  • ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).

    ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).

    Hello, I'm trying to convert my PyTorch model to Keras, and I already have an ONNX file for it. But when I started converting from ONNX to Keras, I got the following error:

    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 645).
    DEBUG:onnx2keras:Check input 1 (name 646).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
    WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
    WARNING:tensorflow:Layer 647 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.
    
    If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
    
    To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
    
    WARNING:tensorflow:Layer 647 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.
    
    If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
    
    To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
    
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Cast
    DEBUG:onnx2keras:node_name: 648
    DEBUG:onnx2keras:node_params: {'to': 7, 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 647).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Cast
    DEBUG:onnx2keras:node_name: 649
    DEBUG:onnx2keras:node_params: {'to': 11, 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 648).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Constant
    DEBUG:onnx2keras:node_name: 650
    DEBUG:onnx2keras:node_params: {'value': array(1.), 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Div
    DEBUG:onnx2keras:node_name: 651
    DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 650).
    DEBUG:onnx2keras:Check input 1 (name 649).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:div:Convert inputs to Keras/TF layers if needed.
    WARNING:tensorflow:Layer 651 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.
    
    If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
    
    To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
    
    WARNING:tensorflow:Layer 651 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.
    
    If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
    
    To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
    
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Constant
    DEBUG:onnx2keras:node_name: 652
    DEBUG:onnx2keras:node_params: {'value': array(224.), 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Mul
    DEBUG:onnx2keras:node_name: 653
    DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 651).
    DEBUG:onnx2keras:Check input 1 (name 652).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
    WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
    WARNING:tensorflow:Layer 653 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.
    
    If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
    
    To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
    
    WARNING:tensorflow:Layer 653 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.
    
    If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
    
    To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
    
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Cast
    DEBUG:onnx2keras:node_name: 654
    DEBUG:onnx2keras:node_params: {'to': 7, 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 653).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Mul
    DEBUG:onnx2keras:node_name: 655
    DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 654).
    DEBUG:onnx2keras:Check input 1 (name 654).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:mul:Convert inputs to Keras/TF layers if needed.
    WARNING:onnx2keras:mul:Failed to use keras.layers.Multiply. Fallback to TF lambda.
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Unsqueeze
    DEBUG:onnx2keras:node_name: 657
    DEBUG:onnx2keras:node_params: {'axes': [0], 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 639).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:unsqueeze:Work with numpy types.
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Unsqueeze
    DEBUG:onnx2keras:node_name: 659
    DEBUG:onnx2keras:node_params: {'axes': [0], 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 655).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Concat
    DEBUG:onnx2keras:node_name: 660
    DEBUG:onnx2keras:node_params: {'axis': 0, 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name 657).
    DEBUG:onnx2keras:Check input 1 (name 1057).
    DEBUG:onnx2keras:The input not found in layers / model inputs.
    DEBUG:onnx2keras:Found in weights, add as a numpy constant.
    DEBUG:onnx2keras:Check input 2 (name 659).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:concat:Concat Keras layers.
    WARNING:onnx2keras:concat:!!! IMPORTANT INFORMATION !!!
    WARNING:onnx2keras:concat:Something goes wrong with concat layers. Will use TF fallback.
    WARNING:onnx2keras:concat:---
    Traceback (most recent call last):
      File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\reshape_layers.py", line 110, in convert_concat
        name=keras_name)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\layers\merge.py", line 705, in concatenate
        return Concatenate(axis=axis, **kwargs)(inputs)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 745, in __call__
        inputs = nest.map_structure(_convert_non_tensor, inputs)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in map_structure
        structure[0], [func(*x) for x in entries],
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in <listcomp>
        structure[0], [func(*x) for x in entries],
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 743, in _convert_non_tensor
        return ops.convert_to_tensor(x)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1184, in convert_to_tensor
        return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1242, in convert_to_tensor_v2
        as_ref=False)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1296, in internal_convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function
        return constant_op.constant(value, dtype, name=name)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 227, in constant
        allow_broadcast=True)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 235, in _constant_impl
        t = convert_to_eager_tensor(value, ctx, dtype)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
        return ops.EagerTensor(value, ctx.device_name, dtype)
    ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "C:/Users/1/PycharmProjects/untitled/onnx_.py", line 6, in <module>
        k_model = onnx2keras.onnx_to_keras(model, ["input_data"])
      File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\converter.py", line 177, in onnx_to_keras
        keras_names
      File "C:\Users\1\Anaconda3\lib\site-packages\onnx2keras\reshape_layers.py", line 122, in convert_concat
        layers[node_name] = lambda_layer(layer_input)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 745, in __call__
        inputs = nest.map_structure(_convert_non_tensor, inputs)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in map_structure
        structure[0], [func(*x) for x in entries],
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\util\nest.py", line 535, in <listcomp>
        structure[0], [func(*x) for x in entries],
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 743, in _convert_non_tensor
        return ops.convert_to_tensor(x)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1184, in convert_to_tensor
        return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1242, in convert_to_tensor_v2
        as_ref=False)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1296, in internal_convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function
        return constant_op.constant(value, dtype, name=name)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 227, in constant
        allow_broadcast=True)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 235, in _constant_impl
        t = convert_to_eager_tensor(value, ctx, dtype)
      File "C:\Users\1\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
        return ops.EagerTensor(value, ctx.device_name, dtype)
    ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type NoneType).
    

    Here is my code:

    import onnx
    import onnx2keras
    
    model = onnx.load_model("model.onnx")
    
    k_model = onnx2keras.onnx_to_keras(model, ["input_data"])
    

    Thank you.

    opened by TheArheus 3
  •     AVAILABLE_CONVERTERS[node_type]( KeyError: 'Upsample'

    AVAILABLE_CONVERTERS[node_type]( KeyError: 'Upsample'

      File "...\anaconda3\lib\site-packages\onnx2keras\converter.py", line 169, in onnx_to_keras
        AVAILABLE_CONVERTERS[node_type](
    KeyError: 'Upsample'
    

    Can you implement a converter for the Upsample op?

    Here you can see the ONNX Model: https://drive.google.com/file/d/1nyWyCkKHT4Pdjli8NPsHMj8Pqpo0_fST/view?usp=sharing

    opened by emanuelcovaci 3
  • AttributeError: Number of inputs is not equal 1 for unsqueeze layer

    AttributeError: Number of inputs is not equal 1 for unsqueeze layer

    There are two similar LeNet5 models, but when using onnx2keras their node information is different. As a result, an error is reported when converting the second model. VGG16 hits the same problem; can someone help me with this?
    opened by MT010104 0
  • fix conv1d conversion

    fix conv1d conversion

    In the original implementation, when converting Conv1d layers from ONNX to Keras, the Conv1d bias terms were ignored. This fixes the issue by adding the bias terms back to the output.

    opened by BoChenYS 0
  • ValueError: `padding` should have 3 elements. Received: [0].

    ValueError: `padding` should have 3 elements. Received: [0].

    I was trying to export an .onnx file to an .h5 file, and I don't know how to solve this error:

    DEBUG:onnx2keras:Output TF Layer -> KerasTensor(type_spec=TensorSpec(shape=(None, 128, 42), dtype=tf.float32, name=None), name='input.1/transpose_1:0', description="created by layer 'input.1'")
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: Relu
    DEBUG:onnx2keras:node_name: onnx::MaxPool_16
    DEBUG:onnx2keras:node_params: {'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name input.1).
    DEBUG:onnx2keras:... found all, continue
    DEBUG:onnx2keras:######
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Converting ONNX operation
    DEBUG:onnx2keras:type: MaxPool
    DEBUG:onnx2keras:node_name: input.4
    DEBUG:onnx2keras:node_params: {'kernel_shape': [2], 'pads': [0, 0], 'strides': [2], 'change_ordering': False, 'name_policy': None}
    DEBUG:onnx2keras:...
    DEBUG:onnx2keras:Check if all inputs are available:
    DEBUG:onnx2keras:Check input 0 (name onnx::MaxPool_16).
    DEBUG:onnx2keras:... found all, continue
    WARNING:onnx2keras:maxpool:Unable to use same padding. Add ZeroPadding2D layer to fix shapes.
    Traceback (most recent call last):
      File "f:/vscode_workspace/model_output/to_onnx.py", line 101, in <module>
        onnx_to_h5(input_path, output_path)  # convert the ONNX model to an h5 model
      File "f:/vscode_workspace/model_output/to_onnx.py", line 54, in onnx_to_h5
        k_model = onnx_to_keras(onnx_model, ['input'])
      File "D:\miniconda\envs\d2l\lib\site-packages\onnx2keras\converter.py", line 178, in onnx_to_keras
        AVAILABLE_CONVERTERS[node_type](
      File "D:\miniconda\envs\d2l\lib\site-packages\onnx2keras\pooling_layers.py", line 50, in convert_maxpool
        padding_layer = keras.layers.ZeroPadding3D(
      File "D:\miniconda\envs\d2l\lib\site-packages\keras\layers\reshaping\zero_padding3d.py", line 94, in __init__
        raise ValueError(
    ValueError: `padding` should have 3 elements. Received: [0].

    opened by Eien9 0
  • Not able to convert onnx model

    Not able to convert onnx model

    I was trying to convert an ONNX model to a Keras model and kept getting an error. I tried several methods with no success and need help from the community. Below I am providing the link to the ONNX model:

    link: https://github.com/zabir-nabil/onnx-face-liveness/raw/main/face_liveness.onnx

    opened by deekshith1352 1
  • Use of bare except clause in `convert_concat`

    Use of bare except clause in `convert_concat`

    In the reshape_layers.convert_concat function, there is an except clause that catches every exception (link). What is the type of exception encountered, and when does this happen? This clause should be restricted to the expected types of exceptions.

    Also, if an exception is raised, it falls back to using a keras Lambda layer with a custom function with tf.concat. The use of Lambda layers should be avoided. Hence the need to specify the kind of exception that can be caught in order to find a better solution.

    opened by bourcierj 0
Owner
Grigory Malivenko
Machine Learning Engineer
Grigory Malivenko
tf2onnx - Convert TensorFlow, Keras and Tflite models to ONNX.

tf2onnx converts TensorFlow (tf-1.x or tf-2.x), tf.keras and tflite models to ONNX via command line or python api.

Open Neural Network Exchange 1.8k Jan 8, 2023
ONNX-GLPDepth - Python scripts for performing monocular depth estimation using the GLPDepth model in ONNX

ONNX-GLPDepth - Python scripts for performing monocular depth estimation using the GLPDepth model in ONNX

Ibai Gorordo 18 Nov 6, 2022
ONNX-PackNet-SfM: Python scripts for performing monocular depth estimation using the PackNet-SfM model in ONNX

Python scripts for performing monocular depth estimation using the PackNet-SfM model in ONNX

Ibai Gorordo 14 Dec 9, 2022
Convert Pytorch model to onnx or tflite, and the converted model can be visualized by Netron

Convert Pytorch model to onnx or tflite, and the converted model can be visualized by Netron

Roxbili 5 Nov 19, 2022
Json2Xml tool will help you convert from json COCO format to VOC xml format in Object Detection Problem.

JSON 2 XML All codes assume running from root directory. Please update the sys path at the beginning of the codes before running. Over View Json2Xml t

Nguyễn Trường Lâu 6 Aug 22, 2022
Txt2Xml tool will help you convert from txt COCO format to VOC xml format in Object Detection Problem.

TXT 2 XML All codes assume running from root directory. Please update the sys path at the beginning of the codes before running. Over View Txt2Xml too

Nguyễn Trường Lâu 4 Nov 24, 2022
ONNX Runtime Web demo is an interactive demo portal showing real use cases running ONNX Runtime Web in VueJS.

ONNX Runtime Web demo is an interactive demo portal showing real use cases running ONNX Runtime Web in VueJS. It currently supports four examples for you to quickly experience the power of ONNX Runtime Web.

Microsoft 58 Dec 18, 2022
An executor that loads ONNX models and embeds documents using the ONNX runtime.

ONNXEncoder An executor that loads ONNX models and embeds documents using the ONNX runtime. Usage via Docker image (recommended) from jina import Flow

Jina AI 2 Mar 15, 2022
A very simple tool for situations where optimization with onnx-simplifier would exceed the Protocol Buffers upper file size limit of 2GB, or simply to separate onnx files to any size you want.

sne4onnx A very simple tool for situations where optimization with onnx-simplifier would exceed the Protocol Buffers upper file size limit of 2GB, or

Katsuya Hyodo 10 Aug 30, 2022
Simple ONNX operation generator. Simple Operation Generator for ONNX.

sog4onnx Simple ONNX operation generator. Simple Operation Generator for ONNX. https://github.com/PINTO0309/simple-onnx-processing-tools Key concept V

Katsuya Hyodo 6 May 15, 2022
A very simple tool to rewrite parameters such as attributes and constants for OPs in ONNX models. Simple Attribute and Constant Modifier for ONNX.

sam4onnx A very simple tool to rewrite parameters such as attributes and constants for OPs in ONNX models. Simple Attribute and Constant Modifier for

Katsuya Hyodo 6 May 15, 2022
Simple tool to combine(merge) onnx models. Simple Network Combine Tool for ONNX.

snc4onnx Simple tool to combine(merge) onnx models. Simple Network Combine Tool for ONNX. https://github.com/PINTO0309/simple-onnx-processing-tools 1.

Katsuya Hyodo 8 Oct 13, 2022
Very simple NCHW and NHWC conversion tool for ONNX. Change to the specified input order for each and every input OP. Also, change the channel order of RGB and BGR. Simple Channel Converter for ONNX.

scc4onnx Very simple NCHW and NHWC conversion tool for ONNX. Change to the specified input order for each and every input OP. Also, change the channel

Katsuya Hyodo 16 Dec 22, 2022
Convert Apple NeuralHash model for CSAM Detection to ONNX.

Apple NeuralHash is a perceptual hashing method for images based on neural networks. It can tolerate image resize and compression.

Asuhariet Ygvar 1.5k Dec 31, 2022
Convert onnx models to pytorch.

onnx2torch onnx2torch is an ONNX to PyTorch converter. Our converter: Is easy to use – Convert the ONNX model with the function call convert; Is easy

ENOT 264 Dec 30, 2022
This is an implementation of Googles Yogi-Optimizer in Keras (tf.keras)

Yogi-Optimizer_Keras This is an implementation of Googles Yogi-Optimizer in Keras (tf.keras) The NeurIPS-Paper can be found here: http://papers.nips.c

null 14 Sep 13, 2022
Keras udrl - Keras implementation of Upside Down Reinforcement Learning

keras_udrl Keras implementation of Upside Down Reinforcement Learning This is me

Eder Santana 7 Jan 24, 2022
Example-custom-ml-block-keras - Custom Keras ML block example for Edge Impulse

Custom Keras ML block example for Edge Impulse This repository is an example on

Edge Impulse 8 Nov 2, 2022
Classification models 1D Zoo - Keras and TF.Keras

Classification models 1D Zoo - Keras and TF.Keras This repository contains 1D variants of popular CNN models for classification like ResNets, DenseNet

Roman Solovyev 12 Jan 6, 2023