A code generator from ONNX to PyTorch code

Overview

onnx-pytorch

Generates PyTorch code from ONNX models. Currently supports onnx==1.9.0 and torch==1.8.1.

Installation

  • From PyPI
pip install onnx-pytorch
  • From source
git clone https://github.com/fumihwh/onnx-pytorch.git
cd onnx-pytorch
pip install -r requirements.txt
pip install -e .

Usage

from onnx_pytorch import code_gen
code_gen.gen("/path/to/onnx_model", "/path/to/output_dir")

A model.py file (containing a Model class) and a variables folder will be created under output_dir.
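
The generated model.py can be imported like any other module. If output_dir is not on the import path, it can also be loaded in place with importlib, following the same pattern the project's tests use (a minimal sketch; the paths and input shape are illustrative):

import importlib.util
import os

import torch

output_dir = "/path/to/output_dir"  # directory passed to code_gen.gen

# Load the generated model.py in place so it can find its variables folder.
spec = importlib.util.spec_from_file_location(
    "model", os.path.join(output_dir, "model.py"))
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

model = mod.Model()
model.eval()
with torch.no_grad():
    outputs = model(torch.randn(1, 3, 224, 224))  # input shape depends on your model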

Tutorial

  • Download the ResNet-18 ONNX model

wget https://github.com/onnx/models/raw/master/vision/classification/resnet/model/resnet18-v2-7.onnx

  • Use onnx-pytorch to generate PyTorch code and variables.
from onnx_pytorch import code_gen
code_gen.gen("resnet18-v2-7.onnx", "./")
  • Test result
import numpy as np
import onnx
import onnxruntime
import torch
torch.set_printoptions(8)

from model import Model

model = Model()
model.eval()
inp = np.random.randn(1, 3, 224, 224).astype(np.float32)
with torch.no_grad():
  torch_outputs = model(torch.from_numpy(inp))

onnx_model = onnx.load("resnet18-v2-7.onnx")
sess_options = onnxruntime.SessionOptions()
session = onnxruntime.InferenceSession(onnx_model.SerializeToString(),
                                       sess_options)
inputs = {"data": inp}
ort_outputs = session.run(None, inputs)

print(
    "Comparison result:",
    np.allclose(torch_outputs.detach().numpy(),
                ort_outputs[0],
                atol=1e-5,
                rtol=1e-5))
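
Note that the input name "data" is specific to this ResNet-18 model. As suggested in one of the comments below, a more generic way to build the ONNX Runtime feed dict is to query the session for its input name:

inputs = {session.get_inputs()[0].name: inp}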
Comments
  • latest version of onnx or torch fails pytest

    Upgrading with pip install onnx onnxruntime --upgrade installs onnx-1.10.2 and onnxruntime-1.9.0, which makes the test suite fail:

    ================================================================================================================================== test session starts ===================================================================================================================================
    platform linux -- Python 3.9.7, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
    rootdir: <me>/Documents/travail/programs/onnx-pytorch
    plugins: dash-2.0.0
    collected 88 items                                                                                                                                                                                                                                                                       
    
    onnx_pytorch/tests/test_base.py .F.................F..................s.................................................                                                                                                                                                           [100%]
    
    ======================================================================================================================================== FAILURES ========================================================================================================================================
    _________________________________________________________________________________________________________________ TestBase.test_conv_batchnorm_maxpool_flatten_add_relu __________________________________________________________________________________________________________________
    
    self = <onnx_pytorch.tests.test_base.TestBase object at 0x7fce8a666880>
    
        def test_conv_batchnorm_maxpool_flatten_add_relu(self):
          reset_model(13)
          nps = [np.random.randn(1, 3, 224, 224).astype(np.float32)]
          inputs = Input(*nps)
          conv_node = Conv(inputs[0],
                           np.random.randn(32, 3, 3, 3).astype(np.float32),
                           np.random.randn(32).astype(np.float32))
          bn_node = BatchNormalization(
              conv_node,
              np.ones(32,).astype(np.float32),
              np.zeros(32,).astype(np.float32),
              np.random.randn(32).astype(np.float32),
              np.abs(np.random.randn(32).astype(np.float32)),
          )
          max_pool_node = MaxPool(bn_node,
                                  kernel_shape=(3, 3),
                                  strides=(2, 2),
                                  pads=(0, 0, 1, 1))
          flatten_node = Flatten(max_pool_node, axis=1)
          add_node = Add(flatten_node, np.random.randn(1).astype(np.float32))
          relu_node = Relu(add_node)
          Output(relu_node)
    >     self._run(list(zip(inputs, nps)))
    
    onnx_pytorch/tests/test_base.py:103: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <onnx_pytorch.tests.test_base.TestBase object at 0x7fce8a666880>
    inputs_np = [('_t_Input_0', array([[[[ 1.0018734 , -0.62048906,  1.2765806 , ...,  0.25725722,
              -1.1847678 ,  1.8534303 ]...     [-0.86980325, -0.2758593 ,  0.05530448, ...,  0.2182875 ,
               0.33060816,  0.6260562 ]]]], dtype=float32))]
    
        def _run(self, inputs_np):
          inputs_np_dict = {k: v for k, v in inputs_np if k != ""}
          model = onnx.ModelProto()
          model.CopyFrom(omm.model)
          sess_options = onnxruntime.SessionOptions()
          session = onnxruntime.InferenceSession(model.SerializeToString(),
                                                 sess_options)
          ort_outputs = session.run(None, inputs_np_dict)
          model.graph.ClearField("value_info")
          initializers = {i.name: i for i in model.graph.initializer}
          for i in model.graph.input:
            if i.name in initializers:
              continue
            for idx, d in enumerate(i.type.tensor_type.shape.dim):
              if d.dim_param != "":
                d.ClearField("dim_param")
              d.dim_value = inputs_np_dict[i.name].shape[idx]
          try:
            model = SymbolicShapeInference.infer_shapes(model, 2**31 - 1, True, True,
                                                        1)
          except:
            logging.warning("Shape infer by onnxruntime failed.")
          with TemporaryDirectory() as tmpdir:
            clear_op_code_generator()
            model_code_generator = code_gen.get_model_code_generator(
                model,
                output_dir=tmpdir,
                tensor_inplace=True,
                simplify_names=True,
                shape_infer=False)
            model_code_generator.run()
            spec = importlib.util.spec_from_file_location(
                "model", os.path.join(tmpdir, "model.py"))
            mod = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            pt_outputs = mod.test_run_model(
                [torch.from_numpy(v) for k, v in inputs_np if k != ""])
            if type(pt_outputs) == torch.Tensor:
              pt_outputs = [pt_outputs.detach().numpy()]
            elif type(pt_outputs) in (list, tuple):
              pt_outputs = [o.detach().numpy() for o in pt_outputs]
            for l, r in zip(ort_outputs, pt_outputs):
    >         assert np.allclose(l, r, atol=1e-4, rtol=1e-4, equal_nan=True)
    E         assert False
    E          +  where False = <function allclose at 0x7fcee3f60550>(array([[1.3416731 , 0.8318468 , 0.6191998 , ..., 1.1701062 , 0.6089205 ,\n        0.57694536]], dtype=float32), array([[10.049213 ,  6.957016 ,  5.667273 , ..., 10.965231 ,  7.2742968,\n         7.0639963]], dtype=float32), atol=0.0001, rtol=0.0001, equal_nan=True)
    E          +    where <function allclose at 0x7fcee3f60550> = np.allclose
    
    onnx_pytorch/tests/test_base.py:67: AssertionError
    ---------------------------------------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------------------------------------
    # Autogenerated by onnx-pytorch.
    
    import glob
    import os
    import math
    
    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision
    
    
    class Model(nn.Module):
      def __init__(self):
        super(Model, self).__init__()
        self._vars = nn.ParameterDict()
        self._regularizer_params = []
        for b in glob.glob(
            os.path.join(os.path.dirname(__file__), "variables", "*.npy")):
          v = torch.from_numpy(np.load(b))
          requires_grad = v.dtype.is_floating_point or v.dtype.is_complex
          self._vars[os.path.basename(b)[:-4]] = nn.Parameter(v, requires_grad=requires_grad)
        self.n_Conv_0 = nn.Conv2d(**{'groups': 1, 'dilation': 1, 'out_channels': 32, 'padding': 0, 'kernel_size': (3, 3), 'stride': 1, 'in_channels': 3, 'bias': True})
        self.n_Conv_0.weight.data = self._vars["t_0"]
        self.n_Conv_0.bias.data = self._vars["t_1"]
        self.n_BatchNormalization_0 = nn.BatchNorm2d(**{'num_features': 32, 'eps': 9.999999747378752e-06, 'momentum': 0.8999999761581421})
        self.n_BatchNormalization_0.weight.data = self._vars["t_2"]
        self.n_BatchNormalization_0.bias.data = self._vars["t_3"]
        self.n_BatchNormalization_0.running_mean.data = self._vars["t_4"]
        self.n_BatchNormalization_0.running_var.data = self._vars["t_5"]
        self.n_MaxPool_0 = nn.MaxPool2d(**{'dilation': 1, 'kernel_size': [3, 3], 'ceil_mode': False, 'stride': [2, 2], 'return_indices': True})
        self.n_Flatten_0 = nn.Flatten(**{'start_dim': 1})
    
      def forward(self, *inputs):
        t_7, = inputs
        t_8 = self.n_Conv_0(t_7)
        t_9 = self.n_BatchNormalization_0(t_8)
        t_9 = F.pad(t_9, [0, 1, 0, 1], value=float('-inf'))
        t_14, t_15 = self.n_MaxPool_0(t_9)
        t_16 = self.n_Flatten_0(t_14)
        t_17 = torch.add(t_16, self._vars["t_6"])
        t_18 = F.relu(t_17)
        return t_18
    
      def compatible_auto_pad(self, input, kernel_spatial_shape, nn_mod, auto_pad=None, **kwargs):
        input_spatial_shape = input.shape[2:]
        d = len(input_spatial_shape)
        strides = nn_mod.stride
        dilations = nn_mod.dilation
        output_spatial_shape = [math.ceil(float(l) / float(r)) for l, r in zip(input.shape[2:], strides)]
        pt_padding = [0] * 2 * d
        pad_shape = [0] * d
        for i in range(d):
          pad_shape[i] = (output_spatial_shape[i] - 1) * strides[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]
          mean = pad_shape[i] // 2
          if auto_pad == b"SAME_UPPER":
            l, r = pad_shape[i] - mean, mean
          else:
            l, r = mean, pad_shape[i] - mean
          pt_padding.insert(0, r)
          pt_padding.insert(0, l)
        return F.pad(input, pt_padding)
    
    @torch.no_grad()
    def test_run_model(inputs=[torch.from_numpy(np.random.randn(*[1, 3, 224, 224]).astype(np.float32))]):
      model = Model()
      model.eval()
      rs = model(*inputs)
      print(rs)
      return rs
    
    tensor([[10.04921341,  6.95701599,  5.66727304,  ..., 10.96523094,
              7.27429676,  7.06399632]])
    ----------------------------------------------------------------------------------------------------------------------------------- Captured log call ------------------------------------------------------------------------------------------------------------------------------------
    WARNING  root:__init__.py:41 Cannot get default value for dilations of MaxPool.
    WARNING  root:__init__.py:41 Cannot get default value for kernel_shape of MaxPool.
    WARNING  root:__init__.py:41 Cannot get default value for pads of MaxPool.
    WARNING  root:__init__.py:41 Cannot get default value for strides of MaxPool.
    WARNING  root:MaxPool.py:47 MaxPool with asymmetric padding will get incorrect indices.
    ___________________________________________________________________________________________________________________________ TestBase.test_batch_normalization ____________________________________________________________________________________________________________________________
    
    self = <onnx_pytorch.tests.test_base.TestBase object at 0x7fce88ce44c0>
    
        def test_batch_normalization(self):
          reset_model(13)
          nps = [np.random.randn(1, 32, 3, 3).astype(np.float32)]
          inputs = Input(*nps)
          Output(BatchNormalization(
              inputs[0],
              np.ones(32,).astype(np.float32),
              np.zeros(32,).astype(np.float32),
              np.random.randn(32).astype(np.float32),
              np.abs(np.random.randn(32).astype(np.float32)),
          ),
                 output_num=1)
    >     self._run(list(zip(inputs, nps)))
    
    onnx_pytorch/tests/test_base.py:239: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <onnx_pytorch.tests.test_base.TestBase object at 0x7fce88ce44c0>
    inputs_np = [('_t_Input_0', array([[[[ 6.35267049e-02,  5.02886951e-01, -6.22651100e-01],
             [ 1.44260633e+00,  1.56048670e-...51401734e-01,  5.14413416e-01],
             [-1.90268409e+00, -7.60383308e-02,  2.99409509e-01]]]],
          dtype=float32))]
    
        def _run(self, inputs_np):
          inputs_np_dict = {k: v for k, v in inputs_np if k != ""}
          model = onnx.ModelProto()
          model.CopyFrom(omm.model)
          sess_options = onnxruntime.SessionOptions()
          session = onnxruntime.InferenceSession(model.SerializeToString(),
                                                 sess_options)
          ort_outputs = session.run(None, inputs_np_dict)
          model.graph.ClearField("value_info")
          initializers = {i.name: i for i in model.graph.initializer}
          for i in model.graph.input:
            if i.name in initializers:
              continue
            for idx, d in enumerate(i.type.tensor_type.shape.dim):
              if d.dim_param != "":
                d.ClearField("dim_param")
              d.dim_value = inputs_np_dict[i.name].shape[idx]
          try:
            model = SymbolicShapeInference.infer_shapes(model, 2**31 - 1, True, True,
                                                        1)
          except:
            logging.warning("Shape infer by onnxruntime failed.")
          with TemporaryDirectory() as tmpdir:
            clear_op_code_generator()
            model_code_generator = code_gen.get_model_code_generator(
                model,
                output_dir=tmpdir,
                tensor_inplace=True,
                simplify_names=True,
                shape_infer=False)
            model_code_generator.run()
            spec = importlib.util.spec_from_file_location(
                "model", os.path.join(tmpdir, "model.py"))
            mod = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            pt_outputs = mod.test_run_model(
                [torch.from_numpy(v) for k, v in inputs_np if k != ""])
            if type(pt_outputs) == torch.Tensor:
              pt_outputs = [pt_outputs.detach().numpy()]
            elif type(pt_outputs) in (list, tuple):
              pt_outputs = [o.detach().numpy() for o in pt_outputs]
            for l, r in zip(ort_outputs, pt_outputs):
    >         assert np.allclose(l, r, atol=1e-4, rtol=1e-4, equal_nan=True)
    E         assert False
    E          +  where False = <function allclose at 0x7fcee3f60550>(array([[[[-0.13030988,  0.44412366, -1.0274405 ],\n         [ 1.6727427 , -0.00934371, -0.14003941],\n         [ 1.48930...,\n         [ 0.7121257 , -0.5435372 ,  0.5330533 ],\n         [-1.9084809 , -0.06336791,  0.31587568]]]], dtype=float32), array([[[[ 1.03302915e-02,  4.43110734e-01, -6.65571392e-01],\n         [ 1.36875701e+00,  1.01466656e-01,  3.00002005e...8.79306126e+00,  1.40610695e+01],\n         [ 2.11407280e+00,  1.11426420e+01,  1.29983692e+01]]]],\n      dtype=float32), atol=0.0001, rtol=0.0001, equal_nan=True)
    E          +    where <function allclose at 0x7fcee3f60550> = np.allclose
    
    onnx_pytorch/tests/test_base.py:67: AssertionError
    ---------------------------------------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------------------------------------
    # Autogenerated by onnx-pytorch.
    
    import glob
    import os
    import math
    
    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision
    
    
    class Model(nn.Module):
      def __init__(self):
        super(Model, self).__init__()
        self._vars = nn.ParameterDict()
        self._regularizer_params = []
        for b in glob.glob(
            os.path.join(os.path.dirname(__file__), "variables", "*.npy")):
          v = torch.from_numpy(np.load(b))
          requires_grad = v.dtype.is_floating_point or v.dtype.is_complex
          self._vars[os.path.basename(b)[:-4]] = nn.Parameter(v, requires_grad=requires_grad)
        self.n_BatchNormalization_0 = nn.BatchNorm2d(**{'num_features': 32, 'eps': 9.999999747378752e-06, 'momentum': 0.8999999761581421})
        self.n_BatchNormalization_0.weight.data = self._vars["t_0"]
        self.n_BatchNormalization_0.bias.data = self._vars["t_1"]
        self.n_BatchNormalization_0.running_mean.data = self._vars["t_2"]
        self.n_BatchNormalization_0.running_var.data = self._vars["t_3"]
    
      def forward(self, *inputs):
        t_4, = inputs
        t_5 = self.n_BatchNormalization_0(t_4)
        return t_5
    
      
    @torch.no_grad()
    def test_run_model(inputs=[torch.from_numpy(np.random.randn(*[1, 32, 3, 3]).astype(np.float32))]):
      model = Model()
      model.eval()
      rs = model(*inputs)
      print(rs)
      return rs
    
    tensor([[[[ 1.03302915e-02,  4.43110734e-01, -6.65571392e-01],
              [ 1.36875701e+00,  1.01466656e-01,  3.00002005e-03],
              [ 1.23055291e+00, -6.36751056e-01, -8.78339052e-01]],
    
             [[-4.64856595e-01,  1.01388752e+00,  2.45039845e+00],
              [-1.51369238e+00, -7.56639481e-01, -1.26973033e+00],
              [ 3.04206324e+00, -1.07024908e+00,  1.22984998e-01]],
    
             [[-2.69752383e-01, -9.64242399e-01, -2.14787436e+00],
              [-3.66215348e-01, -7.90006399e-01, -1.19138491e+00],
              [-6.34383440e-01,  4.39469069e-01, -1.50392938e+00]],
    
             [[ 5.44885218e-01,  1.98177516e+00,  2.14701653e+00],
              [ 2.57987189e+00,  6.98854351e+00,  5.21536064e+00],
              [-1.14435458e+00,  1.33780324e+00,  3.80742407e+00]],
    
             [[-1.26968300e+00, -4.35954601e-01,  5.31747639e-01],
              [-2.33643723e+00, -2.31319714e+00, -1.69136405e+00],
              [-1.01814747e+00, -1.30057871e+00,  1.37861446e-01]],
    
             [[-7.35616326e-01, -1.18806839e+00, -1.10327315e+00],
              [-1.21497869e+00,  2.44642749e-01, -1.08295512e+00],
              [-7.17091501e-01, -2.20478797e+00, -1.50086403e+00]],
    
             [[-3.56589526e-01, -1.32543945e+00, -3.12406365e-02],
              [-7.59021521e-01,  8.00770998e-01, -1.86119422e-01],
              [-2.47674465e-01,  3.34041089e-01,  4.68768179e-01]],
    
             [[-3.02949500e+00, -9.34190691e-01, -6.01976514e-01],
              [-1.39591777e+00,  9.02901888e-01, -1.70761660e-01],
              [-7.49238193e-01, -8.39863300e-01, -1.61441386e+00]],
    
             [[ 5.27461350e-01, -1.29779911e+00, -1.84558618e+00],
              [-1.37622201e+00, -2.75002476e-02, -4.80182886e-01],
              [-1.48854208e+00, -2.23460600e-01, -1.37674761e+00]],
    
             [[ 8.06057811e-01,  8.74002814e-01, -1.36947542e-01],
              [ 1.77069342e+00,  1.01755619e+00,  3.84808660e-01],
              [ 6.74725831e-01,  3.76408148e+00,  2.22828791e-01]],
    
             [[ 3.71400404e+00,  2.69624019e+00,  1.77703583e+00],
              [ 2.33299780e+00,  2.48477370e-01,  3.29037476e+00],
              [ 1.03505504e+00,  2.66409278e+00,  3.81201744e+00]],
    
             [[ 1.02166690e-01, -1.42813325e-01, -4.73593771e-01],
              [-2.43843883e-01,  4.17272627e-01,  8.99561644e-01],
              [-7.05574870e-01,  2.67669708e-01,  5.22894859e-01]],
    
             [[-1.17352533e+00, -5.71887255e-01, -3.19737315e-01],
              [-1.18356705e+00, -2.85988569e+00, -7.28449404e-01],
              [-1.39273572e+00, -1.43941092e+00, -4.75017697e-01]],
    
             [[-9.16496933e-01, -1.37783527e+00,  1.75405681e+00],
              [-2.10685277e+00, -1.30036724e+00,  2.50304151e+00],
              [ 3.88478422e+00,  8.30973566e-01,  3.44308519e+00]],
    
             [[-1.08552837e+00, -1.35483885e+00,  9.10718501e-01],
              [ 7.22618103e-01, -3.82872492e-01,  3.09645385e-01],
              [ 1.25192356e+00,  1.48433483e+00, -7.20467627e-01]],
    
             [[ 2.90476012e+00,  2.38905120e+00,  3.20962930e+00],
              [ 4.72063154e-01,  1.03854692e+00,  1.42332995e+00],
              [-2.65931457e-01,  2.61525941e+00,  1.36843193e+00]],
    
             [[ 2.29905200e+00,  7.33413887e+00, -2.16392994e+01],
              [-9.26441479e+00, -4.63282776e+00,  8.38395882e+00],
              [-6.14768124e+00, -1.39623775e+01, -5.33043909e+00]],
    
             [[-1.18203115e+00,  7.83545434e-01, -1.33013463e+00],
              [ 1.55748868e+00,  2.99707323e-01, -1.74411178e-01],
              [-3.15904379e-01, -1.27137268e+00,  2.87169278e-01]],
    
             [[ 2.82064867e+00, -3.11068088e-01, -7.12420881e-01],
              [ 1.99217871e-01,  8.75358164e-01,  5.74787557e-01],
              [ 1.21458745e+00,  1.32562840e+00,  1.46251321e-01]],
    
             [[-2.08626246e+00, -1.01060474e+00, -1.84688258e+00],
              [-1.30853727e-01, -7.70996749e-01,  7.53721535e-01],
              [ 1.19904697e+00, -1.62641481e-01, -8.22388411e-01]],
    
             [[ 1.33589315e+00,  3.14021409e-01,  2.48438573e+00],
              [-2.21844530e+00,  5.82929230e+00,  2.27573776e+00],
              [ 5.50253439e+00,  2.19331694e+00,  4.72958851e+00]],
    
             [[-1.88447189e+00, -9.36176181e-01, -1.94018316e+00],
              [-1.43561804e+00, -4.47861242e+00, -3.19850969e+00],
              [-9.75790977e-01, -2.53019547e+00, -2.31218606e-01]],
    
             [[ 1.56031847e+00, -8.49840164e-01,  2.18206739e+00],
              [ 1.86757004e+00, -9.00376320e-01, -3.14888433e-02],
              [-2.60793537e-01,  3.81440073e-01,  1.87343729e+00]],
    
             [[-2.49012423e+00, -1.80255661e+01, -1.39246368e+01],
              [-7.12090111e+00, -1.14031210e+01, -3.02313328e+00],
              [-5.08311844e+00, -7.04758024e+00, -8.73173904e+00]],
    
             [[-3.17438930e-01, -5.40359974e-01, -8.29769790e-01],
              [-2.39079952e+00, -7.72985220e-01, -1.00527453e+00],
              [-4.49523091e-01, -1.43823814e+00, -8.15485835e-01]],
    
             [[-1.75956070e+00, -3.46495295e+00, -5.70724130e-01],
              [-1.35396278e+00, -1.52985775e+00, -9.15392518e-01],
              [ 1.32145539e-01, -1.15701056e+00, -3.28669786e+00]],
    
             [[ 9.83868241e-01,  1.86329472e+00,  3.16185784e+00],
              [ 3.53541660e+00,  3.46067637e-01, -4.36942726e-01],
              [ 8.96343887e-01,  1.15589023e+00,  1.66808695e-01]],
    
             [[ 1.45385325e+00, -2.57331681e+00,  2.47062397e+00],
              [ 5.09636497e+00, -4.55582333e+00,  6.47839642e+00],
              [ 6.10593510e+00,  8.07678998e-01,  2.03531766e+00]],
    
             [[-7.87889004e+00,  2.15410185e+00, -1.72434068e+00],
              [-4.13584518e+00, -5.07564878e+00, -7.04525948e+00],
              [-4.00902462e+00,  6.43981886e+00,  4.90088892e+00]],
    
             [[-8.97298872e-01, -6.58248663e-01,  3.97185832e-01],
              [ 1.26078165e+00, -5.88805914e-01, -1.58723903e+00],
              [ 1.83342293e-01,  5.42823195e-01, -8.95587146e-01]],
    
             [[-2.58091998e+00,  1.56836367e+00,  4.73235160e-01],
              [ 6.95867360e-01,  3.10397220e+00,  8.56488526e-01],
              [-5.79270065e-01, -1.23413563e+00,  2.25809479e+00]],
    
             [[ 1.47533607e+01,  5.50610733e+00,  1.87684441e+01],
              [ 1.49373131e+01,  8.79306126e+00,  1.40610695e+01],
              [ 2.11407280e+00,  1.11426420e+01,  1.29983692e+01]]]])
    ==================================================================================================================================== warnings summary ====================================================================================================================================
    ../../../../anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/onnx/mapping.py:27
      <me>/anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/onnx/mapping.py:27: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
      Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
        int(TensorProto.STRING): np.dtype(np.object)
    
    onnx_pytorch/tests/test_base.py: 186 warnings
      <me>/anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/onnx/numpy_helper.py:93: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
      Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
        if arr.dtype == np.object:
    
    onnx_pytorch/tests/test_base.py::TestBase::test_conv_batchnorm_maxpool_flatten_add_relu
      <me>/anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/onnx/helper.py:365: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
        is_iterable = isinstance(value, collections.Iterable)
    
    onnx_pytorch/tests/test_base.py::TestBase::test_and
    onnx_pytorch/tests/test_base.py::TestBase::test_and
      /tmp/tmpdcjl7rk5/model.py:33: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
      Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    
    onnx_pytorch/tests/test_base.py::TestBase::test_non_zero
      /tmp/tmpxjta2pa8/model.py:33: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
      Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    
    onnx_pytorch/tests/test_base.py::TestBase::test_resize_downsample_sizes_linear_pytorch_half_pixel
    onnx_pytorch/tests/test_base.py::TestBase::test_resize_pt_bilinear
      <me>/anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/torch/nn/functional.py:3454: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
        warnings.warn(
    
    -- Docs: https://docs.pytest.org/en/stable/warnings.html
    ================================================================================================================================ short test summary info =================================================================================================================================
    FAILED onnx_pytorch/tests/test_base.py::TestBase::test_conv_batchnorm_maxpool_flatten_add_relu - assert False
    FAILED onnx_pytorch/tests/test_base.py::TestBase::test_batch_normalization - assert False
    ================================================================================================================= 2 failed, 85 passed, 1 skipped, 193 warnings in 1.50s ==================================================================================================================
    
    opened by helion-du-mas-des-bourboux-thales 3
  • Function `code_gen.gen` failed with layer `LayerNormalization`. However, `BatchNormalization` succeeds.

    This is the IPython code (run on Colab) that triggers the error.

    Code

    !pip install tensorflow==2.6.4 onnx==1.12.0 onnx-pytorch git+https://github.com/onnx/tensorflow-onnx
    
    import tensorflow as tf
    import onnx
    
    from onnx_pytorch import code_gen
    
    with tf.device("/cpu:0"):
        tf_model = tf.keras.Sequential()
        tf_model.add(tf.keras.layers.Input((123,)))
        tf_model.add(tf.keras.layers.LayerNormalization())
        tf.keras.models.save_model(
            tf_model,
            "model.tf",
            overwrite=True,
            include_optimizer=False,
            save_format=None,
            signatures=None,
            options=None,
            save_traces=True
        )
    !python -m tf2onnx.convert --saved-model model.tf --output model.onnx --opset 11 --verbose
    code_gen.gen("model.onnx", "./")
    

    Error Message

    ---------------------------------------------------------------------------
    NotImplementedError                       Traceback (most recent call last)
    [<ipython-input-8-b7c6a94144c8>](https://localhost:8080/#) in <module>()
         21     )
         22 get_ipython().system('python -m tf2onnx.convert --saved-model model.tf --output model.onnx --opset 11 --verbose')
    ---> 23 code_gen.gen("model.onnx", "./")
    
    1 frames
    [/usr/local/lib/python3.7/dist-packages/onnx_pytorch/code_gen.py](https://localhost:8080/#) in gen(onnx_model, output_dir, overwrite, tensor_inplace, simplify_names, continue_on_error, embedding_conf_file, shape_infer)
        289       onnx_model, output_dir, overwrite, tensor_inplace, simplify_names,
        290       continue_on_error, embedding_conf_file, shape_infer)
    --> 291   model_code_generator.run()
        292 
        293 
    
    [/usr/local/lib/python3.7/dist-packages/onnx_pytorch/code_gen.py](https://localhost:8080/#) in run(self)
        245         else:
        246           raise NotImplementedError(
    --> 247               f"OpCodeGenerator is unimplemented for {n.op_type}.")
        248       else:
        249         try:
    
    NotImplementedError: OpCodeGenerator is unimplemented for ReduceSumSquare.
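
    Until an OpCodeGenerator is added for this op, here is a hedged sketch of what ONNX ReduceSumSquare computes, written as plain PyTorch (the axes and keepdims defaults are illustrative, not taken from the failing model):

    import torch

    def reduce_sum_square(x, axes=(-1,), keepdims=True):
        # ONNX ReduceSumSquare: sum of the squared elements along the given axes.
        return torch.sum(x * x, dim=axes, keepdim=keepdims)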
    
    opened by klae01 2
  • latest onnxruntime fails test

    onnxruntime==1.9.0

    (onnx-pytorch) <me>:<me>/onnx-pytorch$ pytest onnx_pytorch/tests/test_base.py 
    =============================================================================================== test session starts ===============================================================================================
    platform linux -- Python 3.9.7, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
    rootdir: <me>//onnx-pytorch
    plugins: dash-2.0.0
    collected 88 items                                                                                                                                                                                                
    
    onnx_pytorch/tests/test_base.py .F.................F..................s...........................s.....................                                                                                    [100%]
    
    ==================================================================================================== FAILURES =====================================================================================================
    ______________________________________________________________________________ TestBase.test_conv_batchnorm_maxpool_flatten_add_relu ______________________________________________________________________________
    
    self = <onnx_pytorch.tests.test_base.TestBase object at 0x7f7aa0349d90>
    
        def test_conv_batchnorm_maxpool_flatten_add_relu(self):
          reset_model(13)
          nps = [np.random.randn(1, 3, 224, 224).astype(np.float32)]
          inputs = Input(*nps)
          conv_node = Conv(inputs[0],
                           np.random.randn(32, 3, 3, 3).astype(np.float32),
                           np.random.randn(32).astype(np.float32))
          bn_node = BatchNormalization(
              conv_node,
              np.ones(32,).astype(np.float32),
              np.zeros(32,).astype(np.float32),
              np.random.randn(32).astype(np.float32),
              np.abs(np.random.randn(32).astype(np.float32)),
          )
          max_pool_node = MaxPool(bn_node,
                                  kernel_shape=(3, 3),
                                  strides=(2, 2),
                                  pads=(0, 0, 1, 1))
          flatten_node = Flatten(max_pool_node, axis=1)
          add_node = Add(flatten_node, np.random.randn(1).astype(np.float32))
          relu_node = Relu(add_node)
          Output(relu_node)
    >     self._run(list(zip(inputs, nps)))
    
    onnx_pytorch/tests/test_base.py:103: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <onnx_pytorch.tests.test_base.TestBase object at 0x7f7aa0349d90>
    inputs_np = [('_t_Input_0', array([[[[ 0.08681966,  0.31802994, -0.46221298, ...,  0.86617213,
              -0.37778926, -0.6164783 ]...     [-0.22646298, -0.44820276, -0.9840031 , ...,  0.5185814 ,
               1.3545119 , -0.98803467]]]], dtype=float32))]
    
        def _run(self, inputs_np):
          inputs_np_dict = {k: v for k, v in inputs_np if k != ""}
          model = onnx.ModelProto()
          model.CopyFrom(omm.model)
          sess_options = onnxruntime.SessionOptions()
          session = onnxruntime.InferenceSession(model.SerializeToString(),
                                                 sess_options)
          ort_outputs = session.run(None, inputs_np_dict)
          model.graph.ClearField("value_info")
          initializers = {i.name: i for i in model.graph.initializer}
          for i in model.graph.input:
            if i.name in initializers:
              continue
            for idx, d in enumerate(i.type.tensor_type.shape.dim):
              if d.dim_param != "":
                d.ClearField("dim_param")
              d.dim_value = inputs_np_dict[i.name].shape[idx]
          try:
            model = SymbolicShapeInference.infer_shapes(model, 2**31 - 1, True, True,
                                                        1)
          except:
            logging.warning("Shape infer by onnxruntime failed.")
          with TemporaryDirectory() as tmpdir:
            clear_op_code_generator()
            model_code_generator = code_gen.get_model_code_generator(
                model,
                output_dir=tmpdir,
                tensor_inplace=True,
                simplify_names=True,
                shape_infer=False)
            model_code_generator.run()
            spec = importlib.util.spec_from_file_location(
                "model", os.path.join(tmpdir, "model.py"))
            mod = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            pt_outputs = mod.test_run_model(
                [torch.from_numpy(v) for k, v in inputs_np if k != ""])
            if type(pt_outputs) == torch.Tensor:
              pt_outputs = [pt_outputs.detach().numpy()]
            elif type(pt_outputs) in (list, tuple):
              pt_outputs = [o.detach().numpy() for o in pt_outputs]
            for l, r in zip(ort_outputs, pt_outputs):
    >         assert np.allclose(l, r, atol=1e-4, rtol=1e-4, equal_nan=True)
    E         assert False
    E          +  where False = <function allclose at 0x7f7b043f61f0>(array([[1.2242965 , 0.41702545, 0.28294265, ..., 0.12723899, 0.12723899,\n        0.        ]], dtype=float32), array([[5.1290994, 2.8178134, 2.4339228, ..., 7.237103 , 7.237103 ,\n        0.       ]], dtype=float32), atol=0.0001, rtol=0.0001, equal_nan=True)
    E          +    where <function allclose at 0x7f7b043f61f0> = np.allclose
    
    onnx_pytorch/tests/test_base.py:67: AssertionError
    ---------------------------------------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------------------------------------
    # Autogenerated by onnx-pytorch.
    
    import glob
    import os
    import math
    
    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision
    
    
    class Model(nn.Module):
      def __init__(self):
        super(Model, self).__init__()
        self._vars = nn.ParameterDict()
        self._regularizer_params = []
        for b in glob.glob(
            os.path.join(os.path.dirname(__file__), "variables", "*.npy")):
          v = torch.from_numpy(np.load(b))
          requires_grad = v.dtype.is_floating_point or v.dtype.is_complex
          self._vars[os.path.basename(b)[:-4]] = nn.Parameter(v, requires_grad=requires_grad)
        self.n_Conv_0 = nn.Conv2d(**{'groups': 1, 'dilation': 1, 'out_channels': 32, 'padding': 0, 'kernel_size': (3, 3), 'stride': 1, 'in_channels': 3, 'bias': True})
        self.n_Conv_0.weight.data = self._vars["t_0"]
        self.n_Conv_0.bias.data = self._vars["t_1"]
        self.n_BatchNormalization_0 = nn.BatchNorm2d(**{'num_features': 32, 'eps': 9.999999747378752e-06, 'momentum': 0.8999999761581421})
        self.n_BatchNormalization_0.weight.data = self._vars["t_2"]
        self.n_BatchNormalization_0.bias.data = self._vars["t_3"]
        self.n_BatchNormalization_0.running_mean.data = self._vars["t_4"]
        self.n_BatchNormalization_0.running_var.data = self._vars["t_5"]
        self.n_MaxPool_0 = nn.MaxPool2d(**{'dilation': 1, 'kernel_size': [3, 3], 'ceil_mode': False, 'stride': [2, 2], 'return_indices': True})
        self.n_Flatten_0 = nn.Flatten(**{'start_dim': 1})
    
      def forward(self, *inputs):
        t_7, = inputs
        t_8 = self.n_Conv_0(t_7)
        t_9 = self.n_BatchNormalization_0(t_8)
        t_9 = F.pad(t_9, [0, 1, 0, 1], value=float('-inf'))
        t_14, t_15 = self.n_MaxPool_0(t_9)
        t_16 = self.n_Flatten_0(t_14)
        t_17 = torch.add(t_16, self._vars["t_6"])
        t_18 = F.relu(t_17)
        return t_18
    
      def compatible_auto_pad(self, input, kernel_spatial_shape, nn_mod, auto_pad=None, **kwargs):
        input_spatial_shape = input.shape[2:]
        d = len(input_spatial_shape)
        strides = nn_mod.stride
        dilations = nn_mod.dilation
        output_spatial_shape = [math.ceil(float(l) / float(r)) for l, r in zip(input.shape[2:], strides)]
        pt_padding = [0] * 2 * d
        pad_shape = [0] * d
        for i in range(d):
          pad_shape[i] = (output_spatial_shape[i] - 1) * strides[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]
          mean = pad_shape[i] // 2
          if auto_pad == b"SAME_UPPER":
            l, r = pad_shape[i] - mean, mean
          else:
            l, r = mean, pad_shape[i] - mean
          pt_padding.insert(0, r)
          pt_padding.insert(0, l)
        return F.pad(input, pt_padding)
    
    @torch.no_grad()
    def test_run_model(inputs=[torch.from_numpy(np.random.randn(*[1, 3, 224, 224]).astype(np.float32))]):
      model = Model()
      model.eval()
      rs = model(*inputs)
      print(rs)
      return rs
    
    tensor([[5.12909937, 2.81781340, 2.43392277,  ..., 7.23710299, 7.23710299,
             0.00000000]])
    ------------------------------------------------------------------------------------------------ Captured log call ------------------------------------------------------------------------------------------------
    WARNING  root:__init__.py:41 Cannot get default value for dilations of MaxPool.
    WARNING  root:__init__.py:41 Cannot get default value for kernel_shape of MaxPool.
    WARNING  root:__init__.py:41 Cannot get default value for pads of MaxPool.
    WARNING  root:__init__.py:41 Cannot get default value for strides of MaxPool.
    WARNING  root:MaxPool.py:47 MaxPool with asymmetric padding will get incorrect indices.
    ________________________________________________________________________________________ TestBase.test_batch_normalization ________________________________________________________________________________________
    
    self = <onnx_pytorch.tests.test_base.TestBase object at 0x7f7a9eacfa00>
    
        def test_batch_normalization(self):
          reset_model(13)
          nps = [np.random.randn(1, 32, 3, 3).astype(np.float32)]
          inputs = Input(*nps)
          Output(BatchNormalization(
              inputs[0],
              np.ones(32,).astype(np.float32),
              np.zeros(32,).astype(np.float32),
              np.random.randn(32).astype(np.float32),
              np.abs(np.random.randn(32).astype(np.float32)),
          ),
                 output_num=1)
    >     self._run(list(zip(inputs, nps)))
    
    onnx_pytorch/tests/test_base.py:239: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    
    self = <onnx_pytorch.tests.test_base.TestBase object at 0x7f7a9eacfa00>
    inputs_np = [('_t_Input_0', array([[[[ 0.7745172 , -1.4926829 , -1.6556902 ],
             [-0.7622266 ,  0.04088752,  0.83572936],
      ...         [ 0.5896988 , -0.8963601 ,  0.9315137 ],
             [-1.5789044 , -0.9300383 , -0.8664075 ]]]], dtype=float32))]
    
        def _run(self, inputs_np):
          inputs_np_dict = {k: v for k, v in inputs_np if k != ""}
          model = onnx.ModelProto()
          model.CopyFrom(omm.model)
          sess_options = onnxruntime.SessionOptions()
          session = onnxruntime.InferenceSession(model.SerializeToString(),
                                                 sess_options)
          ort_outputs = session.run(None, inputs_np_dict)
          model.graph.ClearField("value_info")
          initializers = {i.name: i for i in model.graph.initializer}
          for i in model.graph.input:
            if i.name in initializers:
              continue
            for idx, d in enumerate(i.type.tensor_type.shape.dim):
              if d.dim_param != "":
                d.ClearField("dim_param")
              d.dim_value = inputs_np_dict[i.name].shape[idx]
          try:
            model = SymbolicShapeInference.infer_shapes(model, 2**31 - 1, True, True,
                                                        1)
          except:
            logging.warning("Shape infer by onnxruntime failed.")
          with TemporaryDirectory() as tmpdir:
            clear_op_code_generator()
            model_code_generator = code_gen.get_model_code_generator(
                model,
                output_dir=tmpdir,
                tensor_inplace=True,
                simplify_names=True,
                shape_infer=False)
            model_code_generator.run()
            spec = importlib.util.spec_from_file_location(
                "model", os.path.join(tmpdir, "model.py"))
            mod = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            pt_outputs = mod.test_run_model(
                [torch.from_numpy(v) for k, v in inputs_np if k != ""])
            if type(pt_outputs) == torch.Tensor:
              pt_outputs = [pt_outputs.detach().numpy()]
            elif type(pt_outputs) in (list, tuple):
              pt_outputs = [o.detach().numpy() for o in pt_outputs]
            for l, r in zip(ort_outputs, pt_outputs):
    >         assert np.allclose(l, r, atol=1e-4, rtol=1e-4, equal_nan=True)
    E         assert False
    E          +  where False = <function allclose at 0x7f7b043f61f0>(array([[[[ 9.91475940e-01, -1.39311564e+00, -1.56456316e+00],\n         [-6.24837637e-01,  2.19860300e-01,  1.05585766e...7.59569287e-01,  1.25005341e+00],\n         [-1.50998020e+00, -7.96596169e-01, -7.26638436e-01]]]],\n      dtype=float32), array([[[[ 2.11514905e-02, -1.92307127e+00, -2.06285715e+00],\n         [-1.29667318e+00, -6.07967854e-01,  7.36436024e...8.19936633e-01,  1.26697469e+00],\n         [-1.59920776e+00, -8.58387530e-01, -7.85739303e-01]]]],\n      dtype=float32), atol=0.0001, rtol=0.0001, equal_nan=True)
    E          +    where <function allclose at 0x7f7b043f61f0> = np.allclose
    
    onnx_pytorch/tests/test_base.py:67: AssertionError
    ---------------------------------------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------------------------------------
    # Autogenerated by onnx-pytorch.
    
    import glob
    import os
    import math
    
    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision
    
    
    class Model(nn.Module):
      def __init__(self):
        super(Model, self).__init__()
        self._vars = nn.ParameterDict()
        self._regularizer_params = []
        for b in glob.glob(
            os.path.join(os.path.dirname(__file__), "variables", "*.npy")):
          v = torch.from_numpy(np.load(b))
          requires_grad = v.dtype.is_floating_point or v.dtype.is_complex
          self._vars[os.path.basename(b)[:-4]] = nn.Parameter(v, requires_grad=requires_grad)
        self.n_BatchNormalization_0 = nn.BatchNorm2d(**{'num_features': 32, 'eps': 9.999999747378752e-06, 'momentum': 0.8999999761581421})
        self.n_BatchNormalization_0.weight.data = self._vars["t_0"]
        self.n_BatchNormalization_0.bias.data = self._vars["t_1"]
        self.n_BatchNormalization_0.running_mean.data = self._vars["t_2"]
        self.n_BatchNormalization_0.running_var.data = self._vars["t_3"]
    
      def forward(self, *inputs):
        t_4, = inputs
        t_5 = self.n_BatchNormalization_0(t_4)
        return t_5
    
      
    @torch.no_grad()
    def test_run_model(inputs=[torch.from_numpy(np.random.randn(*[1, 32, 3, 3]).astype(np.float32))]):
      model = Model()
      model.eval()
      rs = model(*inputs)
      print(rs)
      return rs
    
    tensor([[[[ 2.11514905e-02, -1.92307127e+00, -2.06285715e+00],
              [-1.29667318e+00, -6.07967854e-01,  7.36436024e-02],
              [-1.24425519e+00, -4.32142057e-03, -4.06830050e-02]],
    
             [[ 4.27835196e-01, -4.02293563e-01,  1.25209391e+00],
              [-1.35146415e+00, -2.52955347e-01,  1.47779858e+00],
              [-6.49659276e-01,  4.79720533e-01,  2.22885060e+00]],
    
             [[-2.09176064e+00, -1.05400944e+00, -2.06602669e+00],
              [-1.94747806e+00, -2.88019228e+00, -2.62886310e+00],
              [-3.44989538e+00, -2.75009131e+00, -2.39562416e+00]],
    
             [[ 1.11013091e+00,  1.28344691e+00, -6.32941604e-01],
              [ 7.57854998e-01, -2.10156515e-01,  1.47328424e+00],
              [-2.59426326e-01, -2.84430325e-01,  9.00919676e-01]],
    
             [[ 4.08791155e-01,  2.89755702e-01,  6.62197396e-02],
              [-1.76871634e+00, -5.03794849e-01, -4.27903265e-01],
              [ 9.95307684e-01, -4.92222719e-02, -1.14720094e+00]],
    
             [[-1.45369780e+00,  2.33676344e-01, -1.03255248e+00],
              [ 1.32926130e+00,  2.23724812e-01, -2.06382227e+00],
              [-7.27365375e-01, -3.29207569e-01, -1.84505939e+00]],
    
             [[-7.30695367e-01, -9.48697507e-01,  1.02768219e+00],
              [-3.11210537e+00, -2.19822788e+00,  1.94993824e-01],
              [-5.17953396e-01,  9.80266273e-01,  1.58678629e-02]],
    
             [[-5.50329685e-01, -2.20515108e+00,  5.57632744e-01],
              [-4.76857811e-01,  1.53507262e-01, -1.43097568e+00],
              [ 4.82103467e-01, -1.68012989e+00,  3.24517749e-02]],
    
             [[-5.33442855e-01,  5.51209152e-01,  9.62817371e-01],
              [ 2.40877175e+00,  1.32837451e+00,  1.65606558e+00],
              [-4.13032651e-01,  3.72783518e+00,  3.40976954e-01]],
    
             [[ 6.73895895e-01, -2.66826779e-01,  2.70163131e+00],
              [ 1.51779735e+00,  1.03770292e+00,  3.58062625e-01],
              [ 3.07913351e+00,  1.82803762e+00,  1.80789387e+00]],
    
             [[-5.71182489e-01, -9.17714715e-01, -1.13700569e+00],
              [-1.86594054e-01, -3.26027721e-01, -7.83864677e-01],
              [-8.37005913e-01, -1.44201532e-01, -1.28018081e+00]],
    
             [[-2.11968374e+00,  4.36148047e-01, -2.25281045e-01],
              [-2.65030837e+00, -2.46051192e+00, -7.95132637e-01],
              [-2.29407355e-01, -2.05399799e+00, -3.97852802e+00]],
    
             [[ 1.99362409e+00, -2.22769213e+00,  3.03191710e+00],
              [ 6.41038036e+00,  7.57672191e-01,  2.30211586e-01],
              [ 4.41129446e+00,  5.71550274e+00,  2.88953924e+00]],
    
             [[-1.67502999e+00,  4.71590012e-01,  4.20928180e-01],
              [ 1.42629158e+00,  2.22070456e+00, -2.48521614e+00],
              [-2.90164924e+00, -1.70486748e+00,  3.05718213e-01]],
    
             [[ 1.31291842e+00,  1.51544333e+00,  9.34356451e-01],
              [ 2.45068908e+00,  9.35024202e-01,  1.16957915e+00],
              [ 1.73736286e+00,  1.44560516e+00,  1.79951024e+00]],
    
             [[-1.78257480e-01, -1.50668001e+00, -3.93693089e-01],
              [ 9.00940716e-01,  1.75067687e+00,  1.56921744e-01],
              [-1.68945998e-01, -7.10348845e-01,  2.69243687e-01]],
    
             [[-1.44925761e+00, -8.86168003e-01, -2.19026709e+00],
              [-5.69859803e-01,  6.73547387e-01, -1.53828010e-01],
              [-3.62083554e+00, -1.68905407e-02, -1.03936875e+00]],
    
             [[-2.79535174e+00, -3.87425613e+00,  4.66894388e+00],
              [-3.84637070e+00, -1.71726680e+00, -3.25723600e+00],
              [-6.84032822e+00, -1.06125496e-01,  2.27101946e+00]],
    
             [[ 9.65043604e-01, -3.17505288e+00,  1.14182040e-01],
              [-2.67569017e+00,  1.84636426e+00, -7.68563211e-01],
              [-2.11804008e+00, -2.63963199e+00, -2.71025586e+00]],
    
             [[-4.97454464e-01, -1.84077692e+00, -1.13075355e-03],
              [-2.12281924e-02,  1.43575883e+00, -9.79906857e-01],
              [-1.43173182e+00, -1.10443759e+00, -1.83555901e+00]],
    
             [[ 6.83952451e-01,  3.86664987e+00,  6.27903759e-01],
              [ 6.22224391e-01,  3.38052392e+00,  2.65812469e+00],
              [ 1.35363007e+00, -1.32484972e+00,  2.16152740e+00]],
    
             [[-2.97609538e-01, -5.97289562e-01, -5.53929061e-02],
              [-9.01254416e-01, -1.31918341e-01, -1.91106975e+00],
              [ 1.30615933e-02, -1.13118947e+00, -1.71910405e+00]],
    
             [[-3.56180477e+00,  1.03958499e+00, -2.59528255e+00],
              [-3.63754392e-01,  1.45368779e+00,  6.28106117e-01],
              [-1.52019906e+00,  2.27045107e+00, -2.04589820e+00]],
    
             [[ 2.96379948e+00,  1.40205872e+00,  6.10626042e-01],
              [ 9.29273069e-01, -2.59484500e-01,  1.29350579e+00],
              [-2.03710818e+00,  2.09723279e-01,  3.75842363e-01]],
    
             [[ 1.15190208e+00, -1.79379475e+00, -1.03870857e+00],
              [-2.49877191e+00,  5.20503461e-01, -1.32148862e+00],
              [ 1.14259291e+00, -1.22499466e+00, -1.77996016e+00]],
    
             [[ 5.53968525e+00,  2.88090467e+00,  1.01117289e+00],
              [ 5.58917379e+00,  6.44941425e+00,  4.39829063e+00],
              [ 5.66234684e+00,  6.48445272e+00,  7.14439631e+00]],
    
             [[ 2.75992036e-01,  2.69333333e-01,  2.09721066e-02],
              [-3.83876115e-01, -8.62384975e-01, -9.11671594e-02],
              [ 6.93263173e-01,  1.74463049e-01,  4.79215592e-01]],
    
             [[-1.01199875e+01, -7.20881653e+00, -5.04845047e+00],
              [-6.25630283e+00, -1.05240383e+01, -2.73052502e+00],
              [-7.76849747e+00, -2.49891591e+00, -8.07278156e+00]],
    
             [[ 1.54215002e+00,  1.09585929e+00,  1.14009336e-01],
              [ 1.12563217e+00,  2.39603353e+00,  1.73558319e+00],
              [-3.81684572e-01,  5.00159383e-01,  1.24173117e+00]],
    
             [[-1.65010154e-01, -5.65712094e-01,  3.59763801e-02],
              [-3.90798420e-01, -1.16110936e-01, -1.36400402e-01],
              [-1.34565961e+00,  4.39721853e-01,  8.28600407e-01]],
    
             [[-4.84672832e+00, -6.60604596e-01,  1.73845172e-01],
              [-5.31565666e-01, -1.43216908e-01,  3.46095473e-01],
              [-2.08822680e+00, -1.05168688e+00, -1.98360145e-01]],
    
             [[ 1.07395852e+00,  1.13209188e+00, -5.66867292e-01],
              [ 8.76719356e-01, -8.19936633e-01,  1.26697469e+00],
              [-1.59920776e+00, -8.58387530e-01, -7.85739303e-01]]]])
    ================================================================================================ warnings summary =================================================================================================
    ../../../../anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/onnx/mapping.py:27
      <me>/anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/onnx/mapping.py:27: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
      Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
        int(TensorProto.STRING): np.dtype(np.object)
    
    onnx_pytorch/tests/test_base.py: 182 warnings
      <me>/anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/onnx/numpy_helper.py:93: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
      Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
        if arr.dtype == np.object:
    
    onnx_pytorch/tests/test_base.py::TestBase::test_conv_batchnorm_maxpool_flatten_add_relu
      <me>/anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/onnx/helper.py:365: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
        is_iterable = isinstance(value, collections.Iterable)
    
    onnx_pytorch/tests/test_base.py::TestBase::test_and
    onnx_pytorch/tests/test_base.py::TestBase::test_and
      /tmp/tmpms_osm8m/model.py:33: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
      Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    
    onnx_pytorch/tests/test_base.py::TestBase::test_non_zero
      /tmp/tmpjqh2vsx2/model.py:33: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
      Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
    
    onnx_pytorch/tests/test_base.py::TestBase::test_resize_pt_bilinear
      <me>/anaconda3/envs/onnx-pytorch/lib/python3.9/site-packages/torch/nn/functional.py:3631: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
        warnings.warn(
    
    -- Docs: https://docs.pytest.org/en/stable/warnings.html
    ============================================================================================= short test summary info =============================================================================================
    FAILED onnx_pytorch/tests/test_base.py::TestBase::test_conv_batchnorm_maxpool_flatten_add_relu - assert False
    FAILED onnx_pytorch/tests/test_base.py::TestBase::test_batch_normalization - assert False
    ============================================================================== 2 failed, 84 passed, 2 skipped, 188 warnings in 1.47s ==============================================================================
    
    
    opened by helion-du-mas-des-bourboux-thales 2
  • Tensors in the converted model are being placed in the wrong device

    Tensors in the converted model are being placed in the wrong device

    I've converted a BiT model (https://tfhub.dev/google/bit/m-r101x1/1) from TF to ONNX, and then used this package to convert it to PyTorch.

    The result works out-of-the-box on the CPU, and I get the same outputs as the TF model. But when I try it on the GPU, I get fatal errors saying that some ops are using tensors on different devices. Looking into the generated code, I see many calls like this in forward(): t_323 = torch.tensor(t_321.shape)

    These are created on the CPU, so operations with these tensors (when the input is on the GPU) result in an error. I can fix it manually by changing all such calls to torch.tensor(..., device=inputs[0].device), and then everything works well: the results match TF, and the performance is also the same.
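
    As a rough sketch of that manual workaround (the tensor names and the single-input forward() signature below are illustrative, not taken from any particular generated model.py):

    import torch

    class PatchedModel(torch.nn.Module):
      # Hypothetical fragment mirroring a generated forward(); the only change
      # proposed in this issue is the added device= argument.
      def forward(self, *inputs):
        t_321 = inputs[0]
        # before: t_323 = torch.tensor(t_321.shape)  -> always created on the CPU
        t_323 = torch.tensor(t_321.shape, device=inputs[0].device)
        return t_323

    x = torch.randn(1, 3, 8, 8)
    print(PatchedModel()(x))            # CPU input -> CPU tensor
    if torch.cuda.is_available():
      print(PatchedModel()(x.cuda()))   # GPU input -> GPU tensor, no device mismatch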

    opened by jorgemcgomes 2
  • change directory is missing

    change directory is missing

    https://github.com/fumihwh/onnx-pytorch/blob/29cd1dafb47e4e4bc598c700c44f53815e7b8c9a/README.md?plain=1#L19

    the command line block should be

    git clone https://github.com/fumihwh/onnx-pytorch.git
    cd onnx-pytorch
    pip install -r requirements.txt
    pip install -e .
    
    opened by londumas 1
  • input name in onnxruntime is hardcoded in README

    input name in onnxruntime is hardcoded in README

    https://github.com/fumihwh/onnx-pytorch/blob/29cd1dafb47e4e4bc598c700c44f53815e7b8c9a/README.md?plain=1#L87

    I would suggest changing the following line

    inputs = {"data": inp}
    

    to this one, in the README

    inputs = {session.get_inputs()[0].name: inp}
    

    This adapts to a much wider variety of models without hardcoding the input name.
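
    For context, a minimal sketch of a session built that way (the model path and input shape are illustrative):

    import numpy as np
    import onnxruntime

    session = onnxruntime.InferenceSession("/path/to/model.onnx")
    inp = np.random.randn(1, 3, 224, 224).astype(np.float32)
    # Query the graph for its first input name instead of hardcoding "data".
    inputs = {session.get_inputs()[0].name: inp}
    ort_outputs = session.run(None, inputs)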

    opened by londumas 1
  • DecodeError: Unexpected end-group tag.

    DecodeError: Unexpected end-group tag.

    Hi, I tried this tool for the first time

    I did it the following way:

    1. pip install onnx_pytorch
    2. from onnx_pytorch import code_gen
    3. code_gen.gen('resnet18-v2-7.onnx', './')

    But I get an error: DecodeError: Unexpected end-group tag.

    How do I deal with it?
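
    Not an official answer, but DecodeError comes from protobuf failing to parse the file, which usually means the .onnx file is incomplete or is not actually an ONNX protobuf (for example a truncated download, or an HTML/Git-LFS pointer page saved under the .onnx name). A quick way to check the file independently of onnx-pytorch:

    import onnx

    # If this load/check fails with the same DecodeError, the problem is the
    # downloaded file itself, not code_gen.
    model = onnx.load("resnet18-v2-7.onnx")
    onnx.checker.check_model(model)
    print(len(model.graph.node), "nodes parsed")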

    opened by xiaopengaia 1
  • OpCodeGenerator is unimplemented for Softplus

    OpCodeGenerator is unimplemented for Softplus

    When trying to convert a YOLOv4 ONNX model with onnx-pytorch I get the following error. It seems the OpCodeGenerator for Softplus is unimplemented.

    WARNING:root:Cannot get default value for dilations of Conv.
    WARNING:root:Cannot get default value for kernel_shape of Conv.
    WARNING:root:Cannot get default value for pads of Conv.
    WARNING:root:Cannot get default value for strides of Conv.
    Traceback (most recent call last):
      File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/someenv/lib/python3.8/site-packages/onnx_pytorch/code_gen.py", line 378, in <module>
        main()
      File "/someenv/python3.8/site-packages/onnx_pytorch/code_gen.py", line 368, in main
        gen(onnx_model=args.onnx_model_path,
      File "/someenv/python3.8/site-packages/onnx_pytorch/code_gen.py", line 291, in gen
        model_code_generator.run()
      File "/someenv/python3.8/site-packages/onnx_pytorch/code_gen.py", line 246, in run
        raise NotImplementedError(
    NotImplementedError: OpCodeGenerator is unimplemented for Softplus.

    Installed version:

    pip show onnx_pytorch
    Name: onnx-pytorch
    Version: 0.1.4
    Summary: Convert ONNX to PyTorch code.
    Home-page: https://github.com/fumihwh/onnx-pytorch
    Author: fumihwh
    Author-email: [email protected]
    License: Apache 2.0
    Location: /someenv/lib/python3.8/site-packages
    Requires: torchvision, setuptools, torch, PyYAML, tqdm, onnxruntime, onnx, sympy, pytest, numpy
    Required-by:
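
    Separately from adding a Softplus generator, a small sketch for listing the op types a model uses, so unsupported ones can be spotted before running code_gen (the model path is illustrative):

    import onnx

    model = onnx.load("/path/to/yolov4.onnx")
    op_types = sorted({node.op_type for node in model.graph.node})
    print(op_types)  # e.g. includes 'Softplus' -> needs an OpCodeGenerator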

    opened by juhan 1
  • NotImplementedError: OpCodeGenerator is unimplemented for DequantizeLinear.

    NotImplementedError: OpCodeGenerator is unimplemented for DequantizeLinear.

    opened by LiuFeiOne 1
Releases(v0.1.5)
  • v0.1.5(Aug 3, 2022)

    What's Changed

    • create python publish action by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/42

    Full Changelog: https://github.com/fumihwh/onnx-pytorch/compare/v0.1.4...v0.1.5

  • v0.1.4(Nov 23, 2021)

    What's Changed

    • Add some ops by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/13
    • Bump up to 0.1.3 by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/14
    • Add ops and model test cases by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/15
    • Support frcnn by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/16
    • Support mask rcnn, ssd and style transfer models by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/17
    • refactor: Small readability improvements by @rogier-stegeman in https://github.com/fumihwh/onnx-pytorch/pull/4
    • Fix CI by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/25
    • Some nit by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/24
    • add OP Elu/Sub/Tanh by @maimaixiong in https://github.com/fumihwh/onnx-pytorch/pull/19
    • Adds device information when creating new tensors by @jorgemcgomes in https://github.com/fumihwh/onnx-pytorch/pull/29
    • Ci by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/40
    • add version by @helion-du-mas-des-bourboux-thales in https://github.com/fumihwh/onnx-pytorch/pull/33
    • more general tutorial by @helion-du-mas-des-bourboux-thales in https://github.com/fumihwh/onnx-pytorch/pull/37
    • Fix dependencies by @helion-du-mas-des-bourboux-thales in https://github.com/fumihwh/onnx-pytorch/pull/35
    • Release 0.1.4 by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/41

    New Contributors

    • @rogier-stegeman made their first contribution in https://github.com/fumihwh/onnx-pytorch/pull/4
    • @maimaixiong made their first contribution in https://github.com/fumihwh/onnx-pytorch/pull/19
    • @jorgemcgomes made their first contribution in https://github.com/fumihwh/onnx-pytorch/pull/29
    • @helion-du-mas-des-bourboux-thales made their first contribution in https://github.com/fumihwh/onnx-pytorch/pull/33

    Full Changelog: https://github.com/fumihwh/onnx-pytorch/compare/v0.1.3...v0.1.4

  • v0.1.3(Nov 18, 2021)

    What's Changed

    • Develop by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/1
    • Add tutorial and fix some bugs by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/2
    • Bump up to 0.1.2 by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/3
    • Introduce new features and some bug fix by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/5
    • Ci by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/6
    • Add some ops by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/7
    • Improve ci by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/8
    • Add some ops by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/9
    • Fix ops and use ParameterDict by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/10
    • Ci by @fumihwh in https://github.com/fumihwh/onnx-pytorch/pull/11

    Full Changelog: https://github.com/fumihwh/onnx-pytorch/compare/v0.1.2...v0.1.3

Owner
Wenhao Hu