module onnxtorch.tchrun#

Inheritance diagram of deeponnxcustom.onnxtorch.tchrun

Short summary#

module deeponnxcustom.onnxtorch.tchrun

Executes an ONNX graph with pytorch.


Classes#

• _function_OnnxTorchRuntime

• OnnxTorchRuntime – Executes an ONNX graph using torch functions. This is a very simple runtime. It goes through every node in the …

Static Methods#

• _concat

• _extract_atts – Builds a dictionary with all node attributes.

• _extract_init – Builds a dictionary with all initializers converted into torch tensors.

• _gather

• _gemm

• _reduceprod

• _reducesum

• _reshape

• _shape

• _squeeze

• _transpose

• _unqueeze

Methods#

• __init__

• _run_op – Executes a node with pytorch. Returns a dictionary.

• run – Executes the ONNX graph.

Documentation#

Executes an ONNX graph with pytorch.


class deeponnxcustom.onnxtorch.tchrun.OnnxTorchRuntime(onnx_model)#

Bases: object

Executes an ONNX graph using torch functions. This is a very simple runtime. It goes through every node in the ONNX graph and executes it with the corresponding torch function.

Parameters

onnx_model – ONNX model

The class is very basic. It does not handle subgraphs and supports a limited number of operators.

<<<

import pprint
from deeponnxcustom.onnxtorch.tchrun import OnnxTorchRuntime

pprint.pprint(sorted(OnnxTorchRuntime._mapping))

>>>

    ['Concat',
     'Gather',
     'Gemm',
     'Identity',
     'MatMul',
     'Max',
     'ReduceProd',
     'ReduceSum',
     'Reshape',
     'Shape',
     'Squeeze',
     'Transpose',
     'Unsqueeze']

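For illustration, here is a minimal usage sketch: it builds a tiny ONNX graph containing a single MatMul node with the onnx helper API, wraps it in OnnxTorchRuntime and runs it on torch tensors. The graph below is a made-up example, not part of the library, and it assumes the tensors passed to run map positionally onto the graph inputs.

import torch
from onnx import TensorProto
from onnx.helper import (
    make_graph, make_model, make_node, make_tensor_value_info)
from deeponnxcustom.onnxtorch.tchrun import OnnxTorchRuntime

# made-up graph: Y = X @ A with a single MatMul node
X = make_tensor_value_info('X', TensorProto.FLOAT, [3, 2])
A = make_tensor_value_info('A', TensorProto.FLOAT, [2, 2])
Y = make_tensor_value_info('Y', TensorProto.FLOAT, [3, 2])
node = make_node('MatMul', ['X', 'A'], ['Y'])
onnx_model = make_model(make_graph([node], 'example', [X, A], [Y]))

# run the graph on torch tensors; inputs are assumed to map
# positionally onto the graph inputs X and A
rt = OnnxTorchRuntime(onnx_model)
print(rt.run(torch.rand(3, 2), torch.rand(2, 2)))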

__init__(onnx_model)#
static _extract_atts(onnx_model)#

Builds a dictionary with all node attributes.

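As a conceptual sketch of what such a dictionary can contain (not the library's implementation; here attributes are grouped per node in graph order, the actual keying may differ), node attributes can be decoded with onnx.helper.get_attribute_value:

from onnx.helper import get_attribute_value

# conceptual sketch: decode the attributes of every node in the graph
def attributes_by_node(onnx_model):
    return [{att.name: get_attribute_value(att) for att in node.attribute}
            for node in onnx_model.graph.node]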

static _extract_init(onnx_model)#

Builds a dictionary with all initializers converted into torch tensors.

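A conceptual sketch of such a conversion (not the library's implementation) relies on onnx.numpy_helper.to_array and torch.from_numpy:

import torch
from onnx import numpy_helper

# conceptual sketch: one torch tensor per initializer, keyed by name
def initializers_to_torch(onnx_model):
    return {init.name: torch.from_numpy(numpy_helper.to_array(init).copy())
            for init in onnx_model.graph.initializer}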

_mapping = {'Concat': <function _function_OnnxTorchRuntime._concat>, 'Gather': <function _function_OnnxTorchRuntime._gather>, 'Gemm': <function _function_OnnxTorchRuntime._gemm>, 'Identity': <function OnnxTorchRuntime.<lambda>>, 'MatMul': <built-in method matmul of type object>, 'Max': <built-in method max of type object>, 'ReduceProd': <function _function_OnnxTorchRuntime._reduceprod>, 'ReduceSum': <function _function_OnnxTorchRuntime._reducesum>, 'Reshape': <function _function_OnnxTorchRuntime._reshape>, 'Shape': <function _function_OnnxTorchRuntime._shape>, 'Squeeze': <function _function_OnnxTorchRuntime._squeeze>, 'Transpose': <function _function_OnnxTorchRuntime._transpose>, 'Unsqueeze': <function _function_OnnxTorchRuntime._unqueeze>}#
_run_op(node_name, node, *inputs)#

Executes a node with pytorch. Returns a dictionary.

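The dispatch idea is roughly the following; this is an illustrative sketch only, the helper name and the exact return value are assumptions, not the library's code:

from deeponnxcustom.onnxtorch.tchrun import OnnxTorchRuntime

# illustrative sketch: the node's operator type selects a torch-based
# callable in OnnxTorchRuntime._mapping, which is applied to the inputs
def run_op_sketch(node, *inputs):
    fct = OnnxTorchRuntime._mapping[node.op_type]
    return {node.output[0]: fct(*inputs)}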

run(*inputs, verbose=False)#

Executes the ONNX graph.

Parameters
  • inputs – inputs of the ONNX graph

  • verbose – displays more information while running the graph

Returns

a result or a tuple of results


class deeponnxcustom.onnxtorch.tchrun._function_OnnxTorchRuntime#

Bases: object

static _concat(*tensors, axis=0)#
static _gather(t, indices, axis=0)#
static _gemm(a, b, c=None, alpha=1, beta=0, transA=False, transB=False)#
static _reduceprod(data, axes=None, keepdims=1)#
static _reducesum(data, axes=None, keepdims=1)#
static _reshape(t, shape)#
static _shape(t)#
static _squeeze(data, axes=None)#
static _transpose(t, perm)#
static _unqueeze(t, dim)#