module onnxrt.onnx_shape_inference#

Inheritance diagram of mlprodict.onnxrt.onnx_shape_inference

Short summary#

module mlprodict.onnxrt.onnx_shape_inference

Runtime to infer shapes.

Classes#

class

truncated documentation

OnnxShapeInference

Implements a micro runtime for ONNX graphs. It does not implement all operator types.

Properties#

property

truncated documentation

input_names

Returns input names.

output_names

Returns output names.

Static Methods#

staticmethod

truncated documentation

_get_shape

Methods#

method

truncated documentation

__init__

__repr__

Usual representation.

_run_empty

Computes shapes and types of all results.

run

Runs shape and type inference given known inputs.

Documentation#

Runtime to infer shapes.

New in version 0.9.

source on GitHub

class mlprodict.onnxrt.onnx_shape_inference.OnnxShapeInference(model_onnx)#

Bases: object

Implements a micro runtime for ONNX graphs. It does not implement all operator types.

Parameters:

model_onnx – ONNX model

Other attributes:

  • known_shapes_: shapes which can be inferred without any input

  • cache_: keeps track of the function used to infer the shapes

  • is_isfunction: tells if the graph is a function or a model
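To illustrate how such a micro runtime can work, here is a hypothetical sketch (not mlprodict's actual implementation): a shape-inference pass restricted to the Add operator, propagating shapes node by node with NumPy-style broadcasting. Symbolic (unknown) dimensions are represented as strings, matching the `'_0'` placeholders in the example below.

```python
from itertools import zip_longest


def broadcast_shapes(a, b):
    """NumPy-style broadcasting of two shapes; dimensions may be
    ints or strings naming symbolic (unknown) dimensions."""
    result = []
    # Walk both shapes right-aligned, padding the shorter one with 1.
    for da, db in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if da == 1:
            result.append(db)
        elif db == 1 or da == db:
            result.append(da)
        else:
            raise ValueError(f"incompatible dimensions {da!r} and {db!r}")
    return list(reversed(result))


def infer_add_graph(input_shapes, nodes):
    """Propagates shapes through a list of ('Add', in1, in2, out)
    nodes, starting from the known input shapes."""
    known = dict(input_shapes)
    for op, in1, in2, out in nodes:
        if op != "Add":
            # Micro runtime: only a subset of operator types is handled.
            raise NotImplementedError(op)
        known[out] = broadcast_shapes(known[in1], known[in2])
    return known
```

For instance, `infer_add_graph({"X": ["_0", 3], "init": [1, 1]}, [("Add", "X", "init", "Y")])` infers `["_0", 3]` for `Y`: the batch dimension stays symbolic while the constant's unit dimensions broadcast away.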

<<<

import pprint
import numpy
from mlprodict.onnxrt.onnx_shape_inference import OnnxShapeInference
from mlprodict.npy.xop_variable import Variable
from mlprodict.npy.xop import loadop

opset = 15
OnnxAdd = loadop('Add')
dtype = numpy.float32

cop = OnnxAdd('X', numpy.array(
    [[1]], dtype=dtype), op_version=opset)
cop4 = OnnxAdd(cop, numpy.array([[2]], dtype=dtype),
               output_names=['Y'])
vari = Variable('X', numpy.float32, [None, 3])
model_def = cop4.to_onnx([vari], run_shape=False)
rt = OnnxShapeInference(model_def)
out = rt.run()
pprint.pprint(out.get())

>>>

    {'X': ShapeResult('X', ['_0', 3], dtype('float32')),
     'Y': ShapeResult('Y', ['_0', 3], dtype('float32')),
     'init': ShapeResult('init', [1, 1], dtype('float32')),
     'init_1': ShapeResult('init_1', [1, 1], dtype('float32')),
     'out_add_0': ShapeResult('out_add_0', ['_0', 3], dtype('float32'))}

source on GitHub

__init__(model_onnx)#
__repr__()#

Usual representation.

static _get_shape(obj, known_shapes=None, result_name=None)#
_run_empty()#

Computes shapes and types of all results.

Returns:

all intermediates results and output as a dictionary

source on GitHub

property input_names#

Returns input names.

property output_names#

Returns output names.

run(inputs=None)#

Runs shape and type inference given known inputs.

Parameters:

inputs – inputs

Returns:

all results

source on GitHub
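Once concrete inputs are known, symbolic dimensions such as `'_0'` in the example above can be resolved. As a hedged sketch (the function below is illustrative, not part of mlprodict's API), substituting known dimension values into symbolic shape results could look like:

```python
def resolve_symbolic(results, bindings):
    """Replaces symbolic dimension names (strings) with the concrete
    values supplied in `bindings`; unknown names are left untouched."""
    resolved = {}
    for name, shape in results.items():
        resolved[name] = [bindings.get(dim, dim) if isinstance(dim, str) else dim
                          for dim in shape]
    return resolved


symbolic = {"X": ["_0", 3], "Y": ["_0", 3]}
print(resolve_symbolic(symbolic, {"_0": 5}))
# {'X': [5, 3], 'Y': [5, 3]}
```

This mirrors what passing `inputs` to `run` achieves: shapes inferred without inputs stay symbolic, and supplying actual inputs pins the free dimensions down.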