module utils.onnx_rewriter#

Short summary#

module onnxcustom.utils.onnx_rewriter

Rewrites an operator in an ONNX graph.

source on GitHub

Functions#

_existing_names – Makes the list of existing names. Returns a set of unique names including intermediate results.

_onnx_rewrite_operator_node – Replaces a node by a subgraph.

_unique_name – Returns a name different from any name in existing_names.

onnx_rewrite_operator – Replaces one operator by an ONNX graph.

unreduced_onnx_loss – Every loss function reduces the results to compute a loss. The score function needs to get the loss for every observation, …

Documentation#

Rewrites an operator in an ONNX graph.

source on GitHub

onnxcustom.utils.onnx_rewriter._existing_names(onx)#

Makes the list of existing names. Returns a set of unique names including intermediate results.
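
The helper is only documented by this summary; below is a minimal sketch of how such a set can be built with the standard onnx API (the name existing_names_sketch is hypothetical, the actual implementation may differ):

    import onnx

    def existing_names_sketch(onx):
        "Hypothetical sketch: collects every result name appearing in the graph."
        graph = onx.graph if isinstance(onx, onnx.ModelProto) else onx
        names = set()
        # graph inputs, outputs and initializers
        names.update(i.name for i in graph.input)
        names.update(o.name for o in graph.output)
        names.update(init.name for init in graph.initializer)
        # intermediate results consumed or produced by every node
        for node in graph.node:
            names.update(n for n in node.input if n)
            names.update(n for n in node.output if n)
        return names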

source on GitHub

onnxcustom.utils.onnx_rewriter._onnx_rewrite_operator_node(existing_names, node, sub_onx)#

Replaces a node by a subgraph.

Parameters:
  • existing_names – existing result names

  • node – onnx node to replace

  • sub_onx – ONNX subgraph to use as a replacement

Returns:

new_initializer, new_nodes
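
A minimal sketch of such a replacement, assuming the standard onnx helpers: the subgraph inputs and outputs are mapped onto the inputs and outputs of the replaced node, and every other result is renamed to avoid collisions (rewrite_node_sketch is a hypothetical name, not the library implementation):

    import onnx
    from onnx import helper

    def rewrite_node_sketch(existing_names, node, sub_onx):
        "Hypothetical sketch: replaces *node* by the nodes of *sub_onx*."
        sub_graph = sub_onx.graph
        # map subgraph inputs/outputs onto the inputs/outputs of the replaced node
        mapping = {i.name: n for i, n in zip(sub_graph.input, node.input)}
        mapping.update({o.name: n for o, n in zip(sub_graph.output, node.output)})

        def rename(name):
            # intermediate results receive a fresh name absent from existing_names
            if name not in mapping:
                new_name, i = name, 1
                while new_name in existing_names:
                    new_name = "%s_%d" % (name, i)
                    i += 1
                existing_names.add(new_name)
                mapping[name] = new_name
            return mapping[name]

        new_initializer = []
        for init in sub_graph.initializer:
            new_init = onnx.TensorProto()
            new_init.CopyFrom(init)
            new_init.name = rename(init.name)
            new_initializer.append(new_init)

        new_nodes = []
        for sub_node in sub_graph.node:
            new_node = helper.make_node(
                sub_node.op_type,
                [rename(i) for i in sub_node.input],
                [rename(o) for o in sub_node.output],
                domain=sub_node.domain)
            new_node.attribute.extend(sub_node.attribute)
            new_nodes.append(new_node)
        return new_initializer, new_nodes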

source on GitHub

onnxcustom.utils.onnx_rewriter._unique_name(existing_names, name)#

Returns a name different from any name in existing_names.

Parameters:
  • existing_names – set of names

  • name – current name

Returns:

unique name
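
A minimal sketch of such a renaming scheme (the suffix convention shown here is an assumption, not necessarily the one used by the library):

    def unique_name_sketch(existing_names, name):
        "Hypothetical sketch: derives a name absent from existing_names."
        if name not in existing_names:
            existing_names.add(name)
            return name
        i = 2
        # append an increasing suffix until the name no longer collides
        while "%s_%d" % (name, i) in existing_names:
            i += 1
        new_name = "%s_%d" % (name, i)
        existing_names.add(new_name)
        return new_name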

source on GitHub

onnxcustom.utils.onnx_rewriter.onnx_rewrite_operator(onx, op_type, sub_onx, recursive=True, debug_info=None)#

Replaces one operator by an ONNX graph.

Parameters:
  • onx – ONNX graph in which the operator is replaced

  • op_type – type of the operator to replace

  • sub_onx – ONNX graph used as the replacement

  • recursive – if True, also looks into subgraphs

  • debug_info – unused

Returns:

modified onnx graph

<<<

import numpy
from skl2onnx.common.data_types import FloatTensorType
from skl2onnx.algebra.onnx_ops import (  # pylint: disable=E0611
    OnnxReciprocal, OnnxDiv)
from mlprodict.plotting.text_plot import onnx_simple_text_plot
from onnxcustom import get_max_opset
from onnxcustom.utils.onnx_rewriter import onnx_rewrite_operator

# first graph: it contains the node to replace
opset = get_max_opset()
node1 = OnnxReciprocal('X', output_names=['Y'],
                       op_version=opset)
onx1 = node1.to_onnx(
    inputs={'X': FloatTensorType()},
    outputs={'Y': FloatTensorType()},
    target_opset=opset)

# second graph: it contains the replacement graph
node2 = OnnxDiv(numpy.array([1], dtype=numpy.float32),
                'X', output_names=['Y'],
                op_version=opset)
onx2 = node2.to_onnx(
    inputs={'X': FloatTensorType()},
    outputs={'Y': FloatTensorType()},
    target_opset=opset)

# third graph: the modified graph
onx3 = onnx_rewrite_operator(onx1, 'Reciprocal', onx2)
print(onnx_simple_text_plot(onx3))

>>>

    opset: domain='' version=13
    input: name='X' type=dtype('float32') shape=None
    init: name='Di_Divcst' type=dtype('float32') shape=(1,) -- array([1.], dtype=float32)
    Div(Di_Divcst, X) -> Y
    output: name='Y' type=dtype('float32') shape=None
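
The text plot confirms the rewrite: the Reciprocal node of the first graph is gone, replaced by the Div node and the constant initializer Di_Divcst coming from the second graph.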

source on GitHub

onnxcustom.utils.onnx_rewriter.unreduced_onnx_loss(onx, output_name='score')#

Every loss function reduces its results to compute a single loss value. A score function, however, needs the loss for every observation, not the aggregated loss. This function looks for the reducing node, removes it, and exposes the unreduced result as the only output of the graph.

Parameters:
  • onx – ONNX graph

  • output_name – new output name

Returns:

new ONNX graph
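
A minimal usage sketch on a hypothetical squared-error loss graph ending with a ReduceSum node; whether this particular reducing pattern is recognised depends on the implementation:

    from onnx import helper, TensorProto
    from onnxcustom.utils.onnx_rewriter import unreduced_onnx_loss

    # hypothetical loss graph: loss = ReduceSum((X - Y)^2)
    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 1])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 1])
    loss = helper.make_tensor_value_info('loss', TensorProto.FLOAT, [1, 1])
    graph = helper.make_graph(
        [helper.make_node('Sub', ['X', 'Y'], ['diff']),
         helper.make_node('Mul', ['diff', 'diff'], ['diff2']),
         helper.make_node('ReduceSum', ['diff2'], ['loss'])],
        'loss_graph', [X, Y], [loss])
    onx = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 14)])

    # removes the final reducing node and exposes the per-observation loss
    score_onx = unreduced_onnx_loss(onx, output_name='score')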

source on GitHub