module onnxrt.onnx_inference_node#

Inheritance diagram of mlprodict.onnxrt.onnx_inference_node

Short summary#

module mlprodict.onnxrt.onnx_inference_node

OnnxInferenceNode definition.

Classes#

OnnxInferenceNode – A node to execute.

Properties#

inputs_args – Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …

modified_args – Returns the list of modified parameters.

name – Returns the ONNX name.

python_inputs – Returns the python arguments.

Static Methods#

_find_local_inputs – Determines the local inputs: any input used by the subgraph and defined in the parent graph.

_find_static_inputs – Determines the loop inputs: any input defined by the subgraphs plus any result used as a constant in …

Methods#

__init__

__repr__ – usual

__str__ – usual

_build_context

_init – Prepares the node.

_set_shape_inference_runtime – Updates values with the shapes of the outputs.

_set_size_inference_runtime – Updates values with the sizes of the outputs.

_set_type_inference_runtime – Updates values with the types of the outputs.

add_variable_to_clean – Adds a variable which can be cleaned after the node execution.

enable_inplace_compute – Lets the node know that one input can be overwritten.

get_local_inputs – Returns any local input used by this node in a subgraph defined as an attribute and not declared as an input of …

preprocess_parameters – Preprocesses the parameters and loads GraphProto (equivalent to an ONNX graph with less metadata).

run – Runs the node. The function updates values with outputs.

set_order – Defines the order of execution.

setup_runtime – Loads the runtime.

switch_initializers_dtype – Switches all initializers to numpy.float64. This only works if the runtime is 'python'.

to_python – Returns python code for this operator.

Documentation#

OnnxInferenceNode definition.

class mlprodict.onnxrt.onnx_inference_node.OnnxInferenceNode(onnx_node, desc, global_index)#

Bases: object

A node to execute.

Parameters:
  • onnx_node – ONNX node to execute

  • desc – internal description

  • global_index – a function which returns a unique index for each output this operator generates
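To make the global_index parameter concrete, here is a minimal sketch of such a function: it hands out one unique integer slot per result name, so every output the operator generates gets a stable position in the runtime's value container. The name make_global_index and the closure-based design are illustrative assumptions, not mlprodict's actual implementation.

```python
def make_global_index():
    """Return a function mapping each result name to a unique integer index.

    Hypothetical sketch; mlprodict's real global_index may differ.
    """
    indices = {}

    def global_index(name):
        # Assign the next free slot the first time a name is seen,
        # then always return the same index for that name.
        if name not in indices:
            indices[name] = len(indices)
        return indices[name]

    return global_index

gi = make_global_index()
print(gi("X"), gi("Y"), gi("X"))  # 0 1 0
```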

class OnnxInferenceWrapper(oinf)#

Bases: object

Wraps an OnnxInference instance and exposes the necessary functions.

Parameters:

oinf – instance of OnnxInference

__init__(oinf)#
property args_default#

Returns the list of default arguments.

property args_default_modified#

Returns the list of modified arguments.

property args_mandatory#

Returns the list of mandatory arguments.

property args_optional#

Returns the list of optional arguments.

enable_inplace_compute(index)#

Not implemented.

infer_sizes(*args)#

Calls infer_sizes.

infer_types(*args)#

Calls infer_types.

need_context()#

Tells whether the node needs a context (results from previous nodes) to run.

property obj#

Returns the ONNX graph.

run(*args, **kwargs)#

Calls run.

to_python(inputs, *args, **kwargs)#

Calls to_python.

__init__(onnx_node, desc, global_index)#
__repr__()#

usual

__str__()#

usual

_build_context(values, input_list)#
static _find_local_inputs(graph)#

Determines the local inputs: any input used by the subgraph and defined in the parent graph.
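The idea behind this lookup can be sketched with plain dictionaries standing in for ONNX GraphProto objects: a local input is any name consumed inside the subgraph that the subgraph neither declares as an input nor produces itself, so it must be resolved in the parent graph. The dictionary layout and function name below are assumptions for illustration only.

```python
def find_local_inputs(subgraph):
    """Sketch of the local-input detection on a dict-based mock subgraph."""
    declared = set(subgraph["inputs"]) | set(subgraph.get("initializers", []))
    produced = {out for node in subgraph["nodes"] for out in node["outputs"]}
    used = {inp for node in subgraph["nodes"] for inp in node["inputs"]}
    # Names used but neither declared nor produced locally come from
    # the enclosing (parent) graph.
    return sorted(used - declared - produced)

body = {
    "inputs": ["i"],
    "initializers": [],
    "nodes": [
        # 'threshold' is not declared or produced here: it is a local input
        # that the parent graph must provide.
        {"inputs": ["i", "threshold"], "outputs": ["cond"]},
        {"inputs": ["cond"], "outputs": ["out"]},
    ],
}
print(find_local_inputs(body))  # ['threshold']
```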

static _find_static_inputs(body)#

Determines the loop inputs: any input defined by the subgraphs plus any result used as a constant in the subgraphs.

_init(global_index)#

Prepares the node.

_set_shape_inference_runtime(values)#

Updates values with the shapes of the outputs.

Parameters:

values – container for shapes

_set_size_inference_runtime(values)#

Updates values with the sizes of the outputs.

Parameters:

values – container for sizes

_set_type_inference_runtime(values)#

Updates values with the types of the outputs.

Parameters:

values – container for types

add_variable_to_clean(name)#

Adds a variable which can be cleaned after the node execution.

enable_inplace_compute(name)#

Lets the node know that one input can be overwritten.

Parameters:

name – input name
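A hedged sketch of what enabling in-place computation buys: once the runtime knows an input buffer is not needed afterwards, an operator may write its result into that buffer instead of allocating a new one. The function name and the list-based buffers below are illustrative, not mlprodict's actual internals.

```python
def relu_inplace(values, index):
    """Apply ReLU by overwriting the input buffer at 'values[index]'."""
    buf = values[index]
    for i, v in enumerate(buf):
        buf[i] = v if v > 0 else 0.0  # overwrite in place, no new allocation
    return buf

values = [[-1.0, 2.0, -3.0]]
out = relu_inplace(values, 0)
print(out, out is values[0])  # [0.0, 2.0, 0.0] True
```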

get_local_inputs()#

Returns any local input used by this node in a subgraph defined as an attribute and not declared as an input of this subgraph.

property inputs_args#

Returns the list of arguments as well as the list of parameters with the default values (close to the signature).

property modified_args#

Returns the list of modified parameters.

property name#

Returns the ONNX name.

preprocess_parameters(runtime, rt_class, ir_version=None, target_opset=None, existing_functions=None)#

Preprocesses the parameters and loads GraphProto (equivalent to an ONNX graph with less metadata).

Parameters:
  • runtime – runtime options

  • rt_class – runtime class used to compute prediction of subgraphs

  • ir_version – if not None, overwrites the default value

  • target_opset – use a specific target opset

  • existing_functions – existing functions

property python_inputs#

Returns the python arguments.

run(values, attributes=None, verbose=0, fLOG=None)#

Runs the node. The function updates values with outputs.

Parameters:
  • values – list of existing values

  • attributes – attributes known at function level

  • verbose – verbosity

  • fLOG – logging function
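The run contract described above can be sketched as follows: a node fetches its inputs from the shared values container, applies its operator, and writes its outputs back into the container. The class name, index-based addressing, and operator function are assumptions for illustration, not mlprodict's real node class.

```python
class SketchNode:
    """Toy node: reads inputs from 'values' by index, writes outputs back."""

    def __init__(self, inputs, outputs, fct):
        self.inputs, self.outputs, self.fct = inputs, outputs, fct

    def run(self, values):
        args = [values[i] for i in self.inputs]
        res = self.fct(*args)
        if not isinstance(res, tuple):
            res = (res,)
        for i, r in zip(self.outputs, res):
            values[i] = r  # the function updates values with outputs

values = [3, 4, None]
node = SketchNode(inputs=[0, 1], outputs=[2], fct=lambda x, y: x + y)
node.run(values)
print(values)  # [3, 4, 7]
```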

set_order(order)#

Defines the order of execution.

setup_runtime(runtime=None, variables=None, rt_class=None, target_opset=None, dtype=None, domain=None, ir_version=None, runtime_options=None, build_inference_node_function=None, existing_functions=None)#

Loads the runtime.

Parameters:
  • runtime – runtime options

  • variables – registered variables created by previous operators

  • rt_class – runtime class used to compute prediction of subgraphs

  • target_opset – use a specific target opset

  • dtype – float computational type

  • domain – node domain

  • ir_version – if not None, changes the default value given by ONNX

  • runtime_options – runtime options

  • build_inference_node_function – function creating an inference runtime from an ONNX graph

  • existing_functions – existing function as a dictionary { (domain, name): fct }

Changed in version 0.9: Parameters build_inference_node_function and existing_functions were added.

switch_initializers_dtype(dtype_in=<class 'numpy.float32'>, dtype_out=<class 'numpy.float64'>)#

Switches all initializers from dtype_in to dtype_out (by default from numpy.float32 to numpy.float64). This only works if the runtime is 'python'.

Parameters:
  • dtype_in – previous type

  • dtype_out – next type

Returns:

done operations
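The mechanics can be sketched without ONNX tensors: walk the initializers, convert every one whose dtype matches dtype_in, and record the done operations as the return value. Here plain (dtype, values) pairs stand in for ONNX tensors, and the shape of the returned records is a hypothetical assumption.

```python
def switch_initializers_dtype(initializers, dtype_in="float32", dtype_out="float64"):
    """Sketch: retype matching initializers in place, report what was done."""
    done = []
    for name, (dtype, vals) in initializers.items():
        if dtype == dtype_in:
            # Re-tag the initializer with the new dtype; values are coerced.
            initializers[name] = (dtype_out, [float(v) for v in vals])
            done.append(("pass1", name))  # record of a done operation (illustrative format)
    return done

inits = {"W": ("float32", [1.0, 2.0]), "flag": ("int64", [1])}
ops = switch_initializers_dtype(inits)
print(ops, inits["W"][0])  # [('pass1', 'W')] float64
```

Note that the int64 initializer is left untouched: only tensors of the source dtype are converted.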

to_python(inputs)#

Returns python code for this operator.

Parameters:

inputs – input names

Returns:

imports, python code, both as strings
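The two-string return contract can be sketched for a single operator: one string of imports and one string of python code parameterized by the input names. The pyrt_Add name and the Add operator are illustrative assumptions, not the code mlprodict actually emits.

```python
def to_python_add(inputs):
    """Sketch: emit (imports, code) strings for an Add operator."""
    imports = "import numpy"
    # Generate a function whose signature uses the given input names.
    code = "def pyrt_Add({0}, {1}):\n    return numpy.add({0}, {1})".format(*inputs)
    return imports, code

imports, code = to_python_add(["x", "y"])
print(code.splitlines()[0])  # def pyrt_Add(x, y):
```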
