module onnxrt.ops_onnx._op

Inheritance diagram of mlprodict.onnxrt.ops_onnx._op

Short summary

module mlprodict.onnxrt.ops_onnx._op

Additional methods for the extension of ReferenceEvaluator.

source on GitHub

Classes

OpRunExtended: Base class to cache a C++ implementation based on the inputs.

Properties

domain: Returns node attribute domain.
input: Returns node attribute input.
op_type: Returns node attribute op_type.
output: Returns node attribute output.
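
Each of these properties is a thin accessor over the corresponding field of the wrapped NodeProto. A minimal sketch of the fields involved, built with onnx.helper; the operator name and domain are made up for illustration:

    from onnx import helper

    # Hypothetical node: an OpRunExtended instance wraps a NodeProto such as
    # this one, and the properties above expose its fields directly.
    node = helper.make_node(
        "CubeApprox",                 # -> op_type
        inputs=["X"],                 # -> input
        outputs=["Y"],                # -> output
        domain="mlprodict.extended",  # -> domain
    )
    print(node.op_type, node.domain, list(node.input), list(node.output))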

Methods

__init__: Constructor.
cache_impl: Caches an implementation.
get_cache_impl: Returns the cached implementation for the given key.
get_cache_key: Returns a key mapped to the corresponding C++ implementation.
has_cache_key: Tells if a key belongs to the cache.

Documentation

Additional methods for the extension of ReferenceEvaluator.

source on GitHub

class mlprodict.onnxrt.ops_onnx._op.OpRunExtended(onnx_node: NodeProto, run_params: Dict[str, Any])

Bases: OpRun

Base class to cache a C++ implementation based on the inputs.

source on GitHub

__abstractmethods__ = frozenset({'_run'})
__init__(onnx_node: NodeProto, run_params: Dict[str, Any])
cache_impl(key, rt)

Caches an implementation.

source on GitHub

get_cache_impl(key)

Returns the cached implementation for the given key.

source on GitHub

get_cache_key(**kwargs)

Returns a key mapped to the corresponding C++ implementation.

source on GitHub

has_cache_key(key)

Tells if a key belongs to the cache.

source on GitHub
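
These methods are meant to be combined inside _run of a subclass: build a key from the run-time properties of the inputs, cache an implementation the first time that key is seen, and reuse it afterwards. A minimal sketch of that pattern follows; the operator CubeApprox, its domain mlprodict.extended, and the plain Python callable standing in for a compiled C++ kernel are illustrative assumptions, not part of mlprodict, and the sketch further assumes OpRunExtended can be instantiated by onnx.reference.ReferenceEvaluator like any other OpRun subclass:

    import numpy as np
    from onnx import TensorProto, helper
    from onnx.reference import ReferenceEvaluator

    from mlprodict.onnxrt.ops_onnx._op import OpRunExtended


    class CubeApprox(OpRunExtended):
        "Hypothetical operator caching one implementation per input dtype."

        op_domain = "mlprodict.extended"  # assumption: custom domain name

        def _run(self, x):
            # Build a hashable key from whatever the implementation depends
            # on, here only the dtype of the input.
            key = self.get_cache_key(dtype=str(x.dtype))
            if not self.has_cache_key(key):
                # In mlprodict this would be a compiled C++ kernel; a plain
                # Python callable stands in for it in this sketch.
                self.cache_impl(key, lambda t: t ** 3)
            rt = self.get_cache_impl(key)
            return (rt(x),)


    def build_model():
        # Single-node graph calling the hypothetical custom operator.
        X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [None])
        Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [None])
        node = helper.make_node(
            "CubeApprox", ["X"], ["Y"], domain="mlprodict.extended")
        graph = helper.make_graph([node], "g", [X], [Y])
        return helper.make_model(graph, opset_imports=[
            helper.make_opsetid("", 18),
            helper.make_opsetid("mlprodict.extended", 1),
        ])


    sess = ReferenceEvaluator(build_model(), new_ops=[CubeApprox])
    print(sess.run(None, {"X": np.array([1.0, 2.0, 3.0], dtype=np.float32)}))

In this sketch, has_cache_key returns False on the first call, so an implementation is stored with cache_impl; later calls with inputs of the same dtype retrieve it through get_cache_impl, which is the point of keying the cache on run-time input properties.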