module onnxrt.ops_cpu.op_max#

Inheritance diagram of mlprodict.onnxrt.ops_cpu.op_max

Short summary#

module mlprodict.onnxrt.ops_cpu.op_max

Runtime operator.

source on GitHub

Classes#

  • Max: Element-wise max of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs …

Properties#

  • args_default: Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …
  • args_default_modified: Returns the list of modified parameters.
  • args_mandatory: Returns the list of mandatory arguments.
  • args_optional: Returns the list of optional arguments.
  • atts_value: Returns all parameters in a dictionary.

Methods#

  • __init__
  • run: Calls method _run.

Documentation#

Runtime operator.

source on GitHub

class mlprodict.onnxrt.ops_cpu.op_max.Max(onnx_node, desc=None, **options)#

Bases: OpRunBinaryNumpy

Element-wise max of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs must have the same data type. This operator supports multidirectional (i.e., Numpy-style) broadcasting; for more details please check Broadcasting in ONNX.

Inputs

Between 1 and 2147483647 inputs.

  • data_0 (variadic, heterogeneous)T: List of tensors for max.

Outputs

  • max (heterogeneous)T: Output tensor.

Type Constraints

  • T tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to numeric tensors.
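
The following is a minimal sketch of the semantics spelled out above, assuming the operator reduces numpy.maximum over its variadic inputs (which the OpRunBinaryNumpy base class suggests); the helper name max_reference is purely illustrative.

```python
import numpy as np

def max_reference(*tensors):
    """Illustrative reference for ONNX Max: element-wise maximum with
    Numpy-style broadcasting, folded over any number of input tensors."""
    result = tensors[0]
    for t in tensors[1:]:
        result = np.maximum(result, t)  # broadcasting applies pairwise
    return result

a = np.array([[1.0, 5.0, 3.0]], dtype=np.float32)  # shape (1, 3)
b = np.array([[2.0], [4.0]], dtype=np.float32)     # shape (2, 1)
c = np.float32(3.5)                                # a scalar broadcasts too

print(max_reference(a, b, c))
# [[3.5 5.  3.5]
#  [4.  5.  4. ]]
```

All three inputs share the float32 type, as required by the T constraint; the operator does not allow mixing data types across inputs.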

Version

Onnx name: Max

This version of the operator has been available since version 13.

Runtime implementation: Max

__init__(onnx_node, desc=None, **options)#
run(*data, attributes=None, verbose=0, fLOG=None)#

Calls method _run.

source on GitHub
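
To see the runtime operator in action, one can build a one-node ONNX graph and evaluate it with mlprodict's OnnxInference (Python runtime). This is a usage sketch, not taken from this page; the graph-building helpers come from the onnx package, and the names A, B, Y and max_graph are arbitrary.

```python
import numpy as np
from onnx import TensorProto, helper
from mlprodict.onnxrt import OnnxInference

# Hypothetical one-node model computing Y = Max(A, B).
node = helper.make_node('Max', ['A', 'B'], ['Y'])
graph = helper.make_graph(
    [node], 'max_graph',
    [helper.make_tensor_value_info('A', TensorProto.FLOAT, [None, 3]),
     helper.make_tensor_value_info('B', TensorProto.FLOAT, [None, 3])],
    [helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 3])])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 13)])

oinf = OnnxInference(model)  # the Python runtime dispatches the Max node to this operator
res = oinf.run({'A': np.array([[1., 5., 3.]], dtype=np.float32),
                'B': np.array([[2., 2., 4.]], dtype=np.float32)})
print(res['Y'])
# [[2. 5. 4.]]
```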