module onnxrt.ops_cpu.op_softmax#

Inheritance diagram of mlprodict.onnxrt.ops_cpu.op_softmax

Short summary#

module mlprodict.onnxrt.ops_cpu.op_softmax

Runtime operator.

source on GitHub

Classes#

Softmax

The operator computes the normalized exponential values for the given input: Softmax(input, axis) = …

SoftmaxGrad

Alias of SoftmaxGrad_13. SoftmaxGrad computes dX = Y * (dY - ReduceSum(Y * dY)). ONNX does not have a dot product, which can be …

SoftmaxGrad_13

SoftmaxGrad computes dX = Y * (dY - ReduceSum(Y * dY)). ONNX does not have a dot product, which can be …

SoftmaxGradSchema

Defines a schema for operators added in this package such as SoftmaxGrad_13.

Properties#

args_default

Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …

args_default_modified

Returns the list of modified parameters.

args_mandatory

Returns the list of mandatory arguments.

args_optional

Returns the list of optional arguments.

atts_value

Returns all parameters in a dictionary.

Methods#

__init__

_find_custom_operator_schema

_run

_run_inplace

to_python

Documentation#

Runtime operator.

source on GitHub

class mlprodict.onnxrt.ops_cpu.op_softmax.Softmax(onnx_node, desc=None, **options)#

Bases: OpRunUnaryNum


The operator computes the normalized exponential values for the given input:

Softmax(input, axis) = Exp(input) / ReduceSum(Exp(input), axis=axis, keepdims=1)

The “axis” attribute indicates the dimension along which Softmax will be performed. The output tensor has the same shape and contains the Softmax values of the corresponding input.
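As a quick illustration, here is a minimal NumPy sketch of the formula above (an illustration only, not this module's actual implementation; subtracting the per-row maximum is the usual numerical-stability trick and cancels out in the ratio):

    import numpy as np

    def softmax(x, axis=-1):
        # exp of shifted values; shifting by the max avoids overflow
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    x = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
    print(softmax(x))               # ≈ [[0.0900 0.2447 0.6652]]
    print(softmax(x).sum(axis=-1))  # rows sum to 1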

Attributes

  • axis:

Describes the dimension Softmax will be performed on. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input).

Default value is -1 (type INT).

Inputs

  • input (heterogeneous) - T: The input tensor of rank >= axis.

Outputs

  • output (heterogeneous) - T: The output values with the same shape as the input tensor.

Type Constraints

  • T in (tensor(float16), tensor(float), tensor(double), tensor(bfloat16)): Constrain input and output types to float tensors.

Version

Onnx name: Softmax

This version of the operator has been available since version 13.

Runtime implementation: Softmax
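A minimal usage sketch, assuming mlprodict's OnnxInference API with runtime='python' (the Python runtime that dispatches Softmax nodes to this operator):

    import numpy as np
    from onnx import TensorProto, helper
    from mlprodict.onnxrt import OnnxInference

    # Build a one-node model containing a Softmax-13 node.
    node = helper.make_node('Softmax', ['X'], ['Y'], axis=-1)
    graph = helper.make_graph(
        [node], 'softmax_graph',
        [helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 3])],
        [helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 3])])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 13)])

    oinf = OnnxInference(model, runtime='python')
    X = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
    print(oinf.run({'X': X})['Y'])  # rows sum to 1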

__init__(onnx_node, desc=None, **options)#
_run(X, attributes=None, verbose=0, fLOG=None)#

Should be overwritten.

source on GitHub

_run_inplace(X)#
to_python(inputs)#

Returns a python code equivalent to this operator.

Parameters:

inputs – input names

Returns:

imports, python code, both as strings

source on GitHub
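A hedged sketch of calling to_python on a standalone node (wrapping a hand-built onnx node this way is an assumption for illustration, not documented usage):

    from onnx import helper
    from mlprodict.onnxrt.ops_cpu.op_softmax import Softmax

    node = helper.make_node('Softmax', ['X'], ['Y'], axis=-1)
    op = Softmax(node)                   # assumed: desc may stay None
    imports, code = op.to_python(['X'])  # two strings, per the docs above
    print(imports)
    print(code)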

mlprodict.onnxrt.ops_cpu.op_softmax.SoftmaxGrad#

alias of SoftmaxGrad_13

class mlprodict.onnxrt.ops_cpu.op_softmax.SoftmaxGradSchema#

Bases: OperatorSchema

Defines a schema for operators added in this package such as SoftmaxGrad_13.

source on GitHub

__init__()#
class mlprodict.onnxrt.ops_cpu.op_softmax.SoftmaxGrad_13(onnx_node, desc=None, **options)#

Bases: OpRunBinaryNum

SoftmaxGrad computes dX = Y * (dY - ReduceSum(Y * dY)). ONNX does not have a dot product, which can be simulated as a pointwise-multiplication (“Mul”), followed by a “ReduceSum”. Unfortunately, the treatment of “axis” is different in “SoftmaxGrad” and “ReduceSum”. If axis=k for SoftmaxGrad, we need to specify [k, …, n-1] as the axes of reduction for “ReduceSum”, after accounting for negative-axis specification. An alternative solution would be to Flatten inputs to 2D and then reshape output back to original shape. Hopefully, many of these ops can be optimized away in the common-case of statically-known shapes.

source on GitHub
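To make the axis handling above concrete, here is a small NumPy sketch of the formula (an illustration only, not the operator's source): with axis=k, the ReduceSum runs over axes [k, …, n-1] after normalizing a negative k:

    import numpy as np

    def softmax_grad(dY, Y, axis=1):
        # dX = Y * (dY - ReduceSum(Y * dY)), reducing over [axis, ..., n-1]
        axis = axis % Y.ndim                  # account for negative axis
        axes = tuple(range(axis, Y.ndim))
        return Y * (dY - np.sum(Y * dY, axis=axes, keepdims=True))

    Y = np.array([[0.2, 0.3, 0.5]])           # a softmax output (rows sum to 1)
    dY = np.array([[1.0, 0.0, 0.0]])
    dX = softmax_grad(dY, Y, axis=-1)
    print(dX, dX.sum())  # components sum to ~0 because rows of Y sum to 1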

__init__(onnx_node, desc=None, **options)#
_find_custom_operator_schema(op_name)#
_run(grad, prob, attributes=None, verbose=0, fLOG=None)#

Should be overwritten.

source on GitHub