module onnxrt.ops_cpu.op_softmax#
Short summary#
module mlprodict.onnxrt.ops_cpu.op_softmax
Runtime operator.
Classes#
class | truncated documentation
---|---
Softmax_1 | Softmax ======= The operator computes the normalized exponential values for the given input: Softmax(input, axis) = …
Softmax_13 | Softmax ======= The operator computes the normalized exponential values for the given input: Softmax(input, axis) = …
SoftmaxGrad_13 | SoftmaxGrad computes …
SoftmaxGradSchema | Defines a schema for operators added in this package such as …
Properties#
property | truncated documentation
---|---
 | Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …
 | Returns the list of modified parameters.
 | Returns the list of optional arguments.
 | Returns all parameters in a dictionary.
Methods#
method | truncated documentation
---|---
__init__ |
_find_custom_operator_schema |
_run | Should be overwritten.
_run_inplace |
to_python | Returns a python code equivalent to this operator.
Documentation#
Runtime operator.
- mlprodict.onnxrt.ops_cpu.op_softmax.Softmax#
alias of Softmax_13
- mlprodict.onnxrt.ops_cpu.op_softmax.SoftmaxGrad#
alias of SoftmaxGrad_13
- class mlprodict.onnxrt.ops_cpu.op_softmax.SoftmaxGradSchema#
Bases: OperatorSchema
Defines a schema for operators added in this package such as SoftmaxGrad_13.
- __init__()#
- class mlprodict.onnxrt.ops_cpu.op_softmax.SoftmaxGrad_13(onnx_node, desc=None, **options)#
Bases: OpRunBinaryNum
SoftmaxGrad computes dX = Y * (dY - ReduceSum(Y * dY)). ONNX does not have a dot product, which can be simulated as a pointwise multiplication ("Mul") followed by a "ReduceSum". Unfortunately, the treatment of "axis" differs between "SoftmaxGrad" and "ReduceSum": if axis=k for SoftmaxGrad, the axes of reduction for "ReduceSum" must be [k, …, n-1], after accounting for negative-axis specification. An alternative solution would be to flatten the inputs to 2D and reshape the output back to the original shape. Hopefully, many of these ops can be optimized away in the common case of statically-known shapes.
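The Mul-plus-ReduceSum scheme described above can be sketched in NumPy. This is an illustrative sketch, not the module's actual implementation; the function name `softmax_grad` is made up here. `Y` is the softmax output and `dY` the incoming gradient; the reduction runs over axes [k, …, n-1] as the text explains.

```python
import numpy as np

def softmax_grad(dY, Y, axis=-1):
    # Gradient of Softmax: dX = Y * (dY - ReduceSum(Y * dY)).
    # The "dot product" is simulated by a pointwise multiplication
    # (Mul) followed by a ReduceSum over axes [axis, ..., n-1],
    # after normalising a possibly negative axis.
    axis = axis % Y.ndim
    axes = tuple(range(axis, Y.ndim))
    s = np.sum(Y * dY, axis=axes, keepdims=True)
    return Y * (dY - s)
```

Because the rows of `Y` sum to 1, the components of the returned gradient along the reduced axes always sum to zero, which is a quick sanity check for the formula.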
- __init__(onnx_node, desc=None, **options)#
- _find_custom_operator_schema(op_name)#
- _run(grad, prob, attributes=None, verbose=0, fLOG=None)#
Should be overwritten.
- class mlprodict.onnxrt.ops_cpu.op_softmax.Softmax_1(onnx_node, desc=None, **options)#
Bases: _Softmax
- __init__(onnx_node, desc=None, **options)#
- class mlprodict.onnxrt.ops_cpu.op_softmax.Softmax_13(onnx_node, desc=None, **options)#
Bases: _Softmax
Softmax#
The operator computes the normalized exponential values for the given input:
Softmax(input, axis) = Exp(input) / ReduceSum(Exp(input), axis=axis, keepdims=1)
The “axis” attribute indicates the dimension along which Softmax will be performed. The output tensor has the same shape and contains the Softmax values of the corresponding input.
Attributes
axis:
Describes the dimension Softmax will be performed on. Negative value means counting dimensions from the back. Accepted range is [-r, r-1] where r = rank(input).
Default value is -1 (INT).
Inputs
input (heterogeneous) - T: The input tensor of rank >= axis.
Outputs
output (heterogeneous) - T: The output values with the same shape as the input tensor.
Type Constraints
T tensor(float16), tensor(float), tensor(double), tensor(bfloat16): Constrain input and output types to float tensors.
Version
Onnx name: Softmax
This version of the operator has been available since version 13.
Runtime implementation:
Softmax
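The formula above translates directly into NumPy. This is an illustrative sketch, not the runtime's actual code; subtracting the per-axis maximum before `Exp` is a standard overflow guard that leaves the result unchanged.

```python
import numpy as np

def softmax(X, axis=-1):
    # Softmax(input, axis) =
    #   Exp(input) / ReduceSum(Exp(input), axis=axis, keepdims=1)
    # Shifting by the per-axis max avoids overflow in Exp without
    # changing the ratio.
    m = X.max(axis=axis, keepdims=True)
    e = np.exp(X - m)
    return e / e.sum(axis=axis, keepdims=True)
```

With the default axis=-1, every slice along the last dimension sums to 1, and a row of large equal values maps to a uniform distribution instead of overflowing.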
- __init__(onnx_node, desc=None, **options)#
- class mlprodict.onnxrt.ops_cpu.op_softmax._Softmax(onnx_node, desc=None, expected_attributes=None, **options)#
Bases: OpRunUnaryNum
- __init__(onnx_node, desc=None, expected_attributes=None, **options)#
- _run(X, attributes=None, verbose=0, fLOG=None)#
Should be overwritten.
- _run_inplace(X)#
- to_python(inputs)#
Returns a python code equivalent to this operator.
- Parameters:
inputs – inputs name
- Returns:
imports, python code, both as strings
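The (imports, code) contract described above can be illustrated with a minimal sketch. Everything here, from the function name to the generated strings, is hypothetical and only mirrors the documented return shape, not mlprodict's actual output.

```python
def to_python_sketch(inputs):
    # Hypothetical illustration of a to_python-style method for a
    # Softmax node: it returns two strings, the imports the generated
    # code needs and the equivalent Python code itself.
    name = inputs[0]
    imports = "import numpy as np"
    code = (
        f"e = np.exp({name} - {name}.max(axis=axis, keepdims=True))\n"
        "return e / e.sum(axis=axis, keepdims=True)"
    )
    return imports, code
```

A caller would compile or write out the second string, prefixing it with the first, to obtain a standalone replacement for the operator.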