module onnx_conv.onnx_ops.onnx_gradient_op#
Short summary#
module mlprodict.onnx_conv.onnx_ops.onnx_gradient_op
Custom operators for gradient computation.
Classes#
class | truncated documentation
---|---
OnnxBroadcastGradientArgs | Defines a custom operator for BroadcastGradientArgs. Returns the reduction axes for computing gradients of s0 op s1 …
OnnxBroadcastGradientArgs_1 | Defines a custom operator for BroadcastGradientArgs. Returns the reduction axes for computing gradients of s0 op s1 …
OnnxFusedMatMul | MatMul and Gemm without a C.
OnnxFusedMatMul_1 | MatMul and Gemm without a C.
OnnxSoftmaxGrad | Gradient of Softmax. SoftmaxGrad computes …
OnnxSoftmaxGrad_13 | Gradient of Softmax. SoftmaxGrad computes …
OnnxYieldOp | Defines a custom operator for YieldOp.
OnnxYieldOp_1 | Defines a custom operator for YieldOp.
Properties#
Each of the classes above exposes a property that returns the outputs of the node.
Documentation#
Custom operators for gradient computation.
- mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxBroadcastGradientArgs#
alias of OnnxBroadcastGradientArgs_1
- class mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxBroadcastGradientArgs_1(a_shape, b_shape, op_version=None, **kwargs)#
Bases: OnnxOperator
Defines a custom operator for BroadcastGradientArgs. Returns the reduction axes for computing gradients of s0 op s1 with broadcast. The output axes are deterministic, from last to first. Output is an empty vector when no reduction is necessary for the corresponding input. A small Python sketch of the expected behaviour follows the parameter list below.
- Parameters:
a_shape – The first input shape as Tensor.
b_shape – The second input shape as Tensor.
op_version – opset version
kwargs – additional parameters
- __init__(a_shape, b_shape, op_version=None, **kwargs)#
- Parameters:
a_shape – The first input shape as Tensor.
b_shape – The second input shape as Tensor.
op_version – opset version
kwargs – additional parameters
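The following standalone Python sketch is only an illustration of the behaviour described above (the helper name and the code are assumptions, not the mlprodict or onnxruntime implementation): for two broadcast shapes it returns, for each input, the axes over which the gradient must be reduced, listed from last to first, and an empty list when no reduction is needed.

```python
def broadcast_gradient_args(a_shape, b_shape):
    # Align both shapes on the right, then collect, from last axis to first,
    # the axes where an input was broadcast and therefore needs a ReduceSum
    # when mapping the output gradient back onto that input.
    rank = max(len(a_shape), len(b_shape))
    a = [1] * (rank - len(a_shape)) + list(a_shape)
    b = [1] * (rank - len(b_shape)) + list(b_shape)
    a_axes, b_axes = [], []
    for axis in range(rank - 1, -1, -1):
        if a[axis] == 1 and b[axis] != 1:
            a_axes.append(axis)
        elif b[axis] == 1 and a[axis] != 1:
            b_axes.append(axis)
    return a_axes, b_axes

# (2, 3, 4, 5) op (1, 4, 5): the gradient of the second input is reduced
# over axes [1, 0]; the first input needs no reduction (empty vector).
print(broadcast_gradient_args((2, 3, 4, 5), (1, 4, 5)))  # ([], [1, 0])
```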
- mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxFusedMatMul#
alias of OnnxFusedMatMul_1
- class mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxFusedMatMul_1(X, Y, transA=0, transB=0, op_version=None, **kwargs)#
Bases: OnnxOperator
MatMul and Gemm without a C.
- Parameters:
X – first matrix
Y – second matrix
transA – transpose first matrix
transB – transpose second matrix
op_version – opset version
kwargs – additional parameters
- __init__(X, Y, transA=0, transB=0, op_version=None, **kwargs)#
- Parameters:
X – first matrix
Y – second matrix
transA – transpose first matrix
transB – transpose second matrix
op_version – opset version
kwargs – additional parameters
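As a rough reference for the semantics documented above, here is a numpy sketch (an assumption based only on the parameters listed here; the onnxruntime contrib operator may accept additional attributes such as a scaling factor):

```python
import numpy as np

def fused_matmul_reference(X, Y, transA=0, transB=0):
    # Illustrative numpy equivalent: a MatMul with the transposition
    # attributes of Gemm, but without the C term.
    A = X.T if transA else X
    B = Y.T if transB else Y
    return A @ B

X = np.arange(6, dtype=np.float32).reshape(2, 3)
Y = np.arange(12, dtype=np.float32).reshape(4, 3)
print(fused_matmul_reference(X, Y, transB=1).shape)  # (2, 4)
```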
- mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxSoftmaxGrad#
alias of OnnxSoftmaxGrad_13
- class mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxSoftmaxGrad_13(grad, prob, op_version=None, **kwargs)#
Bases: OnnxOperator
Gradient of Softmax. SoftmaxGrad computes dX = Y * (dY - ReduceSum(Y * dY)). ONNX does not have a dot product, which can be simulated as a pointwise-multiplication (“Mul”), followed by a “ReduceSum”. Unfortunately, the treatment of “axis” is different in “SoftmaxGrad” and “ReduceSum”. If axis=k for SoftmaxGrad, we need to specify [k, …, n-1] as the axes of reduction for “ReduceSum”, after accounting for negative-axis specification. An alternative solution would be to flatten the inputs to 2D and then reshape the output back to the original shape. Hopefully, many of these ops can be optimized away in the common case of statically-known shapes. A numpy sketch of the formula follows the parameter list below.
- Parameters:
grad – gradient
prob – probabilities
op_version – opset version
kwargs – additional parameters
- __init__(grad, prob, op_version=None, **kwargs)#
- Parameters:
grad – gradient
prob – probabilities
op_version – opset version
kwargs – additional parameters
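A numpy sketch of that formula, written as an assumption consistent with the description above (the dot product is emulated by a pointwise multiplication followed by a reduction over the softmax axis, keeping dimensions so the reduced term broadcasts back):

```python
import numpy as np

def softmax_grad_reference(grad, prob, axis=-1):
    # dX = Y * (dY - ReduceSum(Y * dY)): the ReduceSum replaces the missing
    # dot product; keepdims=True lets it broadcast against prob and grad.
    red = np.sum(prob * grad, axis=axis, keepdims=True)
    return prob * (grad - red)

prob = np.array([[0.2, 0.3, 0.5]], dtype=np.float32)   # softmax output
grad = np.array([[0.0, 1.0, 0.0]], dtype=np.float32)   # incoming gradient
print(softmax_grad_reference(grad, prob))  # approximately [[-0.06  0.21 -0.15]]
```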
- mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxYieldOp#
alias of OnnxYieldOp_1
- class mlprodict.onnx_conv.onnx_ops.onnx_gradient_op.OnnxYieldOp_1(X, non_differentiable_outputs=None, full_shape_outputs=None, op_version=None, **kwargs)#
Bases: OnnxOperator
Defines a custom operator for YieldOp.
- Parameters:
X – array or OnnxOperatorMixin
non_differentiable_outputs – the indices of the module outputs that do not have a gradient.
full_shape_outputs – the indices of the module outputs that must have full shape.
op_version – opset version
kwargs – additional parameters
- __init__(X, non_differentiable_outputs=None, full_shape_outputs=None, op_version=None, **kwargs)#
- Parameters:
X – array or OnnxOperatorMixin
non_differentiable_outputs – the indices of the module outputs that do not have a gradient.
full_shape_outputs – the indices of the module outputs that must have full shape.
op_version – opset version
kwargs – additional parameters
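A minimal construction sketch, assuming the usual convention of passing input names as strings (the variable name 'loss', the attribute values and the opset number are purely illustrative):

```python
from mlprodict.onnx_conv.onnx_ops.onnx_gradient_op import OnnxYieldOp

# Yield a hypothetical graph output named 'loss': output 0 must keep its
# full shape and every output is considered differentiable.
yield_node = OnnxYieldOp(
    'loss',
    non_differentiable_outputs=[],
    full_shape_outputs=[0],
    op_version=15)
```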