# module onnxrt.ops_cpu.op_celu

## Short summary

module mlprodict.onnxrt.ops_cpu.op_celu

Runtime operator.
## Classes

class | truncated documentation
---|---
Celu | Continuously Differentiable Exponential Linear Units: perform the linear unit element-wise on the input tensor …
## Functions

function | truncated documentation
---|---
pycelu | Computes function celu(x).
## Properties

property | truncated documentation
---|---
| Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …
| Returns the list of modified parameters.
| Returns the list of optional arguments.
| Returns the list of optional arguments.
| Returns all parameters in a dictionary.
## Methods

method | truncated documentation
---|---
## Documentation

Runtime operator.
- class mlprodict.onnxrt.ops_cpu.op_celu.Celu(onnx_node, desc=None, **options)

  Bases: OpRunUnaryNum

  Continuously Differentiable Exponential Linear Units: perform the linear unit element-wise on the input tensor X using the formula:

  `max(0, x) + min(0, alpha * (exp(x / alpha) - 1))`
  Attributes

  - alpha: the alpha value in the Celu formula, which controls the shape of the unit. Default value is 1.0 (FLOAT).
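The formula above can be sketched directly with NumPy. This is an illustrative implementation of the Celu formula, not mlprodict's actual code:

```python
import numpy as np

def celu(x, alpha=1.0):
    # max(0, x) + min(0, alpha * (exp(x / alpha) - 1));
    # np.expm1 computes exp(v) - 1 with better precision near zero
    return np.maximum(0, x) + np.minimum(0, alpha * np.expm1(x / alpha))

x = np.array([-2.0, 0.0, 3.0], dtype=np.float32)
print(celu(x))  # negative inputs are smoothly saturated toward -alpha
```

For `x >= 0` the second term is zero and the function is the identity; for negative inputs it decays smoothly toward `-alpha`.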
  Inputs

  - X (heterogeneous) - T: Input tensor

  Outputs

  - Y (heterogeneous) - T: Output tensor

  Type Constraints

  - T in ( tensor(float) ): Constrain input and output types to float32 tensors.
  Version

  Onnx name: Celu

  This version of the operator has been available since version 12.

  Runtime implementation: Celu
- __init__(onnx_node, desc=None, **options)
- _run(x, attributes=None, verbose=0, fLOG=None)

  Should be overwritten.

- _run_inplace(x)
- to_python(inputs)

  Returns a python code equivalent to this operator.

  - Parameters: inputs – inputs names
  - Returns: imports, python code, both as strings
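An (imports, code) pair of strings like the one to_python returns can be executed to obtain a working function. The strings below are hypothetical stand-ins for what generated code might look like, not mlprodict's actual output:

```python
# Hypothetical (imports, code) strings; the real ones would come from to_python.
imports = "import numpy"
code = (
    "def pycelu(x, alpha=1.0):\n"
    "    return numpy.maximum(0, x) + numpy.minimum(0, alpha * numpy.expm1(x / alpha))\n"
)

namespace = {}
exec(imports, namespace)   # make the required modules available
exec(code, namespace)      # define the generated function
print(namespace["pycelu"](1.5))  # prints 1.5
```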
- mlprodict.onnxrt.ops_cpu.op_celu._vcelu1(x, alpha=1.0)
- mlprodict.onnxrt.ops_cpu.op_celu.pycelu(x, alpha=1.0)

  Computes function celu(x).
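For scalar inputs the Celu formula reduces to a simple branch. The helper below is a plain-Python sketch based only on the formula and signature documented above, not the actual pycelu source:

```python
import math

def pycelu_sketch(x, alpha=1.0):
    # celu(x) = x when x >= 0, otherwise alpha * (exp(x / alpha) - 1);
    # the branch form is equivalent to max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    if x >= 0:
        return x
    return alpha * (math.exp(x / alpha) - 1.0)

print(pycelu_sketch(2.0))   # prints 2.0
print(pycelu_sketch(-1.0))  # about -0.632, i.e. exp(-1) - 1 for alpha=1.0
```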