# Relu

## Relu - 14

Version

• name: Relu (GitHub)

• domain: main

• since_version: 14

• function: True

• support_level: SupportType.COMMON

• shape inference: True

This version of the operator has been available since version 14.

Summary

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

Inputs

• X (heterogeneous) - T: Input tensor

Outputs

• Y (heterogeneous) - T: Output tensor

Type Constraints

• T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8) ): Constrain input and output types to signed numeric tensors.

Examples

default

```python
import numpy as np
import onnx

# expect() is provided by the ONNX backend test utilities
# (onnx.backend.test.case.node).
node = onnx.helper.make_node(
    "Relu",
    inputs=["x"],
    outputs=["y"],
)
x = np.random.randn(3, 4, 5).astype(np.float32)
y = np.clip(x, 0, np.inf)

expect(node, inputs=[x], outputs=[y], name="test_relu")
```
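Since opset 14, the `T` constraint also admits signed integer tensors. The following is a minimal numpy-only sketch of the expected elementwise behavior on `int32` input; the `relu_reference` helper is hypothetical and only illustrates the reference computation, not the ONNX test harness:

```python
import numpy as np

def relu_reference(x: np.ndarray) -> np.ndarray:
    """Elementwise y = max(0, x); the input dtype is preserved."""
    return np.maximum(x, np.zeros((), dtype=x.dtype))

x = np.array([[-3, -1, 0], [2, 5, -7]], dtype=np.int32)
y = relu_reference(x)
print(y.tolist())  # [[0, 0, 0], [2, 5, 0]]
print(y.dtype)     # int32
```

Note that `np.clip(x, 0, np.inf)`, as used in the float example above, would promote integer inputs to float, so `np.maximum` against a zero of the same dtype is the safer reference for the integer case.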

Differences

Relu-14 differs from Relu-13 only in its type constraints: `T` gains `tensor(int16)`, `tensor(int32)`, `tensor(int64)`, and `tensor(int8)`, and the constraint description changes from "Constrain input and output types to float tensors." to "Constrain input and output types to signed numeric tensors."

## Relu - 13

Version

• name: Relu (GitHub)

• domain: main

• since_version: 13

• function: False

• support_level: SupportType.COMMON

• shape inference: True

This version of the operator has been available since version 13.

Summary

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

Inputs

• X (heterogeneous) - T: Input tensor

Outputs

• Y (heterogeneous) - T: Output tensor

Type Constraints

• T in ( tensor(bfloat16), tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.

Differences

Relu-13 differs from Relu-6 only in its type constraints: `T` gains `tensor(bfloat16)` alongside `tensor(double)`, `tensor(float)`, and `tensor(float16)`.

## Relu - 6

Version

• name: Relu (GitHub)

• domain: main

• since_version: 6

• function: False

• support_level: SupportType.COMMON

• shape inference: True

This version of the operator has been available since version 6.

Summary

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

Inputs

• X (heterogeneous) - T: Input tensor

Outputs

• Y (heterogeneous) - T: Output tensor

Type Constraints

• T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.

Differences

Relu-6 differs from Relu-1 only in dropping the **Attributes** section: the `consumed_inputs` attribute (a legacy optimization attribute) is removed. Inputs, outputs, and type constraints are unchanged.

## Relu - 1

Version

• name: Relu (GitHub)

• domain: main

• since_version: 1

• function: False

• support_level: SupportType.COMMON

• shape inference: False

This version of the operator has been available since version 1.

Summary

Relu takes one input data (Tensor<T>) and produces one output data (Tensor<T>) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

Attributes

• consumed_inputs: legacy optimization attribute.

Inputs

• X (heterogeneous) - T: Input tensor

Outputs

• Y (heterogeneous) - T: Output tensor

Type Constraints

• T in ( tensor(double), tensor(float), tensor(float16) ): Constrain input and output types to float tensors.