.. _l-onnx-doc-If:

==
If
==

.. contents::
    :local:

.. _l-onnx-op-if-16:

If - 16
=======

**Version**

* **name**: `If (GitHub) `_
* **domain**: **main**
* **since_version**: **16**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 16**.

**Summary**

If conditional

**Attributes**

* **else_branch** (required):
  Graph to run if condition is false. Has N outputs: values you wish
  to be live-out to the enclosing scope. The number of outputs must
  match the number of outputs in the then_branch.
* **then_branch** (required):
  Graph to run if condition is true. Has N outputs: values you wish
  to be live-out to the enclosing scope. The number of outputs must
  match the number of outputs in the else_branch.

**Inputs**

* **cond** (heterogeneous) - **B**:
  Condition for the if

**Outputs**

Between 1 and 2147483647 outputs.

* **outputs** (variadic) - **V**:
  Values that are live-out to the enclosing scope. The return values
  in the `then_branch` and `else_branch` must be of the same data
  type. The `then_branch` and `else_branch` may produce tensors with
  the same element type and different shapes. If corresponding
  outputs from the then-branch and the else-branch have static shapes
  S1 and S2, then the shape of the corresponding output variable of
  the If node (if present) must be compatible with both S1 and S2, as
  it represents the union of both possible shapes. For example, if in
  a model file the first output of `then_branch` is typed as a float
  tensor with shape [2] and the first output of `else_branch` is
  another float tensor with shape [3], If's first output should have
  (a) no shape set, or (b) a shape of rank 1 with neither `dim_value`
  nor `dim_param` set, or (c) a shape of rank 1 with a unique
  `dim_param`. In contrast, the first output cannot have the shape
  [2], since [2] and [3] are not compatible.
**Type Constraints**

* **V** in (
  optional(seq(tensor(bfloat16))),
  optional(seq(tensor(bool))),
  optional(seq(tensor(complex128))),
  optional(seq(tensor(complex64))),
  optional(seq(tensor(double))),
  optional(seq(tensor(float))),
  optional(seq(tensor(float16))),
  optional(seq(tensor(int16))),
  optional(seq(tensor(int32))),
  optional(seq(tensor(int64))),
  optional(seq(tensor(int8))),
  optional(seq(tensor(string))),
  optional(seq(tensor(uint16))),
  optional(seq(tensor(uint32))),
  optional(seq(tensor(uint64))),
  optional(seq(tensor(uint8))),
  optional(tensor(bfloat16)),
  optional(tensor(bool)),
  optional(tensor(complex128)),
  optional(tensor(complex64)),
  optional(tensor(double)),
  optional(tensor(float)),
  optional(tensor(float16)),
  optional(tensor(int16)),
  optional(tensor(int32)),
  optional(tensor(int64)),
  optional(tensor(int8)),
  optional(tensor(string)),
  optional(tensor(uint16)),
  optional(tensor(uint32)),
  optional(tensor(uint64)),
  optional(tensor(uint8)),
  seq(tensor(bfloat16)),
  seq(tensor(bool)),
  seq(tensor(complex128)),
  seq(tensor(complex64)),
  seq(tensor(double)),
  seq(tensor(float)),
  seq(tensor(float16)),
  seq(tensor(int16)),
  seq(tensor(int32)),
  seq(tensor(int64)),
  seq(tensor(int8)),
  seq(tensor(string)),
  seq(tensor(uint16)),
  seq(tensor(uint32)),
  seq(tensor(uint64)),
  seq(tensor(uint8)),
  tensor(bfloat16),
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  All Tensor, Sequence(Tensor), Optional(Tensor), and
  Optional(Sequence(Tensor)) types
* **B** in (
  tensor(bool)
  ):
  Only bool

**Examples**

**_if**

::

    # Given a bool scalar input cond,
    # return constant tensor x if cond is True, otherwise return constant tensor y.
    then_out = onnx.helper.make_tensor_value_info(
        "then_out", onnx.TensorProto.FLOAT, [5]
    )
    else_out = onnx.helper.make_tensor_value_info(
        "else_out", onnx.TensorProto.FLOAT, [5]
    )

    x = np.array([1, 2, 3, 4, 5]).astype(np.float32)
    y = np.array([5, 4, 3, 2, 1]).astype(np.float32)

    then_const_node = onnx.helper.make_node(
        "Constant",
        inputs=[],
        outputs=["then_out"],
        value=onnx.numpy_helper.from_array(x),
    )
    else_const_node = onnx.helper.make_node(
        "Constant",
        inputs=[],
        outputs=["else_out"],
        value=onnx.numpy_helper.from_array(y),
    )

    then_body = onnx.helper.make_graph(
        [then_const_node], "then_body", [], [then_out]
    )
    else_body = onnx.helper.make_graph(
        [else_const_node], "else_body", [], [else_out]
    )

    if_node = onnx.helper.make_node(
        "If",
        inputs=["cond"],
        outputs=["res"],
        then_branch=then_body,
        else_branch=else_body,
    )

    cond = np.array(1).astype(bool)
    res = x if cond else y
    expect(
        if_node,
        inputs=[cond],
        outputs=[res],
        name="test_if",
        opset_imports=[onnx.helper.make_opsetid("", 11)],
    )

**_if_seq**

::

    # Given a bool scalar input cond,
    # return constant sequence x if cond is True, otherwise return constant sequence y.
    then_out = onnx.helper.make_tensor_sequence_value_info(
        "then_out", onnx.TensorProto.FLOAT, shape=[5]
    )
    else_out = onnx.helper.make_tensor_sequence_value_info(
        "else_out", onnx.TensorProto.FLOAT, shape=[5]
    )

    x = [np.array([1, 2, 3, 4, 5]).astype(np.float32)]
    y = [np.array([5, 4, 3, 2, 1]).astype(np.float32)]

    then_const_node = onnx.helper.make_node(
        "Constant",
        inputs=[],
        outputs=["x"],
        value=onnx.numpy_helper.from_array(x[0]),
    )
    then_seq_node = onnx.helper.make_node(
        "SequenceConstruct", inputs=["x"], outputs=["then_out"]
    )
    else_const_node = onnx.helper.make_node(
        "Constant",
        inputs=[],
        outputs=["y"],
        value=onnx.numpy_helper.from_array(y[0]),
    )
    else_seq_node = onnx.helper.make_node(
        "SequenceConstruct", inputs=["y"], outputs=["else_out"]
    )

    then_body = onnx.helper.make_graph(
        [then_const_node, then_seq_node], "then_body", [], [then_out]
    )
    else_body = onnx.helper.make_graph(
        [else_const_node, else_seq_node], "else_body", [], [else_out]
    )

    if_node = onnx.helper.make_node(
        "If",
        inputs=["cond"],
        outputs=["res"],
        then_branch=then_body,
        else_branch=else_body,
    )

    cond = np.array(1).astype(bool)
    res = x if cond else y
    expect(
        if_node,
        inputs=[cond],
        outputs=[res],
        name="test_if_seq",
        opset_imports=[onnx.helper.make_opsetid("", 13)],
    )

**_if_optional**

::

    # Given a bool scalar input cond, return an empty optional sequence of
    # tensor if cond is True, otherwise return an optional sequence with
    # value x (the input optional sequence).
    ten_in_tp = onnx.helper.make_tensor_type_proto(
        onnx.TensorProto.FLOAT, shape=[5]
    )
    seq_in_tp = onnx.helper.make_sequence_type_proto(ten_in_tp)

    then_out_tensor_tp = onnx.helper.make_tensor_type_proto(
        onnx.TensorProto.FLOAT, shape=[5]
    )
    then_out_seq_tp = onnx.helper.make_sequence_type_proto(then_out_tensor_tp)
    then_out_opt_tp = onnx.helper.make_optional_type_proto(then_out_seq_tp)
    then_out = onnx.helper.make_value_info("optional_empty", then_out_opt_tp)

    else_out_tensor_tp = onnx.helper.make_tensor_type_proto(
        onnx.TensorProto.FLOAT, shape=[5]
    )
    else_out_seq_tp = onnx.helper.make_sequence_type_proto(else_out_tensor_tp)
    else_out_opt_tp = onnx.helper.make_optional_type_proto(else_out_seq_tp)
    else_out = onnx.helper.make_value_info("else_opt", else_out_opt_tp)

    x = [np.array([1, 2, 3, 4, 5]).astype(np.float32)]
    cond = np.array(0).astype(bool)
    res = compute_if_outputs(x, cond)

    opt_empty_in = onnx.helper.make_node(
        "Optional", inputs=[], outputs=["optional_empty"], type=seq_in_tp
    )
    then_body = onnx.helper.make_graph(
        [opt_empty_in], "then_body", [], [then_out]
    )

    else_const_node = onnx.helper.make_node(
        "Constant",
        inputs=[],
        outputs=["x"],
        value=onnx.numpy_helper.from_array(x[0]),
    )
    else_seq_node = onnx.helper.make_node(
        "SequenceConstruct", inputs=["x"], outputs=["else_seq"]
    )
    else_optional_seq_node = onnx.helper.make_node(
        "Optional", inputs=["else_seq"], outputs=["else_opt"]
    )
    else_body = onnx.helper.make_graph(
        [else_const_node, else_seq_node, else_optional_seq_node],
        "else_body",
        [],
        [else_out],
    )

    if_node = onnx.helper.make_node(
        "If",
        inputs=["cond"],
        outputs=["sequence"],
        then_branch=then_body,
        else_branch=else_body,
    )

    expect(
        if_node,
        inputs=[cond],
        outputs=[res],
        name="test_if_opt",
        output_type_protos=[else_out_opt_tp],
        opset_imports=[onnx.helper.make_opsetid("", 16)],
    )

**Differences**
Compared with the previous version, the type constraint **V**
additionally accepts ``tensor(bfloat16)`` and ``seq(tensor(bfloat16))``,
as well as the ``optional(tensor(...))`` and
``optional(seq(tensor(...)))`` variants of every supported element
type; its description changed from "All Tensor and Sequence types" to
"All Tensor, Sequence(Tensor), Optional(Tensor), and
Optional(Sequence(Tensor)) types". The summary, attributes, inputs, and
outputs are unchanged.
.. _l-onnx-op-if-13:

If - 13
=======

**Version**

* **name**: `If (GitHub) `_
* **domain**: **main**
* **since_version**: **13**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 13**.

**Summary**

If conditional

**Attributes**

* **else_branch** (required):
  Graph to run if condition is false. Has N outputs: values you wish
  to be live-out to the enclosing scope. The number of outputs must
  match the number of outputs in the then_branch.
* **then_branch** (required):
  Graph to run if condition is true. Has N outputs: values you wish
  to be live-out to the enclosing scope. The number of outputs must
  match the number of outputs in the else_branch.

**Inputs**

* **cond** (heterogeneous) - **B**:
  Condition for the if

**Outputs**

Between 1 and 2147483647 outputs.

* **outputs** (variadic) - **V**:
  Values that are live-out to the enclosing scope. The return values
  in the `then_branch` and `else_branch` must be of the same data
  type. The `then_branch` and `else_branch` may produce tensors with
  the same element type and different shapes. If corresponding
  outputs from the then-branch and the else-branch have static shapes
  S1 and S2, then the shape of the corresponding output variable of
  the If node (if present) must be compatible with both S1 and S2, as
  it represents the union of both possible shapes. For example, if in
  a model file the first output of `then_branch` is typed as a float
  tensor with shape [2] and the first output of `else_branch` is
  another float tensor with shape [3], If's first output should have
  (a) no shape set, or (b) a shape of rank 1 with neither `dim_value`
  nor `dim_param` set, or (c) a shape of rank 1 with a unique
  `dim_param`. In contrast, the first output cannot have the shape
  [2], since [2] and [3] are not compatible.
**Type Constraints**

* **V** in (
  seq(tensor(bool)),
  seq(tensor(complex128)),
  seq(tensor(complex64)),
  seq(tensor(double)),
  seq(tensor(float)),
  seq(tensor(float16)),
  seq(tensor(int16)),
  seq(tensor(int32)),
  seq(tensor(int64)),
  seq(tensor(int8)),
  seq(tensor(string)),
  seq(tensor(uint16)),
  seq(tensor(uint32)),
  seq(tensor(uint64)),
  seq(tensor(uint8)),
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  All Tensor and Sequence types
* **B** in (
  tensor(bool)
  ):
  Only bool

**Differences**
Compared with the previous version, the type constraint **V**
additionally accepts the ``seq(tensor(...))`` variant of every
supported element type; its description changed from "All Tensor types"
to "All Tensor and Sequence types". The summary, attributes, inputs,
and outputs are unchanged.
.. _l-onnx-op-if-11:

If - 11
=======

**Version**

* **name**: `If (GitHub) `_
* **domain**: **main**
* **since_version**: **11**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 11**.

**Summary**

If conditional

**Attributes**

* **else_branch** (required):
  Graph to run if condition is false. Has N outputs: values you wish
  to be live-out to the enclosing scope. The number of outputs must
  match the number of outputs in the then_branch.
* **then_branch** (required):
  Graph to run if condition is true. Has N outputs: values you wish
  to be live-out to the enclosing scope. The number of outputs must
  match the number of outputs in the else_branch.

**Inputs**

* **cond** (heterogeneous) - **B**:
  Condition for the if

**Outputs**

Between 1 and 2147483647 outputs.

* **outputs** (variadic) - **V**:
  Values that are live-out to the enclosing scope. The return values
  in the `then_branch` and `else_branch` must be of the same data
  type. The `then_branch` and `else_branch` may produce tensors with
  the same element type and different shapes. If corresponding
  outputs from the then-branch and the else-branch have static shapes
  S1 and S2, then the shape of the corresponding output variable of
  the If node (if present) must be compatible with both S1 and S2, as
  it represents the union of both possible shapes. For example, if in
  a model file the first output of `then_branch` is typed as a float
  tensor with shape [2] and the first output of `else_branch` is
  another float tensor with shape [3], If's first output should have
  (a) no shape set, or (b) a shape of rank 1 with neither `dim_value`
  nor `dim_param` set, or (c) a shape of rank 1 with a unique
  `dim_param`. In contrast, the first output cannot have the shape
  [2], since [2] and [3] are not compatible.
**Type Constraints**

* **V** in (
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  All Tensor types
* **B** in (
  tensor(bool)
  ):
  Only bool

**Differences**
Compared with the previous version, only the description of the
variadic **outputs** changed: the return values of `then_branch` and
`else_branch` were previously required to be of the same shape and same
data type; from this version on they must only share the same data
type and may have different shapes, subject to the shape-compatibility
rules described in the summary above. The type constraints are
unchanged.
.. _l-onnx-op-if-1:

If - 1
======

**Version**

* **name**: `If (GitHub) `_
* **domain**: **main**
* **since_version**: **1**
* **function**: False
* **support_level**: SupportType.COMMON
* **shape inference**: True

This version of the operator has been available **since version 1**.

**Summary**

If conditional

**Attributes**

* **else_branch** (required):
  Graph to run if condition is false. Has N outputs: values you wish
  to be live-out to the enclosing scope. The number of outputs must
  match the number of outputs in the then_branch.
* **then_branch** (required):
  Graph to run if condition is true. Has N outputs: values you wish
  to be live-out to the enclosing scope. The number of outputs must
  match the number of outputs in the else_branch.

**Inputs**

* **cond** (heterogeneous) - **B**:
  Condition for the if

**Outputs**

Between 1 and 2147483647 outputs.

* **outputs** (variadic) - **V**:
  Values that are live-out to the enclosing scope. The return values
  in the `then_branch` and `else_branch` must be of the same shape
  and same data type.

**Type Constraints**

* **V** in (
  tensor(bool),
  tensor(complex128),
  tensor(complex64),
  tensor(double),
  tensor(float),
  tensor(float16),
  tensor(int16),
  tensor(int32),
  tensor(int64),
  tensor(int8),
  tensor(string),
  tensor(uint16),
  tensor(uint32),
  tensor(uint64),
  tensor(uint8)
  ):
  All Tensor types
* **B** in (
  tensor(bool)
  ):
  Only bool