npx, jit and eager mode#
API#
- onnx_array_api.npx.npx_core_api.var(*args: Sequence[Var], **kwargs: Dict[str, Any]) Var [source]#
Wraps a call to the construction of class Var.
- onnx_array_api.npx.npx_core_api.cst(*args, **kwargs)[source]#
Wraps a call to the construction of class Cst.
- onnx_array_api.npx.npx_jit_eager.eager_onnx(*args, **kwargs)[source]#
Returns an instance of EagerOnnx.
- onnx_array_api.npx.npx_core_api.make_tuple(n_elements_or_first_variable: int | Var, *args: Sequence[Var], **kwargs: Dict[str, Any]) Var [source]#
Wraps a call to the construction of class Tuple. n_elements_or_first_variable is either the number of elements in the tuple or the first variable; in the latter case, the number of elements is inferred from the detected arguments.
- onnx_array_api.npx.npx_core_api.tuple_var(*args: Sequence[Var]) Var [source]#
Ties many results together before they are returned by a function.
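To picture what these builders do, here is a miniature, pure-Python sketch of a lazy expression graph. MiniVar, MiniCst, and the mini_* helpers below are illustrative stand-ins, not the library's actual Var, Cst, var, cst, or tuple_var:

```python
from typing import Any

class MiniVar:
    """Illustrative stand-in for Var: a node in a lazy expression graph."""
    def __init__(self, *inputs: "MiniVar", op: str = "Identity", **kwargs: Any):
        self.inputs = list(inputs)
        self.op = op
        self.kwargs = kwargs

class MiniCst(MiniVar):
    """Illustrative stand-in for Cst: a constant leaf node."""
    def __init__(self, value: Any):
        super().__init__(op="Constant")
        self.value = value

def mini_var(*args: MiniVar, **kwargs: Any) -> MiniVar:
    # var(...) simply wraps the construction of a Var node
    return MiniVar(*args, **kwargs)

def mini_cst(value: Any) -> MiniCst:
    # cst(...) wraps the construction of a constant node
    return MiniCst(value)

def mini_tuple_var(*args: MiniVar) -> MiniVar:
    # tuple_var ties several results together into one returned node
    return MiniVar(*args, op="SequenceConstruct")

x = mini_var(op="Input")
two = mini_cst(2)
y = mini_var(x, two, op="Add")   # builds Add(x, 2) without executing it
out = mini_tuple_var(x, y)       # ties both results into a single output
```

Nothing is executed while the graph is built; execution only happens once the whole function has been converted, which is what the JIT classes below take care of.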
JIT, Eager#
- class onnx_array_api.npx.npx_jit_eager.JitEager(f: Callable, tensor_class: type, target_opsets: Dict[str, int] | None = None, output_types: Dict[Any, TensorType] | None = None, ir_version: int | None = None)[source]#
Converts a function into an executable function based on a backend. The new function is converted to onnx on the first call.
- Parameters:
f – function to convert
tensor_class – wrapper around a class defining the backend; if None, it defaults to onnx.reference.ReferenceEvaluator
target_opsets – dictionary {opset: version}
output_types – shape and type inference cannot run before the onnx graph is created, yet output types are needed to build it; if not specified, the class assumes there is only one output with the same type as the input
ir_version – defines the IR version to use
- property available_versions#
Returns the keys used to distinguish between the jitted versions.
- cast_from_tensor_class(results: List[EagerTensor]) Any | Tuple[Any] [source]#
Unwraps results from self.tensor_class back into python types.
- Parameters:
results – results produced by the backend
- Returns:
unwrapped results
- cast_to_tensor_class(inputs: List[Any]) List[EagerTensor] [source]#
Wraps input into self.tensor_class.
- Parameters:
inputs – python inputs (including numpy)
- Returns:
wrapped inputs
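A hedged sketch of what the two cast methods do, using numpy and a hypothetical WrappedTensor class in place of self.tensor_class (the library's actual tensor classes differ):

```python
from typing import Any, List, Tuple, Union
import numpy as np

class WrappedTensor:
    """Hypothetical stand-in for the backend tensor class."""
    def __init__(self, value: np.ndarray):
        self.value = value

def cast_to_tensor_class(inputs: List[Any]) -> List[WrappedTensor]:
    # wraps python inputs (including numpy arrays) into the tensor class
    return [WrappedTensor(np.asarray(i)) for i in inputs]

def cast_from_tensor_class(
    results: List[WrappedTensor],
) -> Union[Any, Tuple[Any, ...]]:
    # unwraps backend results back to python types;
    # a single result comes back as-is, several come back as a tuple
    values = [r.value for r in results]
    return values[0] if len(values) == 1 else tuple(values)

wrapped = cast_to_tensor_class([np.array([1.0, 2.0]), 3])
unwrapped = cast_from_tensor_class(wrapped)
```

The two methods bracket every call: inputs are wrapped before the backend runs and results are unwrapped before they are handed back to the caller.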
- get_onnx(key: int | None = None)[source]#
Returns the jitted function associated to one key. If key is None, the method assumes there is only one available jitted function and returns it.
- jit_call(*values, **kwargs)[source]#
The method builds a key which identifies the signature (input types + parameter values). It then checks whether the function was already converted into ONNX during a previous call. If not, it converts it and caches the result indexed by that key. Finally, it executes the onnx graph and returns the result, or the results in a tuple if there are several.
- static make_key(*values, **kwargs)[source]#
Builds a key based on the input types and parameters. Every set of inputs or parameters producing the same key (or signature) must use the same compiled ONNX.
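One plausible way to build such a key, sketched in pure Python (the real make_key may use different components; this only illustrates the idea that dtype, rank, and parameter values all participate in the signature):

```python
from typing import Any, Tuple
import numpy as np

def make_key(*values: Any, **kwargs: Any) -> Tuple[Any, ...]:
    # the key captures everything that forces a new ONNX conversion:
    # input dtypes and ranks, plus the parameter values
    parts = []
    for v in values:
        if isinstance(v, np.ndarray):
            parts.append(("tensor", v.dtype.str, v.ndim))
        else:
            parts.append(("value", type(v).__name__, v))
    for name in sorted(kwargs):
        parts.append(("kwarg", name, kwargs[name]))
    return tuple(parts)

k1 = make_key(np.zeros((2, 3)), axis=1)
k2 = make_key(np.ones((5, 7)), axis=1)   # same dtypes/ranks/params -> same key
k3 = make_key(np.zeros((2, 3)), axis=0)  # different parameter -> new key
```

Note that shapes with the same rank map to the same key here, so one compiled ONNX graph serves many input sizes; only a change of dtype, rank, or parameter triggers a new conversion.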
- move_input_to_kwargs(values: List[Any], kwargs: Dict[str, Any]) Tuple[List[Any], Dict[str, Any]] [source]#
Mandatory parameters are usually not named. Some inputs must be moved to the parameter list before calling ONNX.
- Parameters:
values – list of inputs
kwargs – dictionary of arguments
- Returns:
new values, new arguments
- property n_versions#
Returns the number of jitted functions. There is one per type and number of dimensions.
- class onnx_array_api.npx.npx_jit_eager.JitOnnx(f: Callable, tensor_class: type | None = None, target_opsets: Dict[str, int] | None = None, output_types: Dict[Any, TensorType] | None = None, ir_version: int | None = None)[source]#
Converts a function into an executable function based on a backend. The new function is converted to onnx on the first call.
- Parameters:
f – function to convert
tensor_class – wrapper around a class defining the backend; if None, it defaults to onnx.reference.ReferenceEvaluator
target_opsets – dictionary {opset: version}
output_types – shape and type inference cannot run before the onnx graph is created, yet output types are needed to build it; if not specified, the class assumes there is only one output with the same type as the input
ir_version – defines the IR version to use
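The convert-on-first-call behaviour shared by JitEager and JitOnnx can be sketched as follows. MiniJit is an illustrative cache, not the library's implementation; the "conversion" step is a stand-in for building the onnx graph:

```python
from typing import Any, Callable, Dict, Tuple
import numpy as np

class MiniJit:
    """Illustrative sketch: convert f on the first call per signature, then reuse."""

    def __init__(self, f: Callable):
        self.f = f
        self.versions: Dict[Tuple, Callable] = {}  # one entry per signature
        self.n_conversions = 0

    def _make_key(self, *values: np.ndarray) -> Tuple:
        # dtype and rank identify the signature (see make_key above)
        return tuple((v.dtype.str, v.ndim) for v in values)

    def __call__(self, *values: np.ndarray) -> Any:
        key = self._make_key(*values)
        if key not in self.versions:
            # stand-in for the onnx conversion, done once per key
            self.n_conversions += 1
            self.versions[key] = self.f
        return self.versions[key](*values)

jitted = MiniJit(lambda x, y: np.abs(x - y).sum())
r1 = jitted(np.array([1.0, 2.0]), np.array([2.0, 4.0]))  # triggers conversion
r2 = jitted(np.array([5.0, 1.0]), np.array([0.0, 0.0]))  # reuses cached version
```

This is also why n_versions reports one jitted function per type and number of dimensions: each distinct key in the cache corresponds to one converted graph.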