module testing.test_utils.utils_backend_common_compare#

Short summary#

module mlprodict.testing.test_utils.utils_backend_common_compare

Inspired by sklearn-onnx; handles two backends.

source on GitHub

Functions#

compare_runtime_session – The function compares the expected output (computed with the model before being converted to ONNX) and the ONNX output …

Documentation#

Inspired by sklearn-onnx; handles two backends.

source on GitHub

mlprodict.testing.test_utils.utils_backend_common_compare.compare_runtime_session(cls_session, test, decimal=5, options=None, verbose=False, context=None, comparable_outputs=None, intermediate_steps=False, classes=None, disable_optimisation=False)#

The function compares the expected output (computed with the model before being converted to ONNX) and the ONNX output produced with module onnxruntime or mlprodict.

Parameters:
  • cls_session – inference session class (such as OnnxInference)

  • test – dictionary with the following keys:

    - onnx: onnx model (filename or object)

    - expected: expected output (filename pkl or object)

    - data: input data (filename pkl or object)

  • decimal – precision of the comparison (number of decimal places)

  • options – comparison options

  • context – specifies custom operators

  • verbose – if an error occurs, the function prints more information to the standard output

  • comparable_outputs – compare only these outputs

  • intermediate_steps – displays intermediate steps in case of an error

  • classes – class names (if option ‘nocl’ is used)

  • disable_optimisation – disables optimisations the runtime may apply

Returns:

tuple (output, lambda function to run the predictions)

The function raises an error if the comparison fails.
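
A minimal usage sketch (not part of the original documentation): it assumes the Python runtime class OnnxInference from mlprodict.onnxrt as the session class, and the filenames model.onnx, data.pkl and expected.pkl are hypothetical placeholders for artifacts saved before the conversion to ONNX.

```python
from mlprodict.onnxrt import OnnxInference
from mlprodict.testing.test_utils.utils_backend_common_compare import (
    compare_runtime_session)

# Hypothetical test artifacts: each key may hold a filename or an
# in-memory object, as described in the parameter list above.
test = dict(
    onnx="model.onnx",        # onnx model (filename or object)
    data="data.pkl",          # input data (filename pkl or object)
    expected="expected.pkl",  # expected output (filename pkl or object)
)

# Runs the model with the given session class and compares the outputs
# with the expected values up to 5 decimals; raises an error on mismatch.
output, run = compare_runtime_session(
    OnnxInference, test, decimal=5, verbose=True)
```

Note that a session class, not an instance, is passed as the first argument: the function builds the session itself from the onnx key of test.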

source on GitHub