module cli.replay

Short summary

module mlprodict.cli.replay

Command line tools for the validation of prediction runtimes.


Functions

benchmark_replay – The command reruns a benchmark if models were stored by the command line validate_runtime.

Documentation


mlprodict.cli.replay.benchmark_replay(folder, runtime='python', time_kwargs=None, skip_long_test=True, time_kwargs_fact=None, time_limit=4, out=None, verbose=1, fLOG=<built-in function print>)

The command reruns a benchmark if models were stored by the command line validate_runtime.

Parameters:
  • folder – where to find pickled files

  • runtime – runtimes, comma separated list

  • verbose – integer from 0 (None) to 2 (full verbose)

  • out – output raw results into this file (excel format)

  • time_kwargs – a dictionary which defines the number of rows and the parameters number and repeat when benchmarking a model; the value must follow the JSON format

  • skip_long_test – skips tests for high values of N if they seem too long

  • time_kwargs_fact – to multiply number and repeat in time_kwargs depending on the model (see _multiply_time_kwargs)

  • time_limit – stops benchmarking once this time limit is reached

  • fLOG – logging function

Replays benchmarks on converted models previously stored by validate_runtime.

Example:

python -m mlprodict benchmark_replay --folder dumped --out bench_results.xlsx
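The same replay can also be launched from Python. A minimal sketch, assuming the import path follows the module name documented above; the folder dumped is a hypothetical location previously filled with pickled models by validate_runtime:

from mlprodict.cli.replay import benchmark_replay

# time_kwargs is passed as a JSON string, as the documentation above
# requires; "dumped" is a hypothetical folder of pickled models.
benchmark_replay(
    folder="dumped",                  # where to find pickled files
    runtime="python",                 # comma-separated list of runtimes
    time_kwargs='{"1": {"number": 10, "repeat": 10}}',
    time_limit=4,                     # stop benchmarking after this limit
    out="bench_results.xlsx",         # raw results in Excel format
    verbose=1,
)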

Parameter --time_kwargs may be used to reduce or increase the benchmark precision. The following value tells the function to run benchmarks on datasets of 1 or 10 rows and, for every measurement, to repeat repeat times a batch of number predictions on one row; the total time is then divided by number × repeat. Parameter --time_kwargs_fact may be used to increase these numbers for some specific models: the value 'lin' multiplies number by 10 when the model is linear.

-t "{\"1\":{\"number\":10,\"repeat\":10},\"10\":{\"number\":5,\"repeat\":5}}"

<<<

python -m mlprodict benchmark_replay --help

>>>

usage: benchmark_replay [-h] [-f FOLDER] [-r RUNTIME] [-t TIME_KWARGS]
                        [-s SKIP_LONG_TEST] [-ti TIME_KWARGS_FACT]
                        [-tim TIME_LIMIT] [--out OUT] [-v VERBOSE]

The command reruns a benchmark if models were stored by command line
`validate_runtime`.

optional arguments:
  -h, --help            show this help message and exit
  -f FOLDER, --folder FOLDER
                        where to find pickled files (default: None)
  -r RUNTIME, --runtime RUNTIME
                        runtimes, comma separated list (default: python)
  -t TIME_KWARGS, --time_kwargs TIME_KWARGS
                        a dictionary which defines the number of rows and the
                        parameter *number* and *repeat* when benchmarking a
                        model, the value must follow `json` format (default: )
  -s SKIP_LONG_TEST, --skip_long_test SKIP_LONG_TEST
                        skips tests for high values of N if they seem too long
                        (default: True)
  -ti TIME_KWARGS_FACT, --time_kwargs_fact TIME_KWARGS_FACT
                        to multiply number and repeat in *time_kwargs*
                        depending on the model (see :func:`_multiply_time_kwargs
                        <mlprodict.onnxrt.validate.validate_helper._multiply_time_kwargs>`)
                        (default: )
  -tim TIME_LIMIT, --time_limit TIME_LIMIT
                        to stop benchmarking after this limit of time
                        (default: 4)
  --out OUT             output raw results into this file (excel format)
                        (default: )
  -v VERBOSE, --verbose VERBOSE
                        integer from 0 (None) to 2 (full verbose) (default: 1)
