.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_tutorial/plot_bbegin_measure_time.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_tutorial_plot_bbegin_measure_time.py>`
        to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_tutorial_plot_bbegin_measure_time.py:


Benchmark ONNX conversion
=========================

.. index:: benchmark

Example :ref:`l-simple-deploy-1` converts a simple model. This example
takes a similar model, trains it on random data, and compares the
processing time each option requires to compute predictions.

.. contents::
    :local:

Training a pipeline
+++++++++++++++++++

.. GENERATED FROM PYTHON SOURCE LINES 21-50

.. code-block:: default

    import numpy
    from pandas import DataFrame
    from tqdm import tqdm
    from sklearn import config_context
    from sklearn.datasets import make_regression
    from sklearn.ensemble import (
        GradientBoostingRegressor, RandomForestRegressor, VotingRegressor)
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from mlprodict.onnxrt import OnnxInference
    from onnxruntime import InferenceSession
    from skl2onnx import to_onnx
    from skl2onnx.tutorial import measure_time

    N = 11000
    X, y = make_regression(N, n_features=10)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.01)
    print("Train shape", X_train.shape)
    print("Test shape", X_test.shape)

    reg1 = GradientBoostingRegressor(random_state=1)
    reg2 = RandomForestRegressor(random_state=1)
    reg3 = LinearRegression()
    ereg = VotingRegressor([('gb', reg1), ('rf', reg2), ('lr', reg3)])
    ereg.fit(X_train, y_train)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Train shape (110, 10)
    Test shape (10890, 10)

.. code-block:: none

    VotingRegressor(estimators=[('gb', GradientBoostingRegressor(random_state=1)),
                                ('rf', RandomForestRegressor(random_state=1)),
                                ('lr', LinearRegression())])


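As a side note on the model trained above: with no weights given, a
:class:`~sklearn.ensemble.VotingRegressor` predicts the plain mean of its
estimators' predictions. A minimal sketch on tiny synthetic data (the data
and the estimator choice here are illustrative only, not the benchmark's):

```python
import numpy
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Tiny synthetic regression problem, just to inspect the ensemble.
X = numpy.arange(20, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + 1.0

vr = VotingRegressor([('lr', LinearRegression()),
                      ('dt', DecisionTreeRegressor(random_state=0))])
vr.fit(X, y)

# With default (uniform) weights, the ensemble prediction equals the
# mean of the fitted estimators' individual predictions.
parts = numpy.column_stack([est.predict(X) for est in vr.estimators_])
print(numpy.allclose(vr.predict(X), parts.mean(axis=1)))
```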
.. GENERATED FROM PYTHON SOURCE LINES 51-60

Measure the processing time
+++++++++++++++++++++++++++

We use function :func:`skl2onnx.tutorial.measure_time`.
The scikit-learn documentation about ``assume_finite`` may be
useful if you need to optimize the prediction.
We measure the processing time per observation, whether the
observation belongs to a batch or stands alone.

.. GENERATED FROM PYTHON SOURCE LINES 60-77

.. code-block:: default

    sizes = [(1, 50), (10, 50), (1000, 10), (10000, 5)]

    with config_context(assume_finite=True):
        obs = []
        for batch_size, repeat in tqdm(sizes):
            context = {"ereg": ereg, 'X': X_test[:batch_size]}
            mt = measure_time(
                "ereg.predict(X)", context, div_by_number=True,
                number=10, repeat=repeat)
            mt['size'] = context['X'].shape[0]
            mt['mean_obs'] = mt['average'] / mt['size']
            obs.append(mt)

    df_skl = DataFrame(obs)
    df_skl

.. rst-class:: sphx-glr-script-out

.. code-block:: none

        average  deviation  min_exec  max_exec  repeat  number   size  mean_obs
    0  0.047938   0.000205  0.047669  0.048882      50      10      1  0.047938
    1  0.047066   0.000233  0.046815  0.048085      50      10     10  0.004707
    2  0.072965   0.004347  0.067796  0.079429      10      10   1000  0.000073
    3  0.245015   0.000774  0.243960  0.245910       5      10  10000  0.000025


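For readers who wonder how such columns can be produced, a helper in the
spirit of :func:`skl2onnx.tutorial.measure_time` can be sketched with the
standard library's :mod:`timeit`. This is an illustrative approximation
under stated assumptions, not the actual implementation; the name
``measure_time_sketch`` is made up for the example:

```python
from statistics import mean, stdev
from timeit import Timer


def measure_time_sketch(stmt, context, repeat=10, number=10):
    """Time *stmt* and return per-call statistics (illustrative only)."""
    timer = Timer(stmt, globals=context)
    # Each entry of *raw* is the total time of *number* executions.
    raw = timer.repeat(repeat=repeat, number=number)
    per_call = [t / number for t in raw]
    return {
        "average": mean(per_call),
        "deviation": stdev(per_call),
        "min_exec": min(per_call),
        "max_exec": max(per_call),
        "repeat": repeat,
        "number": number,
    }


res = measure_time_sketch("sum(x)", {"x": list(range(1000))})
print(sorted(res))
```

Dividing by ``number`` mirrors the ``div_by_number=True`` option used above,
so every statistic is expressed per single call.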
.. GENERATED FROM PYTHON SOURCE LINES 78-79

Graph.

.. GENERATED FROM PYTHON SOURCE LINES 79-83

.. code-block:: default

    df_skl.set_index('size')[['mean_obs']].plot(
        title="scikit-learn", logx=True, logy=True)

.. image-sg:: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_001.png
   :alt: scikit-learn
   :srcset: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 84-89

ONNX runtime
++++++++++++

The same measure is done with the two ONNX runtimes available.

.. GENERATED FROM PYTHON SOURCE LINES 89-127

.. code-block:: default

    onx = to_onnx(ereg, X_train[:1].astype(numpy.float32),
                  target_opset=14)
    sess = InferenceSession(onx.SerializeToString())
    oinf = OnnxInference(onx, runtime="python_compiled")

    obs = []
    for batch_size, repeat in tqdm(sizes):

        # scikit-learn
        context = {"ereg": ereg,
                   'X': X_test[:batch_size].astype(numpy.float32)}
        mt = measure_time(
            "ereg.predict(X)", context, div_by_number=True,
            number=10, repeat=repeat)
        mt['size'] = context['X'].shape[0]
        mt['skl'] = mt['average'] / mt['size']

        # onnxruntime
        context = {"sess": sess,
                   'X': X_test[:batch_size].astype(numpy.float32)}
        mt2 = measure_time(
            "sess.run(None, {'X': X})[0]", context, div_by_number=True,
            number=10, repeat=repeat)
        mt['ort'] = mt2['average'] / mt['size']

        # mlprodict
        context = {"oinf": oinf,
                   'X': X_test[:batch_size].astype(numpy.float32)}
        mt2 = measure_time(
            "oinf.run({'X': X})['variable']", context, div_by_number=True,
            number=10, repeat=repeat)
        mt['pyrt'] = mt2['average'] / mt['size']

        # end
        obs.append(mt)

    df = DataFrame(obs)
    df

.. rst-class:: sphx-glr-script-out

.. code-block:: none

        average  deviation  min_exec  max_exec  repeat  number   size       skl       ort      pyrt
    0  0.048880   0.000555  0.048377  0.051552      50      10      1  0.048880  0.000184  0.011666
    1  0.047796   0.000096  0.047586  0.048032      50      10     10  0.004780  0.000075  0.001997
    2  0.070935   0.001939  0.069122  0.073992      10      10   1000  0.000071  0.000011  0.000346
    3  0.245143   0.000063  0.245053  0.245219       5      10  10000  0.000025  0.000006  0.000198


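The shrinking per-observation columns have a simple explanation: every
``predict`` call pays a roughly fixed overhead (Python dispatch, input
validation) plus a small cost per observation, and larger batches amortize
that overhead. A toy model of this effect (the two constants below are
rough reads of the tables above, not new measurements):

```python
# Hypothetical cost model: total = fixed overhead + per-observation cost.
FIXED_OVERHEAD = 0.047   # seconds per call, roughly the batch-of-1 rows
PER_OBS_COST = 2e-5      # seconds per observation, roughly the 10000 rows


def mean_obs(batch_size):
    """Average prediction time per observation for one batch."""
    total = FIXED_OVERHEAD + PER_OBS_COST * batch_size
    return total / batch_size


# The per-observation cost collapses as the batch grows, because the
# fixed overhead is shared by more and more observations.
for size in (1, 10, 1000, 10000):
    print(size, f"{mean_obs(size):.6f}")
```

For a batch of one, the call is almost pure overhead; for a batch of
10000, the overhead is negligible, which matches the measured tables.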
.. GENERATED FROM PYTHON SOURCE LINES 128-129

Graph.

.. GENERATED FROM PYTHON SOURCE LINES 129-134

.. code-block:: default

    df.set_index('size')[['skl', 'ort', 'pyrt']].plot(
        title="Average prediction time per runtime",
        logx=True, logy=True)

.. image-sg:: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_002.png
   :alt: Average prediction time per runtime
   :srcset: /auto_tutorial/images/sphx_glr_plot_bbegin_measure_time_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 135-141

:epkg:`ONNX` runtimes are much faster than :epkg:`scikit-learn` at
predicting a single observation. :epkg:`scikit-learn` is optimized for
training and for batch prediction. That explains why
:epkg:`scikit-learn` and the ONNX runtimes seem to converge for big
batches: they rely on similar implementations, parallelization and
languages (:epkg:`C++`, :epkg:`openmp`).


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 5 minutes  0.285 seconds)


.. _sphx_glr_download_auto_tutorial_plot_bbegin_measure_time.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_bbegin_measure_time.py <plot_bbegin_measure_time.py>`

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_bbegin_measure_time.ipynb <plot_bbegin_measure_time.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_