Deploy machine learned models with ONNX

Links: notebook, html, python, slides, slides(2), GitHub

Xavier Dupré - Senior Data Scientist at Microsoft - Computer Science Teacher at ENSAE

Most machine learning libraries are optimized to train models, not necessarily to use them for fast predictions in online web services. ONNX, started last year by Microsoft and Facebook, is one answer to this problem. This presentation describes the concept and shows some examples with scikit-learn and ML.net.

Most machine learning libraries are optimized to train models and not necessarily to use them on online websites where speed requirements are high. ONNX, an open source initiative proposed last year by Microsoft and Facebook, is an answer to this problem. This talk illustrates the concept with a demo mixing deep learning, scikit-learn and ML.net, the open source machine learning library written in C# and developed by Microsoft.

from jyquickhelper import add_notebook_menu
add_notebook_menu(last_level=2)
from pyquickhelper.helpgen import NbImage

Open source tools in this talk

import keras, lightgbm, onnx, skl2onnx, onnxruntime, sklearn, torch, xgboost
mods = [keras, lightgbm, onnx, skl2onnx, onnxruntime, sklearn, torch, xgboost]
for m in mods:
    print(m.__name__, m.__version__)
Using TensorFlow backend.
keras 2.2.4
lightgbm 2.2.2
onnx 1.4.1
skl2onnx 1.4.3
onnxruntime 0.3.0
sklearn 0.21.dev0
torch 1.0.1
xgboost 0.81

ML.net

  • Open source in 2018
  • ML.net
  • Machine learning library written in C#
  • Used in many places in Microsoft Services (Bing, …)
  • Microsoft has been working on it for three years
NbImage("mlnet.png", width=500)
../_images/onnx_deploy_8_0.png

onnx

  • Serialization library specialized for machine learning, based on Google Protobuf
  • Open source in 2017
  • onnx
NbImage("onnx.png", width=500)
../_images/onnx_deploy_10_0.png

sklearn-onnx

  • Open source in 2018
  • Converters for scikit-learn models
  • sklearn-onnx
NbImage("sklearn-onnx.png")
../_images/onnx_deploy_12_0.png

onnxruntime

NbImage("onnxruntime.png", width=400)
../_images/onnx_deploy_14_0.png

The problem about deployment

Learn and predict

  • Two different purposes, not necessarily aligned for optimization
  • Learn: computation optimized for a large number of observations (batch prediction)
  • Predict: computation optimized for one observation (one-off prediction)
  • Machine learning libraries optimize the learn scenario.

Illustration with a linear regression

We consider a dataset available in scikit-learn: diabetes

measures_lr = []
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
diabetes_X_train = diabetes.data[:-20]
diabetes_X_test  = diabetes.data[-20:]
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test  = diabetes.target[-20:]
diabetes_X_train[:1]
array([[ 0.03807591,  0.05068012,  0.06169621,  0.02187235, -0.0442235 ,
        -0.03482076, -0.04340085, -0.00259226,  0.01990842, -0.01764613]])

scikit-learn

from sklearn.linear_model import LinearRegression
clr = LinearRegression()
clr.fit(diabetes_X_train, diabetes_y_train)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
clr.predict(diabetes_X_test[:1])
array([197.61846908])
from jupytalk.benchmark import timeexec
measures_lr += [timeexec("sklearn",
                         "clr.predict(diabetes_X_test[:1])",
                         context=globals())]
Average: 39.71 µs deviation 9.99 µs (with 50 runs) in [33.11 µs, 62.40 µs]

pure python

def python_prediction(X, coef, intercept):
    s = intercept
    for a, b in zip(X, coef):
        s += a * b
    return s

python_prediction(diabetes_X_test[0], clr.coef_, clr.intercept_)
197.61846907503298
measures_lr += [timeexec("python", "python_prediction(diabetes_X_test[0], clr.coef_, clr.intercept_)",
                         context=globals())]
Average: 5.56 µs deviation 2.27 µs (with 50 runs) in [3.97 µs, 10.37 µs]
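
The same computation can also be written with numpy; a minimal sketch (not part of the original benchmark) that stays vectorized and avoids both the Python loop and scikit-learn's input validation:

import numpy

def numpy_prediction(X, coef, intercept):
    # vectorized equivalent of python_prediction: dot(X, coef) + intercept
    return numpy.dot(X, coef) + intercept

numpy_prediction(diabetes_X_test[0], clr.coef_, clr.intercept_)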

Summary

import pandas
df = pandas.DataFrame(data=measures_lr)
df = df.set_index("legend").sort_values("average")
df
legend    average   deviation  first     first3    last3     max5      min5      repeat  run  code
python    0.000006  0.000002   0.000011  0.000010  0.000004  0.000010  0.000004  200     50   python_prediction(diabetes_X_test[0], clr.coef...
sklearn   0.000040  0.000010   0.000062  0.000058  0.000038  0.000062  0.000033  200     50   clr.predict(diabetes_X_test[:1])
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(10,3))
df[["average", "deviation"]].plot(kind="barh", logx=True, ax=ax, xerr="deviation",
                                  legend=False, fontsize=12, width=0.8)
ax.set_ylabel("")
ax.grid(b=True, which="major")
ax.grid(b=True, which="minor")
ax.set_title("Prediction time for one observation\nLinear Regression");
../_images/onnx_deploy_30_0.png

Illustration with a random forest

measures_rf = []

scikit-learn

from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=10)
rf.fit(diabetes_X_train, diabetes_y_train)
RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
                      max_features='auto', max_leaf_nodes=None,
                      min_impurity_decrease=0.0, min_impurity_split=None,
                      min_samples_leaf=1, min_samples_split=2,
                      min_weight_fraction_leaf=0.0, n_estimators=10,
                      n_jobs=None, oob_score=False, random_state=None,
                      verbose=0, warm_start=False)
measures_rf += [timeexec("sklearn", "rf.predict(diabetes_X_test[:1])",
                         context=globals())]
Average: 657.29 µs deviation 127.10 µs (with 50 runs) in [569.08 µs, 899.17 µs]

XGBoost

from xgboost import XGBRegressor
xg = XGBRegressor(n_estimators=10)
xg.fit(diabetes_X_train, diabetes_y_train)
XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=1,
             colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
             max_depth=3, min_child_weight=1, missing=None, n_estimators=10,
             n_jobs=1, nthread=None, objective='reg:linear', random_state=0,
             reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
             silent=True, subsample=1)
measures_rf += [timeexec("xgboost", "xg.predict(diabetes_X_test[:1])",
                         context=globals())]
Average: 80.28 µs deviation 24.19 µs (with 50 runs) in [64.28 µs, 139.03 µs]

LightGBM

from lightgbm import LGBMRegressor
lg = LGBMRegressor(n_estimators=10)
lg.fit(diabetes_X_train, diabetes_y_train)
LGBMRegressor(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
              importance_type='split', learning_rate=0.1, max_depth=-1,
              min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
              n_estimators=10, n_jobs=-1, num_leaves=31, objective=None,
              random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
              subsample=1.0, subsample_for_bin=200000, subsample_freq=0)
measures_rf += [timeexec("lightgbm", "lg.predict(diabetes_X_test[:1])",
                         context=globals())]
Average: 113.34 µs deviation 18.61 µs (with 50 runs) in [99.13 µs, 155.91 µs]

pure python

This would require reimplementing the prediction function; a sketch follows.
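
For reference, a sketch of what that could look like for one tree; children_left, children_right, feature, threshold and value are actual attributes of scikit-learn's tree_ object, and averaging over rf.estimators_ mimics the forest prediction:

def python_tree_prediction(tree, x):
    # walk one decision tree from the root down to a leaf
    node = 0
    while tree.children_left[node] != -1:  # -1 marks a leaf
        if x[tree.feature[node]] <= tree.threshold[node]:
            node = tree.children_left[node]
        else:
            node = tree.children_right[node]
    return tree.value[node][0][0]

# a random forest regressor averages the predictions of its trees
sum(python_tree_prediction(est.tree_, diabetes_X_test[0])
    for est in rf.estimators_) / len(rf.estimators_)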

Summary

df = pandas.DataFrame(data=measures_rf)
df = df.set_index("legend").sort_values("average")
df
legend    average   deviation  first     first3    last3     max5      min5      repeat  run  code
xgboost   0.000080  0.000024   0.000203  0.000183  0.000066  0.000139  0.000064  200     50   xg.predict(diabetes_X_test[:1])
lightgbm  0.000113  0.000019   0.000219  0.000180  0.000106  0.000156  0.000099  200     50   lg.predict(diabetes_X_test[:1])
sklearn   0.000657  0.000127   0.000978  0.000781  0.000591  0.000899  0.000569  200     50   rf.predict(diabetes_X_test[:1])
fig, ax = plt.subplots(1, 1, figsize=(10,3))
df[["average", "deviation"]].plot(kind="barh", logx=True, ax=ax, xerr="deviation",
                                  legend=False, fontsize=12, width=0.8)
ax.set_ylabel("")
ax.grid(b=True, which="major")
ax.grid(b=True, which="minor")
ax.set_title("Prediction time for one observation\nRandom Forest (10 trees)");
../_images/onnx_deploy_45_0.png

Keep in mind

  • The trained trees are not necessarily the same from one library to another.
  • Predictive performance is not compared here.
  • Only the order of magnitude matters here.

What is batch prediction?

  • Instead of running one prediction N times
  • We run N predictions in a single call
import numpy
memo = []
batch = [1, 2, 5, 7, 8, 10, 100, 200, 500, 1000, 2000,
         3000, 4000, 5000, 10000, 20000, 50000,
         100000, 200000, 400000, ]

number = 10
repeat = 10
for i in batch:
    if i <= diabetes_X_test.shape[0]:
        mx = diabetes_X_test[:i]
    else:
        mxs = [diabetes_X_test] * (i // diabetes_X_test.shape[0] + 1)
        mx = numpy.vstack(mxs)
        mx = mx[:i]

    print("batch", "=", i)
    number = 10 if i <= 10000 else 2

    memo.append(timeexec("sklearn %d" % i, "rf.predict(mx)",
                         context=globals(), number=number, repeat=repeat))
    memo[-1]["batch"] = i
    memo[-1]["lib"] = "sklearn"

    memo.append(timeexec("xgboost %d" % i, "xg.predict(mx)",
                         context=globals(), number=number, repeat=repeat))
    memo[-1]["batch"] = i
    memo[-1]["lib"] = "xgboost"

    memo.append(timeexec("lightgbm %d" % i, "lg.predict(mx)",
                         context=globals(), number=number, repeat=repeat))
    memo[-1]["batch"] = i
    memo[-1]["lib"] = "lightgbm"
batch = 1
Average: 778.22 µs deviation 350.24 µs (with 10 runs) in [573.39 µs, 1.75 ms]
Average: 80.04 µs deviation 32.68 µs (with 10 runs) in [65.34 µs, 177.07 µs]
Average: 124.20 µs deviation 30.37 µs (with 10 runs) in [102.40 µs, 183.74 µs]
batch = 2
Average: 659.89 µs deviation 107.01 µs (with 10 runs) in [547.40 µs, 859.77 µs]
Average: 76.50 µs deviation 22.92 µs (with 10 runs) in [62.42 µs, 143.92 µs]
Average: 116.67 µs deviation 29.21 µs (with 10 runs) in [100.86 µs, 201.09 µs]
batch = 5
Average: 732.97 µs deviation 161.30 µs (with 10 runs) in [577.22 µs, 992.75 µs]
Average: 83.75 µs deviation 19.03 µs (with 10 runs) in [65.54 µs, 121.88 µs]
Average: 111.03 µs deviation 9.54 µs (with 10 runs) in [101.93 µs, 128.36 µs]
batch = 7
Average: 633.36 µs deviation 90.96 µs (with 10 runs) in [575.60 µs, 897.46 µs]
Average: 80.32 µs deviation 19.27 µs (with 10 runs) in [64.08 µs, 131.44 µs]
Average: 123.38 µs deviation 31.67 µs (with 10 runs) in [106.07 µs, 217.20 µs]
batch = 8
Average: 643.51 µs deviation 80.76 µs (with 10 runs) in [552.29 µs, 840.77 µs]
Average: 86.14 µs deviation 31.04 µs (with 10 runs) in [64.95 µs, 165.17 µs]
Average: 187.32 µs deviation 65.15 µs (with 10 runs) in [110.54 µs, 299.54 µs]
batch = 10
Average: 679.90 µs deviation 213.40 µs (with 10 runs) in [572.68 µs, 1.32 ms]
Average: 79.74 µs deviation 24.83 µs (with 10 runs) in [65.03 µs, 143.57 µs]
Average: 134.64 µs deviation 29.56 µs (with 10 runs) in [105.60 µs, 196.27 µs]
batch = 100
Average: 733.99 µs deviation 73.73 µs (with 10 runs) in [669.94 µs, 944.91 µs]
Average: 106.73 µs deviation 16.31 µs (with 10 runs) in [94.58 µs, 151.11 µs]
Average: 216.86 µs deviation 10.10 µs (with 10 runs) in [204.44 µs, 234.55 µs]
batch = 200
Average: 857.99 µs deviation 162.41 µs (with 10 runs) in [696.65 µs, 1.19 ms]
Average: 130.84 µs deviation 25.11 µs (with 10 runs) in [115.56 µs, 202.39 µs]
Average: 343.45 µs deviation 27.91 µs (with 10 runs) in [307.40 µs, 405.81 µs]
batch = 500
Average: 920.21 µs deviation 90.01 µs (with 10 runs) in [817.38 µs, 1.09 ms]
Average: 225.77 µs deviation 20.71 µs (with 10 runs) in [200.65 µs, 260.74 µs]
Average: 787.85 µs deviation 132.57 µs (with 10 runs) in [659.71 µs, 1.07 ms]
batch = 1000
Average: 1.13 ms deviation 172.74 µs (with 10 runs) in [1.00 ms, 1.61 ms]
Average: 373.52 µs deviation 27.65 µs (with 10 runs) in [341.02 µs, 432.67 µs]
Average: 1.31 ms deviation 111.23 µs (with 10 runs) in [1.15 ms, 1.60 ms]
batch = 2000
Average: 1.67 ms deviation 262.93 µs (with 10 runs) in [1.37 ms, 2.14 ms]
Average: 887.25 µs deviation 150.25 µs (with 10 runs) in [660.07 µs, 1.21 ms]
Average: 2.69 ms deviation 482.48 µs (with 10 runs) in [2.16 ms, 3.84 ms]
batch = 3000
Average: 1.93 ms deviation 188.76 µs (with 10 runs) in [1.72 ms, 2.33 ms]
Average: 1.03 ms deviation 181.46 µs (with 10 runs) in [889.08 µs, 1.51 ms]
Average: 3.80 ms deviation 350.02 µs (with 10 runs) in [3.33 ms, 4.52 ms]
batch = 4000
Average: 2.44 ms deviation 326.18 µs (with 10 runs) in [2.16 ms, 3.37 ms]
Average: 1.28 ms deviation 89.92 µs (with 10 runs) in [1.20 ms, 1.50 ms]
Average: 4.84 ms deviation 476.40 µs (with 10 runs) in [4.31 ms, 6.06 ms]
batch = 5000
Average: 3.41 ms deviation 338.66 µs (with 10 runs) in [3.01 ms, 4.22 ms]
Average: 1.63 ms deviation 90.44 µs (with 10 runs) in [1.47 ms, 1.81 ms]
Average: 6.01 ms deviation 611.14 µs (with 10 runs) in [5.31 ms, 7.58 ms]
batch = 10000
Average: 4.65 ms deviation 270.13 µs (with 10 runs) in [4.27 ms, 5.14 ms]
Average: 3.47 ms deviation 383.13 µs (with 10 runs) in [3.13 ms, 4.43 ms]
Average: 12.30 ms deviation 1.48 ms (with 10 runs) in [11.00 ms, 15.75 ms]
batch = 20000
Average: 11.14 ms deviation 2.82 ms (with 2 runs) in [7.80 ms, 15.50 ms]
Average: 8.18 ms deviation 1.13 ms (with 2 runs) in [7.10 ms, 10.25 ms]
Average: 23.03 ms deviation 2.26 ms (with 2 runs) in [20.92 ms, 28.80 ms]
batch = 50000
Average: 21.17 ms deviation 1.81 ms (with 2 runs) in [19.47 ms, 26.07 ms]
Average: 19.18 ms deviation 1.97 ms (with 2 runs) in [16.30 ms, 23.17 ms]
Average: 57.50 ms deviation 5.75 ms (with 2 runs) in [52.40 ms, 70.57 ms]
batch = 100000
Average: 39.91 ms deviation 1.61 ms (with 2 runs) in [38.19 ms, 42.82 ms]
Average: 37.48 ms deviation 2.35 ms (with 2 runs) in [35.66 ms, 44.29 ms]
Average: 111.02 ms deviation 13.31 ms (with 2 runs) in [103.85 ms, 150.29 ms]
batch = 200000
Average: 101.25 ms deviation 8.73 ms (with 2 runs) in [92.92 ms, 122.00 ms]
Average: 76.49 ms deviation 4.35 ms (with 2 runs) in [72.31 ms, 87.45 ms]
Average: 243.44 ms deviation 25.21 ms (with 2 runs) in [212.50 ms, 289.54 ms]
batch = 400000
Average: 197.53 ms deviation 5.80 ms (with 2 runs) in [189.51 ms, 211.09 ms]
Average: 170.33 ms deviation 31.31 ms (with 2 runs) in [145.31 ms, 233.46 ms]
Average: 440.61 ms deviation 45.24 ms (with 2 runs) in [409.00 ms, 572.76 ms]
dfb = pandas.DataFrame(memo)[["average", "lib", "batch"]]
piv = dfb.pivot("batch", "lib", "average")
for c in piv.columns:
    piv["ave_" + c] = piv[c] / piv.index
libs = list(c for c in piv.columns if "ave_" in c)
ax = piv.plot(y=libs, logy=True, logx=True, figsize=(10, 5))
ax.set_title("Computation time per observation when computed in a batch")
ax.set_ylabel("sec")
ax.set_xlabel("batch size")
ax.grid(True);
../_images/onnx_deploy_49_0.png

ONNX

ONNX = language to describe models

  • A standard format to describe machine learning models
  • Easier to exchange and export
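
As a quick sketch (assuming a model file on disk, such as the rf_sklearn.onnx produced later in this notebook), the onnx package alone can load and validate a serialized model, independently of the library that created it:

import onnx

model = onnx.load("rf_sklearn.onnx")  # hypothetical file name, created below
onnx.checker.check_model(model)       # validate the graph against the ONNX spec
print(model.graph.input, model.graph.output)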

ONNX = machine learning oriented

Can represent any mathematical function handling numerical and text features.

NbImage("onnxop.png", width=600)
../_images/onnx_deploy_53_0.png

actively supported

  • Microsoft
  • Facebook
  • first created to deploy deep learning models
  • extended to other models

Train somewhere, predict somewhere else

The code cannot be optimized for both training and predicting at the same time.

Training            Predicting
Batch prediction    One-off prediction
Huge memory         Small memory
Huge data           Small data
                    High latency

Libraries for predictions

  • Optimized for predictions
  • Optimized for a device

ONNX Runtime

ONNX Runtime for inferencing machine learning models now in preview

Dedicated runtime for:

  • CPU
  • GPU
NbImage("onnxrt.png", width=800)
../_images/onnx_deploy_59_0.png
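
More recent releases of onnxruntime (not the 0.3.0 version listed at the beginning of this notebook) let the caller pick the execution provider explicitly; a sketch, assuming a converted model file:

import onnxruntime

# the providers argument is available in onnxruntime >= 1.x builds;
# CUDA is tried first, CPU is the fallback
sess = onnxruntime.InferenceSession(
    "rf_sklearn.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
print(sess.get_providers())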

ONNX demo on random forest

rf
RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=None,
                      max_features='auto', max_leaf_nodes=None,
                      min_impurity_decrease=0.0, min_impurity_split=None,
                      min_samples_leaf=1, min_samples_split=2,
                      min_weight_fraction_leaf=0.0, n_estimators=10,
                      n_jobs=None, oob_score=False, random_state=None,
                      verbose=0, warm_start=False)

Conversion to ONNX

onnxmltools

from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
model_onnx = convert_sklearn(rf, "rf_diabetes",
                             [('input', FloatTensorType([1, 10]))])
The maximum opset needed by this model is only 1.
print(str(model_onnx)[:450] + "\n...")
ir_version: 4
producer_name: "skl2onnx"
producer_version: "1.4.3"
domain: "ai.onnx"
model_version: 0
doc_string: ""
graph {
  node {
    input: "input"
    output: "variable"
    name: "TreeEnsembleRegressor"
    op_type: "TreeEnsembleRegressor"
    attribute {
      name: "n_targets"
      i: 1
      type: INT
    }
    attribute {
      name: "nodes_falsenodeids"
      ints: 340
      ints: 245
      ints: 110
      ints: 7
      ints: 6
...
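
The returned ModelProto can also be inspected programmatically rather than printed; a minimal sketch listing the operators of the converted graph (here a single TreeEnsembleRegressor node):

for node in model_onnx.graph.node:
    # one node per ONNX operator in the graph
    print(node.op_type, list(node.input), "->", list(node.output))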

Save the model

def save_model(model, filename):
    with open(filename, "wb") as f:
        f.write(model.SerializeToString())

save_model(model_onnx, 'rf_sklearn.onnx')
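
The symmetric operation, reading the file back into a ModelProto, is just as short; a sketch (onnx.load does the same in one call):

import onnx

def load_model(filename):
    # counterpart of save_model: parse the protobuf bytes into a ModelProto
    with open(filename, "rb") as f:
        model = onnx.ModelProto()
        model.ParseFromString(f.read())
    return model

restored = load_model('rf_sklearn.onnx')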

Compute predictions

import onnxruntime

sess = onnxruntime.InferenceSession("rf_sklearn.onnx")

for i in sess.get_inputs():
    print('Input:', i)
for o in sess.get_outputs():
    print('Output:', o)
Input: NodeArg(name='input', type='tensor(float)', shape=[1, 10])
Output: NodeArg(name='variable', type='tensor(float)', shape=[1, 1])
import numpy

def predict_onnxrt(x):
    return sess.run(["variable"], {'input': x})

print("Prediction:", predict_onnxrt(diabetes_X_test[:1].astype(numpy.float32)))
Prediction: [array([[216.00003]], dtype=float32)]
measures_rf += [timeexec("onnx", "predict_onnxrt(diabetes_X_test[:1].astype(numpy.float32))",
                         context=globals())]
Average: 22.00 µs deviation 10.00 µs (with 50 runs) in [16.67 µs, 39.54 µs]
fig, ax = plt.subplots(1, 1, figsize=(10,3))
df = pandas.DataFrame(data=measures_rf)
df = df.set_index("legend").sort_values("average")
df[["average", "deviation"]].plot(kind="barh", logx=True, ax=ax, xerr="deviation",
                                  legend=False, fontsize=12, width=0.8)
ax.set_ylabel("")
ax.grid(b=True, which="major")
ax.grid(b=True, which="minor")
ax.set_title("Prediction time for one observation\nRandom Forest (10 trees)");
../_images/onnx_deploy_72_0.png

Deep learning

  • transfer learning with keras
  • other converters: pytorch, caffe…
measures_dl = []
from keras.applications.mobilenetv2 import MobileNetV2
model = MobileNetV2(input_shape=None, alpha=1.0, include_top=True,
                    weights='imagenet', input_tensor=None,
                    pooling=None, classes=1000)
model
<keras.engine.training.Model at 0x1d283dade80>
from pyensae.datasource import download_data
import os
if not os.path.exists("simages/noclass"):
    os.makedirs("simages/noclass")
images = download_data("dog-cat-pixabay.zip",
                       whereTo="simages/noclass")
from mlinsights.plotting import plot_gallery_images
plot_gallery_images(images[:7]);
../_images/onnx_deploy_77_0.png
from keras.preprocessing.image import ImageDataGenerator
import numpy
params = dict(rescale=1./255)
augmenting_datagen = ImageDataGenerator(**params)
flow = augmenting_datagen.flow_from_directory('simages', batch_size=1, target_size=(224, 224),
                                              classes=['noclass'], shuffle=False)
imgs = [img[0][0] for i, img in zip(range(0,31), flow)]
Found 31 images belonging to 1 classes.
array_images = [im[numpy.newaxis, :, :, :] for im in imgs]
array_images[0].shape
(1, 224, 224, 3)
outputs = [model.predict(im) for im in array_images]
outputs[0].shape
(1, 1000)
outputs[0].ravel()[:10]
array([3.5999357e-04, 1.2039350e-03, 1.2471760e-04, 6.1937186e-05,
       1.1310327e-03, 1.7601112e-04, 1.9819068e-04, 1.4307768e-04,
       5.5190694e-04, 1.7074044e-04], dtype=float32)

Let’s measure time.

from jupytalk.benchmark import timeexec
measures_dl += [timeexec("keras.mobilenet", "model.predict(array_images[0])",
                         context=globals(), repeat=3, number=10)]
Average: 106.48 ms deviation 7.99 ms (with 10 runs) in [98.62 ms, 117.44 ms]
from keras2onnx import convert_keras
try:
    konnx = convert_keras(model, "mobilev2")
except ValueError as e:
    # the conversion may fail when keras updates its version
    print(e)
INFO:tensorflow:Froze 262 variables.
INFO:tensorflow:Converted 262 variables to const ops.
WARNING:tensorflow:VARIABLES collection name is deprecated, please use GLOBAL_VARIABLES instead; VARIABLES will be removed after 2017-03-02.

Let’s switch to pytorch.

import torchvision.models as models
modelt = models.squeezenet1_1(pretrained=True)
modelt.classifier
Sequential(
  (0): Dropout(p=0.5)
  (1): Conv2d(512, 1000, kernel_size=(1, 1), stride=(1, 1))
  (2): ReLU(inplace)
  (3): AdaptiveAvgPool2d(output_size=(1, 1))
)
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
trans = transforms.Compose([transforms.Resize((224, 224)),
                            transforms.CenterCrop(224),
                            transforms.ToTensor()])
imgs = datasets.ImageFolder("simages", trans)
dataloader = DataLoader(imgs, batch_size=1, shuffle=False, num_workers=1)
img_seq = iter(dataloader)
imgs = list(img[0] for img in img_seq)
all_outputs = [modelt.forward(img).detach().numpy().ravel() for img in imgs[:2]]
all_outputs[0].shape
(1000,)
measures_dl += [timeexec("pytorch.squeezenet", "modelt.forward(imgs[0]).detach().numpy().ravel()",
                         context=globals(), repeat=3, number=10)]
Average: 75.40 ms deviation 2.90 ms (with 10 runs) in [71.34 ms, 77.93 ms]

Let’s convert into ONNX.

import torch.onnx
from torch.autograd import Variable
input_names = [ "actual_input_1" ]
output_names = [ "output1" ]
dummy_input = Variable(torch.randn(10, 3, 224, 224))

try:
    torch.onnx.export(modelt, dummy_input, "resnet18.onnx", verbose=False,
                      input_names=input_names, output_names=output_names)
except Exception as e:
    print(str(e).split('\n')[0])
c:\python370_x64\lib\site-packages\torch\onnx\symbolic.py:131: UserWarning: ONNX export failed on max_pool2d_with_indices because ceil_mode not supported
  warnings.warn("ONNX export failed on " + op + " because " + msg + " not supported")
ONNX export failed: Couldn't export operator aten::max_pool2d_with_indices

Well… work in progress.

Model zoo

Converted Models

NbImage("zoo.png", width=800)
../_images/onnx_deploy_95_0.png

MobileNet and SqueezeNet

Download a pre-converted version of MobileNetv2

download_data("mobilenetv2-1.0.onnx",
              url="https://s3.amazonaws.com/onnx-model-zoo/mobilenet/mobilenetv2-1.0/")
'mobilenetv2-1.0.onnx'
sess = onnxruntime.InferenceSession("mobilenetv2-1.0.onnx")
for i in sess.get_inputs():
    print('Input:', i)
for o in sess.get_outputs():
    print('Output:', o)
Input: NodeArg(name='data', type='tensor(float)', shape=[1, 3, 224, 224])
Output: NodeArg(name='mobilenetv20_output_flatten0_reshape0', type='tensor(float)', shape=[1, 1000])
print(array_images[0].shape)
print(array_images[0].transpose((0, 3, 1, 2)).shape)
(1, 224, 224, 3)
(1, 3, 224, 224)
res = sess.run(None, {'data': array_images[0].transpose((0, 3, 1, 2))})
res[0].shape
(1, 1000)
measures_dl += [timeexec("onnx.mobile", "sess.run(None, {'data': array_images[0].transpose((0, 3, 1, 2))})",
                         context=globals(), repeat=3, number=10)]
Average: 91.00 ms deviation 25.79 ms (with 10 runs) in [69.28 ms, 127.23 ms]

Download a pre-converted version of SqueezeNet

download_data("squeezenet1.1.onnx",
              url="https://s3.amazonaws.com/onnx-model-zoo/squeezenet/squeezenet1.1/")
'squeezenet1.1.onnx'
sess = onnxruntime.InferenceSession("squeezenet1.1.onnx")
for i in sess.get_inputs():
    print('Input:', i)
for o in sess.get_outputs():
    print('Output:', o)
Input: NodeArg(name='data', type='tensor(float)', shape=[1, 3, 224, 224])
Output: NodeArg(name='squeezenet0_flatten0_reshape0', type='tensor(float)', shape=[1, 1000])
measures_dl += [timeexec("onnx.squeezenet", "sess.run(None, {'data': array_images[0].transpose((0, 3, 1, 2))})",
                         context=globals(), repeat=3, number=10)]
Average: 15.04 ms deviation 2.31 ms (with 10 runs) in [12.98 ms, 18.27 ms]
fig, ax = plt.subplots(1, 1, figsize=(10,3))
df = pandas.DataFrame(data=measures_dl)
df = df.set_index("legend").sort_values("average")
df[["average", "deviation"]].plot(kind="barh", logx=True, ax=ax, xerr="deviation",
                                  legend=False, fontsize=12, width=0.8)
ax.set_ylabel("")
ax.grid(b=True, which="major")
ax.grid(b=True, which="minor")
ax.set_title("Prediction time for one observation\nDeep learning models 224x224x3 (ImageNet)");
../_images/onnx_deploy_108_0.png

Tiny yolo

Source: TinyYOLOv2 on onnx

download_data("tiny_yolov2.tar.gz",
              url="https://onnxzoo.blob.core.windows.net/models/opset_8/tiny_yolov2/")
['.\tiny_yolov2/model.onnx',
 '.\tiny_yolov2/test_data_set_0/input_0.pb',
 '.\tiny_yolov2/test_data_set_0/output_0.pb',
 '.\tiny_yolov2/test_data_set_1/input_0.pb',
 '.\tiny_yolov2/test_data_set_1/output_0.pb',
 '.\tiny_yolov2/test_data_set_2/input_0.pb',
 '.\tiny_yolov2/test_data_set_2/output_0.pb']
sess = onnxruntime.InferenceSession("tiny_yolov2/model.onnx")
for i in sess.get_inputs():
    print('Input:', i)
for o in sess.get_outputs():
    print('Output:', o)
Input: NodeArg(name='image', type='tensor(float)', shape=[None, 3, 416, 416])
Output: NodeArg(name='grid', type='tensor(float)', shape=[None, 125, 13, 13])
from PIL import Image,ImageDraw
img = Image.open('Au-Salon-de-l-agriculture-la-campagne-recrute.jpg')
img
../_images/onnx_deploy_112_0.png
img2 = img.resize((416, 416))
img2
../_images/onnx_deploy_113_0.png
X = numpy.asarray(img2)
X = X.transpose(2,0,1)
X = X.reshape(1,3,416,416)

out = sess.run(None, {'image': X.astype(numpy.float32)})
out = out[0][0]
def display_yolo(img, seuil):
    # 'seuil' is the detection threshold applied to class probability * confidence
    import numpy as np
    numClasses = 20
    anchors = [1.08, 1.19, 3.42, 4.41, 6.63, 11.38, 9.42, 5.11, 16.62, 10.52]

    def sigmoid(x, derivative=False):
        return x*(1-x) if derivative else 1/(1+np.exp(-x))

    def softmax(x):
        scoreMatExp = np.exp(np.asarray(x))
        return scoreMatExp / scoreMatExp.sum(0)

    clut = [(0,0,0),(255,0,0),(255,0,255),(0,0,255),(0,255,0),(0,255,128),
            (128,255,0),(128,128,0),(0,128,255),(128,0,128),
            (255,0,128),(128,0,255),(255,128,128),(128,255,128),(255,255,0),
            (255,128,128),(128,128,255),(255,128,128),(128,255,128),(128,255,128)]
    label = ["aeroplane","bicycle","bird","boat","bottle",
             "bus","car","cat","chair","cow","diningtable",
             "dog","horse","motorbike","person","pottedplant",
             "sheep","sofa","train","tvmonitor"]

    draw = ImageDraw.Draw(img)
    for cy in range(0,13):
        for cx in range(0,13):
            for b in range(0,5):
                channel = b*(numClasses+5)
                tx = out[channel  ][cy][cx]
                ty = out[channel+1][cy][cx]
                tw = out[channel+2][cy][cx]
                th = out[channel+3][cy][cx]
                tc = out[channel+4][cy][cx]

                x = (float(cx) + sigmoid(tx))*32
                y = (float(cy) + sigmoid(ty))*32

                w = np.exp(tw) * 32 * anchors[2*b  ]
                h = np.exp(th) * 32 * anchors[2*b+1]

                confidence = sigmoid(tc)

                classes = np.zeros(numClasses)
                for c in range(0,numClasses):
                    classes[c] = out[channel + 5 + c][cy][cx]
                # softmax is applied once, after all class scores are collected
                classes = softmax(classes)
                detectedClass = classes.argmax()

                if seuil < classes[detectedClass]*confidence:
                    color =clut[detectedClass]
                    x = x - w/2
                    y = y - h/2
                    draw.line((x  ,y  ,x+w,y ),fill=color, width=3)
                    draw.line((x  ,y  ,x  ,y+h),fill=color, width=3)
                    draw.line((x+w,y  ,x+w,y+h),fill=color, width=3)
                    draw.line((x  ,y+h,x+w,y+h),fill=color, width=3)

    return img
img2 = img.resize((416, 416))
display_yolo(img2, 0.038)
../_images/onnx_deploy_116_0.png

Conclusion

  • ONNX is a work in progress, under active development
  • ONNX is open source
  • ONNX does not depend on the machine learning framework
  • ONNX provides dedicated runtimes
  • ONNX is fast, available in Python…

Metadata to trace deployed models

meta = sess.get_modelmeta()
meta.description
"The Tiny YOLO network from the paper 'YOLO9000: Better, Faster, Stronger' (2016), arXiv:1612.08242"
meta.producer_name, meta.version
('WinMLTools', 0)
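
The ModelMetadata object exposes a few more fields useful to trace a deployed model; a sketch (attribute availability may vary across onnxruntime versions):

meta = sess.get_modelmeta()
for attr in ["producer_name", "graph_name", "domain", "description", "version"]:
    # getattr with a default guards against fields missing in older releases
    print(attr, "=", getattr(meta, attr, None))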