Tools#

ONNX#

Accessor#

mlprodict.onnx_tools.onnx_tools.find_node_input_name (node, name)

Finds a node input by its name.

mlprodict.onnx_tools.onnx_tools.find_node_name (model, name)

Finds a node by its name.

mlprodict.onnx_tools.onnx_tools.insert_node (model, op_type, node, input_index = 0, new_name = None, attrs)

Inserts a node before one node input.
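
To make the three accessors above concrete, here is a minimal sketch. It builds a small graph with onnx.helper (the model and node names are invented for the example) and assumes that insert_node accepts the NodeProto returned by find_node_name, passes extra attributes as keyword arguments and returns the modified model:

    from onnx import TensorProto, helper
    from mlprodict.onnx_tools.onnx_tools import find_node_name, insert_node

    # tiny graph: X -> Abs -> T -> Identity -> Y
    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    nodes = [helper.make_node('Abs', ['X'], ['T'], name='N_abs'),
             helper.make_node('Identity', ['T'], ['Y'], name='N_id')]
    graph = helper.make_graph(nodes, 'tiny', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    # locate the Identity node by name, then insert a Cast before its first input
    node = find_node_name(model, 'N_id')
    model2 = insert_node(model, 'Cast', node, input_index=0,
                         new_name='T_cast', to=TensorProto.FLOAT)
    print([n.op_type for n in model2.graph.node])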

Export from onnx to…#

mlprodict.onnx_tools.onnx_export.export2numpy (model_onnx, opset = None, verbose = True, name = None, rename = False, autopep_options = None)

Exports an ONNX model to the numpy syntax. The export does not work with all operators.
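
All the export functions in this section share the same signature, so a single hedged sketch illustrates the pattern. It builds a one-node model with onnx.helper (names invented for the example) and prints the numpy code generated by export2numpy:

    from onnx import TensorProto, helper
    from mlprodict.onnx_tools.onnx_export import export2numpy

    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    graph = helper.make_graph([helper.make_node('Abs', ['X'], ['Y'])],
                              'tiny_abs', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    # the function returns a string containing python/numpy code
    code = export2numpy(model, rename=True, verbose=False)
    print(code)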

mlprodict.onnx_tools.onnx_export.export2onnx (model_onnx, opset = None, verbose = True, name = None, rename = False, autopep_options = None)

Exports an ONNX model to the onnx syntax.

mlprodict.onnx_tools.onnx_export.export2python (model_onnx, opset = None, verbose = True, name = None, rename = False, autopep_options = None, function_name = 'main')

Exports an ONNX model to the python syntax.

mlprodict.onnx_tools.onnx_export.export2tf2onnx (model_onnx, opset = None, verbose = True, name = None, rename = False, autopep_options = None)

Exports an ONNX model to the tensorflow-onnx syntax.

mlprodict.onnx_tools.onnx_export.export2xop (model_onnx, opset = None, verbose = True, name = None, rename = False, autopep_options = None)

Exports an ONNX model to the XOP syntax.

Graphs helper, manipulations#

Functions to help understand models or modify them.

mlprodict.tools.model_info.analyze_model (model, simplify = True)

Returns information and statistics about a model, such as its number of nodes and its size…

mlprodict.onnx_tools.onnx_manipulations.change_input_type (onx, changes)

Changes the type of an input.

mlprodict.onnx_tools.onnx_manipulations.change_subgraph_io_type

mlprodict.onnx_tools.compress.compress_proto (proto, verbose = 0)

Compresses a ModelProto, FunctionProto or GraphProto. The function detects nodes whose outputs are used only once and fuses them with the node taking them as input.

mlprodict.onnx_tools.onnx_manipulations.insert_results_into_onnx (model, results, as_parameter = True, suffix = '_DBG', param_name = None, node_type = 'DEBUG', domain = 'DEBUG', domain_opset = 1)

Inserts results into an ONNX graph to produce an extended ONNX graph. It can be saved and looked into with a tool such as netron.

mlprodict.onnx_tools.onnx_manipulations.enumerate_model_node_outputs (model, add_node = False, order = False)

Enumerates all the nodes of a model.

mlprodict.onnx_tools.onnx_tools.enumerate_onnx_names (onx)

Enumerates all existing names in an ONNX graph (ModelProto, FunctionProto, GraphProto). The function is recursive.
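
A short sketch of the two enumerators above, run on a two-node model built with onnx.helper (the model is invented for the example):

    from onnx import TensorProto, helper
    from mlprodict.onnx_tools.onnx_manipulations import enumerate_model_node_outputs
    from mlprodict.onnx_tools.onnx_tools import enumerate_onnx_names

    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    nodes = [helper.make_node('Abs', ['X'], ['T']),
             helper.make_node('Identity', ['T'], ['Y'])]
    graph = helper.make_graph(nodes, 'tiny', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    # results produced by nodes, then every name appearing in the graph
    print(list(enumerate_model_node_outputs(model)))
    print(set(enumerate_onnx_names(model)))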

mlprodict.tools.code_helper.make_callable (fct, obj, code, gl, debug)

Creates a callable function able to cope with default values, as the combination of compile and exec does not seem able to take them into account.

mlprodict.onnx_tools.onnx_manipulations.onnx_function_to_model (onx, functions = None, type_info = None, as_function = False, shape_fct = None)

Converts an ONNX FunctionProto into a ModelProto. The function does not handle attributes yet.

mlprodict.onnx_tools.onnx_manipulations.onnx_inline_function (obj, protos = None, existing_names = None, verbose = 0, fLOG = None)

Inlines functions in an ONNX graph.

mlprodict.onnx_tools.onnx_manipulations.onnx_model_to_function (onx, name = None, domain = 'custom', opset_imports = None, doc_string = None, inputs2par = None)

Converts an ONNX model into a function. The returned function has no attributes.
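
A hedged sketch of onnx_model_to_function: it wraps a small model built with onnx.helper into a FunctionProto in a custom domain. Depending on the version, the function may return the FunctionProto alone or a tuple, so the code handles both cases; converting back with onnx_function_to_model above would additionally require type information (type_info):

    from onnx import TensorProto, helper
    from mlprodict.onnx_tools.onnx_manipulations import onnx_model_to_function

    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    graph = helper.make_graph([helper.make_node('Abs', ['X'], ['Y'])],
                              'tiny_abs', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    res = onnx_model_to_function(model, name='tiny_abs', domain='custom')
    fct = res[0] if isinstance(res, tuple) else res
    print(fct.name, list(fct.input), list(fct.output))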

mlprodict.onnx_tools.onnx_manipulations.onnx_rename_inputs_outputs (onx, rename)

Renames input or outputs names.

mlprodict.onnx_tools.onnx_manipulations.onnx_rename_names (model, strategy = 'simple', recursive = True, verbose = 0, fLOG = <built-in function print>, counts = None, replace = None, taken = None)

Renames all names except the inputs and outputs.

mlprodict.onnx_tools.onnx_manipulations.onnx_replace_functions (model, replace)

Replaces some of the functions in a model.

mlprodict.onnx_tools.model_checker.onnx_shaker (oinf, inputs, output_fct, n = 100, dtype = <class 'numpy.float32'>, force = 1)

Shakes an ONNX model. Explores the ranges for every prediction. Uses astype_range.

mlprodict.onnx_tools.optim.onnx_statistics (onnx_model, recursive = True, optim = True, node_type = False)

Computes statistics on ONNX models and extracts information about the model, such as the number of nodes.
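
A minimal sketch, assuming onnx_statistics returns a dictionary of counters; the two-node model is built with onnx.helper for the example:

    from onnx import TensorProto, helper
    from mlprodict.onnx_tools.optim import onnx_statistics

    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    nodes = [helper.make_node('Abs', ['X'], ['T']),
             helper.make_node('Identity', ['T'], ['Y'])]
    graph = helper.make_graph(nodes, 'tiny', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    # statistics such as the number of nodes, broken down per operator type
    for key, value in onnx_statistics(model, node_type=True).items():
        print(key, value)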

mlprodict.onnx_tools.onnx_manipulations.select_model_inputs_outputs (model, outputs = None, inputs = None, infer_shapes = False, overwrite = None, remove_unused = True, verbose = 0, fLOG = None)

Takes a model and changes its outputs.
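
A small sketch showing how a sub-model ending at an intermediate result could be extracted; the model and result names are invented for the example:

    from onnx import TensorProto, helper
    from mlprodict.onnx_tools.onnx_manipulations import select_model_inputs_outputs

    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    nodes = [helper.make_node('Abs', ['X'], ['T']),
             helper.make_node('Identity', ['T'], ['Y'])]
    graph = helper.make_graph(nodes, 'tiny', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    # keep only what is needed to compute the intermediate result 'T'
    sub = select_model_inputs_outputs(model, outputs=['T'])
    print([n.op_type for n in sub.graph.node],
          [o.name for o in sub.graph.output])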

mlprodict.testing.verify_code (source, exc = True)

Verifies python code.

mlprodict.testing.script_testing.verify_script (file_or_name, try_onnx = True, existing_loc = None, options)

Checks that models fitted in an example from scikit-learn documentation can be converted into ONNX.

Onnx Optimization#

The following functions reduce the number of ONNX operators in a graph while keeping the same results. The original graph is left unchanged.

mlprodict.onnx_tools.onnx_tools.ensure_topological_order (inputs, initializers, nodes)

Ensures and modifies the order of nodes to have a topological order (every node in the list can only be an input for a node later in this list). The function raises an exception if a cycle is detected.

mlprodict.onnx_tools.optim.onnx_remove_node (onnx_model, recursive = True, debug_info = None, options)

Removes as many nodes as possible without changing the outcome. It applies onnx_remove_node_unused, onnx_remove_node_identity, and onnx_remove_node_redundant.

mlprodict.onnx_tools.optim.onnx_optimisations (onnx_model, recursive = True, debug_info = None, options)

Calls several possible optimisations including onnx_remove_node.

mlprodict.onnx_tools.optim.onnx_remove_node_identity (onnx_model, recursive = True, debug_info = None, options)

Removes as many Identity nodes as possible. The function looks for Identity nodes in every node, and in subgraphs if recursive is True. Unless such a node directly connects one input to one output, it is removed and every other node gets its inputs or outputs renamed accordingly.
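
A hedged sketch: the model below contains an Identity node that only forwards an intermediate result to the output, which is exactly the pattern this optimizer is expected to fold away (model invented for the example):

    from onnx import TensorProto, helper
    from mlprodict.onnx_tools.optim import onnx_remove_node_identity

    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    nodes = [helper.make_node('Abs', ['X'], ['T']),
             helper.make_node('Identity', ['T'], ['Y'])]
    graph = helper.make_graph(nodes, 'tiny', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    optimized = onnx_remove_node_identity(model)
    print('before:', [n.op_type for n in model.graph.node])
    print('after :', [n.op_type for n in optimized.graph.node])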

mlprodict.onnx_tools.optim.onnx_remove_node_redundant (onnx_model, recursive = True, debug_info = None, max_hash_size = 1000, options)

Removes redundant parts of the graph. A redundant part is a set of nodes which take the same inputs and produce the same outputs. The function first looks into duplicated initializers, then into nodes taking the same inputs and sharing the same type and parameters.

mlprodict.onnx_tools.optim.onnx_remove_node_unused (onnx_model, recursive = True, debug_info = None, options)

Removes unused nodes of the graph. An unused node is not involved in the output computation.

Onnx Schemas#

mlprodict.onnx_tools.onnx2py_helper.get_onnx_schema (opname, domain = '', opset = None, load_function = False)

Returns the operator schema for a specific operator.
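
A short sketch, assuming the function returns an onnx OpSchema object for the requested operator and opset:

    from mlprodict.onnx_tools.onnx2py_helper import get_onnx_schema

    # schema of the Add operator from the main ONNX domain at opset 15
    schema = get_onnx_schema('Add', domain='', opset=15)
    print(schema.name, schema.since_version)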

Profiling#

mlprodict.tools.ort_wrapper.prepare_c_profiling (model_onnx, inputs, dest = None)

Prepares a model and its data to be profiled with the onnxruntime tools perftest or onnx_test_runner. It saves the model in folder dest and dumps the inputs in a subfolder.

Serialization#

mlprodict.onnx_tools.onnx2py_helper.from_bytes (b)

Retrieves an array from bytes, going through protobuf.

mlprodict.onnx_tools.onnx2py_helper.to_bytes (val)

Converts an array into protobuf and then into bytes.
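
A minimal round-trip sketch combining the two helpers above:

    import numpy
    from mlprodict.onnx_tools.onnx2py_helper import from_bytes, to_bytes

    arr = numpy.array([[0, 1], [2, 3]], dtype=numpy.float32)
    content = to_bytes(arr)         # numpy array -> protobuf -> bytes
    restored = from_bytes(content)  # bytes -> protobuf -> numpy array
    print(type(content), restored.shape, restored.dtype)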

Validation of scikit-learn models#

mlprodict.onnxrt.validate.enumerate_validated_operator_opsets (verbose = 0, opset_min = -1, opset_max = -1, check_runtime = True, debug = False, runtime = 'python', models = None, dump_folder = None, store_models = False, benchmark = False, skip_models = None, assume_finite = True, node_time = False, fLOG = <built-in function print>, filter_exp = None, versions = False, extended_list = False, time_kwargs = None, dump_all = False, n_features = None, skip_long_test = True, fail_bad_results = False, filter_scenario = None, time_kwargs_fact = None, time_limit = 4, n_jobs = None)

Tests all possible configurations for all possible operators and returns the results.

mlprodict.onnx_tools.model_checker.onnx_shaker (oinf, inputs, output_fct, n = 100, dtype = <class 'numpy.float32'>, force = 1)

Shakes an ONNX model. Explores the ranges for every prediction. Uses astype_range.

mlprodict.onnxrt.validate.side_by_side.side_by_side_by_values (sessions, args, inputs = None, return_results = False, kwargs)

Compares the execution of two sessions. It calls method OnnxInference.run with value intermediate=True and compares the results.
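
A hedged sketch, assuming OnnxInference from mlprodict.onnxrt can be used to build the sessions and that the inputs parameter takes the dictionary fed to OnnxInference.run (the model is invented for the example):

    import numpy
    from onnx import TensorProto, helper
    from mlprodict.onnxrt import OnnxInference
    from mlprodict.onnxrt.validate.side_by_side import side_by_side_by_values

    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    nodes = [helper.make_node('Abs', ['X'], ['T']),
             helper.make_node('Identity', ['T'], ['Y'])]
    graph = helper.make_graph(nodes, 'tiny', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    # two sessions running the same model, compared result by result
    sessions = [OnnxInference(model), OnnxInference(model)]
    rows = side_by_side_by_values(
        sessions, inputs={'X': numpy.random.randn(3, 2).astype(numpy.float32)})
    print(type(rows))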

mlprodict.onnxrt.validate.summary_report (df, add_cols = None, add_index = None)

Finalizes the results computed by function enumerate_validated_operator_opsets.

Testing#

mlprodict.testing.onnx_backend.enumerate_onnx_tests (series, fct_filter = None)

Collects tests from a subfolder of onnx/backend/test. Works as an enumerator so that processing can start without waiting for, or storing, too many of them.

mlprodict.testing.onnx_backend.OnnxBackendTest (self, folder)

Definition of a backend test. It starts with a folder which must contain one ONNX file and one subfolder per test to run with this model.

Visualization#

Many times I had to debug and was looking for a way to see a graph in a text editor. That's the goal of these functions, with the possibility later to only show a part of a graph.

text

mlprodict.plotting.text_plot.onnx_simple_text_plot (model, verbose = False, att_display = None, add_links = False, recursive = False, functions = True, raise_exc = True, sub_graphs_names = None, level = 1, indent = True)

Displays an ONNX graph into text.
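
A short sketch on a two-node model built with onnx.helper (invented for the example):

    from onnx import TensorProto, helper
    from mlprodict.plotting.text_plot import onnx_simple_text_plot

    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    nodes = [helper.make_node('Abs', ['X'], ['T']),
             helper.make_node('Identity', ['T'], ['Y'])]
    graph = helper.make_graph(nodes, 'tiny', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', 15)])

    print(onnx_simple_text_plot(model))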

mlprodict.plotting.text_plot.onnx_text_plot (model_onnx, recursive = False, graph_type = 'basic', grid = 5, distance = 5)

Uses onnx2bigraph to convert the ONNX graph into text.

mlprodict.plotting.text_plot.onnx_text_plot_tree (node)

Gives a textual representation of a tree ensemble.

drawings

mlprodict.plotting.plotting_onnx.plot_onnx (onx, ax = None, dpi = 300, temp_dot = None, temp_img = None, show = False)

Plots an ONNX graph into a matplotlib graph.

notebook

onnxview, see also Introduction to a numpy API for ONNX: FunctionTransformer.

benchmark

mlprodict.plotting.plot_validate_benchmark

mlprodict.plotting.plotting_benchmark.plot_benchmark_metrics (metric, xlabel = None, ylabel = None, middle = 1.0, transpose = False, ax = None, cbar_kw = None, cbarlabel = None, valfmt = '{x:.2f}x')

Plots a heatmap which represents a benchmark. See example below.

notebook

mlprodict.nb_helper.onnxview (graph, recursive = False, local = False, add_rt_shapes = False, runtime = ‘python’, size = None, html_size = None)

Displays an ONNX graph into a notebook.

Others#

scikit-learn#

mlprodict.grammar.grammar_sklearn.sklearn2graph (model, output_names = None, kwargs)

Converts any kind of scikit-learn model into a grammar model.

Versions#

mlprodict.get_ir_version (opv)

Returns the corresponding IR_VERSION based on the selected opset. See ONNX Version.
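
A minimal sketch: when a model is built for a given opset, older runtimes may also require lowering ir_version to the value returned by get_ir_version. It uses __max_supported_opset__ listed just below; the tiny model is invented for the example:

    from onnx import TensorProto, helper
    from mlprodict import __max_supported_opset__, get_ir_version

    opset = __max_supported_opset__
    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [None, 2])
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, 2])
    graph = helper.make_graph([helper.make_node('Abs', ['X'], ['Y'])],
                              'tiny_abs', [X], [Y])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid('', opset)])

    model.ir_version = get_ir_version(opset)
    print(opset, model.ir_version)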

mlprodict.__max_supported_opset__

mlprodict.__max_supported_opsets__

skl2onnx#

mlprodict.onnx_tools.exports.skl2onnx_helper.add_onnx_graph (scope, operator, container, onx)

Adds a whole ONNX graph to an existing one following the skl2onnx API, assuming this ONNX graph implements an operator.

Type conversion#

You should look into ONNX mappings.

mlprodict.onnx_conv.convert.guess_initial_types (X, initial_types)

Guesses initial types from an array or a dataframe.

mlprodict.onnx_tools.onnx2py_helper.guess_numpy_type_from_string (name)

Converts a string (such as ‘float’) into a numpy dtype.

mlprodict.onnx_tools.onnx2py_helper.guess_numpy_type_from_dtype (dt)

Converts a dtype (such as dtype('float32')) into a numpy dtype.

mlprodict.onnx_tools.onnx2py_helper.guess_proto_dtype (dtype)

Guesses the ONNX dtype given a numpy dtype.

mlprodict.onnx_tools.onnx2py_helper.guess_proto_dtype_name (onnx_dtype)

Returns a string equivalent to onnx_dtype.

mlprodict.onnx_tools.onnx2py_helper.guess_dtype (proto_type)

Converts a proto type into a numpy type.

In sklearn-onnx:

  • skl2onnx.algebra.type_helper.guess_initial_types

  • skl2onnx.common.data_types.guess_data_type

  • skl2onnx.common.data_types.guess_numpy_type

  • skl2onnx.common.data_types.guess_proto_type

  • skl2onnx.common.data_types.guess_tensor_type

  • skl2onnx.common.data_types._guess_type_proto

  • skl2onnx.common.data_types._guess_numpy_type

The following example summarizes all the possibilities.

<<<

import numpy
from onnx import TensorProto

from skl2onnx.algebra.type_helper import guess_initial_types
from skl2onnx.common.data_types import guess_data_type
from skl2onnx.common.data_types import guess_numpy_type
from skl2onnx.common.data_types import guess_proto_type
from skl2onnx.common.data_types import guess_tensor_type
from skl2onnx.common.data_types import _guess_type_proto
from skl2onnx.common.data_types import _guess_numpy_type
from skl2onnx.common.data_types import DoubleTensorType

from mlprodict.onnx_conv.convert import guess_initial_types as guess_initial_types_mlprodict
from mlprodict.onnx_tools.onnx2py_helper import guess_numpy_type_from_string
from mlprodict.onnx_tools.onnx2py_helper import guess_numpy_type_from_dtype
from mlprodict.onnx_tools.onnx2py_helper import guess_proto_dtype
from mlprodict.onnx_tools.onnx2py_helper import guess_proto_dtype_name
from mlprodict.onnx_tools.onnx2py_helper import guess_dtype


# wrappers fix the extra arguments so that every function below
# can be called with a single value
def guess_initial_types0(t):
    return guess_initial_types(numpy.array([[0, 1]], dtype=t), None)


def guess_initial_types1(t):
    return guess_initial_types(None, [('X', t)])


def guess_initial_types_mlprodict0(t):
    return guess_initial_types_mlprodict(numpy.array([[0, 1]], dtype=t), None)


def guess_initial_types_mlprodict1(t):
    return guess_initial_types_mlprodict(None, [('X', t)])


def _guess_type_proto1(t):
    return _guess_type_proto(t, [None, 4])


def _guess_numpy_type1(t):
    return _guess_numpy_type(t, [None, 4])


# every conversion function to compare
fcts = [guess_initial_types0, guess_initial_types1,
        guess_data_type, guess_numpy_type,
        guess_proto_type, guess_tensor_type,
        _guess_type_proto1,
        _guess_numpy_type1,
        guess_initial_types_mlprodict0,
        guess_initial_types_mlprodict1,
        guess_numpy_type_from_string,
        guess_numpy_type_from_dtype,
        guess_proto_dtype_name, guess_dtype]

# the same double type expressed in six different ways
values = [numpy.float64, float, 'double', 'tensor(double)',
          DoubleTensorType([None, 4]),
          TensorProto.DOUBLE]

print("---SUCCESS------------")
errors = []
# apply every function to every value: successes are printed immediately,
# failures are collected and displayed at the end
for f in fcts:
    print("")
    for v in values:
        try:
            r = f(v)
            print("%s(%r) -> %r" % (f.__name__, v, r))
        except Exception as e:
            errors.append("%s(%r) -> %r" % (f.__name__, v, e))
    errors.append("")

print()
print('---ERRORS-------------')
print()
for e in errors:
    print(e)

>>>

    ---SUCCESS------------
    
    guess_initial_types0(<class 'numpy.float64'>) -> [('X', DoubleTensorType(shape=[None, 2]))]
    guess_initial_types0(<class 'float'>) -> [('X', DoubleTensorType(shape=[None, 2]))]
    guess_initial_types0('double') -> [('X', DoubleTensorType(shape=[None, 2]))]
    
    guess_initial_types1(<class 'numpy.float64'>) -> [('X', <class 'numpy.float64'>)]
    guess_initial_types1(<class 'float'>) -> [('X', <class 'float'>)]
    guess_initial_types1('double') -> [('X', 'double')]
    guess_initial_types1('tensor(double)') -> [('X', 'tensor(double)')]
    guess_initial_types1(DoubleTensorType(shape=[None, 4])) -> [('X', DoubleTensorType(shape=[None, 4]))]
    guess_initial_types1(11) -> [('X', 11)]
    
    guess_data_type('tensor(double)') -> DoubleTensorType(shape=[])
    
    guess_numpy_type(<class 'numpy.float64'>) -> <class 'numpy.float64'>
    guess_numpy_type(DoubleTensorType(shape=[None, 4])) -> <class 'numpy.float64'>
    
    guess_proto_type(DoubleTensorType(shape=[None, 4])) -> 11
    
    guess_tensor_type(DoubleTensorType(shape=[None, 4])) -> DoubleTensorType(shape=[])
    
    _guess_type_proto1(11) -> DoubleTensorType(shape=[None, 4])
    
    _guess_numpy_type1(<class 'numpy.float64'>) -> DoubleTensorType(shape=[None, 4])
    
    guess_initial_types_mlprodict0(<class 'numpy.float64'>) -> [('X', DoubleTensorType(shape=[None, 2]))]
    guess_initial_types_mlprodict0(<class 'float'>) -> [('X', DoubleTensorType(shape=[None, 2]))]
    guess_initial_types_mlprodict0('double') -> [('X', DoubleTensorType(shape=[None, 2]))]
    
    guess_initial_types_mlprodict1(<class 'numpy.float64'>) -> [('X', <class 'numpy.float64'>)]
    guess_initial_types_mlprodict1(<class 'float'>) -> [('X', <class 'float'>)]
    guess_initial_types_mlprodict1('double') -> [('X', 'double')]
    guess_initial_types_mlprodict1('tensor(double)') -> [('X', 'tensor(double)')]
    guess_initial_types_mlprodict1(DoubleTensorType(shape=[None, 4])) -> [('X', DoubleTensorType(shape=[None, 4]))]
    guess_initial_types_mlprodict1(11) -> [('X', 11)]
    
    guess_numpy_type_from_string('double') -> <class 'numpy.float64'>
    
    guess_numpy_type_from_dtype(<class 'numpy.float64'>) -> <class 'numpy.float64'>
    guess_numpy_type_from_dtype(<class 'float'>) -> <class 'numpy.float64'>
    guess_numpy_type_from_dtype('double') -> <class 'numpy.float64'>
    
    guess_proto_dtype_name(11) -> 'TensorProto.DOUBLE'
    
    guess_dtype(11) -> <class 'numpy.float64'>
    
    ---ERRORS-------------
    
    guess_initial_types0('tensor(double)') -> TypeError("data type 'tensor(double)' not understood")
    guess_initial_types0(DoubleTensorType(shape=[None, 4])) -> TypeError("Cannot interpret 'DoubleTensorType(shape=[None, 4])' as a data type")
    guess_initial_types0(11) -> TypeError("Cannot interpret '11' as a data type")
    
    
    guess_data_type(<class 'numpy.float64'>) -> NotImplementedError("Unsupported data_type <attribute 'dtype' of 'numpy.generic' objects> (type=<class 'getset_descriptor'>). You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    guess_data_type(<class 'float'>) -> TypeError("Type <class 'type'> cannot be converted into a DataType. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    guess_data_type('double') -> NotImplementedError("Unsupported data_type 'double'. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    guess_data_type(DoubleTensorType(shape=[None, 4])) -> TypeError("Type <class 'onnxconverter_common.data_types.DoubleTensorType'> cannot be converted into a DataType. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    guess_data_type(11) -> TypeError("Type <class 'int'> cannot be converted into a DataType. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    
    guess_numpy_type(<class 'float'>) -> NotImplementedError("Unsupported data_type '<class 'float'>'.")
    guess_numpy_type('double') -> NotImplementedError("Unsupported data_type 'double'.")
    guess_numpy_type('tensor(double)') -> NotImplementedError("Unsupported data_type 'tensor(double)'.")
    guess_numpy_type(11) -> NotImplementedError("Unsupported data_type '11'.")
    
    guess_proto_type(<class 'numpy.float64'>) -> NotImplementedError("Unsupported data_type '<class 'numpy.float64'>'.")
    guess_proto_type(<class 'float'>) -> NotImplementedError("Unsupported data_type '<class 'float'>'.")
    guess_proto_type('double') -> NotImplementedError("Unsupported data_type 'double'.")
    guess_proto_type('tensor(double)') -> NotImplementedError("Unsupported data_type 'tensor(double)'.")
    guess_proto_type(11) -> NotImplementedError("Unsupported data_type '11'.")
    
    guess_tensor_type(<class 'numpy.float64'>) -> TypeError("data_type is not a tensor type but '<class 'type'>'.")
    guess_tensor_type(<class 'float'>) -> TypeError("data_type is not a tensor type but '<class 'type'>'.")
    guess_tensor_type('double') -> TypeError("data_type is not a tensor type but '<class 'str'>'.")
    guess_tensor_type('tensor(double)') -> TypeError("data_type is not a tensor type but '<class 'str'>'.")
    guess_tensor_type(11) -> TypeError("data_type is not a tensor type but '<class 'int'>'.")
    
    _guess_type_proto1(<class 'numpy.float64'>) -> NotImplementedError("Unsupported data_type '<class 'numpy.float64'>'. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    _guess_type_proto1(<class 'float'>) -> NotImplementedError("Unsupported data_type '<class 'float'>'. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    _guess_type_proto1('double') -> NotImplementedError("Unsupported data_type 'double'. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    _guess_type_proto1('tensor(double)') -> NotImplementedError("Unsupported data_type 'tensor(double)'. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    _guess_type_proto1(DoubleTensorType(shape=[None, 4])) -> NotImplementedError("Unsupported data_type 'DoubleTensorType(shape=[None, 4])'. You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    
    _guess_numpy_type1(<class 'float'>) -> NotImplementedError("Unsupported data_type <class 'float'> (type=<class 'type'>). You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    _guess_numpy_type1('double') -> NotImplementedError("Unsupported data_type 'double' (type=<class 'str'>). You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    _guess_numpy_type1('tensor(double)') -> NotImplementedError("Unsupported data_type 'tensor(double)' (type=<class 'str'>). You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    _guess_numpy_type1(DoubleTensorType(shape=[None, 4])) -> NotImplementedError("Unsupported data_type DoubleTensorType(shape=[None, 4]) (type=<class 'onnxconverter_common.data_types.DoubleTensorType'>). You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    _guess_numpy_type1(11) -> NotImplementedError("Unsupported data_type 11 (type=<class 'int'>). You may raise an issue at https://github.com/onnx/sklearn-onnx/issues.")
    
    guess_initial_types_mlprodict0('tensor(double)') -> TypeError("data type 'tensor(double)' not understood")
    guess_initial_types_mlprodict0(DoubleTensorType(shape=[None, 4])) -> TypeError("Cannot interpret 'DoubleTensorType(shape=[None, 4])' as a data type")
    guess_initial_types_mlprodict0(11) -> TypeError("Cannot interpret '11' as a data type")
    
    
    guess_numpy_type_from_string(<class 'numpy.float64'>) -> ValueError("Unable to guess numpy dtype from <class 'numpy.float64'>.")
    guess_numpy_type_from_string(<class 'float'>) -> ValueError("Unable to guess numpy dtype from <class 'float'>.")
    guess_numpy_type_from_string('tensor(double)') -> ValueError("Unable to guess numpy dtype from 'tensor(double)'.")
    guess_numpy_type_from_string(DoubleTensorType(shape=[None, 4])) -> ValueError('Unable to guess numpy dtype from DoubleTensorType(shape=[None, 4]).')
    guess_numpy_type_from_string(11) -> ValueError('Unable to guess numpy dtype from 11.')
    
    guess_numpy_type_from_dtype('tensor(double)') -> ValueError("Unable to guess numpy dtype from 'tensor(double)'.")
    guess_numpy_type_from_dtype(DoubleTensorType(shape=[None, 4])) -> ValueError('Unable to guess numpy dtype from DoubleTensorType(shape=[None, 4]).')
    guess_numpy_type_from_dtype(11) -> ValueError('Unable to guess numpy dtype from 11.')
    
    guess_proto_dtype_name(<class 'numpy.float64'>) -> RuntimeError("Unable to guess type for dtype=<class 'numpy.float64'>.")
    guess_proto_dtype_name(<class 'float'>) -> RuntimeError("Unable to guess type for dtype=<class 'float'>.")
    guess_proto_dtype_name('double') -> RuntimeError('Unable to guess type for dtype=double.')
    guess_proto_dtype_name('tensor(double)') -> RuntimeError('Unable to guess type for dtype=tensor(double).')
    guess_proto_dtype_name(DoubleTensorType(shape=[None, 4])) -> RuntimeError('Unable to guess type for dtype=DoubleTensorType(shape=[None, 4]).')
    
    guess_dtype(<class 'numpy.float64'>) -> ValueError("Unable to convert proto_type <class 'numpy.float64'> to numpy type.")
    guess_dtype(<class 'float'>) -> ValueError("Unable to convert proto_type <class 'float'> to numpy type.")
    guess_dtype('double') -> ValueError('Unable to convert proto_type double to numpy type.')
    guess_dtype('tensor(double)') -> ValueError('Unable to convert proto_type tensor(double) to numpy type.')
    guess_dtype(DoubleTensorType(shape=[None, 4])) -> ValueError('Unable to convert proto_type DoubleTensorType(shape=[None, 4]) to numpy type.')