module onnxrt.ops_cpu.op_matmul#

Inheritance diagram of mlprodict.onnxrt.ops_cpu.op_matmul

Short summary#

module mlprodict.onnxrt.ops_cpu.op_matmul

Runtime operator.

source on GitHub

Classes#

class

truncated documentation

MatMul

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html

Properties#

property

truncated documentation

args_default

Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …

args_default_modified

Returns the list of modified parameters.

args_mandatory

Returns the list of mandatory arguments.

args_optional

Returns the list of optional arguments.

atts_value

Returns all parameters in a dictionary.

Methods#

method

truncated documentation

__init__

_run

to_python

Documentation#

Runtime operator.

source on GitHub

class mlprodict.onnxrt.ops_cpu.op_matmul.MatMul(onnx_node, desc=None, **options)#

Bases: OpRunBinaryNum

Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html

Inputs

  • A (heterogeneous) T: N-dimensional matrix A

  • B (heterogeneous) T: N-dimensional matrix B

Outputs

  • Y (heterogeneous) T: Matrix multiply results from A * B

Type Constraints

  • T tensor(float16), tensor(float), tensor(double), tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(bfloat16): Constrain input and output types to float/int tensors.
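Since MatMul follows numpy.matmul semantics, inputs with more than two dimensions are treated as stacks of matrices: the leading dimensions are broadcast and the product is taken over the last two axes. A minimal illustration with numpy:

```python
import numpy as np

# 2-D case: plain matrix product
a = np.ones((2, 3), dtype=np.float32)
b = np.ones((3, 4), dtype=np.float32)
y2d = np.matmul(a, b)
# y2d has shape (2, 4)

# N-dimensional case: leading dimensions are broadcast,
# the last two axes are multiplied as matrices
a = np.ones((5, 2, 3), dtype=np.float32)
b = np.ones((5, 3, 4), dtype=np.float32)
ynd = np.matmul(a, b)
# ynd has shape (5, 2, 4)
```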

Version

Onnx name: MatMul

This version of the operator has been available since version 13.

Runtime implementation: MatMul

__init__(onnx_node, desc=None, **options)#
_run(a, b, attributes=None, verbose=0, fLOG=None)#

Should be overwritten.

source on GitHub

to_python(inputs)#

Returns a python code equivalent to this operator.

Parameters:

inputs – inputs name

Returns:

imports, python code, both as strings

source on GitHub
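To give an idea of the `(imports, code)` pair such a method produces, here is a hypothetical sketch (the names `to_python_sketch` and the generated line are illustrative, not the library's actual output):

```python
def to_python_sketch(inputs):
    # Hypothetical sketch of a to_python-style method: given the
    # operator's input names, return two strings, the imports and
    # the python code reproducing the operator with plain numpy.
    code = "return numpy.matmul(%s, %s)" % (inputs[0], inputs[1])
    return "import numpy", code
```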