# module onnxrt.ops_cpu.op_matmul

## Short summary
module mlprodict.onnxrt.ops_cpu.op_matmul
Runtime operator.
## Classes

| class | truncated documentation |
|---|---|
| MatMul | Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html … |
## Properties

| property | truncated documentation |
|---|---|
|  | Returns the list of arguments as well as the list of parameters with the default values (close to the signature). … |
|  | Returns the list of modified parameters. |
|  | Returns the list of optional arguments. |
|  | Returns the list of optional arguments. |
|  | Returns all parameters in a dictionary. |
## Methods

| method | truncated documentation |
|---|---|
## Documentation
Runtime operator.
- class mlprodict.onnxrt.ops_cpu.op_matmul.MatMul(onnx_node, desc=None, **options)
  Bases: OpRunBinaryNum
Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html
Inputs
- A (heterogeneous) T: N-dimensional matrix A
- B (heterogeneous) T: N-dimensional matrix B

Outputs
- Y (heterogeneous) T: Matrix multiply results from A * B
Type Constraints
- T: tensor(float16), tensor(float), tensor(double), tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(bfloat16): Constrain input and output types to float/int tensors.
Version
Onnx name: MatMul
This version of the operator has been available since version 13.
Runtime implementation: MatMul
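Since MatMul is specified to behave like numpy.matmul, its shape rules can be illustrated with NumPy alone, independent of mlprodict:

```python
import numpy as np

# 2-D inputs: plain matrix multiplication.
a = np.arange(6, dtype=np.float32).reshape(2, 3)
b = np.arange(12, dtype=np.float32).reshape(3, 4)
y = np.matmul(a, b)
print(y.shape)  # (2, 4)

# Higher-dimensional inputs are treated as stacks of matrices;
# the leading (batch) dimensions broadcast.
a3 = np.random.rand(5, 2, 3).astype(np.float32)
b3 = np.random.rand(5, 3, 4).astype(np.float32)
y3 = np.matmul(a3, b3)
print(y3.shape)  # (5, 2, 4)

# A 1-D second argument is promoted by appending a 1 to its shape;
# the appended axis is removed from the result.
v = np.ones(3, dtype=np.float32)
print(np.matmul(a, v).shape)  # (2,)
```

These are exactly the cases the ONNX operator must cover for N-dimensional inputs A and B.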
- __init__(onnx_node, desc=None, **options)
- _run(a, b, attributes=None, verbose=0, fLOG=None)
Should be overwritten.
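The docstring above is inherited from the base class; the MatMul override necessarily computes the matrix product. A minimal sketch of what such a `_run` method amounts to (the class name `MatMulSketch` is illustrative, not the actual mlprodict class) might be:

```python
import numpy as np

class MatMulSketch:
    """Illustrative stand-in for the runtime operator, assuming the
    runtime returns outputs as a tuple."""

    def _run(self, a, b, attributes=None, verbose=0, fLOG=None):
        # ONNX MatMul has no attributes; the computation is a direct
        # call to numpy.matmul on the two inputs.
        return (np.matmul(a, b),)

op = MatMulSketch()
(y,) = op._run(np.eye(2, dtype=np.float32),
               np.array([[1., 2.], [3., 4.]], dtype=np.float32))
print(y)
```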
- to_python(inputs)
Returns a python code equivalent to this operator.
- Parameters:
inputs – inputs name
- Returns:
imports, python code, both as strings
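To make the return contract concrete, here is a hypothetical function with the same shape of output, a pair of strings (imports, code); the exact strings mlprodict generates may differ:

```python
def to_python_sketch(inputs):
    # inputs is a sequence of input names, e.g. ("A", "B").
    imports = "import numpy"
    code = "return numpy.matmul({0}, {1})".format(inputs[0], inputs[1])
    return imports, code

imp, code = to_python_sketch(("A", "B"))
print(imp)   # import numpy
print(code)  # return numpy.matmul(A, B)
```

The caller is expected to place the imports string at the top of the generated module and the code string inside the generated function body.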