module onnxrt.ops_cpu.op_conv#

Inheritance diagram of mlprodict.onnxrt.ops_cpu.op_conv

Short summary#

module mlprodict.onnxrt.ops_cpu.op_conv

Runtime operator.

source on GitHub

Classes#

class

truncated documentation

Conv

The convolution operator consumes an input tensor and a filter, and computes the output. …

Properties#

property

truncated documentation

args_default

Returns the list of arguments as well as the list of parameters with the default values (close to the signature). …

args_default_modified

Returns the list of modified parameters.

args_mandatory

Returns the list of mandatory arguments.

args_optional

Returns the list of optional arguments.

atts_value

Returns all parameters in a dictionary.

Methods#

method

truncated documentation

__init__

_infer_shapes

_infer_sizes

_infer_types

_init

_run

Documentation#

Runtime operator.

source on GitHub

class mlprodict.onnxrt.ops_cpu.op_conv.Conv(onnx_node, desc=None, **options)#

Bases: OpRun

The convolution operator consumes an input tensor and a filter, and computes the output.

Attributes

  • auto_pad: auto_pad must be either NOTSET, SAME_UPPER, SAME_LOWER or VALID. The default value is NOTSET, which means explicit padding is used. SAME_UPPER or SAME_LOWER mean pad the input so that output_shape[i] = ceil(input_shape[i] / strides[i]) for each axis i. The padding is split between the two sides equally or almost equally (depending on whether it is even or odd). In case the total padding is an odd number, the extra padding is added at the end for SAME_UPPER and at the beginning for SAME_LOWER. Default value is 'NOTSET' (STRING)

  • dilations: dilation value along each spatial axis of the filter. If not present, the dilation defaults to 1 along each spatial axis. default value cannot be automatically retrieved (INTS)

  • group: number of groups input channels and output channels are divided into. Default value is 1 (INT)

  • kernel_shape: The shape of the convolution kernel. If not present, should be inferred from input W. default value cannot be automatically retrieved (INTS)

  • pads: Padding for the beginning and ending along each spatial axis; it can take any value greater than or equal to 0. The values represent the number of pixels added to the beginning and end part of the corresponding axis. The pads format should be as follows: [x1_begin, x2_begin…x1_end, x2_end,…], where xi_begin is the number of pixels added at the beginning of axis i and xi_end the number of pixels added at the end of axis i. This attribute cannot be used simultaneously with the auto_pad attribute. If not present, the padding defaults to 0 along the start and end of each spatial axis. default value cannot be automatically retrieved (INTS)

  • strides: Stride along each spatial axis. If not present, the stride defaults to 1 along each spatial axis. default value cannot be automatically retrieved (INTS)
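The SAME_UPPER / SAME_LOWER rule above can be made concrete with a small sketch. The helper below is hypothetical (not part of mlprodict): it computes, for one spatial axis, the (begin, end) padding pair that yields output_size = ceil(input_size / stride), placing the extra pixel of an odd total padding at the end for SAME_UPPER and at the beginning for SAME_LOWER.

```python
import math


def same_pads(in_size, kernel, stride, dilation=1, upper=True):
    """Split the total padding for auto_pad=SAME_UPPER/SAME_LOWER on one axis.

    Hypothetical illustration of the rule stated in the attribute list:
    the output size is ceil(in_size / stride), and an odd total padding puts
    its extra pixel at the end (SAME_UPPER) or at the beginning (SAME_LOWER).
    """
    out_size = math.ceil(in_size / stride)
    # Effective kernel extent once dilation is taken into account.
    eff_kernel = dilation * (kernel - 1) + 1
    total = max(0, (out_size - 1) * stride + eff_kernel - in_size)
    small, big = total // 2, total - total // 2
    # SAME_UPPER puts the larger half at the end, SAME_LOWER at the beginning.
    return (small, big) if upper else (big, small)
```

For example, a 5-wide axis with a 2-wide kernel and stride 2 needs one padding pixel in total: `same_pads(5, 2, 2)` puts it at the end, `same_pads(5, 2, 2, upper=False)` at the beginning.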

Inputs

Between 2 and 3 inputs.

  • X (heterogeneous) T: Input data tensor from the previous layer; has size (N x C x H x W), where N is the batch size, C is the number of channels, and H and W are the height and width. Note that this is for the 2D image case. Otherwise the size is (N x C x D1 x D2 … x Dn). Optionally, if dimension denotation is in effect, the operation expects the input data tensor to arrive with the dimension denotation of [DATA_BATCH, DATA_CHANNEL, DATA_FEATURE, DATA_FEATURE …].

  • W (heterogeneous) T: The weight tensor that will be used in the convolutions; has size (M x C/group x kH x kW), where C is the number of channels, kH and kW are the height and width of the kernel, and M is the number of feature maps. For more than 2 dimensions, the kernel shape will be (M x C/group x k1 x k2 x … x kn), where (k1 x k2 x … kn) is the dimension of the kernel. Optionally, if dimension denotation is in effect, the operation expects the weight tensor to arrive with the dimension denotation of [FILTER_OUT_CHANNEL, FILTER_IN_CHANNEL, FILTER_SPATIAL, FILTER_SPATIAL …]. Assuming zero-based indices for the shape array, X.shape[1] == (W.shape[1] * group) == C and W.shape[0] mod G == 0. Or in other words, FILTER_IN_CHANNEL multiplied by the number of groups should be equal to DATA_CHANNEL, and the number of feature maps M should be a multiple of the number of groups G.

  • B (optional, heterogeneous) T: Optional 1D bias to be added to the convolution; has size M.
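The shape constraints above can be checked against a naive reference implementation. The sketch below is an illustration only, not the mlprodict kernel: a plain NumPy 2D convolution that follows the X (N x C x H x W), W (M x C/group x kH x kW), B (M,) conventions, including grouped channels and dilations.

```python
import numpy as np


def conv2d(X, W, B=None, group=1, strides=(1, 1), pads=(0, 0, 0, 0),
           dilations=(1, 1)):
    """Naive reference 2D convolution following the Conv shape rules.

    Illustration only: X is (N, C, H, W_), W is (M, C // group, kH, kW),
    B is (M,). pads is (top, left, bottom, right).
    """
    N, C, H, Wd = X.shape
    M, Cg, kH, kW = W.shape
    # Shape constraints from the operator documentation.
    assert C == Cg * group and M % group == 0
    pt, pl, pb, pr = pads
    Xp = np.pad(X, ((0, 0), (0, 0), (pt, pb), (pl, pr)))
    sh, sw = strides
    dh, dw = dilations
    oh = (H + pt + pb - dh * (kH - 1) - 1) // sh + 1
    ow = (Wd + pl + pr - dw * (kW - 1) - 1) // sw + 1
    Y = np.zeros((N, M, oh, ow), dtype=X.dtype)
    mg = M // group  # feature maps per group
    for m in range(M):
        g = m // mg  # group this feature map belongs to
        for i in range(oh):
            for j in range(ow):
                # Dilated receptive field for output position (i, j).
                patch = Xp[:, g * Cg:(g + 1) * Cg,
                           i * sh:i * sh + dh * (kH - 1) + 1:dh,
                           j * sw:j * sw + dw * (kW - 1) + 1:dw]
                Y[:, m, i, j] = (patch * W[m]).sum(axis=(1, 2, 3))
    if B is not None:
        Y += B.reshape(1, M, 1, 1)
    return Y
```

With group=2, each feature map only reads C/group input channels, which is exactly the X.shape[1] == W.shape[1] * group constraint stated above.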

Outputs

  • Y (heterogeneous) T: Output data tensor that contains the result of the convolution. The output dimensions are functions of the kernel size, stride size, and pad lengths.
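For explicit padding (auto_pad left at NOTSET), the output dimensions follow the standard ONNX formula along each spatial axis; the one-line helper below (a hypothetical name, written here for illustration) states it as code.

```python
def conv_out_dim(in_size, kernel, stride=1, pad_begin=0, pad_end=0,
                 dilation=1):
    """Output size along one spatial axis with explicit padding.

    Standard ONNX convolution formula, written out as an illustration:
    floor((in + pads - dilated_kernel) / stride) + 1.
    """
    eff = dilation * (kernel - 1) + 1  # effective (dilated) kernel extent
    return (in_size + pad_begin + pad_end - eff) // stride + 1
```

For instance, a 5-wide axis, a 3-wide kernel, stride 2 and one pixel of padding on each side give an output width of 3.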

Type Constraints

  • T tensor(float16), tensor(float), tensor(double): Constrain input and output types to float tensors.

Version

Onnx name: Conv

This version of the operator has been available since version 11.

Runtime implementation: Conv

__init__(onnx_node, desc=None, **options)#
_infer_shapes(X, W, B=None)#

Should be overwritten.

source on GitHub

_infer_sizes(X, W, B=None)#

Should be overwritten.

source on GitHub

_infer_types(X, W, B=None)#

Should be overwritten.

source on GitHub

_init()#
_run(X, W, B=None, attributes=None, verbose=0, fLOG=None)#

Should be overwritten.

source on GitHub