# Classes

## Summary

| class | class parent | truncated documentation |
|---|---|---|
| Abs | | Absolute takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the absolute is, y = … |
| Add | | Performs element-wise binary addition (with Numpy-style broadcasting support). This operator supports **multidirectional … |
| And | | Returns the tensor resulting from performing the and logical operation elementwise on the input tensors A and … |
| ArgMax | | Computes the indices of the max elements of the input tensor’s element along the provided axis. The resulting … |
| ArgMax | | Computes the indices of the max elements of the input tensor’s element along the provided axis. The resulting … |
| ArgMin | | Computes the indices of the min elements of the input tensor’s element along the provided axis. The resulting … |
| ArgMin | | Computes the indices of the min elements of the input tensor’s element along the provided axis. The resulting … |
| ArrayFeatureExtractor (ai.onnx.ml) | | Select elements of the input tensor based on the … |
| | | Mocks an array without changing the data it receives. The notebook Time processing for every ONNX node in a graph illustrates the weaknesses … |
| Atan | | Calculates the arctangent (inverse of tangent) of the given input tensor, element-wise. |
| | | Extends the API to automatically look for exporters. |
| | | Extends the API to automatically look for exporters. |
| | | Base class to |
| BatchNormalization | | Carries out batch normalization as described in the paper https://arxiv.org/abs/1502.03167. … |
| Binarizer (ai.onnx.ml) | | Maps the values of the input tensor to either 0 or 1, element-wise, based … |
| CDist (mlprodict) | | |
| | | Defines a schema for operators added in this package such as |
| Cast | | The operator casts the elements of a given input tensor to a data type specified by the ‘to’ argument and returns … |
| Ceil | | Ceil takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the ceil is, y = ceil(x), … |
| Celu | | Continuously Differentiable Exponential Linear Units: Perform the linear unit element-wise on the input tensor … |
| Clip | | Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. … |
| Clip | | Clip operator limits the given input within an interval. The interval is specified by the inputs ‘min’ and ‘max’. … |
| Clip | | Clip operator limits the given input within an interval. The interval is specified with arguments ‘min’ and … |
| | | Defines a visitor which walks through the syntax tree of the code. |
| | | Visits the code, implements verification rules. |
| | | Class which converts a Python function into something else. It must implement methods |
| | | Raised when a compilation error was detected. |
| Concat | | Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for … |
| ConstantOfShape | | Generate a tensor with given value and shape. |
| Constant | | This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, … |
| Constant | | This operator produces a constant tensor. Exactly one of the provided attributes, either value, sparse_value, … |
| Conv | | The convolution operator consumes an input tensor and a filter, and computes the output. |
| | | Implements float runtime for operator Conv. The code is inspired by conv.cc … |
| | | Implements float runtime for operator Conv. The code is inspired by conv.cc … |
| ConvTranspose | | The convolution transpose operator consumes an input tensor and a filter, and computes the … |
| | | Implements float runtime for operator Conv. The code is inspired by conv_transpose.cc … |
| | | Implements float runtime for operator Conv. The code is inspired by conv_transpose.cc … |
| CumSum | | Performs cumulative sum of the input elements along the given axis. By default, it will do the sum inclusively … |
| | | Wraps a scoring function into a transformer. Function @see fn register_scorers must be called to register the converter … |
| DequantizeLinear | | The linear dequantization operator. It consumes a quantized tensor, a scale, and a zero … |
| DictVectorizer (ai.onnx.ml) | | Uses an index mapping to convert a dictionary to an array. Given … |
| | | One dimension of a shape. |
| Div | | Performs element-wise binary division (with Numpy-style broadcasting support). This operator supports **multidirectional … |
| Dropout | | Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional … |
| Dropout | | Dropout takes an input floating-point tensor, an optional input ratio (floating-point scalar) and an optional … |
| Dropout | | Dropout takes one input data (Tensor&lt;float&gt;) and produces two Tensor outputs, output (Tensor&lt;float&gt;) and … |
| Einsum | | An einsum of the form |
| Equal | | Returns the tensor resulting from performing the equal logical operation elementwise on the input tensors … |
| Erf | | Computes the error function of the given input tensor element-wise. |
| Exp | | Calculates the exponential of the given input tensor, element-wise. |
| | | Expected failure. |
| EyeLike | | Generate a 2D tensor (matrix) with ones on the diagonal and zeros everywhere else. Only 2D tensors are … |
| | | Very similar to |
| Flatten | | Flattens the input tensor into a 2D matrix. If input tensor has shape (d_0, d_1, . |
| | | Raised when a float is out of range and cannot be converted into a float32. |
| Floor | | Floor takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the floor is, y = floor(x), … |
| Gather | | Given data tensor of rank r >= 1, and indices tensor of rank q, gather entries of the axis dimension … |
| | | Implements runtime for operator Gather. The code is inspired by tfidfvectorizer.cc … |
| GatherElements | | GatherElements takes two inputs data and indices of the same rank r >= 1 and an optional … |
| | | Implements runtime for operator Gather. The code is inspired by tfidfvectorizer.cc … |
| | | Implements runtime for operator Gather. The code is inspired by tfidfvectorizer.cc … |
| Gemm | | General Matrix multiplication: https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3 A’ = transpose(A) … |
| GlobalAveragePool | | GlobalAveragePool consumes an input tensor X and applies average pooling across the … |
| Greater | | Returns the tensor resulting from performing the greater logical operation elementwise on the input tensors … |
| GreaterOrEqual | | Returns the tensor resulting from performing the greater_equal logical operation elementwise … |
| Identity | | Identity operator |
| If | | If conditional |
| | | Raised if the code shows errors. |
| Imputer (ai.onnx.ml) | | Replaces inputs that equal one value with another, leaving all other elements … |
| | | Overwrites class |
| IsNaN | | Returns which elements of the input are NaN. |
| LabelEncoder (ai.onnx.ml) | | Maps each element in the input tensor to another value. The mapping … |
| Less | | Returns the tensor resulting from performing the less logical operation elementwise on the input tensors A … |
| LessOrEqual | | Returns the tensor resulting from performing the less_equal logical operation elementwise on … |
| LinearClassifier (ai.onnx.ml) | | Linear classifier |
| LinearRegressor (ai.onnx.ml) | | Generalized linear regression evaluation. If targets is set … |
| Log | | Calculates the natural log of the given input tensor, element-wise. |
| Loop | | Generic Looping construct. This loop has multiple termination conditions: 1) Trip count. Iteration count specified … |
| LpNormalization | | Given a matrix, apply Lp-normalization along the provided axis. |
| | | Base class for every action. |
| | | Addition |
| | | Any binary operation. |
| | | Cast into another type. |
| | | Concatenates a number of arrays into an array. |
| | | Constant |
| | | A function. |
| | | Any function call. |
| | | Addition |
| | | Returns a result. |
| | | Sign of an expression: 1=positive, 0=negative. |
| | | Tensor addition. |
| | | Tensor division. |
| | | Scalar product. |
| | | Tensor multiplication. |
| | | Tensor subtraction. |
| | | Extracts an element of the tensor. |
| | | Tensor operation. |
| | | Operator |
| | | Operator |
| | | Any binary operation. |
| | | Variable. The constant is only needed to guess the variable type. |
| | | Base class for every machine learned model. |
| | | Base class for numerical types. |
| | | A numpy.bool. |
| | | A numpy.float32. |
| | | A numpy.float64. |
| | | A numpy.int32. |
| | | A numpy.int64. |
| | | int32 or float32 |
| | | Defines a tensor with a dimension and a single type for what it contains. |
| | | Base class for every type. |
| MatMul | | Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html … |
| Max | | Element-wise max of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs … |
| MaxPool | | MaxPool consumes an input tensor X and applies max pooling across the tensor according to kernel sizes, … |
| | | Implements float runtime for operator Conv. The code is inspired by pool.cc … |
| | | Implements float runtime for operator Conv. The code is inspired by pool.cc … |
| Mean | | Element-wise mean of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs … |
| Min | | Element-wise min of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs … |
| | | Raised when a variable is missing. |
| | | A string. |
| | | A string and a shape. |
| | | A string, a shape and a type. |
| | | Mocked lightgbm. |
| Mul | | Performs element-wise binary multiplication (with Numpy-style broadcasting support). This operator supports **multidirectional … |
| Neg | | Neg takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where each element flipped sign, … |
| | | Defines a schema for operators added in this package such as |
| Normalizer (ai.onnx.ml) | | Normalize the input. There are three normalization modes, which have … |
| Not | | Returns the negation of the input tensor element-wise. |
| | | The ONNX specification does not mention the possibility to change the output type, sparse, dense, float, double. … |
| | | Expected failure. |
| | | Raised when onnxruntime or mlprodict does not implement a new operator defined in the latest onnx. … |
| | | Loads an ONNX file or object or stream. Computes the output of the ONNX graph. Several runtimes … |
| | | onnxruntime API |
| | | Implements methods to export an instance of |
| | | A node to execute. |
| | | Defines magic commands to help with notebooks. |
| | | The pipeline overwrites method |
| | | Raised when a new operator was added but cannot be found. |
| | | Defines a custom operator not defined by ONNX specifications but in onnxruntime. |
| | | Defines a custom operator not defined by ONNX specifications but in onnxruntime. |
| | | Calls onnxruntime or the runtime implemented in this package to transform input based on an ONNX graph. It … |
| | | Class which converts a Python function into an ONNX function. It must implement methods |
| | | Runs the prediction for a single ONNX; it lets the runtime handle the graph logic as well. |
| | | Ancestor to all operators in this subfolder. The runtime for every node can be checked against ONNX unit tests. … |
| | | Ancestor to all unary operators in this subfolder which produce the position of extrema (ArgMax, …). Checks … |
| | | Ancestor to all binary operators in this subfolder. Checks that input types are the same. |
| | | Ancestor to all binary operators in this subfolder. Checks that input types are the same. |
| | | Implements the in-place logic. |
| | | Ancestor to all binary operators in this subfolder. Checks that input types are the same. |
| | | Automates some methods for custom operators defined outside |
| | | Unique operator which calls onnxruntime to compute predictions for one operator. |
| | | Implements the reduce logic. It must have a parameter |
| | | Ancestor to all unary operators in this subfolder. Checks that input types are the same. |
| | | Ancestor to all unary and numerical operators in this subfolder. Checks that input types are the same. |
| | | Defines a schema for operators added in this package such as |
| Or | | Returns the tensor resulting from performing the or logical operation elementwise on the input tensors A and … |
| Pad | | Given a tensor containing the data to be padded (data), a tensor containing the number of start and end pad … |
| Pow | | Pow takes input data (Tensor&lt;T&gt;) and exponent Tensor, and produces one output data (Tensor&lt;T&gt;) where the function … |
| QuantizeLinear | | The linear quantization operator. It consumes a high precision tensor, a scale, and a zero … |
| RNN | | Computes a one-layer simple RNN. This operator is usually supported via some custom implementation such as CuDNN. … |
| Reciprocal | | Reciprocal takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the reciprocal … |
| ReduceLogSumExp | | Computes the log sum exponent of the input tensor’s element along the provided axes. The … |
| ReduceMax | | Computes the max of the input tensor’s element along the provided axes. The resulting tensor has the … |
| ReduceMean | | Computes the mean of the input tensor’s element along the provided axes. The resulting tensor has … |
| ReduceMin | | Computes the min of the input tensor’s element along the provided axes. The resulting tensor has the … |
| ReduceProd | | Computes the product of the input tensor’s element along the provided axes. The resulting tensor … |
| ReduceSumSquare | | Computes the sum square of the input tensor’s element along the provided axes. The resulting … |
| ReduceSum | | Computes the sum of the input tensor’s element along the provided axes. The resulting tensor has the … |
| ReduceSum | | Computes the sum of the input tensor’s element along the provided axes. The resulting tensor has the … |
| ReduceSum | | Computes the sum of the input tensor’s element along the provided axes. The resulting tensor has the … |
| Relu | | Relu takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the rectified linear function, … |
| Reshape | | Reshape the input tensor similar to numpy.reshape. First input is the data tensor, second input is a shape … |
| | | Raised when the results are too different from scikit-learn. |
| | | Implements runtime for operator SVMClassifierDouble. The code is inspired by svm_classifier.cc … |
| | | Implements runtime for operator SVMClassifier. The code is inspired by svm_classifier.cc … |
| | | Implements double runtime for operator SVMRegressor. The code is inspired by svm_regressor.cc … |
| | | Implements float runtime for operator SVMRegressor. The code is inspired by svm_regressor.cc … |
| | | Implements runtime for operator TfIdfVectorizer. The code is inspired by tfidfvectorizer.cc … |
| | | Implements runtime for operator TreeEnsembleClassifier. The code is inspired by tree_ensemble_classifier.cc … |
| | | Implements runtime for operator TreeEnsembleClassifier. The code is inspired by tree_ensemble_classifier.cc … |
| | | Implements double runtime for operator TreeEnsembleClassifier. The code is inspired by tree_ensemble_classifier.cc … |
| | | Implements float runtime for operator TreeEnsembleClassifier. The code is inspired by tree_ensemble_classifier.cc … |
| | | Implements double runtime for operator TreeEnsembleRegressor. The code is inspired by tree_ensemble_regressor.cc … |
| | | Implements float runtime for operator TreeEnsembleRegressor. The code is inspired by tree_ensemble_regressor.cc … |
| | | Implements double runtime for operator TreeEnsembleRegressor. The code is inspired by tree_ensemble_regressor.cc … |
| | | Implements float runtime for operator TreeEnsembleRegressor. The code is inspired by tree_ensemble_regressor.cc … |
| | | Raised when the type of a variable is unexpected. |
| SVMClassifier (ai.onnx.ml) | | Support Vector Machine classifier |
| SVMClassifierDouble (mlprodict) | | |
| | | Defines a schema for operators added in this package such as |
| SVMRegressor (ai.onnx.ml) | | Support Vector Machine regression prediction and one-class SVM anomaly … |
| SVMRegressorDouble (mlprodict) | | |
| | | Defines a schema for operators added in this package such as |
| Scaler (ai.onnx.ml) | | Rescale input data, for example to standardize features by removing the mean and … |
| Scan | | Scan can be used to iterate over one or more scan_input tensors, constructing zero or more scan_output tensors. … |
| Shape | | Takes a tensor as input and outputs a 1D int64 tensor containing the shape of the input tensor. |
| | | Base class for shape binary operators defined by a function. |
| | | Base class for shape binary operators. |
| | | Handles mathematical operations around shapes. It stores a type (numpy type), and a name to somehow have … |
| | | Computes a shape depending on a user-defined function. See |
| | | Base class for all shape operators. |
| | | Shape addition. |
| | | Shape comparison. |
| | | Best on each dimension. |
| | | Shape multiplication. |
| Sigmoid | | Sigmoid takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the sigmoid function, … |
| Sign | | Calculate the sign of the given input tensor element-wise. If input > 0, output 1. If input < 0, output -1. … |
| | | Simple wrapper around InferenceSession which imitates OnnxInference. |
| Sin | | Calculates the sine of the given input tensor, element-wise. |
| Slice | | Produces a slice of the input tensor along multiple axes. Similar to numpy: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html … |
| Softmax | | The operator computes the normalized exponential values for the given input: Softmax(input, axis) = … |
| Solve (mlprodict) | | |
| | | Defines a schema for operators added in this package such as |
| | | Runtime for operator |
| Sqrt | | Square root takes one input data (Tensor&lt;T&gt;) and produces one output data (Tensor&lt;T&gt;) where the square root … |
| Squeeze | | Remove single-dimensional entries from the shape of a tensor. Takes an input axes with a list of axes … |
| | | The operator is not really threadsafe as Python cannot play with two locales at the same time. Stop words should … |
| Sub | | Performs element-wise binary subtraction (with Numpy-style broadcasting support). This operator supports **multidirectional … |
| Sum | | Element-wise sum of each of the input tensors (with Numpy-style broadcasting support). All inputs and outputs … |
| | | asv test for a classifier. The full template can be found in common_asv_skl.py. … |
| | | asv test for a classifier. The full template can be found in common_asv_skl.py. … |
| | | asv example for a clustering algorithm. The full template can be found in common_asv_skl.py. … |
| | | asv example for a classifier. The full template can be found in common_asv_skl.py. … |
| | | asv example for an outlier detector. The full template can be found in common_asv_skl.py. … |
| | | asv example for a regressor. The full template can be found in common_asv_skl.py. … |
| | | asv example for a trainable transform. The full template can be found in common_asv_skl.py. … |
| | | asv example for a transform. The full template can be found in common_asv_skl.py. … |
| | | asv example for a transform. The full template can be found in common_asv_skl.py. … |
| TfIdfVectorizer | | This transform extracts n-grams from the input sequence and saves them as a vector. Input … |
| | | See Tokenizer. |
| | | Defines a schema for operators added in this package such as |
| TopK | | Retrieve the top-K elements along a specified axis. Given an input tensor of shape [a_1, a_2, …, a_n, r] and … |
| TopK | | Retrieve the top-K elements along a specified axis. Given an input tensor of shape [a_1, a_2, …, a_n, r] and … |
| TopK | | Retrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of shape [a_1, … |
| TopK | | Retrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of shape [a_1, … |
| Transpose | | Transpose the input tensor similar to numpy.transpose. For example, when perm=(1, 0, 2), given an … |
| TreeEnsembleClassifier (ai.onnx.ml) | | Tree Ensemble classifier. Returns the top class … |
| TreeEnsembleClassifierDouble (mlprodict) | | |
| | | Defines a schema for operators added in this package such as |
| TreeEnsembleRegressor (ai.onnx.ml) | | Tree Ensemble regressor. Returns the regressed … |
| TreeEnsembleRegressorDouble (mlprodict) | | |
| | | Defines a schema for operators added in this package such as |
| Unsqueeze | | Insert single-dimensional entries to the shape of an input tensor (data). Takes one required input … |
| Where | | Return elements, either from X or Y, depending on condition (with Numpy-style broadcasting support). Where … |
| | | A booster can be a classifier or a regressor. A trick to wrap it in a minimal function. |
| | | Trick to wrap an LGBMClassifier into a class. |
| | | Converter for XGBClassifier. |
| | | Common methods for converters. |
| | | Converter class. |
| | | The class does not output a dictionary as specified in the ONNX specifications but a |
| | | Custom dictionary class, much faster for this runtime; it implements a subset of the same methods. |
| | | Base class for runtime for operator ArgMax. … |
| | | Base class for runtime for operator ArgMin. … |
| | | Label strings are not natively implemented in the C++ runtime. The class stores the string labels, replaces them by … |
| | | Common tests for all benchmarks testing converted scikit-learn models. See benchmark attributes. … |
| | | Common class for a classifier. |
| | | Common class for a classifier. |
| | | Common class for a clustering algorithm. |
| | | Common class for a multi-classifier. |
| | | Common class for outlier detection. |
| | | Common class for a regressor. |
| | | Common class for a trainable transformer. |
| | | Common class for a transformer. |
| | | Common class for a transformer for positive features. |
| | | This class hides a parameter used as a threshold above which the parallelisation is started: |
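Many of the operator summaries above refer to Numpy-style semantics: multidirectional broadcasting for Add/Mul/Sub/Div, extrema indices for ArgMax/ArgMin, interval limiting for Clip. A minimal numpy sketch of those semantics (illustration only, it does not call mlprodict or any of the runtime classes listed above):

```python
import numpy as np

# Add with Numpy-style broadcasting: shapes (2, 2) and (2,) align on the
# trailing axis, so b is added to every row of a.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([10.0, 20.0])
print(a + b)                      # [[11. 22.] [13. 24.]]

# ArgMax: indices of the max elements along the provided axis.
x = np.array([[1, 9, 3], [7, 2, 5]])
print(np.argmax(x, axis=1))       # [1 0]

# Clip: limits the input within the interval [min, max].
print(np.clip(x, 2, 6))           # [[2 6 3] [6 2 5]]
```

The ONNX operators follow the same broadcasting rules as numpy, which is why the truncated docstrings keep pointing at numpy.matmul, numpy.reshape or numpy.transpose as reference behaviour.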