module ml._neural_tree_api#

Short summary#

module mlstatpy.ml._neural_tree_api

Conversion from tree to neural network.

Classes#

_TrainingAPI
    Declaration of the functions needed to train a model.

Properties#

training_weights
    Returns the weights.

Methods#

dlossds
    Computes the loss derivative due to the prediction error.

fill_cache
    Creates a cache with intermediate results.

fit
    Fits a neuron.

gradient
    Computes the gradient at X given the expected value y.

gradient_backward
    Computes the gradient at X.

loss
    Computes the loss. Returns a float.

update_training_weights
    Updates the weights.

Documentation#

Conversion from tree to neural network.

class mlstatpy.ml._neural_tree_api._TrainingAPI#

Bases: object

Declaration of the functions needed to train a model.

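_TrainingAPI only declares the contract; a subclass supplies the weights and the derivatives. The sketch below is a hypothetical minimal implementation, a linear model with a squared loss, meant only to illustrate which methods a subclass provides; it assumes X is a single observation stored in a 1-D array, and that gradient and fit are built by the base class on top of these primitives.

    import numpy

    from mlstatpy.ml._neural_tree_api import _TrainingAPI


    class ToyLinearModel(_TrainingAPI):
        "Hypothetical linear model f(X) = X @ coef with a squared loss."

        def __init__(self, n_features):
            self.coef = numpy.zeros(n_features, dtype=numpy.float64)

        @property
        def training_weights(self):
            # flat vector of all trainable weights
            return self.coef

        def update_training_weights(self, grad, add=True):
            # adds the increment to the weights, or replaces them
            if add:
                self.coef += grad
            else:
                numpy.copyto(self.coef, grad)

        def fill_cache(self, X):
            # intermediate results shared by loss, dlossds, gradient_backward
            return dict(pred=X @ self.coef)

        def loss(self, X, y, cache=None):
            # squared prediction error, returned as a float
            pred = cache["pred"] if cache is not None else X @ self.coef
            return float((pred - y) ** 2)

        def dlossds(self, X, y, cache=None):
            # derivative of the loss with respect to the prediction
            pred = cache["pred"] if cache is not None else X @ self.coef
            return 2.0 * (pred - y)

        def gradient_backward(self, graddx, X, inputs=False, cache=None):
            # chain rule: f(X) = X @ coef, so df/dcoef = X and df/dX = coef
            if inputs:
                return graddx * self.coef
            return graddx * X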

dlossds(X, y, cache=None)#

Computes the loss derivative due to the prediction error.

fill_cache(X)#

Creates a cache with intermediate results.

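Continuing the hypothetical ToyLinearModel sketch above, a cache computed once can be shared by loss and dlossds; what the cache holds is entirely up to the subclass:

    model = ToyLinearModel(2)
    model.update_training_weights(numpy.array([1.5, -0.5]), add=False)
    x = numpy.array([0.3, -1.2])
    cache = model.fill_cache(x)               # prediction computed once
    err = model.loss(x, 0.7, cache=cache)     # reuses the cached prediction
    der = model.dlossds(x, 0.7, cache=cache)  # same cache, nothing recomputed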

fit(X, y, optimizer=None, max_iter=100, early_th=None, verbose=False, lr=None, lr_schedule=None, l1=0.0, l2=0.0, momentum=0.9)#

Fits a neuron.

Parameters:
  • X – training set

  • y – training labels

  • optimizer – optimizer; SGDOptimizer by default

  • max_iter – maximum number of iterations

  • early_th – early stopping threshold

  • verbose – increases verbosity

  • lr – overwrites learning_rate_init if optimizer is None (unused otherwise)

  • lr_schedule – overwrites lr_schedule if optimizer is None (unused otherwise)

  • l1 – L1 regularization if optimizer is None (unused otherwise)

  • l2 – L2 regularization if optimizer is None (unused otherwise)

  • momentum – used if optimizer is None

Returns:

self

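A sketch of a training call on the hypothetical ToyLinearModel; it assumes the base-class fit iterates over the rows of X with the default SGDOptimizer when optimizer is None:

    rnd = numpy.random.RandomState(0)
    X = rnd.randn(100, 2)
    y = X @ numpy.array([1.5, -0.5])
    model = ToyLinearModel(2)
    model.fit(X, y, max_iter=50, lr=0.01)
    print(model.training_weights)  # expected to approach [1.5, -0.5]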

gradient(X, y, inputs=False)#

Computes the gradient at X given the expected value y.

Parameters:
  • X – point at which the gradient is computed

  • y – expected values

  • inputs – if False, the derivative is computed against the coefficients, otherwise against the inputs

Returns:

gradient

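Continuing the toy example, the inputs flag selects which derivative is returned; this sketch assumes the base class chains fill_cache, dlossds and gradient_backward internally:

    x = numpy.array([0.3, -1.2])
    g_coef = model.gradient(x, 0.7)             # d loss / d coefficients
    g_in = model.gradient(x, 0.7, inputs=True)  # d loss / d inputs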

gradient_backward(graddx, X, inputs=False, cache=None)#

Computes the gradient at X.

Parameters:
  • graddx – existing gradient against the outputs

  • X – point at which the gradient is computed

  • inputs – if False, the derivative is computed against the coefficients, otherwise against the inputs

  • cache – cache of intermediate results, to avoid recomputing them

Returns:

gradient

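The same gradient can be reproduced by hand with the toy model, which is also how an upstream gradient coming from another layer would be propagated:

    cache = model.fill_cache(x)
    graddx = model.dlossds(x, 0.7, cache=cache)  # gradient against the output
    g_coef = model.gradient_backward(graddx, x, cache=cache)
    g_in = model.gradient_backward(graddx, x, inputs=True, cache=cache)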

loss(X, y, cache=None)#

Computes the loss. Returns a float.

property training_weights#

Returns the weights.

update_training_weights(grad, add=True)#

Updates the weights.

Parameters:
  • grad – vector to add to the weights, such as a gradient

  • add – if True, adds the vector to the weights, otherwise replaces them

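For example, one plain gradient-descent step on the toy model; the second call shows add=False replacing the weights outright:

    lr = 0.1
    g = model.gradient(x, 0.7)
    model.update_training_weights(-lr * g)  # w <- w - lr * g
    model.update_training_weights(numpy.zeros(2), add=False)  # reset weights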