module ml._neural_tree_api#
Short summary#
module mlstatpy.ml._neural_tree_api
Conversion from tree to neural network.
Classes#
class | truncated documentation
---|---
_TrainingAPI | Declaration of functions needed to train a model.
Properties#
property | truncated documentation
---|---
training_weights | Returns the weights.
Methods#
method | truncated documentation
---|---
dlossds | Computes the loss derivative due to prediction error.
fill_cache | Creates a cache with intermediate results.
fit | Fits a neuron.
gradient | Computes the gradient in X knowing the expected value y.
gradient_backward | Computes the gradient in X.
loss | Computes the loss. Returns a float.
update_training_weights | Updates weights.
Documentation#
Conversion from tree to neural network.
- class mlstatpy.ml._neural_tree_api._TrainingAPI#
Bases:
object
Declaration of functions needed to train a model.
- dlossds(X, y, cache=None)#
Computes the loss derivative due to prediction error.
- fill_cache(X)#
Creates a cache with intermediate results.
- fit(X, y, optimizer=None, max_iter=100, early_th=None, verbose=False, lr=None, lr_schedule=None, l1=0.0, l2=0.0, momentum=0.9)#
Fits a neuron.
- Parameters:
X – training set
y – training labels
optimizer – optimizer; by default SGDOptimizer
max_iter – maximum number of iterations
early_th – early stopping threshold
verbose – more verbose
lr – overrides learning_rate_init if optimizer is None (unused otherwise)
lr_schedule – overrides lr_schedule if optimizer is None (unused otherwise)
l1 – L1 regularization if optimizer is None (unused otherwise)
l2 – L2 regularization if optimizer is None (unused otherwise)
momentum – used if optimizer is None
- Returns:
self
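The training loop that fit implements can be sketched as follows. This is a minimal illustration, not mlstatpy's actual implementation: ToyModel and its one-weight linear model are hypothetical, but the trio of calls (loss, gradient, update_training_weights) mirrors the interface documented on this page.

```python
class ToyModel:
    """Hypothetical model y = w * x with squared loss, exposing the
    same loss / gradient / update_training_weights methods as the
    _TrainingAPI interface documented above."""

    def __init__(self, w=0.0):
        self.w = w

    @property
    def training_weights(self):
        # returns the weights, here a single coefficient
        return [self.w]

    def loss(self, X, y):
        # mean squared error over the sample
        return sum((self.w * x - t) ** 2 for x, t in zip(X, y)) / len(X)

    def gradient(self, X, y):
        # dloss/dw = 2 * mean((w*x - t) * x)
        return 2.0 * sum((self.w * x - t) * x for x, t in zip(X, y)) / len(X)

    def update_training_weights(self, grad, add=True):
        # add=True adds the increment, add=False replaces the weight
        if add:
            self.w += grad
        else:
            self.w = grad


def fit(model, X, y, max_iter=100, lr=0.1, early_th=None):
    """Plain gradient-descent loop: stop after max_iter iterations
    or once the loss falls below early_th (early stopping threshold)."""
    for _ in range(max_iter):
        grad = model.gradient(X, y)
        model.update_training_weights(-lr * grad)  # descent step
        if early_th is not None and model.loss(X, y) < early_th:
            break
    return model
```

With X = [1.0, 2.0, 3.0] and y = [2.0, 4.0, 6.0], the fitted weight converges to 2, the slope of the underlying line.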
- gradient(X, y, inputs=False)#
Computes the gradient in X knowing the expected value y.
- Parameters:
X – points at which the gradient is computed
y – expected values
inputs – if False, derivative against the coefficients, otherwise against the inputs.
- Returns:
gradient
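A standard way to validate what a gradient method returns is to compare it against a central finite-difference approximation of the loss. The quadratic model below is illustrative only (it is not part of mlstatpy); the check itself is the generic recipe.

```python
def loss(w, X, y):
    # mean squared error of the toy model y = w * x
    return sum((w * x - t) ** 2 for x, t in zip(X, y)) / len(X)

def analytic_gradient(w, X, y):
    # closed-form derivative of the loss with respect to w
    return 2.0 * sum((w * x - t) * x for x, t in zip(X, y)) / len(X)

def numeric_gradient(w, X, y, eps=1e-6):
    # central difference: (f(w+eps) - f(w-eps)) / (2*eps)
    return (loss(w + eps, X, y) - loss(w - eps, X, y)) / (2 * eps)
```

The two gradients should agree to roughly the square of the step size; a large discrepancy signals a bug in the analytic derivative.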
- gradient_backward(graddx, X, inputs=False, cache=None)#
Computes the gradient in X.
- Parameters:
graddx – existing gradient against the outputs
X – points at which the gradient is computed
inputs – if False, derivative against the coefficients, otherwise against the inputs.
cache – cache intermediate results to avoid more computation
- Returns:
gradient
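The backward step behind gradient_backward is the chain rule: given graddx, the gradient of the loss against a layer's output, it yields the gradient against either the coefficients or the inputs. The scalar layer y = w * x below is an illustrative sketch, not mlstatpy's code.

```python
def gradient_backward(graddx, x, w, inputs=False):
    """Chain rule for the toy layer y = w * x:
      dloss/dw = dloss/dy * dy/dw = graddx * x   (inputs=False)
      dloss/dx = dloss/dy * dy/dx = graddx * w   (inputs=True)
    """
    return graddx * w if inputs else graddx * x
```

The inputs=True form is what lets layers be chained: the gradient against one layer's inputs becomes the graddx of the layer below it.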
- loss(X, y, cache=None)#
Computes the loss. Returns a float.
- property training_weights#
Returns the weights.
- update_training_weights(grad, add=True)#
Updates weights.
- Parameters:
grad – vector to add to the weights, typically a gradient
add – if True, adds grad to the weights, otherwise replaces them
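The add flag above distinguishes an incremental update from a full overwrite. A minimal sketch of that semantics, assuming the weights are stored as a flat list (the real class keeps them in its own structure):

```python
def update_training_weights(weights, grad, add=True):
    """add=True: add grad element-wise to the weights (a gradient step);
    add=False: replace the weights with grad entirely."""
    if add:
        return [w + g for w, g in zip(weights, grad)]
    return list(grad)
```

A descent step therefore passes the negative scaled gradient with add=True, while a weight reset passes the new vector with add=False.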