module ml.neural_tree
Classes

| class | truncated documentation |
| --- | --- |
| NeuralTreeNet | Node ensemble. |

Functions

| function | truncated documentation |
| --- | --- |
| label_class_to_softmax_output | Converts a binary class label into a matrix with two columns of probabilities. |

Properties

| property | truncated documentation |
| --- | --- |
| shape | Returns the shape of the coefficients. |
| training_weights | Returns the weights. |

Static Methods

| staticmethod | truncated documentation |
| --- | --- |
| _create_from_tree_compact | Implements strategy 'compact'. See create_from_tree. |
| _create_from_tree_one | Implements strategy 'one'. See create_from_tree. |
| create_from_tree | Creates a NeuralTreeNet instance from a DecisionTreeClassifier. |

Methods

| method | truncated documentation |
| --- | --- |
| __getitem__ | Retrieves node and attributes for node i. |
| __len__ | Returns the number of nodes. |
| __repr__ | Usual representation. |
| _common_loss_dloss | Common beginning to methods loss, dlossds, dlossdw. |
| _get_output_node_attr | Retrieves the output nodes. nb_last is the number of expected outputs. |
| _update_members | Updates internal members. |
| append | Appends a node into the graph. |
| clear | Clears all nodes. |
| dlossds | Computes the loss derivative against the inputs. |
| fill_cache | Creates a cache with intermediate results. |
| gradient_backward | Computes the gradient in X. |
| loss | Computes the loss due to prediction error. Returns a float. |
| to_dot | Exports the neural network into dot. |
| update_training_weights | Updates weights. |
Documentation

Conversion from tree to neural network.
- class mlstatpy.ml.neural_tree.NeuralTreeNet(dim, empty=True)
  Bases: mlstatpy.ml._neural_tree_api._TrainingAPI
  Node ensemble.
<<<
import numpy
from mlstatpy.ml.neural_tree import NeuralTreeNode, NeuralTreeNet

w1 = numpy.array([-0.5, 0.8, -0.6])

neu = NeuralTreeNode(w1[1:], bias=w1[0], activation='sigmoid')
net = NeuralTreeNet(2, empty=True)
net.append(neu, numpy.arange(2))

ide = NeuralTreeNode(numpy.array([1.]), bias=numpy.array([0.]), activation='identity')
net.append(ide, numpy.arange(2, 3))

X = numpy.abs(numpy.random.randn(10, 2))
pred = net.predict(X)
print(pred)
>>>
[[0.178 0.053 0.404 0.404]
 [1.626 0.318 0.648 0.648]
 [0.289 0.52  0.359 0.359]
 [0.75  1.967 0.253 0.253]
 [2.928 1.171 0.758 0.758]
 [0.809 1.541 0.315 0.315]
 [1.549 0.13  0.66  0.66 ]
 [0.144 0.202 0.376 0.376]
 [0.056 0.437 0.328 0.328]
 [0.712 0.08  0.505 0.505]]
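Judging from the output above, predict appears to return, for each sample, the input features followed by the output of every node: here the two features, the sigmoid node's output, and the identity node's copy of it, hence the identical last two columns.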
  Parameters:
    - dim: space dimension
    - empty: if True, the network is created empty, otherwise an identity node is added
- __getitem__(i)
  Retrieves node and attributes for node i.
- __init__(dim, empty=True)
  Parameters:
    - dim: space dimension
    - empty: if True, the network is created empty, otherwise an identity node is added
- __len__()
  Returns the number of nodes.
- __repr__()
  Usual representation.
- _common_loss_dloss(X, y, cache=None)
  Common beginning to methods loss, dlossds, dlossdw.
- static _create_from_tree_compact(tree, k=1.0)
  Implements strategy 'compact'. See create_from_tree.
- static _create_from_tree_one(tree, k=1.0)
  Implements strategy 'one'. See create_from_tree.
- _get_output_node_attr(nb_last)
  Retrieves the output nodes; nb_last is the number of expected outputs.
- _predict_one(X)
- _update_members(node=None, attr=None)
  Updates internal members.
- append(node, inputs)
  Appends a node into the graph.
  Parameters:
    - node: node to add
    - inputs: index of input nodes
- clear()
  Clears all nodes.
- static create_from_tree(tree, k=1.0, arch='one')
  Creates a NeuralTreeNet instance from a DecisionTreeClassifier.
  Parameters:
    - tree: DecisionTreeClassifier
    - k: slant of the sigmoid
    - arch: architecture, see below
  Returns: the converted NeuralTreeNet
  The function only works for binary problems. Available architectures:
    - 'one': the method adds nodes with one output; there is no specific definition of layers,
    - 'compact': the method adds two nodes, the first computes the threshold, the second computes the leaves output, and a final node merges all outputs into one.
  See the notebook Un arbre de décision en réseaux de neurones for examples, and the hedged sketch below.
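As a hedged illustration of the conversion itself (not taken from the original page), the sketch below builds a small binary DecisionTreeClassifier and converts it; the toy dataset and the values k=10.0 and arch='compact' are assumptions chosen for the example.
<<<
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from mlstatpy.ml.neural_tree import NeuralTreeNet

# Binary toy problem: create_from_tree only works for binary problems.
X, y = make_classification(n_samples=100, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# k controls the slant of the sigmoids approximating the tree thresholds:
# the larger k, the closer the network mimics the original decision tree.
net = NeuralTreeNet.create_from_tree(tree, k=10.0, arch='compact')

# As in the example above, predict returns the inputs followed by the
# node outputs; the final network outputs sit in the last columns
# (layout assumed).
pred = net.predict(X)
print(pred[:2])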
- dlossds(X, y, cache=None)
  Computes the loss derivative against the inputs.
- fill_cache(X)
  Creates a cache with intermediate results.
- gradient_backward(graddx, X, inputs=False, cache=None)
  Computes the gradient in X.
  Parameters:
    - graddx: existing gradient against the inputs
    - X: computes the gradient in X
    - inputs: if False, derivative against the coefficients, otherwise against the inputs
    - cache: cache of intermediate results, to avoid recomputation
  Returns: gradient
- loss(X, y, cache=None)
  Computes the loss due to prediction error. Returns a float.
- property shape
  Returns the shape of the coefficients.
- property training_weights
  Returns the weights.
- update_training_weights(X, add=True)
  Updates weights.
  Parameters:
    - X: vector to add to the weights, such as a gradient
    - add: if True, the vector is added to the weights, otherwise it replaces them
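To show how these training methods might fit together, here is a hedged sketch of a single manual gradient step. The chaining of fill_cache, dlossds, gradient_backward, and update_training_weights is inferred from the signatures above rather than taken from the original page, and the target y and the learning rate 0.1 are assumptions.
<<<
import numpy
from mlstatpy.ml.neural_tree import NeuralTreeNode, NeuralTreeNet

# Same small network as in the class example above.
w1 = numpy.array([-0.5, 0.8, -0.6])
net = NeuralTreeNet(2, empty=True)
net.append(NeuralTreeNode(w1[1:], bias=w1[0], activation='sigmoid'),
           numpy.arange(2))
net.append(NeuralTreeNode(numpy.array([1.]), bias=numpy.array([0.]),
                          activation='identity'),
           numpy.arange(2, 3))

X = numpy.abs(numpy.random.randn(10, 2))
y = numpy.random.rand(10)  # assumed target shape: one value per sample

cache = net.fill_cache(X)                # intermediate results for X
graddx = net.dlossds(X, y, cache=cache)  # loss derivative against the inputs
# inputs=False: derivative against the coefficients
grad = net.gradient_backward(graddx, X, inputs=False, cache=cache)
net.update_training_weights(-0.1 * grad)  # add=True: gradient step, rate 0.1
print(net.loss(X, y))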
- mlstatpy.ml.neural_tree.label_class_to_softmax_output(y_label)
  Converts a binary class label into a matrix with two columns of probabilities.
<<<
import numpy
from mlstatpy.ml.neural_tree import label_class_to_softmax_output

y_label = numpy.array([0, 1, 0, 0])
soft_y = label_class_to_softmax_output(y_label)
print(soft_y)
>>>
[[1. 0.]
 [0. 1.]
 [1. 0.]
 [1. 0.]]