module ml.neural_tree

Inheritance diagram of mlstatpy.ml.neural_tree

Short summary

module mlstatpy.ml.neural_tree

Conversion from tree to neural network.

source on GitHub

Classes

  • NeuralTreeNet – Node ensemble.

Functions

  • label_class_to_softmax_output – Converts a binary class label into a matrix with two columns of probabilities.

Properties

  • shape – Returns the shape of the coefficients.

  • training_weights – Returns the weights.

Static Methods

  • create_from_tree – Creates a NeuralTreeNet instance from a DecisionTreeClassifier.

Methods

  • __getitem__ – Retrieves node and attributes for node i.

  • __init__

  • __len__ – Returns the number of nodes.

  • __repr__ – Usual representation.

  • _common_loss_dloss – Common beginning to methods loss, dlossds, dlossdw.

  • _get_output_node_attr – Retrieves the output nodes; nb_last is the number of expected outputs.

  • _predict_one

  • _update_members – Updates internal members.

  • append – Appends a node into the graph.

  • clear – Clears all nodes.

  • copy

  • dlossds – Computes the loss derivative against the inputs.

  • fill_cache – Creates a cache with intermediate results.

  • gradient_backward – Computes the gradient in X.

  • loss – Computes the loss due to prediction error. Returns a float.

  • predict

  • to_dot – Exports the neural network into dot.

  • update_training_weights – Updates weights.

Documentation

Conversion from tree to neural network.

class mlstatpy.ml.neural_tree.NeuralTreeNet(dim, empty=True)[source]

Bases: mlstatpy.ml._neural_tree_api._TrainingAPI

Node ensemble.

<<<

import numpy
from mlstatpy.ml.neural_tree import NeuralTreeNode, NeuralTreeNet

# weights of the first node: bias followed by two coefficients
w1 = numpy.array([-0.5, 0.8, -0.6])

# a sigmoid neuron fed by the two input features
neu = NeuralTreeNode(w1[1:], bias=w1[0], activation='sigmoid')
net = NeuralTreeNet(2, empty=True)
net.append(neu, numpy.arange(2))

# an identity node fed by the sigmoid node's output (index 2)
ide = NeuralTreeNode(numpy.array([1.]),
                     bias=numpy.array([0.]),
                     activation='identity')

net.append(ide, numpy.arange(2, 3))

X = numpy.abs(numpy.random.randn(10, 2))
pred = net.predict(X)
print(pred)

>>>

    [[0.54  0.622 0.391 0.391]
     [0.25  2.024 0.18  0.18 ]
     [0.006 0.824 0.271 0.271]
     [0.719 0.335 0.469 0.469]
     [1.386 1.167 0.477 0.477]
     [2.488 0.977 0.712 0.712]
     [1.385 1.481 0.43  0.43 ]
     [0.837 0.237 0.507 0.507]
     [0.387 0.797 0.339 0.339]
     [0.89  1.208 0.375 0.375]]
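
In the output above, the first two columns echo the inputs and the last two carry the node outputs (the identity node copies the sigmoid node, hence the duplicated columns). The sigmoid column can be checked by hand with plain numpy; the input values below are copied from the first printed row, so the result is only accurate to the printed precision:

```python
import numpy


def sigmoid(z):
    return 1 / (1 + numpy.exp(-z))


w1 = numpy.array([-0.5, 0.8, -0.6])   # bias, then two weights
x = numpy.array([0.54, 0.622])        # first input row printed above
out = sigmoid(w1[0] + w1[1:] @ x)     # bias + dot product, then sigmoid
print(round(out, 3))                  # ≈ 0.391, the value in the last two columns
```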

Parameters

  • dim – space dimension

  • empty – empty network, otherwise adds an identity node

__getitem__(i)[source]

Retrieves node and attributes for node i.

__init__(dim, empty=True)[source]
Parameters

  • dim – space dimension

  • empty – empty network, otherwise adds an identity node

__len__()[source]

Returns the number of nodes.

__repr__()[source]

Usual representation.

_common_loss_dloss(X, y, cache=None)[source]

Common beginning to methods loss, dlossds, dlossdw.

_get_output_node_attr(nb_last)[source]

Retrieves the output nodes. nb_last is the number of expected outputs.

_predict_one(X)[source]

_update_members(node=None, attr=None)[source]

Updates internal members.

append(node, inputs)[source]

Appends a node into the graph.

Parameters
  • node – node to add

  • inputs – index of input nodes

clear()[source]

Clears all nodes.

static create_from_tree(tree, k=1.0)[source]

Creates a NeuralTreeNet instance from a DecisionTreeClassifier.

Returns

NeuralTreeNet

The function only works for binary problems.
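
The idea behind such a conversion is that each hard split "x < t" of the tree can be approximated by a sigmoid neuron; the parameter k is read here as a sharpness factor, which is an assumption about its meaning, not something stated above. A numpy-only sketch of the approximation:

```python
import numpy


def sigmoid(z):
    return 1 / (1 + numpy.exp(-z))


# A decision-tree test "x < t" approximated by sigmoid(k * (t - x)).
# Larger k makes the sigmoid closer to the hard 0/1 step.
t = 0.5
x = numpy.array([0.1, 0.49, 0.51, 0.9])
for k in (1.0, 10.0, 100.0):
    print(k, sigmoid(k * (t - x)).round(3))
```

With k = 1 the outputs stay close to 0.5 on both sides of the threshold; with k = 100 they are nearly 0 or 1, reproducing the tree's decision.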

dlossds(X, y, cache=None)[source]

Computes the loss derivative against the inputs.

fill_cache(X)[source]

Creates a cache with intermediate results.

gradient_backward(graddx, X, inputs=False, cache=None)[source]

Computes the gradient in X.

Parameters
  • graddx – existing gradient against the inputs

  • X – input at which the gradient is computed

  • inputs – if False, derivative against the coefficients, otherwise against the inputs.

  • cache – cache intermediate results to avoid more computation

Returns

gradient
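
A gradient like the one returned here can always be validated against finite differences. The following is a self-contained numpy sketch for a single sigmoid neuron with a squared loss, illustrating the checking technique only, not the library's implementation:

```python
import numpy


def sigmoid(z):
    return 1 / (1 + numpy.exp(-z))


w = numpy.array([-0.5, 0.8, -0.6])   # bias followed by two weights
x = numpy.array([0.3, 0.7])
y = 1.0


def loss(w):
    p = sigmoid(w[0] + w[1:] @ x)
    return (p - y) ** 2


# analytic gradient: 2 (p - y) * p (1 - p) * [1, x]
p = sigmoid(w[0] + w[1:] @ x)
analytic = 2 * (p - y) * p * (1 - p) * numpy.concatenate(([1.0], x))

# central finite differences, one coefficient at a time
eps = 1e-6
numeric = numpy.array([
    (loss(w + eps * numpy.eye(3)[i]) - loss(w - eps * numpy.eye(3)[i])) / (2 * eps)
    for i in range(3)])

print(numpy.allclose(analytic, numeric, atol=1e-6))
```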

loss(X, y, cache=None)[source]

Computes the loss due to prediction error. Returns a float.

property shape

Returns the shape of the coefficients.

to_dot(X=None)[source]

Exports the neural network into dot.

Parameters

X – input as an example

property training_weights

Returns the weights.

update_training_weights(X, add=True)[source]

Updates weights.

Parameters

  • X – vector to add to the weights, such as a gradient

  • add – if True, adds the vector to the weights, otherwise replaces them

mlstatpy.ml.neural_tree.label_class_to_softmax_output(y_label)[source]

Converts a binary class label into a matrix with two columns of probabilities.

<<<

import numpy
from mlstatpy.ml.neural_tree import label_class_to_softmax_output

y_label = numpy.array([0, 1, 0, 0])
soft_y = label_class_to_softmax_output(y_label)
print(soft_y)

>>>

    [[1. 0.]
     [0. 1.]
     [1. 0.]
     [1. 0.]]
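
The same behaviour can be written in a few lines of plain numpy. This is a hypothetical re-implementation shown only to make the conversion explicit; the library's own code may differ:

```python
import numpy


def to_two_columns(y_label):
    # hypothetical re-implementation of label_class_to_softmax_output:
    # row i gets a 1 in the column selected by the label, 0 elsewhere
    out = numpy.zeros((y_label.shape[0], 2))
    out[numpy.arange(y_label.shape[0]), y_label.astype(int)] = 1.0
    return out


print(to_two_columns(numpy.array([0, 1, 0, 0])))
```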
