module metrics.scoring_metrics#

Short summary#

module mlinsights.metrics.scoring_metrics

Metrics to compare machine learning models.


Functions#

  • comparable_metric – applies a function to the true target and/or the predictions before computing the given metric.

  • r2_score_comparable – applies a function to the true target and/or the predictions before computing the R2 score.
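Judging from the signatures documented below, r2_score_comparable appears to be comparable_metric specialised to sklearn.metrics.r2_score. A minimal sketch of that assumed relationship (the equivalence itself is not stated in this documentation):

import numpy
from sklearn.metrics import r2_score
from mlinsights.metrics import r2_score_comparable
from mlinsights.metrics.scoring_metrics import comparable_metric

y_true = numpy.array([1.0, 2.0, 3.0, 4.0])
y_pred = numpy.array([1.1, 1.9, 3.2, 3.8])

# Both calls take the log of the target and of the predictions before
# computing the R2 score; if the assumption above holds, the two
# printed values should match.
print(r2_score_comparable(y_true, y_pred, tr="log", inv_tr="log"))
print(comparable_metric(r2_score, y_true, y_pred, tr="log", inv_tr="log"))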

Documentation#

Metrics to compare machine learning models.


mlinsights.metrics.scoring_metrics.comparable_metric(metric_function, y_true, y_pred, tr='log', inv_tr='exp', **kwargs)#

Applies a function to the true target and/or the predictions before computing the given metric.

Parameters:
  • metric_function – metric to compute

  • y_true – expected targets

  • y_pred – predictions

  • sample_weight – weights

  • multioutput – see sklearn.metrics.r2_score

  • tr – transformation applied to the target (default 'log')

  • inv_tr – transformation applied to the predictions (default 'exp')

Returns:

the metric computed on the transformed target and predictions

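A minimal usage sketch, assuming comparable_metric accepts any scikit-learn style metric taking (y_true, y_pred), here sklearn.metrics.mean_absolute_error, and that the string names 'log' and 'exp' map to numpy.log and numpy.exp as in the example further down:

import numpy
from sklearn.metrics import mean_absolute_error
from mlinsights.metrics.scoring_metrics import comparable_metric

y_true = numpy.array([1.0, 2.0, 4.0, 8.0])
y_pred = numpy.array([1.2, 1.8, 4.5, 7.5])

# Mean absolute error computed after taking the log of both the target
# and the predictions, i.e. an error measured on a relative rather than
# an absolute scale (assumed to be a valid use of comparable_metric).
mae_log = comparable_metric(mean_absolute_error, y_true, y_pred,
                            tr="log", inv_tr="log")
print(mae_log)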

mlinsights.metrics.scoring_metrics.r2_score_comparable(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average', tr=None, inv_tr=None)#

Applies a function to the true target and/or the predictions before computing the R2 score.

Parameters:
  • y_true – expected targets

  • y_pred – predictions

  • sample_weight – weights

  • multioutput – see sklearn.metrics.r2_score

  • tr – transformation applied to the target

  • inv_tr – transformation applied to the predictions

Returns:

the R2 score computed on the transformed target and predictions

Example:

<<<

import numpy
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from mlinsights.metrics import r2_score_comparable

iris = datasets.load_iris()
X = iris.data[:, :4]
# Shift the classes by one so that every target is strictly positive
# and numpy.log can be applied to it.
y = iris.target + 1

X_train, X_test, y_train, y_test = train_test_split(X, y)

# First model: trained on the raw target.
model1 = LinearRegression().fit(X_train, y_train)
print('r2', r2_score(y_test, model1.predict(X_test)))
# Same score computed after taking the log of both target and predictions.
print('r2 log', r2_score(numpy.log(y_test), numpy.log(model1.predict(X_test))))
# r2_score_comparable applies the transformations itself:
# tr transforms the target, inv_tr transforms the predictions.
print('r2 log comparable', r2_score_comparable(
    y_test, model1.predict(X_test), tr="log", inv_tr="log"))

# Second model: trained on the log of the target,
# so its predictions live in log space.
model2 = LinearRegression().fit(X_train, numpy.log(y_train))
print('r2', r2_score(numpy.log(y_test), model2.predict(X_test)))
print('r2 log', r2_score(y_test, numpy.exp(model2.predict(X_test))))
# inv_tr="exp" maps the predictions back to the original scale
# before scoring against the untransformed target.
print('r2 log comparable', r2_score_comparable(
    y_test, model2.predict(X_test), inv_tr="exp"))

>>>

    r2 0.9490470669963555
    r2 log 0.9573916489895753
    r2 log comparable 0.9573916489895753
    r2 0.9753695690350526
    r2 log 0.9584447251219507
    r2 log comparable 0.9584447251219507
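The transformations put the two models on a common scale: on the original scale, model1 scores about 0.949 against 0.958 for model2 (whose log-space predictions are mapped back with exp), while in log space model1 scores about 0.957 against 0.975 for model2.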
