.. _05measuringpredictionperformancerst:

============================================
2A.ML101.5: Measuring prediction performance
============================================

.. only:: html

    **Links:** :download:`notebook <05_measuring_prediction_performance.ipynb>`,
    :downloadlink:`html <05_measuring_prediction_performance2html.html>`,
    :download:`python <05_measuring_prediction_performance.py>`,
    :downloadlink:`slides <05_measuring_prediction_performance.slides.html>`,
    :githublink:`GitHub|_doc/notebooks/sklearn_ensae_course/05_measuring_prediction_performance.ipynb|*`

*Source:* `Course on machine learning with scikit-learn `__ by Gaël Varoquaux

Using the K-neighbors classifier
--------------------------------

Here we’ll continue to look at the digits data, but we’ll switch to the
K-neighbors classifier. The K-neighbors classifier is an instance-based
classifier: it predicts the label of an unknown point from the labels of
the *K* nearest points in the parameter space.

.. code:: ipython3

    # Get the data
    from sklearn.datasets import load_digits
    digits = load_digits()
    X = digits.data
    y = digits.target

.. code:: ipython3

    # Instantiate and train the classifier
    from sklearn.neighbors import KNeighborsClassifier
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(X, y)

.. parsed-literal::
    KNeighborsClassifier(n_neighbors=1)
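To make the instance-based idea concrete, the following short sketch (added
here, not part of the original notebook) performs a 1-nearest-neighbor
prediction by hand with NumPy, treating the last digit as a pretend-unseen
query; the ``X`` and ``y`` arrays come from the cell above:

.. code:: ipython3

    # Hand-rolled 1-nearest-neighbor prediction (illustrative sketch, not part
    # of the original course): treat the last sample as a pretend-unseen query
    # and look up the label of its closest point among the remaining samples.
    import numpy as np

    query = X[-1]
    distances = np.linalg.norm(X[:-1] - query, axis=1)  # Euclidean distance to each stored sample
    nearest = np.argmin(distances)                      # index of the closest stored sample
    print("label of the closest stored sample:", y[:-1][nearest])
    print("true label of the query:", y[-1])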
.. code:: ipython3

    # Check the results using metrics
    from sklearn import metrics
    y_pred = clf.predict(X)

.. code:: ipython3

    print(metrics.confusion_matrix(y_pred, y))

.. parsed-literal::

    [[178   0   0   0   0   0   0   0   0   0]
     [  0 182   0   0   0   0   0   0   0   0]
     [  0   0 177   0   0   0   0   0   0   0]
     [  0   0   0 183   0   0   0   0   0   0]
     [  0   0   0   0 181   0   0   0   0   0]
     [  0   0   0   0   0 182   0   0   0   0]
     [  0   0   0   0   0   0 181   0   0   0]
     [  0   0   0   0   0   0   0 179   0   0]
     [  0   0   0   0   0   0   0   0 174   0]
     [  0   0   0   0   0   0   0   0   0 180]]

Apparently, we’ve found a perfect classifier! But this is misleading for the
reasons we saw before: the classifier essentially “memorizes” all the samples
it has already seen. To really test how well this algorithm does, we need to
try some samples it *hasn’t* yet seen.

This problem can also occur with regression models. In the following we fit
another model that can memorize its training set, a decision tree regressor,
to the Diabetes dataset we introduced previously:

.. code:: ipython3

    %matplotlib inline
    from matplotlib import pyplot as plt
    import numpy as np

.. code:: ipython3

    from sklearn.datasets import load_diabetes
    from sklearn.tree import DecisionTreeRegressor

    data = load_diabetes()

    clf = DecisionTreeRegressor().fit(data.data, data.target)
    predicted = clf.predict(data.data)
    expected = data.target

    plt.scatter(expected, predicted)
    plt.plot([0, 350], [0, 350], '--k')
    plt.axis('tight')
    plt.xlabel('True Progression')
    plt.ylabel('Predicted Progression');

.. image:: 05_measuring_prediction_performance_10_0.png

Here again the predictions are seemingly perfect, as the model was able to
perfectly memorize the training set.

A Better Approach: Using a validation set
-----------------------------------------

Learning the parameters of a prediction function and testing it on the same
data is a methodological mistake: a model that would just repeat the labels
of the samples that it has just seen would have a perfect score but would
fail to predict anything useful on yet-unseen data.

To avoid over-fitting, we have to define two different sets:

- a training set X_train, y_train which is used for learning the parameters
  of a predictive model
- a testing set X_test, y_test which is used for evaluating the fitted
  predictive model

In scikit-learn such a random split can be quickly computed with the
``train_test_split`` helper function. It can be used this way:

.. code:: ipython3

    from sklearn.model_selection import train_test_split

    X = digits.data
    y = digits.target

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    print("%r, %r, %r" % (X.shape, X_train.shape, X_test.shape))

.. parsed-literal::

    (1797, 64), (1347, 64), (450, 64)

Now we train on the training data, and test on the testing data:

.. code:: ipython3

    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    y_pred = clf.predict(X_test)

.. code:: ipython3

    print(metrics.confusion_matrix(y_test, y_pred))

.. parsed-literal::

    [[37  0  0  0  0  0  0  0  0  0]
     [ 0 43  0  0  0  0  0  0  0  0]
     [ 0  0 43  1  0  0  0  0  0  0]
     [ 0  0  0 45  0  0  0  0  0  0]
     [ 0  0  0  0 38  0  0  0  0  0]
     [ 0  0  0  0  0 47  0  0  0  1]
     [ 0  0  0  0  0  0 52  0  0  0]
     [ 0  0  0  0  0  0  0 48  0  0]
     [ 0  0  0  0  0  0  0  0 48  0]
     [ 0  0  0  1  0  1  0  0  0 45]]
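As a quick check of what the confusion matrix is telling us, here is a small
sketch (added to the original material) that recovers the overall accuracy
from the matrix; it assumes the ``y_test`` and ``y_pred`` arrays from the
cells above:

.. code:: ipython3

    # Accuracy recovered from the confusion matrix: correct predictions (the
    # diagonal) divided by the total number of test samples.
    # Sketch added for illustration; assumes y_test and y_pred from above.
    import numpy as np
    from sklearn import metrics

    cm = metrics.confusion_matrix(y_test, y_pred)
    print(np.trace(cm) / cm.sum())                 # fraction of correct predictions
    print(metrics.accuracy_score(y_test, y_pred))  # same value computed by scikit-learn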
.. code:: ipython3

    print(metrics.classification_report(y_test, y_pred))

.. parsed-literal::

                  precision    recall  f1-score   support

               0       1.00      1.00      1.00        37
               1       1.00      1.00      1.00        43
               2       1.00      0.98      0.99        44
               3       0.96      1.00      0.98        45
               4       1.00      1.00      1.00        38
               5       0.98      0.98      0.98        48
               6       1.00      1.00      1.00        52
               7       1.00      1.00      1.00        48
               8       1.00      1.00      1.00        48
               9       0.98      0.96      0.97        47

        accuracy                           0.99       450
       macro avg       0.99      0.99      0.99       450
    weighted avg       0.99      0.99      0.99       450

The averaged f1-score is often used as a convenient measure of the overall
performance of an algorithm. It appears in the bottom rows of the
classification report; it can also be accessed directly:

.. code:: ipython3

    metrics.f1_score(y_test, y_pred, average="macro")

.. parsed-literal::

    0.9913675218842191

The over-fitting we saw previously can be quantified by computing the
f1-score on the training data itself:

.. code:: ipython3

    metrics.f1_score(y_train, clf.predict(X_train), average="macro")

.. parsed-literal::

    1.0

**Regression metrics** In the case of regression models, we need to use
different metrics, such as explained variance.
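As an illustration (a sketch added to the original material), the over-fitted
decision tree from the diabetes example above can be scored with a few of
these regression metrics; it assumes the ``expected`` and ``predicted``
arrays from that cell are still defined:

.. code:: ipython3

    # Common regression metrics applied to the memorizing decision tree above
    # (sketch added for illustration; assumes `expected` and `predicted` from
    # the diabetes cell). On its own training data the tree scores almost
    # perfectly, which is exactly the over-fitting problem discussed above.
    from sklearn import metrics

    print("explained variance:", metrics.explained_variance_score(expected, predicted))
    print("r2 score:          ", metrics.r2_score(expected, predicted))
    print("mean squared error:", metrics.mean_squared_error(expected, predicted))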
Application: Model Selection via Validation
-------------------------------------------

In the previous notebook, we saw Gaussian Naive Bayes classification of the
digits. Here we saw K-neighbors classification of the digits. We’ve also seen
support vector machine classification of digits. Now that we have these
validation tools in place, we can ask quantitatively which of the three
estimators works best for the digits dataset.

- With the default hyper-parameters for each estimator, which gives the best
  f1 score on the **validation set**? Recall that hyperparameters are the
  parameters set when you instantiate the classifier: for example, the
  ``n_neighbors`` in

  ::

      clf = KNeighborsClassifier(n_neighbors=1)

- For each classifier, which value for the hyperparameters gives the best
  results for the digits data? For ``LinearSVC``, we try ``penalty='l1'``
  and ``penalty='l2'``. For ``KNeighborsClassifier`` we use ``n_neighbors``
  between 1 and 10. Note that ``GaussianNB`` does not have any adjustable
  hyperparameters.

.. code:: ipython3

    from sklearn.svm import LinearSVC
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    import warnings

    # suppress warnings from older versions of KNeighbors
    warnings.filterwarnings('ignore', message='kneighbors*')

    X = digits.data
    y = digits.target
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    for Model in [LinearSVC, GaussianNB, KNeighborsClassifier]:
        clf = Model().fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        print(Model.__name__, metrics.f1_score(y_test, y_pred, average="macro"))

    print('------------------')

    # test LinearSVC penalties
    for loss, p, dual in [('squared_hinge', 'l1', False), ('squared_hinge', 'l2', True)]:
        clf = LinearSVC(penalty=p, loss=loss, dual=dual)
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        print("LinearSVC(penalty='{0}', loss='{1}')".format(p, loss),
              metrics.f1_score(y_test, y_pred, average="macro"))

    print('-------------------')

    # test K-neighbors
    for n_neighbors in range(1, 11):
        clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        print("KNeighbors(n_neighbors={0})".format(n_neighbors),
              metrics.f1_score(y_test, y_pred, average="macro"))

.. parsed-literal::

    C:\xavierdupre\__home_\github_fork\scikit-learn\sklearn\svm\_base.py:1244: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
      warnings.warn(

.. parsed-literal::

    LinearSVC 0.9257041879239652
    GaussianNB 0.8332741681010101
    KNeighborsClassifier 0.9804562804949924
    ------------------

.. parsed-literal::

    C:\xavierdupre\__home_\github_fork\scikit-learn\sklearn\svm\_base.py:1244: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
      warnings.warn(
    C:\xavierdupre\__home_\github_fork\scikit-learn\sklearn\svm\_base.py:1244: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
      warnings.warn(

.. parsed-literal::

    LinearSVC(penalty='l1', loss='squared_hinge') 0.9447242283258508
    LinearSVC(penalty='l2', loss='squared_hinge') 0.9385749925598466
    -------------------
    KNeighbors(n_neighbors=1) 0.9913675218842191
    KNeighbors(n_neighbors=2) 0.9848442068835102
    KNeighbors(n_neighbors=3) 0.9867753449543099
    KNeighbors(n_neighbors=4) 0.9803719053818863
    KNeighbors(n_neighbors=5) 0.9804562804949924
    KNeighbors(n_neighbors=6) 0.9757924194139573
    KNeighbors(n_neighbors=7) 0.9780645792142071
    KNeighbors(n_neighbors=8) 0.9780645792142071
    KNeighbors(n_neighbors=9) 0.9780645792142071
    KNeighbors(n_neighbors=10) 0.9755550897728812

Cross-validation
----------------

Cross-validation consists in repeatedly splitting the data into pairs of
train and test sets, called ‘folds’. Scikit-learn comes with a function to
automatically compute the score on all these folds. Here we do ‘K-fold’ with
k=5.

.. code:: ipython3

    clf = KNeighborsClassifier()
    from sklearn.model_selection import cross_val_score
    cross_val_score(clf, X, y, cv=5)

.. parsed-literal::

    array([0.94722222, 0.95555556, 0.96657382, 0.98050139, 0.9637883 ])

We can use different splitting strategies, such as random splitting:

.. code:: ipython3

    from sklearn.model_selection import ShuffleSplit
    cv = ShuffleSplit(n_splits=5)
    cross_val_score(clf, X, y, cv=cv)

.. parsed-literal::

    array([0.98333333, 0.98333333, 0.98888889, 0.98333333, 1.        ])

There exist many different cross-validation strategies in scikit-learn. They
are often useful to take into account datasets that are not i.i.d.

Hyperparameter optimization with cross-validation
-------------------------------------------------

Consider regularized linear models, such as *Ridge Regression*, which uses
:math:`\ell_2` regularization, and *Lasso Regression*, which uses
:math:`\ell_1` regularization. Choosing their regularization parameter is
important.

Let us set these parameters on the Diabetes dataset, a simple regression
problem. The diabetes data consists of 10 physiological variables (age, sex,
weight, blood pressure) measured on 442 patients, and an indication of
disease progression after one year:

.. code:: ipython3

    from sklearn.datasets import load_diabetes
    data = load_diabetes()
    X, y = data.data, data.target
    print(X.shape)

.. parsed-literal::

    (442, 10)

With the default hyper-parameters, we use the cross-validation score to
determine goodness-of-fit:

.. code:: ipython3

    from sklearn.linear_model import Ridge, Lasso

    for Model in [Ridge, Lasso]:
        model = Model()
        print(Model.__name__, cross_val_score(model, X, y).mean())

.. parsed-literal::

    Ridge 0.410174971340889
    Lasso 0.3375593674654274

Basic Hyperparameter Optimization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We compute the cross-validation score as a function of alpha, the strength
of the regularization for Lasso and Ridge. We choose 30 values of alpha
between 0.001 and 0.1:

.. code:: ipython3

    alphas = np.logspace(-3, -1, 30)

    for Model in [Lasso, Ridge]:
        scores = [cross_val_score(Model(alpha), X, y, cv=3).mean() for alpha in alphas]
        plt.plot(alphas, scores, label=Model.__name__)
    plt.legend(loc='lower left');

.. image:: 05_measuring_prediction_performance_40_0.png

Can we trust our results to be actually useful?
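Before automating the search, here is a small sketch (added to the original
material) that reads the best alpha off these validation curves
programmatically; it reuses the ``alphas``, ``X`` and ``y`` defined above and
is the manual version of what ``GridSearchCV`` does next:

.. code:: ipython3

    # Manually pick, for each model, the alpha with the highest mean
    # cross-validation score (sketch; assumes alphas, X, y from the cells above).
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import cross_val_score

    for Model in [Lasso, Ridge]:
        scores = [cross_val_score(Model(alpha), X, y, cv=3).mean() for alpha in alphas]
        print(Model.__name__, "best alpha:", alphas[np.argmax(scores)])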
Automatically Performing Grid Search
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code:: ipython3

    from sklearn.model_selection import GridSearchCV

``GridSearchCV`` is constructed with an estimator, as well as a dictionary of
parameter values to be searched. We can find the optimal parameters this way:

.. code:: ipython3

    for Model in [Ridge, Lasso]:
        gscv = GridSearchCV(Model(), dict(alpha=alphas), cv=3).fit(X, y)
        print(Model.__name__, gscv.best_params_)

.. parsed-literal::

    Ridge {'alpha': 0.06210169418915616}
    Lasso {'alpha': 0.01268961003167922}

Built-in Hyperparameter Search
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For some models within scikit-learn, cross-validation can be performed more
efficiently on large datasets. In this case, a cross-validated version of the
particular model is included. The cross-validated versions of ``Ridge`` and
``Lasso`` are ``RidgeCV`` and ``LassoCV``, respectively. The grid search on
these estimators can be performed as follows:

.. code:: ipython3

    from sklearn.linear_model import RidgeCV, LassoCV

    for Model in [RidgeCV, LassoCV]:
        model = Model(alphas=alphas, cv=3).fit(X, y)
        print(Model.__name__, model.alpha_)

.. parsed-literal::

    RidgeCV 0.06210169418915616
    LassoCV 0.01268961003167922

We see that the results match those returned by ``GridSearchCV``.

Nested cross-validation
~~~~~~~~~~~~~~~~~~~~~~~

How do we measure the performance of these estimators? We have used data to
set the hyperparameters, so we need to test on actually new data. We can do
this by running ``cross_val_score`` on our CV objects. Here there are two
cross-validation loops going on; this is called ‘nested cross-validation’:

.. code:: ipython3

    for Model in [RidgeCV, LassoCV]:
        scores = cross_val_score(Model(alphas=alphas, cv=3), X, y, cv=3)
        print(Model.__name__, np.mean(scores))

.. parsed-literal::

    RidgeCV 0.48916033973224776
    LassoCV 0.4854908670556423

Note that these results do not match the best results of our curves above,
and ``LassoCV`` seems to under-perform ``RidgeCV``. The reason is that
setting the hyper-parameter is harder for Lasso, thus the estimation error
on this hyper-parameter is larger.
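As a closing illustration (a sketch added to the original material), the same
nested scheme can be written with ``GridSearchCV`` as the inner loop; it
reuses the ``alphas``, ``X`` and ``y`` defined above and is equivalent in
spirit to the ``RidgeCV``/``LassoCV`` version:

.. code:: ipython3

    # Nested cross-validation written explicitly: the inner GridSearchCV selects
    # alpha on each training fold, the outer cross_val_score measures how well
    # that whole procedure generalizes (sketch; assumes alphas, X, y from above).
    import numpy as np
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import GridSearchCV, cross_val_score

    for Model in [Ridge, Lasso]:
        inner = GridSearchCV(Model(), dict(alpha=alphas), cv=3)
        scores = cross_val_score(inner, X, y, cv=3)
        print(Model.__name__, np.mean(scores))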