Predictable t-SNE

t-SNE is not a transformer that can produce outputs for inputs other than those used to fit it. The proposed solution is to train a predictor afterwards that approximates the t-SNE mapping, so the result can be applied to inputs the model never saw.

t-SNE on MNIST

Let's reuse part of the scikit-learn example Manifold learning on handwritten digits: Locally Linear Embedding, Isomap….

Let's split into train and test.
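A minimal sketch of this step, assuming the small digits dataset bundled with scikit-learn stands in for MNIST (as in the referenced example):

```python
# Load the digits dataset (a small MNIST-like dataset shipped with
# scikit-learn) and split it into train and test sets.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
print(X_train.shape, X_test.shape)  # (1347, 64) (450, 64)
```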

Repeatable t-SNE

We use the class PredictableTSNE, but it works for other trainable transforms too.

The difference now is that the fitted transform can be applied to new data.
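The idea can be sketched with plain scikit-learn, without the PredictableTSNE class itself: fit t-SNE on the training data, then fit a regressor that maps the original features to the t-SNE coordinates. The variable names and hyperparameters below are illustrative, not the mlinsights API.

```python
# Sketch of the PredictableTSNE idea with plain scikit-learn:
# approximate the t-SNE mapping with a regressor so it can be
# applied to unseen data.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = load_digits(return_X_y=True)
# Small subset to keep t-SNE fast in this sketch.
X_train, X_test = train_test_split(X, train_size=300, test_size=100,
                                   random_state=0)

# t-SNE only produces coordinates for the data it was fitted on.
tsne = TSNE(n_components=2, init="pca", random_state=0)
emb_train = tsne.fit_transform(X_train)

# A predictor learns to approximate the mapping X_train -> emb_train.
approx = MLPRegressor(hidden_layer_sizes=(50,), max_iter=500,
                      random_state=0)
approx.fit(X_train, emb_train)

# The learned mapping now applies to data t-SNE never saw.
emb_test = approx.predict(X_test)
print(emb_test.shape)  # (100, 2)
```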

By default, the output is normalized so that results are comparable across runs, such as the loss computed between the normalized t-SNE output and its approximation.
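The normalization step can be sketched as follows; the arrays here are random stand-ins for the raw t-SNE output and the predictor's approximation, used only to show the scaling and the kind of loss meant above:

```python
# Normalize the t-SNE output to unit variance per axis, then compute
# a mean squared error between the normalized output and its
# approximation. Both arrays below are illustrative stand-ins.
import numpy as np

rng = np.random.RandomState(0)
emb = rng.randn(200, 2) * 50.0        # stand-in for raw t-SNE output
emb_norm = emb / emb.std(axis=0)      # unit variance per axis
pred = emb_norm + rng.randn(200, 2) * 0.1  # stand-in for the approximation

loss = np.mean((emb_norm - pred) ** 2)
print(loss)
```

Because the target has unit variance, the loss value has a fixed scale and can be compared across runs.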

Repeatable t-SNE with another predictor

The predictor is a MLPRegressor.

Let's replace it with a KNeighborsRegressor and a normalizer StandardScaler.
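The same swap, sketched again with plain scikit-learn pieces (PredictableTSNE accepts the estimator and normalizer as arguments; this stand-alone version only mirrors the idea, and the hyperparameters are illustrative):

```python
# Variant: KNeighborsRegressor as the predictor, StandardScaler as
# the normalizer applied to the t-SNE output.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test = train_test_split(X, train_size=300, test_size=100,
                                   random_state=0)

emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_train)
emb = StandardScaler().fit_transform(emb)  # normalized targets

knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(X_train, emb)

# Loss on the training set: only a sanity check, not a measure of
# generalization.
train_loss = np.mean((knn.predict(X_train) - emb) ** 2)
emb_test = knn.predict(X_test)
print(emb_test.shape)  # (100, 2)
```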

The model seems to work better, as the loss is lower; but since the loss is evaluated on the training dataset, it is only a sanity check that the approximation error is not too large.