ML.NET 0.8 Release Notes

Today we are excited to release ML.NET 0.8, our best version so far! This release adds the first steps toward model explainability (understanding which features, or inputs, matter most to a model), improves debuggability, makes time series predictions easier to use, includes several API improvements, covers a new recommendation use case, and more.


ML.NET supports Windows, macOS, and Linux. See supported OS versions of .NET Core 2.0 for more details.

You can install ML.NET NuGet from the CLI using:

dotnet add package Microsoft.ML

From package manager:

Install-Package Microsoft.ML

Release Notes

Below are some of the highlights from this release.

  • Added first steps towards model explainability (#1735, #1692)

    • Enabled explainability in the form of overall feature importance and generalized additive models.
    • Overall feature importance gives a sense of which features are overall most important for the model. For example, when predicting the sentiment of a tweet, the presence of “amazing” might be more important than whether the tweet contains “bird”. This is enabled through Permutation Feature Importance. Example usage can be found here.
    • Generalized Additive Models have very explainable predictions. They are similar to linear models in terms of ease of understanding but are more flexible and can have better performance. Example usage can be found here.
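
Permutation Feature Importance is conceptually simple: shuffle one feature's values across rows, re-evaluate the model, and measure how much the metric degrades. The sketch below illustrates the idea in plain C# with no ML.NET types; the `evaluate` delegate and all names are illustrative stand-ins, not ML.NET API.

```csharp
// Minimal sketch of Permutation Feature Importance.
// 'evaluate' stands in for scoring a trained model on a dataset.
double[] FeatureImportances(double[][] rows, double[] labels,
                            Func<double[][], double[], double> evaluate)
{
    int featureCount = rows[0].Length;
    double baseline = evaluate(rows, labels);
    var importances = new double[featureCount];
    var rng = new Random(42);

    for (int f = 0; f < featureCount; f++)
    {
        // Copy the data and shuffle a single feature column.
        var shuffled = rows.Select(r => (double[])r.Clone()).ToArray();
        for (int i = shuffled.Length - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            (shuffled[i][f], shuffled[j][f]) = (shuffled[j][f], shuffled[i][f]);
        }
        // Importance = how much the metric degrades once this feature
        // is decoupled from the labels.
        importances[f] = baseline - evaluate(shuffled, labels);
    }
    return importances;
}
```

Features whose shuffling barely moves the metric contribute little to the model; features whose shuffling hurts it the most are the most important.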
  • Improved debuggability by previewing IDataViews (#1518)

    • It is often useful to peek at the data that is read into an ML.NET pipeline and even look at it after some intermediate steps to ensure the data is transformed as expected.
    • You can now preview an IDataView by going to the Watch window in the VS debugger, entering the name of the IDataView variable you want to inspect, and calling its Preview() method.
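
A minimal sketch of what this looks like in code (the data class, file path, and loader call are illustrative assumptions; only Preview() comes from the release notes):

```csharp
// Sketch: inspecting an IDataView during development.
var mlContext = new MLContext();

// Loader details are assumptions for illustration.
IDataView data = mlContext.Data.ReadFromTextFile<SentimentData>("sentiment.tsv");

// Works in code or in the VS Watch window: data.Preview()
var preview = data.Preview();
// 'preview' exposes the schema and the first rows, so you can verify
// that each transformation produced the columns you expect.
```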

  • Enabled a stateful prediction engine for time series problems (#1727)

    • ML.NET 0.7 enabled anomaly detection scenarios. However, the prediction engine was stateless: every time you wanted to figure out whether the latest data point was anomalous, you needed to provide the historical data as well. This is unnatural.
    • The prediction engine can now keep state of time series data seen so far, so you can now get predictions by just providing the latest data point. This is enabled by using CreateTimeSeriesPredictionFunction instead of MakePredictionFunction. Example usage can be found here. You’ll need to add the Microsoft.ML.TimeSeries NuGet to your project.
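
A hedged sketch of the stateful engine described above (the type names and the exact generic signature are assumptions; CreateTimeSeriesPredictionFunction itself is named in the release notes):

```csharp
// Sketch: stateful prediction over a time series.
// 'model' is a trained time series transform, e.g. a spike detector.
var engine = model.CreateTimeSeriesPredictionFunction<TimeSeriesData, SpikePrediction>(mlContext);

// Only the newest point is needed; the engine remembers the history
// it has already seen, unlike a stateless MakePredictionFunction.
var prediction = engine.Predict(new TimeSeriesData { Value = 42f });
```

Each call to Predict() updates the engine's internal state, so subsequent calls keep building on the observed history.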
  • Improved support for recommendation scenarios with implicit feedback (#1664)

    • ML.NET 0.7 included Matrix Factorization which enables using ratings provided by users to recommend other items they might like.
    • In some cases, you don’t have specific ratings from users but only implicit feedback (e.g. they watched the movie but didn’t rate it).
    • Matrix Factorization in ML.NET can now use this type of implicit data to train models for recommendation scenarios.
    • Example usage can be found here. You’ll need to add the Microsoft.ML.MatrixFactorization NuGet to your project.
  • Enabled saving and loading data as a binary file (IDataView/IDV) (#1678)

    • It is sometimes useful to save data after it has been transformed. For example, you might have featurized all the text into sparse vectors and want to perform repeated experimentation with different trainers without continuously repeating the data transformation.
    • Saving and loading files in ML.NET’s binary format can help efficiency as it is compressed and already schematized.
    • Reading a binary data file can be done using mlContext.Data.ReadFromBinary("pathToFile") and writing a binary data file can be done using mlContext.Data.SaveAsBinary("pathToFile").
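
Put together, a round trip might look like the following sketch (method names follow the release notes; the data argument to SaveAsBinary and the exact overloads are assumptions):

```csharp
// Sketch: round-tripping transformed data through ML.NET's binary format.
// 'transformedData' is an IDataView of already-featurized text.
mlContext.Data.SaveAsBinary(transformedData, "featurized.idv");

// Later experiments can skip the transformation pipeline entirely:
IDataView cached = mlContext.Data.ReadFromBinary("featurized.idv");
```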
  • Added filtering and caching APIs (#1569)

    • There is sometimes a need to filter the data used for training a model. For example, you might need to remove rows that don’t have a label, or focus your model on certain categories of inputs. This can now be done with additional filters as shown here.
    • Some estimators iterate over the data multiple times. Instead of always reading from file, you can choose to cache the data to potentially speed things up. An example can be found here.
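
As a sketch of how filtering and caching could combine (the method names and parameters here are assumptions, not verified 0.8 signatures; consult the linked examples for the exact API):

```csharp
// Sketch: keep only rows whose Label falls in a range, then cache the
// result so multi-pass trainers don't re-read and re-transform the file.
var filtered = mlContext.Data.FilterByColumn(data, "Label", lowerBound: 0, upperBound: 1);
var cached = mlContext.Data.Cache(filtered);
```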


Shoutout to jwood803, feiyun0112, bojanmisic, rantri, Caraul, van-tienhoang, Thomas-S-B, and the ML.NET team for their contributions as part of this release!