Faster Polynomial Features
The current implementation of polynomial features
in scikit-learn computes each new feature
independently, which increases the number of
exchanges between numpy and Python.
The idea of the implementation proposed here
is to reduce this number with broadcast multiplications.
A second optimization comes from transposing the matrix:
dense matrices are stored by rows in memory, so
it is faster to multiply two rows than two columns.
See Faster Polynomial Features.
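The broadcasting idea can be sketched for degree-2 features with plain numpy; the function name below is illustrative, not the actual implementation:

```python
import numpy as np

def poly2_broadcast(X):
    # Degree-2 polynomial features (without bias) via broadcasting:
    # all products X[:, i] * X[:, j] for j >= i are obtained with one
    # vectorized multiplication per column i, instead of one
    # Python-level operation per output feature.
    n, d = X.shape
    parts = [X]
    for i in range(d):
        # Broadcast column i against columns i..d-1 at once.
        parts.append(X[:, i:i + 1] * X[:, i:])
    return np.hstack(parts)

X = np.array([[1., 2.], [3., 4.]])
# Columns: x1, x2, x1^2, x1*x2, x2^2
print(poly2_broadcast(X))
```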
Piecewise Linear Regression
t-SNE is quite an interesting tool to
visualize data on a map, but it has one drawback:
its results are not reproducible. It is much more powerful
than a PCA, but the results are difficult to
interpret. Based on some experiments, if t-SNE
manages to separate classes, there is a good chance that
a classifier can reach good performance. Anyhow, I implemented
a regressor which approximates the t-SNE outputs
so that they can be used as features for a further classifier.
I created a notebook Predictable t-SNE and a new transform.
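The idea of the notebook can be sketched with standard scikit-learn pieces; this is a minimal illustration, not the actual transform:

```python
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPRegressor

X, y = load_iris(return_X_y=True)

# 1. t-SNE produces a 2D embedding, but only for the points
#    it was fit on: there is no transform for unseen data.
emb = TSNE(n_components=2, random_state=0).fit_transform(X)

# 2. A regressor learns the mapping X -> embedding, making the
#    projection predictable: it can now be applied to unseen points
#    and reused as features for a downstream classifier.
reg = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000,
                   random_state=0)
reg.fit(X, emb)

features = reg.predict(X)  # approximate t-SNE coordinates
print(features.shape)      # (150, 2)
```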
scikit-learn introduced nice features to process mixed-type columns in a single pipeline which follows the scikit-learn API: sklearn.compose.ColumnTransformer, sklearn.pipeline.FeatureUnion and sklearn.pipeline.Pipeline. The ideas are not new but they are finally taking place in scikit-learn.
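A minimal example of such a pipeline, on a toy dataset with one numeric and one categorical column:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy mixed-type dataset.
df = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "city": ["paris", "lyon", "paris", "lille"],
})
y = [0, 1, 0, 1]

# ColumnTransformer applies a different preprocessing to each subset
# of columns, then concatenates the results into a single matrix.
pre = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(), ["city"]),
])

model = Pipeline([("pre", pre), ("clf", LogisticRegression())])
model.fit(df, y)
print(model.predict(df))
```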
Quantile regression with scikit-learn
scikit-learn does not have any quantile regression.
:epkg:`statsmodels` does have one
but I wanted to try something I did for my teachings
based on Iteratively reweighted least squares.
I thought it was a good case study to turn a simple algorithm into
a learner scikit-learn can reuse in a pipeline.
The notebook Quantile Regression demonstrates it
and it is implemented in
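The iteratively reweighted least squares idea can be sketched for the median (the 0.5 quantile) as follows; this is an illustrative version, not the notebook's exact implementation, and the general quantile case would reweight positive and negative residuals asymmetrically:

```python
import numpy as np

def median_regression_irls(X, y, n_iter=50, eps=1e-6):
    # IRLS for least absolute deviation (median) regression:
    # since |r| = r^2 / |r|, minimizing sum |y - X b| is approximated
    # by repeatedly solving a weighted least squares problem with
    # weights w_i = 1 / max(|r_i|, eps).
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)
        # Weighted normal equations: (X^T W X) beta = X^T W y
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

rng = np.random.RandomState(0)
X = np.hstack([np.ones((100, 1)), rng.uniform(0, 5, (100, 1))])
y = 1.0 + 2.0 * X[:, 1] + rng.laplace(0, 0.3, 100)
print(median_regression_irls(X, y))  # close to [1., 2.]
```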
Function to get insights on machine learned models
Machine learned models are black boxes. The module tries to implement some functions to get insights on them.