.. image:: pystat.png
    :height: 20
    :alt: Statistics
    :target: http://www.xavierdupre.fr/app/ensae_teaching_cs/helpsphinx/td_2a_notions.html#pour-un-profil-plutot-data-scientist

Learning without labels
+++++++++++++++++++++++

.. toctree::
    :maxdepth: 2

    ../specials/nolabel

*Notebooks*

*(coming soon)*

*Readings*

*Autoencoders - dimensionality reduction*

* `Why Does Unsupervised Pre-training Help Deep Learning? `_
* `Autoencoders `_
* `Autoencoders, Unsupervised Learning, and Deep Architectures `_
* `Generative Models `_,
  `Adversarial Autoencoders `_
* `Tutorial on Variational Autoencoders `_,
  `Denoising Autoencoders (dA) `_
* `Generative Adversarial Networks `_,
  `NIPS 2016 Tutorial: Generative Adversarial Networks `_
* `Adversarial Autoencoders `_
* `Adversarial Autoencoders (with Pytorch) `_
* `Marginalizing Stacked Linear Denoising Autoencoders `_
* `What Regularized Auto-Encoders Learn from the Data-Generating Distribution `_
* `Compressed sensing and single-pixel cameras `_
* `Multi-Label Prediction via Compressed Sensing `_
* `Inference in generative models using the Wasserstein distance `_,
  `Coupling of Particle Filters `_
* `Auto-Encoding Variational Bayes `_

*No labels, weak labels*

* `Unsupervised Supervised Learning I: Estimating Classification and Regression Errors without Labels `_
* `Unsupervised Supervised Learning II: Margin-Based Classification without Labels `_,
  `Unsupervised Supervised Learning II: Margin-Based Classification Without Labels `_ (longer version)
* `Large-scale Multi-label Learning with Missing Labels `_
* `Reducing Label Complexity by Learning From Bags `_
* `Learning from Corrupted Binary Labels via Class-Probability Estimation `_
* `Generalized Expectation Criteria for Semi-Supervised Learning with Weakly Labeled Data `_
* `Multitask Learning without Label Correspondences `_
* `Training Highly Multiclass Classifiers `_

*Online training*

* `Online Incremental Feature Learning with Denoising Autoencoders `_
* `Fast Kernel Classifiers with Online and Active Learning `_,
  `A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data `_
* `Multi Kernel Learning with Online-Batch Optimization `_

*Improving the training set*

* `Data Programming: Creating Large Training Sets, Quickly `_
* `Foolbox is a Python toolbox to create adversarial examples that fool neural networks. `_

*Adversarial Examples*

* `The Limitations of Deep Learning in Adversarial Settings `_:
  the article exposes limitations of the deep learning approach by constructing
  examples that stay close to the original inputs yet make the model fail.
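To make the last idea concrete, here is a minimal sketch of an adversarial example. Everything in it is a hypothetical illustration (toy data, a hand-rolled logistic regression, a gradient-sign perturbation in the spirit of these readings); it is *not* the construction used in the cited article, which targets deep networks.

```python
# Illustrative sketch: an adversarial example against a linear classifier.
# All data and model choices below are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: two overlapping Gaussian blobs in 100 dimensions.
n, d = 200, 100
X = np.vstack([rng.normal(-0.2, 1.0, (n, d)), rng.normal(0.2, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y = 1)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

def predict(v):
    return int(v @ w + b > 0)

# Take an example the model classifies as class 0, then push it just across
# the decision boundary along sign(w): each coordinate moves only by eps,
# but for a linear score the effect accumulates to eps * ||w||_1, which is
# why a small per-coordinate change can flip the prediction.
x = next(X[i] for i in range(n) if predict(X[i]) == 0)
score = x @ w + b                              # negative for class 0
eps = 1.1 * abs(score) / np.sum(np.abs(w))     # just enough to cross zero
x_adv = x + eps * np.sign(w)

print("per-coordinate shift:", eps)
print("prediction before:", predict(x), "after:", predict(x_adv))
```

The accumulation argument in the comments is the linearity explanation popularized by the fast gradient sign method; the articles listed above study the same phenomenon for deep models, where the perturbation is computed from the network's input gradient instead.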