Working meeting "Uncertainty quantification and machine learning" - March, 10th 2020

Seminar organized by the GdR MASCOT-NUM

Organizers: Sébastien Da Veiga, Bertrand Iooss, Anthony Nouy, Guillaume Perrin, Victor Picheny

It will take place on:

March 10, 2020, at Amphithéâtre Hermite, Institut Henri Poincaré, Paris.

Presentation of the workshop

Recent advances that have emerged in parallel in the machine learning and UQ communities show that both fields could benefit even further from joint research efforts and practice sharing. In particular, trending issues in machine learning related to nonlinear approximation theory, prediction uncertainty and explainability of black-box models have deep links with methodologies explored in the UQ community. The goal of this workshop is to investigate these links and to gather a diverse audience from both research areas in order to facilitate future collaborations between them.



Agenda

9h15 - Welcome - Introduction

Theme 1: Nonlinear Approximation - Deep learning

9h30 - Anthony Nouy (Ecole Centrale de Nantes): Learning with deep tensor networks

10h15 - TBA

11h00 - Break

Theme 2: Explainability and interpretability

11h30 - Jean-Michel Loubès (IMT, ANITi): Global explanation of machine learning with sensitivity analysis

12h15 - Lunch break (on your own)

14h30 - Christophe Labreuche (Thalès): Interpretability methods in AI and a comparison with sensitivity analysis

Theme 3: Prediction uncertainty

15h15 - Nicolas Brosse (Thalès): Uncertainties for classification tasks in Deep Neural Networks: a last layer approach

16h00 - Break

16h15 - Sébastien Da Veiga (Safran Tech): Sampling posteriors in high dimension: potential industrial applications with UQ

17h00 - End


Abstracts

TBA

TBA

TBA

Interpretability and sensitivity analysis methods originate from different communities (AI for the former, statistics for the latter) and pursue different goals (local analysis for the former, global analysis for the latter). However, interesting connections can be drawn between them. Before doing so, we will start by describing some challenges in a large class of interpretability methods called Feature Attribution.

Feature Attribution aims at allocating the level of influence of each feature on the output of the AI model. The Shapley value is one of the leading concepts for feature attribution. Its benefit is its axiomatic justification in Cooperative Game Theory. It has been adapted to different fields of AI. In Machine Learning, the difficulty is to take into account the dependencies among features. In Decision Aiding, the features are often organized in a hierarchical way and the standard Shapley value is not suitable.
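
As an illustration of the Shapley value for feature attribution, the Python sketch below computes exact Shapley values by enumerating all coalitions for a small number of features. It is a minimal sketch only: the linear model, the input point and the baseline used to represent absent features are hypothetical choices made for the example, and replacing absent features by a fixed baseline implicitly assumes feature independence (precisely the limitation mentioned above).

import itertools
import math

import numpy as np


def shapley_attribution(model, x, baseline):
    """Exact Shapley values for a single prediction (exponential in the
    number of features, so only for small d). Absent features are replaced
    by baseline values, which implicitly assumes independent features."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(others, k):
                # Shapley weight |S|! (d - |S| - 1)! / d!
                weight = math.factorial(k) * math.factorial(d - k - 1) / math.factorial(d)
                # Coalition S without and with feature i, other features at baseline
                z_without = baseline.astype(float).copy()
                z_without[list(S)] = x[list(S)]
                z_with = z_without.copy()
                z_with[i] = x[i]
                phi[i] += weight * (model(z_with) - model(z_without))
    return phi


# Toy usage on a hypothetical linear model: the attributions recover the
# individual contributions beta_j * (x_j - baseline_j).
beta = np.array([1.0, -2.0, 0.5])
model = lambda z: float(beta @ z)
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
print(shapley_attribution(model, x, baseline))  # approx. [1.0, -2.0, 0.5]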

We will show interesting connections between Feature Attribution methods and sensitivity analysis. Under some assumptions, the Sobol indices correspond to a variant of the Shapley values.
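
On the sensitivity-analysis side, the following sketch estimates first-order Sobol indices with a standard pick-freeze Monte Carlo scheme, illustrated on the classical Ishigami test function with independent uniform inputs. It is a generic illustration of Sobol indices only, not the specific Shapley-value variant discussed in the talk.

import numpy as np


def sobol_first_order(model, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimator of the first-order Sobol indices
    S_i = Var(E[Y | X_i]) / Var(Y), assuming independent inputs uniform on
    [0, 1]^d. `model` maps an (n, d) array of inputs to n scalar outputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # A with its i-th column replaced by that of B
        yABi = model(ABi)
        # Saltelli-type estimator of Var(E[Y | X_i])
        S[i] = np.mean(yB * (yABi - yA)) / var_y
    return S


def ishigami(U, a=7.0, b=0.1):
    """Classical Ishigami test function, uniform [0, 1] inputs mapped to [-pi, pi]."""
    X = -np.pi + 2.0 * np.pi * U
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])


# Analytical first-order indices are roughly S1 = 0.31, S2 = 0.44, S3 = 0.
print(sobol_first_order(ishigami, d=3))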

TBA

TBA
