Workshop on calibration of numerical codes


When & where

The workshop will take place on May 31, 2023, at Institut Henri Poincaré, Paris.

Room: amphithéâtre Hermite (ground floor).


Presentation of the workshop

The workshop aims to present recent advances in the calibration and validation of numerical codes using experimental data, ranging from fundamental results in deterministic or probabilistic frameworks to practical applications.

Organizers: Pierre Barbillon, Stéphanie Mahévas, Guillaume Perrin.


Registration

Registration is free but mandatory.


Schedule


Abstracts

Rui Paulo (Universidade de Lisboa) - Simultaneous calibration of a computer model and selection of active inputs in its discrepancy function
In the context of the classical Kennedy and O'Hagan (2001) calibration framework, which we briefly review, we propose a methodology to simultaneously calibrate a computer model and screen the associated discrepancy function for active inputs. The discrepancy function is the object introduced to account for model inadequacy when linking the computer model with field observations. We contend that screening this function for active inputs is an important problem, as it informs the modeler not only which inputs are potentially being mishandled in the model, but also along which directions it may be less advisable to use the model for prediction. By attacking this task from a fully Bayesian perspective, which uses the joint posterior distribution of the discrepancy function and the vector of calibration parameters, we minimize the effects of the well-known confounding between these two unknowns. The methodology is inspired by the continuous spike-and-slab prior popularized by the literature on Bayesian variable selection. In our approach, and in contrast with previous proposals, a single MCMC sample from the full model allows us to compute the posterior probabilities of all the competing models, resulting in a methodology that is computationally very fast. The approach hinges on the ability to obtain posterior inclusion probabilities of the inputs, which are easy-to-interpret quantities, as the basis for identifying active inputs. For that reason, we name the methodology PIPS (posterior inclusion probability screening).
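
As background for this and several of the talks below, here is a minimal sketch of the Kennedy and O'Hagan model (our notation, for orientation only): a field observation y_i at input x_i is modeled as

    y_i = f(x_i, \theta) + \delta(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),

where f is the computer model, \theta the calibration parameters and \delta the discrepancy function, typically given a Gaussian process prior. Screening \delta then amounts to estimating, for each input j, the posterior probability that j is active in \delta, which is exactly the inclusion probability the spike-and-slab construction delivers.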

Pietro Congedo (CMAP-INRIA) - Bayesian calibration of computer codes with Full Maximum a Posteriori (FMP) estimation of model error
We present a computer model calibration technique with model error, inspired by the well-known Bayesian framework of Kennedy and O'Hagan (KOH). We introduce a functional dependency between the model-error hyperparameters and the model parameters, which we call the Full Maximum a Posteriori (FMP) method. It leads to a flexible, non-parametric distribution of the model error and allows a more conservative representation of parameter uncertainty than traditional frameworks. The method also eliminates the need for a "true" value of the model parameters, which caused identifiability issues in the KOH formulation. Solving this framework requires sampling algorithms such as Markov chain Monte Carlo (MCMC), where each step involves a hyperparameter estimation; this constitutes the bulk of the computational cost of the method. We first propose several strategies to accelerate this estimation step. Second, we propose an adaptive sampling algorithm for building surrogates that are accurate in the regions effectively visited by the chain.
We then present numerical examples demonstrating the robustness of the approach.
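
To fix ideas, one way to write the functional dependency described above (our notation, a sketch rather than the speaker's exact formulation):

    \hat{\varphi}(\theta) = \arg\max_{\varphi}\; p(\varphi \mid \theta, y), \qquad
    \pi_{\mathrm{FMP}}(\theta \mid y) \;\propto\; \mathcal{L}\big(y \mid \theta, \hat{\varphi}(\theta)\big)\, \pi(\theta),

where \varphi are the model-error hyperparameters: each value of \theta is paired with its own maximum a posteriori hyperparameters, so every MCMC step over \theta triggers one hyperparameter optimization. This is the estimation step the acceleration strategies target.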

Guillaume Damblin (CEA/DES) - Inverse uncertainty quantification of input model parameters in thermal-hydraulic simulation
Thermal-hydraulic numerical simulations are essential to study the behavior of existing and innovative nuclear power plants, as well as to support safety demonstrations for hypothetical accident scenarios. Such simulations are built by making a trade-off between complexity and accuracy and, as a result, are affected by various types of uncertainty: numerical, experimental and epistemic. In this presentation, we focus on the latter, which is essentially induced by model simplifications and by a lack of knowledge of the physical phenomena. In this context, we consider inverse statistical methods, based on comparisons between simulations and experimental data, that quantify both parameter and model uncertainties as probability distributions. After characterizing these sources of uncertainty in the framework of nuclear thermal-hydraulic simulations, several recent contributions dealing with non-linear inverse problems, identifiability, and dependence on experimental setups will be presented.
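
Schematically (in our notation, not necessarily the formulation used in the talk), these inverse methods quantify uncertainty through the posterior distribution given experimental data y:

    p(\theta, \delta \mid y) \;\propto\; \mathcal{L}\big(y \mid \theta, \delta\big)\, \pi(\theta)\, \pi(\delta),

where \theta gathers the uncertain input model parameters and \delta the model (discrepancy) uncertainty; the marginals of this posterior are the probability distributions referred to above.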

Hilaire Drouineau (INRAE) - Calibration of a complex population dynamics model: from its development to results validation
The use of population dynamics models, especially to support decision making, generally requires fitting the model to observed data in order to calibrate many unknown parameters. This also contributes to increasing stakeholders' confidence in the tool, by demonstrating its ability to mimic past trajectories. The large number of unknown parameters and the long computation times require a rigorous calibration process, from the development of the model itself to the exploration and validation of the results. This presentation will go through the different steps and loops that were required to calibrate a model of the European eel, and will present the tools that the Mexico network is developing to facilitate choices in the calibration process and to disseminate the solutions implemented.

Julien Waeytens (UGE) - Global versus Goal-oriented model updating techniques in deterministic settings - Application to urban issues
In the 21st century, many environmental and societal challenges have to be faced: climate change, energy efficiency, air pollution, water scarcity, biodiversity loss, etc. To tackle these issues, numerical methods and city digital twins are promising decision-support tools for local authorities. Indeed, inverse techniques combining physical modelling and sensor outputs can be used in city applications to predict physical fields (e.g., city air quality maps), to detect urban anomalies (e.g., leaks in drinking water networks) and to design efficient urban planning strategies (e.g., smart placement of depolluting panels in urban areas). Because the number of model parameters is large while few sensors are deployed in city applications, the inverse problems to be solved are generally ill-posed. To address this problem, I will present a goal-oriented model updating method in a deterministic setting. Contrary to standard global model updating techniques, the objective is to accurately predict a chosen quantity of interest by calibrating a limited number of model parameters. The approach embeds sensitivity analysis through an adjoint framework. The method will notably be illustrated on thermal building applications, with real sensor outputs from a building of the Sense-City facility.
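
To make the contrast concrete (our notation; a schematic, not the speaker's exact formulation): a global updating method fits all model parameters p to all sensor outputs, whereas the goal-oriented method concentrates on a quantity of interest q:

    \text{global:} \quad \min_{p}\; \sum_i \big( s_i(u(p)) - s_i^{\mathrm{obs}} \big)^2
    \text{goal-oriented:} \quad \text{calibrate only the parameters } p_j \text{ with large sensitivity } \partial q(u(p)) / \partial p_j

For a single scalar quantity of interest, the adjoint framework provides all these sensitivities at the cost of roughly one extra model solve, independently of the number of parameters.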

Marine Dumon (UGE) - Bayesian calibration of sensors for air and water pollution monitoring
Recently, nanomaterial-based sensors have become a solution for monitoring air and water pollution. However, they can be difficult to calibrate due to their high sensitivity in uncontrolled environments. To model the calibration law, an inverse problem is solved in a Bayesian framework. The key point of this formalism is the choice of the inputs and outputs of the model. The study proceeds in two steps: first a calibration step, to obtain a predictive model, and then an inversion phase of this model, to predict the pollutant concentrations. Bayesian calibration makes it possible to take all the relevant uncertainties into account, on the one hand by adding a model error, and on the other hand by considering the uncertainties on the input data (from the reference measurement instruments) and on the output data (from the sensors). The approach will first be tested on simulated data in order to validate its viability and performance, and then applied to data from sensors deployed in real environments.
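
A possible schematic of the two steps (our notation and an assumed form of the calibration law, for illustration only): with c the pollutant concentration, s the raw sensor response, g the calibration law and \theta its parameters,

    \text{calibration:} \quad s_i = g(c_i^{\mathrm{ref}}, \theta) + e_i \;\Rightarrow\; p(\theta \mid \text{data}),
    \text{inversion:} \quad p(c \mid s^{\mathrm{new}}) \;\propto\; \int p(s^{\mathrm{new}} \mid c, \theta)\, p(\theta \mid \text{data})\, \mathrm{d}\theta,

with measurement uncertainty entering on both c_i^ref (reference instruments) and s_i (sensors), and e_i carrying the model error.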

Adama Barry (IFPEN) - Design of experiments for the calibration of computational codes
Designing physical and numerical experiments to solve a calibration problem is crucial in a context where the physical phenomenon is modeled by a costly computational code and where the physical experiments are also costly. We follow the classical Bayesian framework of Kennedy and O'Hagan (2001) and propose an algorithm for designing physical and numerical experiments. The first step is to design the physical experiments, so we will begin by presenting criteria for measuring the quality of a physical design. These criteria can be grouped into two categories: those based on the information matrix, from the literature, and those based on the exact posterior distribution. The latter are better suited to the calibration problem because they take into account the uncertainty on the physical phenomenon and on the calibration parameters. However, they are expensive to evaluate because of the Monte Carlo procedures involved. The first challenge is therefore the fast evaluation of these criteria, and the second is the solution of the ensuing optimization problem. For the first, we will present a fast computation method that avoids Monte Carlo procedures; for the second, a variant of a simulated annealing algorithm. These criteria will be combined with a design algorithm for numerical experiments from the literature to yield a mixed design algorithm for the calibration of computational codes. Numerical experiments on a toy case will be presented to assess and compare the performance of the algorithms.
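
As an illustration of the optimization loop only, here is a small, self-contained Python sketch of simulated annealing over candidate designs. The criterion used here is a simple maximin-distance stand-in, not the information-matrix or posterior-based criteria of the talk, and all names are ours:

    import math
    import random

    def criterion(design):
        """Stand-in design-quality criterion: maximin distance.

        The talk's criteria are based on the information matrix or on the
        exact posterior distribution; this toy version only exercises the
        optimization loop."""
        return min(math.dist(a, b)
                   for i, a in enumerate(design)
                   for b in design[i + 1:])

    def simulated_annealing(candidates, n_points, n_iter=5000, t0=1.0):
        """Select n_points candidates that (approximately) maximize criterion()."""
        design = random.sample(candidates, n_points)
        val = criterion(design)
        best, best_val = list(design), val
        for k in range(n_iter):
            t = max(t0 * (1.0 - k / n_iter), 1e-9)   # linear cooling schedule
            proposal = design.copy()
            proposal[random.randrange(n_points)] = random.choice(candidates)
            new_val = criterion(proposal)
            # always accept improvements; accept degradations with Boltzmann probability
            if new_val >= val or random.random() < math.exp((new_val - val) / t):
                design, val = proposal, new_val
                if val > best_val:
                    best, best_val = list(design), val
        return best, best_val

    # toy usage: choose 6 experiments among a 2-D grid of candidate settings
    grid = [(i / 10.0, j / 10.0) for i in range(11) for j in range(11)]
    design, quality = simulated_annealing(grid, n_points=6)
    print(design, quality)

In the setting of the talk, criterion() would be replaced by the fast, Monte-Carlo-free evaluation of a posterior-based criterion.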

Katarina Radišić (INRAE) - Calibrating a hydrological model robustly to rain perturbations with stochastic surrogates
Misspecifying the external forcings (such as rain) of a hydrological model can directly affect subsequent parameter calibrations. Indeed, with classical calibration methods, the error in the external forcings is propagated to the model output and, if not treated correctly, is compensated by overcalibrating the model parameters. As a consequence, parameter values found optimal for one value of the external forcings are not guaranteed to be optimal for another. The aim of robust calibration is thus to propose parameter estimators that are satisfactory over a large set of values of the external forcings. We present the robust calibration of PESHMELBA, a distributed, process-based hydrological model of pesticide transfer used to simulate pesticide fate in small agricultural catchments.
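
Schematically (our notation), robust calibration replaces a fit under a single nominal forcing by an objective averaged over the uncertain forcing W:

    \theta^{*} = \arg\min_{\theta}\; \mathbb{E}_{W}\Big[ L\big(y^{\mathrm{obs}},\, m(\theta, W)\big) \Big],

where m is the hydrological model and L a calibration loss; the stochastic surrogates are built to make this expectation over rain perturbations cheap to estimate.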