



Multifidelity modeling in support of design and uncertainty quantification

Multifidelity Modeling (MFM) Workshop

6 June 2021, Washington, DC

Multifidelity modeling encompasses a broad range of methods that use approximate models together with high-fidelity models to accelerate a computational task that requires repeated model evaluations. This workshop will highlight the tremendous recent progress of multifidelity methods for design optimization and uncertainty quantification, including (but not limited to) methods based on adaptive sampling, control variate formulations, importance sampling, trust region model management, model fusion, and Bayesian optimization. The focus is on a tutorial-style series of lectures aimed at the practitioner, together with forward-looking discussions of challenges and opportunities. The workshop will include the following key discussion topics:
1) multifidelity formulations that combine computational models with other sources of information, such as experimental data and expert opinion;

2) exploiting the connections between multifidelity modeling and machine learning methods;

3) past successes of applying multifidelity modeling in aircraft design, structural modeling, and other fields;

4) future opportunities in areas such as material design and autonomous systems.


Workshop objectives:

1) Dissemination of recent method developments to the MDO practitioner community.

2) Discussion of challenges and opportunities to identify new collaborations and new research directions.


  • Early AIAA Member Rate: $190
  • Standard AIAA Member Rate: $290
  • Non-Member Rate: $390

Register via the AIAA Aviation website:


Course hours: 1-day duration | 08:00–17:00 hrs

Preliminary agenda:

08:15–08:30 Welcome: Laura Mainini (UTRC)

08:30–09:45 Tutorial/Lecture 1: Matthias Poloczek (UBER)

Scalable Bayesian optimization for high dimensional expensive functions

Bayesian optimization has recently emerged as a powerful method for the sample-efficient optimization of expensive black-box functions. These functions do not have a closed form and are evaluated, for example, by running a complex simulation, performing a lab experiment, or solving a PDE. Use cases arise in machine learning, e.g., when optimizing a reinforcement learning policy; examples in engineering include the design of aerodynamic structures or the search for better materials. However, the application of Bayesian optimization to high-dimensional problems remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In the first part of the talk I will give a self-contained introduction to Bayesian optimization. Then I will present novel algorithms that overcome these limitations and set a new state of the art for high-dimensional problems.

Based on joint work with Alexander Munteanu and Amin Nayebi presented at ICML 2019, and on joint work with David Eriksson, Michael Pearce, Jake Gardner, and Ryan Turner that appeared in the Proceedings of NeurIPS 2019.
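As a companion to the introductory part of the talk, the loop below is a minimal, self-contained sketch of Bayesian optimization: a Gaussian-process surrogate with an expected-improvement acquisition on a 1D toy problem. The objective `f`, the RBF kernel length scale, and the candidate grid are illustrative assumptions, not the algorithms from the talk.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 1D expensive black-box (a stand-in for a simulation).
def f(x):
    return np.sin(3.0 * x) + 0.5 * x

def rbf_kernel(a, b, length=0.3):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-8):
    """Gaussian-process posterior mean and standard deviation at x_query."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    Kinv = np.linalg.inv(K)            # fine for a small sketch; use a solve in practice
    mean = Ks @ Kinv @ y_train
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mean, std, best):
    """Expected improvement for minimization."""
    z = (best - mean) / std
    return (best - mean) * norm.cdf(z) + std * norm.pdf(z)

rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, 4)    # small initial design
y_train = f(x_train)
grid = np.linspace(-2.0, 2.0, 401)     # candidate points for the acquisition

for _ in range(10):                    # BO loop: fit GP, maximize EI, evaluate f
    mean, std = gp_posterior(x_train, y_train, grid)
    ei = expected_improvement(mean, std, y_train.min())
    x_next = grid[np.argmax(ei)]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next))

print(f"best x = {x_train[y_train.argmin()]:.3f}, best f = {y_train.min():.3f}")
```

The acquisition trades off exploiting the surrogate mean against exploring where the posterior standard deviation is large; this exhaustive grid search over candidates is exactly what stops scaling in high dimensions, which motivates the scalable methods in the talk.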

09:45–11:00 Tutorial/Lecture 2: Nathalie Bartoli (ONERA/DTIS, Université de Toulouse)

Bayesian optimization via multi-fidelity surrogate modeling – method and practice

In the context of optimization with multiple information sources of varying fidelity, each with its own accuracy and querying cost, we propose a multi-fidelity extension of Efficient Global Optimization. The method will be illustrated on a 1D example, and airfoil shape optimization results using both a RANS solver and a low-fidelity approximation will be presented. For practical use, a Jupyter Python notebook based on the open-source toolbox SMT will be provided for several academic problems: https://github.com/SMTorg/smt
Based on joint work with Thierry Lefebvre (ONERA), Mostafa Meliani, and Joseph Morlier (ISAE-SUPAERO), and on a collaboration with the University of Michigan (J. R. R. A. Martins, M.-A. Bouhlel).

11:00–11:15 Coffee break

11:15–12:30 Tutorial/Lecture 3: Benjamin Peherstorfer (NYU)

Multifidelity Uncertainty Quantification

Uncertainty quantification with sampling-based methods such as Monte Carlo can require a large number of numerical simulations of models describing the systems of interest to obtain estimates with acceptable accuracies. Thus, if a computationally expensive high-fidelity model is used alone, Monte-Carlo-based uncertainty quantification methods quickly become intractable. In this tutorial presentation, we survey recent advances in multifidelity methods for sampling-based uncertainty quantification. The goal of the multifidelity methods that we discuss is to significantly speed up uncertainty quantification by leveraging low-cost low-fidelity models while establishing accuracy guarantees and unbiasedness via occasional recourse to the expensive high-fidelity models. We survey methods for (a) uncertainty propagation, (b) rare event simulation, (c) sensitivity analysis, and (d) Bayesian inverse problems. If time permits, we will (e) give an outlook to context-aware learning of data-driven low-fidelity models, where models are learned explicitly for improving the performance of multifidelity computations rather than providing accurate approximations of high-fidelity models.

Links to implementations of multifidelity methods: https://cims.nyu.edu/~pehersto/code.html

Multifidelity Monte Carlo implementation in Matlab: https://github.com/pehersto/mfmc
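To make the control-variate idea concrete, here is a minimal numpy sketch of a two-model multifidelity estimator in the spirit of the methods surveyed above. The analytic models `f_hi`/`f_lo`, the sample sizes, and the standard-normal input are illustrative assumptions, not the implementations linked above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical models (illustrative assumptions): f_hi stands in for an
# expensive high-fidelity simulation, f_lo for a cheap correlated surrogate.
def f_hi(z):
    return np.exp(0.1 * z) * np.sin(z) + z ** 2

def f_lo(z):
    return np.sin(z) + z ** 2

n_hi, n_lo = 100, 10_000              # few expensive, many cheap evaluations
z_hi = rng.standard_normal(n_hi)      # inputs shared by both models
z_lo = rng.standard_normal(n_lo)      # extra inputs for the cheap model only

y_hi = f_hi(z_hi)
y_lo_shared = f_lo(z_hi)              # low-fidelity model on the shared inputs
y_lo = f_lo(z_lo)

# Control-variate coefficient, estimated from the shared samples:
# alpha = Cov(y_hi, y_lo) / Var(y_lo).
c = np.cov(y_hi, y_lo_shared)
alpha = c[0, 1] / c[1, 1]

mc_estimate = y_hi.mean()             # plain high-fidelity Monte Carlo
mfmc_estimate = y_hi.mean() + alpha * (y_lo.mean() - y_lo_shared.mean())

print(f"plain MC estimate: {mc_estimate:.4f}")
print(f"multifidelity estimate: {mfmc_estimate:.4f}")
```

The correction term shifts the high-fidelity mean by the discrepancy the cheap model reveals between a small and a large sample; for a fixed alpha the estimator is unbiased, and its variance shrinks with the squared correlation between the two models, which is what makes a cheap but well-correlated low-fidelity model valuable.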

12:30–13:45 Lunch break

13:45–14:45 Tutorial/Lecture 4: Phil Beran (AFRL)


14:45–15:45 Tutorial/Lecture 5: more information to come soon


15:45–16:00 Coffee break

16:00–17:00 Round Table and wrap up: more information to come soon




The workshop is organized by the AIAA Multidisciplinary Design Optimization Technical Committee (AIAA MDO TC).

Organizing team:

Website support:

  • Nadine Barriety (ONERA, France)