
## Error control in first-principles modelling

#### CECAM-HQ-EPFL, Lausanne, Switzerland & online (hybrid format)


The determination of errors, and their annotation in the form of error bars, is widely established in the experimental branches of chemistry, physics, and materials science. In contrast, a rigorous error analysis for first-principles materials simulations is lacking, and hence the results of such simulations are generally quoted without simulation-specific errors. We consider this a severe obstacle for innovation, given the important role that multiscale numerical simulations now play in chemical and physical research. In this workshop, we aim to bring researchers from different communities into contact, for an interdisciplinary discussion of challenges and opportunities in quantifying errors across simulation scales [1].

Errors in first-principles simulations have certainly been investigated, with the aim of advancing both models and numerical methods. However, the state of the art is to rely mainly on benchmarking procedures or explicit convergence studies. Due to their cost, such studies are limited to a small number of cases and are insufficient to estimate the error in high-throughput or data-driven approaches, which can easily involve millions of simulations. This is a severe obstacle in a large-scale screening context, where differences in the quantities of interest can become small across a design space, such that an understanding of errors is crucial to reliably select the most relevant compounds [2].
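As a concrete illustration of what such a convergence study involves, the sketch below estimates the converged value of a total energy, and the error at each refinement level, from a short series of increasingly tight cutoffs. The energies are synthetic stand-ins generated under an assumed geometric (exponential) convergence law, not the output of any real electronic-structure code.

```python
import math

def converged_estimate(energies):
    """Estimate the converged energy from the last three points of a
    convergence series, assuming geometric decay of the differences."""
    e1, e2, e3 = energies[-3:]
    r = (e3 - e2) / (e2 - e1)                # decay factor of successive differences
    e_inf = e3 + (e3 - e2) * r / (1.0 - r)   # sum the remaining geometric tail
    errors = [e - e_inf for e in energies]   # per-cutoff discretisation error
    return e_inf, errors

# Synthetic total energies (eV) at increasing cutoffs, following
# E(c) = E_inf + A * exp(-b * c) with illustrative constants:
cutoffs = [20, 30, 40, 50]
energies = [-100.0 + 5.0 * math.exp(-0.08 * c) for c in cutoffs]

e_inf, errors = converged_estimate(energies)
```

The point of the sketch is the cost argument made above: even this minimal study requires several full calculations per system, which is exactly what becomes prohibitive at high-throughput scale.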

Another issue is the reliability and efficiency of the numerical procedures themselves. In the context of density-functional theory (DFT) simulations, for example, a sizeable number of parameters need to be chosen, e.g., basis set, various tolerances, mixing, preconditioning. Usually these are selected heuristically, which risks choosing parameters (a) too tightly, such that calculations are inefficient, or (b) too loosely, such that calculations produce erroneous results or fail to converge. With the ability to estimate the different contributions to the numerical error, error-balancing strategies from other fields (such as finite-element modelling) could be applied to tune computational parameters automatically and robustly [3]. While the full error in DFT cannot yet be treated, recent progress, e.g., analytical approaches to estimate the *discretisation* or basis-set error for several simplified models [3, 4] and statistical approaches such as BEEF to estimate the error of the DFT *model* [5], suggests this may soon become possible.
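The error-balancing idea mentioned above can be sketched as follows: given crude models of how each numerical error source depends on its control parameter, split a total error budget evenly over the sources and invert each model for its parameter. The three decay laws and their prefactors below are illustrative assumptions, not measurements from a real DFT code.

```python
import math

def balanced_parameters(budget_eV):
    """Assign an equal share of the error budget to each of three
    assumed error sources and invert each error model."""
    share = budget_eV / 3.0
    cutoff   = math.log(5.0 / share) / 0.08   # from err(c) = 5 * exp(-0.08 * c)
    kspacing = math.sqrt(share / 0.2)         # from err(h) = 0.2 * h**2
    scf_tol  = share                          # SCF error taken as its tolerance
    return {"cutoff": cutoff, "kspacing": kspacing, "scf_tol": scf_tol}

params = balanced_parameters(0.003)  # total budget: 3 meV
```

By construction each source then contributes exactly one third of the budget, so no parameter is tightened (and paid for) beyond what the loosest source would waste anyway.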

Similarly, the determination of errors in observables obtained from long molecular dynamics simulations is rarely attempted [6]. The recent flurry of activity in data-driven interatomic potentials, which extend the concept of first-principles modelling beyond electronic structure calculations, offers some hope. Error analysis of the approximations themselves, coupled with built-in uncertainty measures and the increased efficiency of computing the forces, opens up the possibility of propagating error analysis to physical observables.
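One common built-in uncertainty measure is a committee (ensemble) of potentials: the observable is evaluated with each committee member, and the spread across members becomes the error bar. The sketch below uses a Lennard-Jones dimer whose equilibrium bond length is known analytically, with random perturbations of the length-scale parameter standing in for the spread of a data-driven fit; both choices are illustrative assumptions.

```python
import random
import statistics

def lj_minimum(sigma):
    """Equilibrium separation of a Lennard-Jones dimer: r* = 2**(1/6) * sigma."""
    return 2.0 ** (1.0 / 6.0) * sigma

random.seed(0)
# Committee of 50 potentials: sigma (Angstrom) perturbed around a nominal fit.
committee_sigmas = [3.40 + random.gauss(0.0, 0.02) for _ in range(50)]
bond_lengths = [lj_minimum(s) for s in committee_sigmas]

mean_r = statistics.mean(bond_lengths)   # committee prediction
err_r = statistics.stdev(bond_lengths)   # committee spread -> error bar
```

In a realistic setting the observable would come from a molecular dynamics trajectory per committee member rather than a closed-form minimum, but the propagation pattern, one evaluation per member followed by a spread statistic, is the same.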

Closely related to many of these goals is the field of uncertainty quantification (UQ), which has advanced the quantitative understanding of the impact of parameter and model uncertainties, and their interaction with numerical error, in many other disciplines. UQ has seen considerable application in engineering modelling, in geophysics, and in continuum/PDE modelling generally. But UQ at the molecular scale is less developed, notwithstanding recent efforts to apply Bayesian statistical methods to the learning of interatomic potentials [7]. A comprehensive UQ approach offers the potential of linking (i) numerical and modelling errors in electronic structure calculations, to (ii) the statistical inference of interatomic potentials, to (iii) the error in global/observable quantities. We expect that rigorous mathematical methods for multi-fidelity modelling, for goal-oriented sensitivity analysis and stochastic dimension reduction, and for intermingling deterministic error bounds with probabilistic descriptions of uncertainty will play essential roles in achieving this multiscale vision.
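The Bayesian route from (ii) to (iii) can be sketched in its simplest conjugate form: a toy potential whose force is linear in a single stiffness parameter k (f = -k*x) with Gaussian observation noise. A Gaussian prior on k then yields a Gaussian posterior in closed form, and the posterior variance propagates directly into an error bar on any downstream force prediction. The model, noise level, and data are all illustrative assumptions.

```python
import math
import random

random.seed(1)
k_true, noise = 2.5, 0.1
xs = [0.1 * i for i in range(1, 21)]
fs = [-k_true * x + random.gauss(0.0, noise) for x in xs]  # noisy training forces

# Conjugate Gaussian update: prior k ~ N(mu0, s0^2),
# likelihood f_i ~ N(-k * x_i, noise^2).
mu0, s0 = 0.0, 10.0
precision = 1.0 / s0**2 + sum(x * x for x in xs) / noise**2
mean = (mu0 / s0**2 + sum(-f * x for f, x in zip(fs, xs)) / noise**2) / precision
std = math.sqrt(1.0 / precision)

# Propagated error bar on a predicted force at x = 1.0:
f_pred, f_err = -mean * 1.0, std * 1.0
```

Real interatomic potentials are of course nonlinear in their parameters, so the posterior is sampled or approximated rather than written down, but the linking of fitting uncertainty to observable uncertainty follows the same logic.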

## Organisers

**France**

Genevieve Dusson (CNRS & Université Bourgogne Franche-Comté) - Organiser & speaker

**Germany**

Michael Herbst (RWTH Aachen University) - Organiser

**United Kingdom**

Gabor Csanyi (University of Cambridge) - Organiser

**United States**

Youssef Marzouk (Massachusetts Institute of Technology) - Organiser