Uncertainty quantification in atomistic modeling: From uncertainty-aware density functional theory to machine learning
Location: CECAM-HQ-EPFL, Lausanne, Switzerland
Organisers
France
Genevieve Dusson (CNRS & Université Bourgogne Franche-Comté) - Organiser
Germany
Julia Maria Westermayr (Leipzig University) - Organiser
Italy
Federico Grasselli (University of Modena and Reggio Emilia) - Organiser
Switzerland
Sanggyu Chong (EPFL) - Organiser
Michael Herbst (EPFL) - Organiser
[Detailed application instructions available HERE]
Uncertainty quantification (UQ) is a standard, widespread practice in the experimental sciences. However, rigorous uncertainty analysis in atomistic modeling, from density functional theory (DFT) calculations to machine learning (ML) models trained on DFT results, remains relatively underdeveloped, and scientific results in the field are frequently reported without any uncertainty or error quantification. This poses a significant challenge for innovation and progress in materials science, especially given the crucial role of multiscale numerical simulations in the contemporary research landscape.
Our workshop first encompasses UQ in DFT calculations, whose errors are later inherited by ML models trained on DFT data. Numerical parameters such as basis set sizes, energy tolerances, convergence criteria, and many other preconditioning parameters require careful selection. In practice, these parameters are often chosen heuristically, especially in high-throughput contexts, which can lead to inconsistent and unsystematic errors that make data difficult to compare. Error balancing strategies can improve parameter tuning in DFT simulations [1-5], but comprehensive error bounds for generic chemistry codes and fully integrated models remain lacking; a simple illustration of the underlying idea is sketched below.
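As a concrete illustration of this kind of parameter selection, the short Python sketch below estimates the discretization error associated with a plane-wave cutoff from a simple convergence series, using the most converged calculation as a proxy for the basis-set limit. The cutoff values, energies, and tolerance are hypothetical and not taken from any specific DFT code.

import numpy as np

# Hypothetical convergence series: total energy vs. plane-wave cutoff
cutoffs = np.array([300, 400, 500, 600, 700])                         # eV
energies = np.array([-152.31, -152.58, -152.66, -152.69, -152.70])    # eV

reference = energies[-1]                 # best-converged value as proxy for the limit
errors = np.abs(energies - reference)    # estimated discretization error per cutoff

# Pick the cheapest cutoff whose estimated error is below a target tolerance,
# so that the basis-set error stays balanced against other error sources.
tolerance = 0.05  # eV, hypothetical target accuracy
converged = cutoffs[errors < tolerance][0]
print("Estimated errors (eV):", dict(zip(cutoffs.tolist(), errors.round(3).tolist())))
print(f"Cheapest cutoff meeting {tolerance} eV tolerance: {converged} eV")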
Our workshop also spans UQ for atomistic ML. Atomistic ML has extended materials modeling at ab initio accuracy beyond the conventionally accessible length and time scales. ML models are intrinsically statistical, however, and UQ is essential to their usage. Various UQ methods have been devised that allow atomistic ML models to be deployed with uncertainty estimates [6, 16], enabling error propagation all the way up to the physical observables [10]. Although UQ for deep neural network (NN)-based models poses a greater challenge, researchers have demonstrated several approaches by which reliable uncertainty estimates can also be obtained for these models [11, 14-16], with more recent efforts focusing on making UQ cheap and efficient for such NN-based models [7, 8, 12, 13]. On-the-fly ML uncertainty estimates can also be leveraged to construct robust training datasets via active learning strategies [9].
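To make one common flavour of such uncertainty estimates concrete, here is a minimal committee (ensemble) UQ sketch in Python. It assumes a set of independently trained models, each exposing a hypothetical predict(structure) method returning an energy, and shows how the committee spread can be reported per prediction and propagated to a trajectory-averaged observable; none of the names refer to a specific library.

import numpy as np

def committee_predict(models, structure):
    """Return the committee mean and standard deviation for one structure."""
    preds = np.array([m.predict(structure) for m in models])
    return preds.mean(), preds.std(ddof=1)

def propagate_to_observable(models, structures):
    """Propagate the committee spread to a trajectory-averaged observable.

    Each committee member yields its own estimate of the average energy over
    the trajectory; the spread of these per-member estimates provides the
    propagated uncertainty on the averaged observable."""
    per_member = np.array([[m.predict(s) for s in structures] for m in models])
    member_averages = per_member.mean(axis=1)   # one trajectory average per model
    return member_averages.mean(), member_averages.std(ddof=1)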
Our workshop aims to bring together researchers focused on UQ in both the DFT and atomistic ML domains, allowing the respective research communities to take a collective first step towards the "holy grail" of UQ in atomistic modeling: a comprehensive approach that links the errors of the DFT calculations to those stemming from the statistical inference of ML models. We invite researchers from both communities and beyond to join us, share the latest developments in the loosely defined areas of (i) UQ in DFT, (ii) UQ in atomistic ML, and (iii) applications of UQ in atomistic modeling, and partake in stimulating discussions that will shape the future of UQ in atomistic modeling.
Note that we encourage all prospective participants to present their research at our workshop, either as a poster or a contributed talk. Please see the detailed application instructions for more information.
References