AI for Materials Science: Mining and Learning Interpretable, Explainable, and Generalizable Models from Data
The workshop will be held virtually, via Zoom or equivalent software.
Thanks to projects like the Materials Project, AFLOW, and the NOMAD Laboratory, as well as public data available at individual institutes, a large amount of high-quality computational materials-science data are now available to the scientific community. This represents a unique opportunity to uncover the underlying patterns and structure in these data and thus to develop structure-property relationships in materials science.
Big-data-driven materials science - the so-called fourth paradigm of materials science - recognizes that it may not be possible, or not appropriate, to describe many properties of functional materials by a single, physically founded model [4-6]. The reason is that such properties are determined by several intricate, multi-level theoretical concepts. Insight is instead obtained by searching for structure and patterns in the data, which arise from functional relationships among different processes and functions. Finding these relationships could lead, for example, to the discovery of efficient catalysts for methane formation, good thermal-barrier coatings, shape-memory alloys, certain quantum states of matter (e.g., topological insulators), thermoelectric materials, or new perovskites for energy harvesting.
Given that artificial intelligence (AI), and in particular deep learning, is revolutionizing many areas (e.g., speech recognition, image classification, natural-language generation), it is reasonable to expect that AI will play a significant role in the development of data-driven materials science. Indeed, deep learning has recently been applied successfully to a wide variety of materials-science problems [7-12]. In general, the success of deep learning can be traced back to its ability to automatically construct representations that capture the important aspects of the data while discarding inessential details. However, it is challenging to go beyond this high-level perspective and explain in detail how a model arrives at its predictions. We are thus usually left with a black-box model, which is not fully satisfactory from a scientific standpoint, and which we would wish to open and understand.
Even though numerous methods have recently been proposed to interpret AI models (e.g., [12-18]), interpretability in AI is, somewhat surprisingly, far from being a settled concept, with diverse and sometimes contrasting motivations behind it. If we turn our attention to AI for materials science, the situation is even less clear.
Reasonable candidate properties of interpretable models could be model transparency (i.e., how does the model work?) and post hoc explanations (i.e., what else can the model tell me?).
In the materials-science/physics domain, transparency could be demonstrated by justifying the model from first principles, for example via quantum-mechanical calculations, an analytical model valid in some regime, and/or molecular-dynamics and statistical-mechanics analyses. In addition, post hoc explanations could be used to show that the AI model is predictive well beyond the training data; such new data points could be suggested by inspection of the analytical model or by physical considerations.
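As a toy illustration of a model-agnostic post hoc explanation, one can ask how much a model's error grows when a single feature is shuffled (permutation importance). The synthetic "materials" features, the linear target, and all names below are invented for this sketch and are not taken from the workshop material:

```python
import numpy as np

# Hypothetical dataset: three made-up descriptors (e.g. atomic radius,
# electronegativity, valence) and a target that depends mostly on the first.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Stand-in "black box": ordinary least squares with an intercept column.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
predict = lambda A: np.c_[A, np.ones(len(A))] @ w
mse = lambda A: float(np.mean((predict(A) - y) ** 2))

# Permutation importance: shuffle one column at a time, breaking its link
# to the target, and record the resulting increase in error.
baseline = mse(X)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp) - baseline)

print(importance)  # the first feature should dominate
```

Such a post hoc ranking does not open the black box, but it tells us which inputs the model actually relies on, which can then be checked against physical expectations.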
From an intuitive perspective, interpretability is - at least in the natural sciences - intrinsically linked with generalizability, although this link is yet to be precisely defined.
 C. Draxl and M. Scheffler, NOMAD: The FAIR Concept for Big-Data-Driven Materials Science, MRS Bull. 43, 676-682 (2018).
 C. Draxl and M. Scheffler, Big-Data-Driven Materials Science and its FAIR Data Infrastructure, in Handbook of Materials Modeling (eds. S. Yip and W. Andreoni), Springer (2019).
 C. Draxl and M. Scheffler, The NOMAD Laboratory: From Data Sharing to Artificial Intelligence, J. Phys. Mater. 2, 036001 (2019).
 S. A. Ghasemi, A. Hofstetter, S. Saha, and S. Goedecker, Interatomic Potentials for Ionic Systems with Density Functional Accuracy Based on Charge Densities Obtained by a Neural Network, Phys. Rev. B 92, 045131 (2015).
 T. Morawietz, A. Singraber, C. Dellago, and J. Behler, How van der Waals Interactions Determine the Unique Properties of Water, Proc. Natl. Acad. Sci. U.S.A. 113, 8368-8373 (2016).
 K. T. Schütt, F. Arbabzadah, S. Chmiela, K.-R. Müller, and A. Tkatchenko, Quantum-Chemical Insights from Deep Tensor Neural Networks, Nat. Commun. 8, 13890 (2017).
 K. Ryan, J. Lengyel, and M. Shatruk, Crystal Structure Prediction via Deep Learning, J. Am. Chem. Soc. 140, 10158-10168 (2018).
 T. Xie and J. C. Grossman, Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties, Phys. Rev. Lett. 120, 145301 (2018).
 A. Ziletti, D. Kumar, M. Scheffler, and L. M. Ghiringhelli, Insightful Classification of Crystal Structures Using Deep Learning, Nat. Commun. 9, 2775 (2018).
 S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, PLOS ONE 10, e0130140 (2015).
 M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?" Explaining the Predictions of Any Classifier, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144 (2016). arXiv:1602.04938.
 S. Lundberg and S.-I. Lee, A Unified Approach to Interpreting Model Predictions, in Advances in Neural Information Processing Systems 30, pp. 4765-4774 (2017). arXiv:1705.07874.
 D. Kumar, A. Wong, and G. W. Taylor, Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks, in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1686 (2017).
 G. Montavon, W. Samek, and K.-R. Müller, Methods for Interpreting and Understanding Deep Neural Networks, Digital Signal Processing 73, 1-15 (2018).
 L. Akoglu, H. Tong, J. Vreeken, and C. Faloutsos, Fast and Reliable Anomaly Detection in Categorical Data, in Proceedings of the ACM International Conference on Information and Knowledge Management (CIKM) (2012).
 Z. C. Lipton, The Mythos of Model Interpretability, ICML Workshop on Human Interpretability in Machine Learning (2016); Commun. ACM 61, 36-43 (2018). arXiv:1606.03490.
Luca Ghiringhelli (Fritz Haber Institute of the Max Planck Society) - Organiser
Matthias Scheffler (Fritz-Haber-Institut der Max-Planck-Gesellschaft) - Organiser
Jilles Vreeken (Exploratory Data Analysis, Cluster of Excellence MMCI, Saarland University) - Organiser