AI for Materials Science: Mining and Learning Interpretable, Explainable, and Generalizable Models from Data
Location: Online event
Organisers
Luca Ghiringhelli (NOMAD Laboratory at the Fritz Haber Institute of the Max Planck Society and Humboldt University, Berlin)
Matthias Scheffler (Fritz-Haber-Institut der Max-Planck-Gesellschaft)
Jilles Vreeken (Exploratory Data Analysis, Cluster of Excellence MMCI, Saarland University)
Thanks to projects like the Materials Project [1], AFLOW [2], and the NOMAD Laboratory [3], as well as public data made available by individual research institutes, a large amount of high-quality computational materials-science data are now available to the scientific community. This represents a unique opportunity for uncovering the underlying patterns and structure in these data and thus for developing structure-property relationships in materials science.
Big-data driven materials science - the so-called fourth paradigm in materials science - recognizes that it may not be possible, or not appropriate, to describe many properties of functional materials by a single, physically founded model [4-6]. The reason is that such properties are determined by several multi-level, intricate theoretical concepts. Thus, insight is obtained by searching for structure and patterns in the data, which arise from functional relationships among different processes and functions. Finding these relationships could lead, for example, to the discovery of efficient catalysts for methane formation, good thermal-barrier coatings, shape-memory alloys, certain quantum states of matter (e.g., topological insulators), thermoelectric materials, or new perovskites for energy harvesting.
Given that artificial intelligence (AI), and in particular deep learning, is revolutionizing many areas (e.g., speech recognition, image classification, natural-language generation), it is reasonable to believe that AI could play a significant role in the development of data-driven materials science. Indeed, deep learning has recently been applied successfully to a large variety of materials-science problems [7-12]. In general, the success of deep learning can be ascribed to its ability to automatically construct representations that capture the important aspects of the data while discarding inessential details. However, it is challenging to go beyond this high-level perspective and explain in detail how the model arrives at its predictions. Thus, we are usually left with a black-box model, which is not fully satisfactory from a scientific standpoint and which we would wish to open and understand.
Although numerous methods have recently been proposed to interpret AI models (e.g., [12-18]), there is no consensus on what interpretability in AI means, and the motivations for it are diverse and sometimes contrasting [19]. If we turn our attention to AI for materials science, the situation is even less clear.
Reasonable candidate properties of interpretable models could be model transparency (i.e., how does the model work?) and post hoc explanations (i.e., what else can the model tell me?).
In the materials-science/physics domain, transparency could be demonstrated by justifying the model from first principles, for example by quantum-mechanical calculations, by an analytical model valid in some regime, and/or by molecular-dynamics and statistical-mechanics analyses. In addition, post hoc explanations could be used to show that the AI model is predictive well beyond the training data; such new data points could be suggested by inspection of the analytical model or by physical considerations.
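As a minimal sketch of what such a post hoc explanation could look like in practice, the Python example below fits a black-box regressor to entirely synthetic, hypothetical descriptors (the feature names and target are invented for illustration only) and then ranks the features by permutation importance, i.e., by how much shuffling each feature degrades performance on held-out data; more sophisticated attribution methods are discussed in [13-18].

    # Purely illustrative sketch of a post hoc explanation on synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical descriptors for 500 toy "materials" (not real data).
    feature_names = ["electronegativity_diff", "atomic_radius_ratio",
                     "valence_electrons", "noise"]
    X = rng.normal(size=(500, len(feature_names)))
    # Synthetic target: depends on the first two features only.
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Post hoc explanation: how much does shuffling each feature hurt the
    # model on held-out data? Irrelevant features ("noise") should score near zero.
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
        print(f"{name:>24s}: {mean:.3f} +/- {std:.3f}")

In a real materials-science application, one would of course inspect whether the features the model relies on are physically meaningful, and whether the learned relationship remains predictive for materials outside the training set.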
From an intuitive perspective, interpretability is - at least in the natural sciences - intrinsically linked with generalizability, although this link is yet to be precisely defined.
The workshop will take place in virtual form (via the Zoom platform) on three nonconsecutive days: Wed June 9, Thu June 17, and Wed June 23. On all three days, the sessions will run from 3:00 to 5:30 PM CEST. Each day will be structured into 3-4 invited talks given by experts in AI applied to materials science or experts in AI with a computer-science / applied-mathematics background. Each talk will end with a Q&A session, in which participants who are not speakers can take part via (written) chat.
References
[1] http://materialsproject.org
[2] http://aflowlib.org
[3] https://nomad-coe.eu
[4] C. Draxl and M. Scheffler, NOMAD: The FAIR Concept for Big-Data-Driven Materials Science. Invited Review for MRS Bulletin 43, 676-682 (2018).
[5] C. Draxl and M. Scheffler, Big-Data-Driven Materials Science and its FAIR Data Infrastructure. Plenary Chapter in Handbook of Materials Modeling (eds. S. Yip and W. Andreoni), Springer (2019).
[6] C. Draxl and M. Scheffler, The NOMAD Laboratory: From Data Sharing to Artificial Intelligence. J. Phys. Mater. 2, 036001 (2019).
[7] S. A. Ghasemi, A. Hofstetter, S. Saha, and S. Goedecker, Interatomic potentials for ionic systems with density functional accuracy based on charge densities obtained by a neural network, Phys. Rev. B 92, 045131 (2015).
[8] T. Morawietz, A. Singraber, C. Dellago, and J. Behler, How van der Waals interactions determine the unique properties of water, PNAS 113, 8368-8373 (2016).
[9] K. T. Schütt, F. Arbabzadah, S. Chmiela, K. R. Müller, and A. Tkatchenko, Quantum-chemical insights from deep tensor neural networks, Nat. Commun. 8, 13890 (2017).
[10] K. Ryan, J. Lengyel, and M. Shatruk, Crystal Structure Prediction via Deep Learning, J. Am. Chem. Soc. 140, 10158-10168 (2018).
[11] T. Xie and J. C. Grossman, Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties, Phys. Rev. Lett. 120, 145301 (2018).
[12] A. Ziletti, D. Kumar, M. Scheffler, and L. M. Ghiringhelli, Insightful classification of crystal structures using deep learning, Nat. Commun. 9, 2775 (2018).
[13] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. R. Müller, and W. Samek, On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, PLOS ONE 10(7), e0130140 (2015).
[14] M. T. Ribeiro, S. Singh, and C. Guestrin, "Why Should I Trust You?" Explaining the Predictions of Any Classifier, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144 (2016). arXiv:1602.04938.
[15] S. Lundberg and S.-I. Lee, A Unified Approach to Interpreting Model Predictions, in Advances in Neural Information Processing Systems 30, pp. 4765-4774 (2017). arXiv:1705.07874.
[16] D. Kumar, A. Wong, and G. W. Taylor, Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks, in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), p. 1686 (2017).
[17] G. Montavon, W. Samek, and K. R. Müller, Methods for interpreting and understanding deep neural networks, Digital Signal Processing 73, 1 (2018).
[18] L. Akoglu, H. Tong, J. Vreeken, and C. Faloutsos, Fast and Reliable Anomaly Detection in Categorical Data.
[19] Z. C. Lipton, The Mythos of Model Interpretability, ICML Workshop on Human Interpretability in Machine Learning (2016); Communications of the ACM 61, 36-43 (2018). arXiv:1606.03490.