Perspectives and challenges of future HPC installations for atomistic and molecular simulations
Location: Zuse Institute Berlin, Germany
Organisers
- Felix Höfling (Freie Universität Berlin)
- Petra Imhof (Friedrich-Alexander Universität Erlangen-Nürnberg (FAU))
- Thomas Kühne (University of Paderborn)
Links: Workshop poster and Scientific programme (talks and posters)
Online participation will be possible via Zoom Meetings.
Registered participants, please use the link in the Documents tab, which appears if you are logged in.
Abstract submission until: 10 January 2024
Registration deadline: 31 January 2024
An interdisciplinary platform for researchers, software engineers, and hardware specialists
Topics
- Highlights in classical molecular simulations
- Advances in quantum mechanical modelling
- Hybrid QM/MM methods and their applications
- Algorithms and co-design for novel hardware
- Parallel programming models
- Technological trends & new computing paradigms
Invited speakers
- Rommie Amaro (UC San Diego) - online
- Christian Carbogno (FHI Berlin)
- Tom Deakin (Univ. Bristol)
- Michael Hennecke (Intel Germany)
- Martin Herbordt (Boston Univ.)
- Hans-Christian Hoppe (FZ Jülich)
- Michael Klemm (AMD/OpenMP ARB)
- Hector Martinez-Seara (IOCB CAS Prague)
- Dmitry Morozov (Univ. Jyväskylä)
- Giulia Palermo (UC Riverside)
- James C. Phillips (Univ. Illinois) - online
- Sereina Riniker (ETH Zürich)
- Michèle Weiland (EPCC Edinburgh)
Rationale
In the past decade, progress in massively parallel processor hardware has led to a tremendous increase in computational resources. A substantial thrust in this development came from novel avenues in data science and machine learning. Traditional fields of scientific computing have already benefitted from these advances as well and stand to gain further. Computational research on atomistic and molecular systems has seen a number of milestones that would not have been possible without top-scale HPC facilities. To name just a few, the realistic simulation of biological cell membranes has come within reach [1], and protein structures can be predicted with high accuracy by combining extensive molecular simulations with deep learning [2, 3]. In materials research, first-principles simulations have proven to yield highly accurate and reproducible predictions [4], paving the way for data-driven materials discovery [5]. One can also include the photon–electron coupling and predict light-induced changes of material properties [6]. The integration of in situ electronic structure calculations into atomistic simulations enables hybrid quantum mechanics–molecular mechanics (QM/MM) calculations [7, 8], generating insight of unprecedented detail into the dynamics of photoreceptors [9] or enzymes [10].
In molecular dynamics, many existing packages have been adapted to support GPU accelerators as an option, and a few packages specifically targeting GPUs have been developed from scratch. Nevertheless, a considerable share of the workload in HPC centres still follows the conventional paradigm of inter-process communication. Whereas the transition to GPUs has been relatively straightforward for classical MD, this is not the case for first-principles DFT calculations, where it may only be partially possible [11].
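To make the notion of "adapting a package to GPU accelerators" more concrete, the sketch below shows how a simple pairwise force kernel could be offloaded with OpenMP target directives. It is an illustrative toy example under our own assumptions (all-pairs loop, no neighbour lists, no periodic boundaries), not code taken from any of the packages alluded to above; production codes use cell/neighbour lists and often mixed precision.

// Minimal sketch: offloading a Lennard-Jones force kernel to a GPU
// via OpenMP target directives (illustrative only).
#include <cstddef>

void lj_forces(std::size_t n, const double* x, const double* y, const double* z,
               double* fx, double* fy, double* fz,
               double epsilon, double sigma, double rcut)
{
    const double rc2 = rcut * rcut;
    const double s6  = sigma * sigma * sigma * sigma * sigma * sigma;

    // Map particle coordinates to the device, compute forces in parallel,
    // and map the resulting force arrays back to the host.
    #pragma omp target teams distribute parallel for \
        map(to: x[0:n], y[0:n], z[0:n]) map(from: fx[0:n], fy[0:n], fz[0:n])
    for (std::size_t i = 0; i < n; ++i) {
        double fxi = 0.0, fyi = 0.0, fzi = 0.0;
        for (std::size_t j = 0; j < n; ++j) {
            if (i == j) continue;
            const double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
            const double r2 = dx * dx + dy * dy + dz * dz;
            if (r2 > rc2) continue;
            const double inv_r2 = 1.0 / r2;
            const double inv_r6 = inv_r2 * inv_r2 * inv_r2;
            // pair force magnitude / r for U = 4*eps*((sigma/r)^12 - (sigma/r)^6)
            const double f = 24.0 * epsilon * s6 * inv_r6 * (2.0 * s6 * inv_r6 - 1.0) * inv_r2;
            fxi += f * dx; fyi += f * dy; fzi += f * dz;
        }
        fx[i] = fxi; fy[i] = fyi; fz[i] = fzi;
    }
}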
During the past 10 to 15 years, the HPC hardware landscape has become diverse, heterogeneous, and hierarchical. Massively parallel accelerators, most prominently high-end graphics processing units (GPUs), have become available in most computing centres or will be included in their next upgrade. Upcoming exascale systems are and will be based on GPUs, and for such systems the seamless integration of accelerators and CPUs is a necessity. Besides discrete GPUs (Nvidia, AMD, Intel), some vendors (Nvidia, Intel, ARM) are developing massively parallel co-processors that are closely attached to the CPU cores. Striving for energy efficiency, a critical factor in extreme-scale computing, alternatives to the x86 platform such as ARM processors are being actively investigated. From the perspective of industry, two of the largest commercial computational fluid dynamics (CFD) tools (Ansys, Siemens) were released with support for GPU acceleration only as late as the end of 2021.
Along with the elevated heterogeneity in the processor landscape, the memory hierarchy has become deeper, an aspect that previously received attention mostly from processor designers. One example is non-volatile memory (NVRAM, e.g., Intel's Optane), which enables distributed asynchronous object stores (DAOS) that are expected to boost data-intensive workloads and high-throughput methods. An alternative to on-premises HPC facilities is renting resources in the cloud, which fuels the demand for supporting software containers and Kubernetes clusters; the latter is also desirable from the perspective of reproducibility of research data. Further innovative approaches to HPC include reconfigurable devices (e.g., FPGAs) [12], mixed-precision floating-point arithmetic [13], and, with a view to scalability, fault-tolerant algorithms. Last but not least, there have been serious advances in quantum computing, with prospects for the molecular sciences [14, 15].
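As a small illustration of the mixed-precision idea mentioned above: data are stored and processed in single precision, while critical reductions are accumulated in double precision to limit round-off error. The snippet below is a generic toy example with made-up numbers; it is not taken from the cited work [13].

// Toy example: single-precision data with a double-precision accumulator.
#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = 10'000'000;
    std::vector<float> contrib(n, 1.0e-7f);   // many small per-pair contributions

    float  sum_sp = 0.0f;   // naive single-precision accumulation
    double sum_mp = 0.0;    // mixed precision: float inputs, double accumulator
    for (std::size_t i = 0; i < n; ++i) {
        sum_sp += contrib[i];
        sum_mp += static_cast<double>(contrib[i]);
    }

    // The single-precision sum loses digits once the accumulator dwarfs the
    // individual terms; the double-precision accumulator retains them.
    std::printf("single: %.9g  mixed: %.9g  reference: %.9g\n",
                sum_sp, sum_mp, 1.0e-7 * n);
    return 0;
}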
The goal of the workshop is to provide a platform for scientific researchers, software engineers, and hardware specialists to exchange about recent successes and demands for the future in the field of atomistic and molecular simulations using high-performance computing (HPC). There is a need to communicate early technological innovations to scientific researchers so that software for simulations and data analysis is readily available at production quality when the corresponding HPC installations start their operation. In turn, computer scientists and hardware specialists need to know about the actual computational requirements and workflows of current simulation-based research.
Central objectives and questions of the workshop are:
- What actions are needed and have already been taken to get ready for the next-generation supercomputers, which will be pre-exascale or exascale systems?
- What needs to be done in software and algorithms to reduce the widening gap between theoretical and actual performance?
- Which problems at the frontiers of HPC-based atomistic simulations can be solved today? Which important questions could be answered with more computing power?
- What are potential new innovations on the horizon? For example, how can data-intensive workloads exploit distributed asynchronous object stores (DAOS)?
- Which role can quantum computers play in future atomistic simulations? What are the fundamental, theoretical limitations of such approaches?
In addition, the event should help participating scientists understand the practical limitations of performance and scalability, not only in general but for their specific applications, and learn about possible ways and tools to overcome or push these limits.
References