Sustainable Computing with Neuromorphic and Quantum-Inspired Technologies
Location: Lorentz Center, The Netherlands
Organisers
Chris Broekema (ASTRON)
Johan Mentink (Radboud University)
Aida Todri-Sanial (TUE)
Rob van Nieuwpoort (Leiden University)
Julita Corbalan (Energy Aware Solutions (EAS), Spain)
Useful information
To register, please use the Lorentz Center website: https://www.lorentzcenter.nl/sustainable-computing-with-neuromorphic-and-quantum-inspired-technologies.html
Description
Solving the grand computational challenges relies on high-performance and high-throughput computing. However, the energy cost of computing systems is increasing tremendously, and further increases in computing power are fundamentally limited by the multimillion-megawatt electrical power available from (nuclear) power plants, which is clearly unsustainable. This workshop focuses on new neuromorphic [Mehonic2022, Kudithipudi2025] and quantum-inspired [Frank2020, Aadit2022] computing paradigms and assesses their potential for realizing a much more sustainable, heterogeneous future of computational science. We bring together (startup) companies working on neuromorphic and quantum-inspired paradigms and link them with computational science users in molecular simulation, astronomy and particle physics, as well as with computing science and ethics experts, to jointly develop a roadmap for realizing a neuromorphic and quantum-inspired computing ecosystem capable of solving next-generation computational challenges at a much reduced energy and carbon footprint.
Computational and data science challenges
Computational physics and chemistry fundamentally deal with numerically solving many-particle problems. Existing methods scale at best polynomially with the system size, limiting the system sizes that can be simulated from first principles and hindering the development of atomistic understanding of multi-scale phenomena. Likewise, high-resolution simulations within continuum models suffer from scaling limitations. This hampers knowledge development in, for example, electrochemistry in batteries, nuclear fusion in reactors, and climate modelling [Hazeleger2024].
Radio astronomy and particle physics are data-intensive sciences that exist by virtue of abundant and cheap compute resources. Today's distributed aperture-synthesis radio telescopes comprise complex hierarchical systems of systems, reducing the data volumes step by step using a combination of custom-designed and general-purpose hardware. Although such solutions are affordable today, next-generation instrumentation (the High-Luminosity Large Hadron Collider, the Square Kilometre Array) operates at scales for which no practically affordable hardware solutions exist. This strongly constrains the amount of new knowledge that can be obtained with upcoming advanced instrumentation.
New Neuromorphic and Quantum-Inspired solutions
Neuromorphic and other quantum-inspired computing hardware offers enormous potential as accelerators for such highly demanding tasks. The advantage stems from the co-location of memory and processing: this strongly reduces data transfer between processing and memory and offers massive parallelism, which can improve energy efficiency by many orders of magnitude. Moreover, neuromorphic hardware architectures can implement specific calculations inherently in parallel, so deploying existing computational algorithms on the new hardware effectively reduces their time complexity. For example, a Monte Carlo sweep can be reduced from O(M) to O(1), where M is the number of states updated in the sweep, and a matrix-vector multiplication (MVM) from O(N²) to O(1), where N is the dimension of the vector. This fundamental scaling advantage can lead to the breaking of existing computational barriers [Kosters2023].
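To make the Monte Carlo example concrete, the following minimal NumPy sketch contrasts the sequential picture with the parallel one. The checkerboard update scheme, lattice size and temperature are our own illustrative choices, not taken from the workshop material: on a conventional processor a sweep visits the M = L×L spins one by one (O(M) sequential work), whereas hardware with co-located memory and processing can update an entire sub-lattice in a single step, so a full sweep takes O(1) parallel steps.

import numpy as np

rng = np.random.default_rng(0)
L, beta = 32, 0.4                      # lattice size and inverse temperature
spins = rng.choice([-1, 1], size=(L, L))

def checkerboard_sweep(spins):
    """One Metropolis sweep via two parallel half-sweeps (black/white sites)."""
    for parity in (0, 1):
        # Sum of the four nearest neighbours (periodic boundaries).
        nbrs = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbrs                          # energy cost of flipping each spin
        accept = rng.random((L, L)) < np.exp(-beta * dE)
        mask = (np.indices((L, L)).sum(axis=0) % 2) == parity
        spins = np.where(mask & accept, -spins, spins)   # flip half the lattice at once
    return spins

for _ in range(100):
    spins = checkerboard_sweep(spins)
print("magnetisation per spin:", spins.mean())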
Although this potential scaling advantage has long been known, large-scale application, i.e. on the scale of millions of computing elements, relies on technological advances that have become possible only recently. Aspects of hardware and software co-design have also become important, as novel programming models and primitives will allow the computational resources of the hardware to be exploited at scale. Specifically, this workshop will cover the following scalable hardware paradigms; between square brackets we indicate the companies involved for each technology:
Analog and Digital in-memory computing (IMC) [IBM, AxeleraAI]
IMC hardware is based on crossbar arrays with stationary weights, minimizing data transfer and realizing the O(N²)-to-O(1) time-complexity advantage for MVM. Both analog IMC, based on memristive devices [Lanza2022], and digital IMC are being considered. Each comprises millions of memory elements and can reach beyond hundreds of effective TOPS (tera operations per second).
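The following sketch is our own illustration of the analog crossbar principle, not any product's behaviour: the weight matrix is stored as a grid of conductances G, the input vector is applied as voltages v, and Kirchhoff's current law sums the products along each column in one physical step, so the O(N²) multiply-accumulates happen concurrently rather than sequentially. The noise level is an arbitrary placeholder standing in for limited analog precision.

import numpy as np

rng = np.random.default_rng(1)

def crossbar_mvm(G, v, read_noise=0.02):
    """Ideal column currents i = G^T v, plus Gaussian read noise to mimic
    the limited precision of analog devices."""
    i_ideal = G.T @ v
    return i_ideal + read_noise * rng.standard_normal(i_ideal.shape) * np.abs(i_ideal)

N = 8
G = rng.uniform(0.0, 1.0, size=(N, N))   # stationary conductances (the "weights")
v = rng.uniform(-1.0, 1.0, size=N)       # input voltages applied to the rows

print("analog result :", crossbar_mvm(G, v))
print("exact result  :", G.T @ v)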
Probabilistic and Reversible computing [InfinityQ, Vaire]
Inspired by the mathematics of analog quantum computing for solving NP-hard combinatorial optimization problems, probabilistic computing hardware stochastically explores an exponentially large space of candidate solutions and can return a family of possible solutions to a given problem. Another class of hardware accelerators is based on reversible computing, which is primarily applied for implementing digital logic and matrix multiplications in a reversible manner, allowing for near-zero energy consumption. Unlike classical computing, which generates heat by performing irreversible operations, the next generation of AI accelerators based on probabilistic and reversible computing can perform computations at near-zero energy for low-power embedded systems [Frank2020]. Current probabilistic systems based on GPUs and FPGAs can access models with over one billion parameters and input spaces of one million variables [Aadit2022].
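As a minimal sketch of the probabilistic computing idea (the toy 6-node MAX-CUT instance and the simple p-bit-style update rule below are our own assumptions, not any vendor's solver): binary stochastic units flip with a sigmoidal probability set by their local field, so the machine samples many candidate solutions instead of computing a single deterministic answer.

import numpy as np

rng = np.random.default_rng(2)

# Symmetric coupling matrix J of a small graph; MAX-CUT maximises the number of
# edges between the two groups, i.e. minimises the Ising energy E = s^T J s / 2.
J = np.array([[0, 1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0, 0],
              [1, 1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1, 1],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]], dtype=float)

def energy(s):
    return 0.5 * s @ J @ s

def anneal(steps=2000):
    s = rng.choice([-1.0, 1.0], size=J.shape[0])
    solutions = {}
    for t in range(steps):
        beta = 0.05 + 3.0 * t / steps                 # slowly lower the "temperature"
        i = rng.integers(J.shape[0])                  # pick one p-bit
        field = J[i] @ s                              # its local field
        s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(2.0 * beta * field)) else -1.0
        solutions[tuple(s)] = energy(s)               # remember every visited state
    return sorted(solutions.items(), key=lambda kv: kv[1])[:3]

for state, e in anneal():
    print("candidate cut", state, "energy", e)

Because the dynamics are stochastic, repeated runs return a family of low-energy candidate cuts rather than a single answer, which mirrors the behaviour described above.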
Spiking Hardware [Intel, Innatera]
Spiking hardware performs computations using spikes, inspired by the spiking nature of neurons in the human brain. It can be based on fully digital hardware, on analog devices, or on mixed analog/digital designs. Large-scale spiking hardware has become available, delivering over 100 trillion synaptic operations per second, a scale comparable to that of the human brain.
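The sketch below illustrates the leaky integrate-and-fire model that underlies much spiking hardware; the parameters are illustrative and not those of any particular chip. A neuron integrates incoming spikes into a membrane potential that leaks over time and emits a spike only when it crosses a threshold, so work is done mainly when events occur rather than on every clock cycle.

import numpy as np

rng = np.random.default_rng(3)

T, dt = 200, 1.0                              # number of time steps and step size (ms)
tau, v_th, v_reset = 20.0, 1.0, 0.0           # leak time constant, threshold, reset value
weights = np.array([0.3, 0.5, 0.2])           # three input synapses

# Sparse, event-driven input: each synapse spikes with 5% probability per step.
in_spikes = (rng.random((T, 3)) < 0.05).astype(float)

v, out_spikes = 0.0, []
for t in range(T):
    current = weights @ in_spikes[t]          # synaptic operations only for active inputs
    v += dt * (-v / tau) + current            # leak plus integration
    if v >= v_th:                             # threshold crossing -> emit a spike
        out_spikes.append(t)
        v = v_reset

print("output spike times (ms):", out_spikes)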
Integration, programming models and profiling challenges
Fundamentally, the embedding problem of how to optimally map a problem or algorithm onto the hardware is a challenge and requires in-depth analysis of the trade-off between computational resources, algorithm performance and hardware energy efficiency [Laydevant2024]. Moreover, neuromorphic and quantum-inspired hardware typically act as accelerators that must be integrated into the total computing system, making it heterogeneous. Efficiently using existing combined CPU and GPU systems is already hard [Heldens2020, Heldens2024, Dreuning2024], and this becomes even more challenging for heterogeneous systems in which each accelerator comes with its own specific (training) algorithms. To cope with these challenges, intermediate and hardware-agnostic representations need to be developed [Pedersen2024]. Moreover, sufficiently general yet specific computing primitives need to be identified and benchmarked, for which energy and performance profiling is needed [Corbalan2020].
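As a rough sketch of what such a hardware-agnostic intermediate representation could look like (all class and function names here are hypothetical and do not refer to an existing framework's API): a computation is described once as a graph of named primitives, and each backend supplies its own executor. A reference NumPy executor is shown, with a simple operation counter as a crude stand-in for energy and performance profiling.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class Primitive:
    op: str                      # e.g. "mvm" or "relu"
    params: dict = field(default_factory=dict)

@dataclass
class Graph:
    nodes: list                  # primitives applied in sequence

class NumpyBackend:
    """Reference executor; a crossbar or spiking backend would implement the
    same interface with its own cost model."""
    def __init__(self):
        self.op_count = 0        # crude proxy for profiling energy/performance

    def run(self, graph, x):
        for node in graph.nodes:
            if node.op == "mvm":
                W = node.params["weights"]
                self.op_count += W.size          # N*M multiply-accumulates
                x = W @ x
            elif node.op == "relu":
                self.op_count += x.size
                x = np.maximum(x, 0.0)
        return x

rng = np.random.default_rng(4)
g = Graph(nodes=[Primitive("mvm", {"weights": rng.standard_normal((4, 8))}),
                 Primitive("relu")])
backend = NumpyBackend()
print("output:", backend.run(g, rng.standard_normal(8)))
print("operations counted:", backend.op_count)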
Ethical challenges
Typical computational science users are not very aware of the environmental impact of their research, and even when they are aware, they often feel unable to reduce that impact. A key observation is therefore that individual researchers cannot be held solely responsible for the environmental impact of their research; this needs to be mandated from the institute level upwards. Effective measures are needed, such as changing key performance indicators from pure scientific impact to science per consumed resource or resource equivalent. Likewise, processing could be treated as a limited resource that scientists must use as effectively as possible.
Interdisciplinary Approach
Taking up the challenges towards sustainable computing requires an interdisciplinary approach, combining expertise in computational use cases, neuromorphic and quantum-inspired hardware, and algorithmic integration and performance profiling. Together with ethics experts, new performance metrics and a roadmap must be defined to stimulate a transition to sustainable computing capable of solving the grand computational challenges that are not solvable today.
References