Atomistic and molecular simulations on massively parallel architectures
Location: Chimie ParisTech, Paris, France
Organisers
Michel Caffarel (Université Paul Sabatier, Toulouse)
Carlo Adamo (Chimie ParisTech, Paris)
Thierry Deutsch (CEA)
Michel Masella (CEA)
Mathieu Salanne (Sorbonne Université)
Marie-Christine Sawley (Intel, Versailles)
Anthony Scemama (CNRS)
Marc Torrent (Commissariat à l'Energie Atomique (CEA))
High Performance Computing (HPC) simulations have for many years been a driving force for atomistic materials simulation. Systems of growing size and complexity can be modeled on ever longer time scales, allowing the simulation of more and more realistic molecular systems. This has made possible new insights into complex phenomena and has led to many relevant discoveries.
Researchers in the field play a major role in this quest for performance by continuously developing new algorithms and programming concepts that take advantage of the power of current technology. Still, the optimal use of current and future technologies will require aggregating knowledge across physics and chemistry, algorithm design, and computer science.
In recent years, the use of co-processors to speed up scientific applications has become mainstream. It is therefore important for scientists, and in particular those involved in code development, to master the tools and concepts necessary to evolve their codes for these new architectures.
Two major factors speak in favour of increased attention to the parallel capability of a given application:
- Today, a large proportion of research is carried out on teraflop-class personal computers and departmental systems; tomorrow, such systems will reach the petaflop scale and will rely on co-processors quite different from those available today. Research will benefit greatly from codes able to exploit this extreme parallelism.
- HPC facilities heading towards exascale machines within a five-year time frame are preparing for extreme parallelism with heterogeneous architectures, combining a moderate number of “fat” nodes with a large number of thinner nodes built around co-processors, for the next generation of HPC hardware.
Indeed, the final report of the European Exascale Software Initiative (EESI) project highlighted the need for education and tutorial efforts to bridge the vibrant field of community code development with the latest developments in high performance computing (see, for example, the final report of the working group on Fundamental Sciences, http://www.eesi-project.eu/media/download_gallery/EESI_D3.5_WG3.3-report_R2.0.pdf).
Most of today’s software will not scale up to millions of cores, for a number of reasons: memory per core is decreasing, the memory hierarchy is becoming more complex, vector widths are growing, and the programming models in use today, largely based on MPI/OpenMP, may not be the right models for the exascale. In addition, applications will have to exhibit some level of resiliency and fault tolerance, and perhaps even power awareness.
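To make the programming-model discussion concrete, here is a minimal hybrid MPI/OpenMP sketch in C: MPI distributes a loop across ranks, and OpenMP threads share each rank's portion within a node. It is a hypothetical illustration, not taken from any of the codes presented at the school.

```c
/* Minimal hybrid MPI/OpenMP sketch: MPI splits the work across ranks,
 * OpenMP threads share the rank-local loop. Illustrative only. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* Request thread support so OpenMP regions can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 100000000;            /* total number of work items */
    const long chunk = n / size;         /* contiguous block per MPI rank */
    const long lo = rank * chunk;
    const long hi = (rank == size - 1) ? n : lo + chunk;

    double local = 0.0;
    /* Threads within the node share the rank-local iterations. */
    #pragma omp parallel for reduction(+:local)
    for (long i = lo; i < hi; ++i)
        local += 1.0 / (double)(i + 1);

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("harmonic sum H(%ld) ~ %.6f (%d ranks x %d threads)\n",
               n, global, size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

This flat MPI/OpenMP decomposition is exactly the pattern whose limits (memory per core, vector width, resiliency) are at stake on the way to exascale.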
Exposing students to basic and recent developments in these fields, in the context of atom-based simulations, will meet the following goals:
i. learn to program advanced architectures efficiently;
ii. communicate ideas about code abstraction for portability;
iii. improve software engineering skills.
We will present four codes: QMC=CHEM (QMC), Abinit-BigDFT, POLARIS(MD), and CP2K, at a rate of one code per day. These codes cover a wide range of algorithms used in microscopic simulations and have already been adapted to take advantage of the most advanced co-processors.
Each of the five days will be organized according to the following scheme:
Morning
- presentation of the physics addressed by the code (from high-level quantum ab initio to coarse-grained approaches)
- focus on the algorithms specific to the physics model
- optimization techniques, programming models, and rewriting of critical parts to target the chosen co-processor (a short offload sketch is given after this schedule)
Afternoon
- practical exercise session on optimization, using mini versions of the full code presented that day
- ad hoc session, led by technical specialists (for example from the Exascale Computing Research lab), on tools and methodologies for bottleneck identification and performance optimization
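To illustrate the kind of critical-part rewriting discussed in the morning sessions, the sketch below offloads a simple bandwidth-bound kernel (an AXPY-like loop) to a co-processor with OpenMP target directives. The kernel, array names, and sizes are hypothetical and are not taken from QMC=CHEM, Abinit-BigDFT, POLARIS(MD), or CP2K.

```c
/* Hypothetical sketch: offloading a hot loop to a co-processor with
 * OpenMP target directives. Compiled with offload support, the loop runs
 * on the device; otherwise it falls back to the host. */
#include <stdio.h>
#include <stdlib.h>

/* AXPY-like kernel, y <- y + a*x: a typical bandwidth-bound critical part. */
static void axpy_offload(long n, double a, const double *x, double *y)
{
    /* Map the arrays to device memory and distribute the loop over teams
     * of threads on the co-processor. */
    #pragma omp target teams distribute parallel for \
            map(to: x[0:n]) map(tofrom: y[0:n])
    for (long i = 0; i < n; ++i)
        y[i] += a * x[i];
}

int main(void)
{
    const long n = 1L << 24;
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    if (!x || !y) return 1;

    for (long i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }
    axpy_offload(n, 0.5, x, y);

    printf("y[0] = %.2f (expected 2.50)\n", y[0]);
    free(x);
    free(y);
    return 0;
}
```

Directive-based offloading of this kind is only one possible approach; the hands-on sessions will show how each code actually addresses its target co-processor.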