
International Guest Student Programme on Scientific Computing

August 6, 2012 to October 12, 2012

Location : Forschungszentrum Jülich, Germany


Organisers

  • Godehard Sutmann (Research Center Jülich, Germany)
  • Mathias Winkel (Research Center Jülich, Germany)
  • Johannes Grotendorst (Research Center Jülich, Germany)

Support

   CECAM

Juelich Supercomputing Centre / Institute for Advanced Simulation

Description

 

The field of scientific computing is becoming increasingly important in university curricula. In practice this involves teaching numerical methods, the implementation of algorithms, and programming practice. In order to work on modern computational challenges and to compete with other groups at an international level, students should learn about parallel programming at an early stage of their career, i.e. before entering Master's or PhD level. Learning about the possibilities and capabilities of parallel computing at this stage gives students a broad perspective and opens interesting paths for their scientific career. It is therefore necessary not only to teach students the theory of parallel computing, but also to provide access to modern HPC architectures and give them the opportunity to gain experience in practical parallel programming. This also produces young academics who are well prepared to work on a thesis in compute-intensive fields.
There are two main directions in parallel computing, each with its pros and cons. The first is based on a distributed memory model, which requires explicit communication between processes in order to share data; the standard approach here is MPI (Message Passing Interface) [1]. The other is based on a shared memory model, in which processes can access each other's data via a global address space; the standard approach for this programming model is OpenMP [4,5]. The latter approach, however, is only suited to multi-core architectures where the cores share access to a node's memory, and it is therefore usually applied to parallelism with a small number of cores (O(10)). Both approaches are sometimes combined into a hybrid programming model, with OpenMP used within nodes and MPI between nodes; depending on the application, this can extend the range of scalability.
There are also extensions of parallel languages and interfaces aimed at new hardware architectures: for example, the de facto standard for GPGPU architectures is CUDA, and for the Cell/BE it is CellSs. Generalizations such as OpenCL [6] and StarSs [7] are under development; these will be available on different architectures and may evolve into new standards.

The programme starts with an introductory course on the techniques of parallel computing and the use of the Jülich supercomputers (Jugene, an IBM Blue Gene/P, and Juropa, a Bull/Nehalem cluster). The course consists of lectures and practical hands-on sessions. Each student is then assigned to a scientific core group, where they work on a project within the context of that group's ongoing research interests; ideally, applicants already indicate in their application which group they would like to work in. During the programme each student is supervised by a researcher. At the end of the programme a two-day seminar is organized, at which each student gives a presentation of about 30 minutes on their work. In addition, each student prepares a final report (~10 pages) which is published as a technical report [8-18].

References

[1] Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra: MPI: The Complete Reference. Vol. 1: The MPI Core. 2nd edition, MIT Press (1998).
[2] William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, Marc Snir: MPI: The Complete Reference. Vol. 2: The MPI-2 Extensions. MIT Press (1998).
[3] The full MPI-2 specification may be found at: http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf
[4] R. van der Pas, B. Chapman, G. Jost: "Using OpenMP", MIT Press. ISBN-10: 0-262-53302-2 / ISBN-13: 978-0-262-53302-7.
[5] R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald, R. Menon: "Parallel Programming in OpenMP", Morgan Kaufmann Publishers (2001).
[6] John E. Stone, David Gohara, Guochun Shi: "OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems", Computing in Science and Engineering, vol. 12, no. 3, pp. 66-73, May/June 2010.
[7] J. Planas, R. M. Badia, E. Ayguadé and J. Labarta: "Hierarchical Task-Based Programming With StarSs", Int. J. High Perform. Comput. Appl., Vol. 23, pp. 284-299 (2009).
[8] Contributions to Scientific Computing - Gueststudent Program 2000 of the John von Neumann Institute for Computing, R. Esser, D. Mallmann (Eds.), Technical Report IB-2000-15 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-00/ib-2000-15.pdf)
[9] Contributions to Scientific Computing - Gueststudent Program 2001 of the John von Neumann Institute for Computing, R. Esser (Ed.), Technical Report IB-2001-12 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-01/ib-2001-12.pdf)
[10] Contributions to Scientific Computing - Gueststudent Program 2002 of the John von Neumann Institute for Computing, R. Esser (Ed.), Technical Report IB-2002-12 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-02/ib-2002-12.pdf)
[11] Contributions to Scientific Computing - Gueststudent Program 2003 of the John von Neumann Institute for Computing, R. Esser (Ed.), Technical Report IB-2003-10 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-03/ib-2003-10.pdf)
[12] Contributions to Scientific Computing - Gueststudent Program 2004 of the John von Neumann Institute for Computing, R. Esser (Ed.), Technical Report IB-2004-11 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-04/ib-2004-11.pdf)
[13] Contributions to Scientific Computing - Gueststudent Program 2005 of the John von Neumann Institute for Computing, R. Esser (Ed.), Technical Report IB-2005-13 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-05/ib-2005-13.pdf)
[14] Contributions to Scientific Computing - Gueststudent Program 2006 of the John von Neumann Institute for Computing, R. Esser (Ed.), Technical Report IB-2006-14 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-06/ib-2006-14.pdf)
[15] Contributions to Scientific Computing - Gueststudent Program 2007 of the John von Neumann Institute for Computing, R. Esser (Ed.), Technical Report IB-2007-12 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-07/ib-2007-12.pdf)
[16] Contributions to Scientific Computing - Gueststudent Program 2008 of the John von Neumann Institute for Computing, M. Bolten (Ed.), Technical Report IB-2008-07 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-08/ib-2008-07.pdf)
[17] 2009 Proceedings of the JSC Guest Student Programme on Scientific Computing, R. Speck (Ed.), Technical Report IB-2009-04 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-09/ib-2009-04.pdf)
[18] 2010 Proceedings of the JSC Guest Student Programme on Scientific Computing, M. Winkel and R. Speck (Eds.), Technical Report IB-2010-04 (http://www2.fz-juelich.de/jsc/files/docs/ib/ib-10/ib-2010-04.pdf)


CECAM - Centre Européen de Calcul Atomique et Moléculaire
Ecole Polytechnique Fédérale de Lausanne, Batochime (BCH), 1015 Lausanne, Switzerland