International Guest Student Programme on Scientific Computing
Location: Forschungszentrum Jülich, Germany
Organisers
Scientific computing is becoming increasingly important in university curricula. In practice this involves teaching numerical methods, the implementation of algorithms, and programming practice. In order to work on modern computational challenges and to compete with other groups at an international level, students should learn about parallel programming at an early stage of their career, i.e. before entering master's or PhD level. Learning about the possibilities and capabilities of parallel computing at this stage gives students a broad perspective and opens interesting paths for their scientific career. It is therefore necessary not only to teach students the theory of parallel computing, but also to provide access to modern HPC architectures and give them the opportunity to gain experience in practical parallel programming. This also produces young academics who are well prepared to work on a thesis in compute-intensive fields.
There are two main directions in parallel computing, each with its own pros and cons. One is based on a distributed-memory model, which requires explicit communication between processes in order to share data; the standard approach here is MPI (Message Passing Interface) [1]. The other is based on a shared-memory model, where processes can access each other's data via a global address space; the standard approach for this programming model is OpenMP [2]. The latter, however, is only suited to multi-core architectures where all cores have access to the node's memory, and is therefore usually restricted to parallelism with a small number of cores (O(10)). Both approaches are sometimes combined into a hybrid programming model, with OpenMP used within the nodes and MPI between them; depending on the application, this can extend the range of scalability.
There are also extensions of parallel languages and interfaces targeting new hardware architectures. For example, the de facto standard for GPGPU architectures is CUDA, and for the Cell/BE it is CellSs, although generalizations are under development, namely OpenCL [6] and StarSs [7], which will be available for different architectures and might evolve into new standards.
The programme starts with an introductory course on the techniques of parallel computing and the use of the Jülich supercomputers (JUGENE, an IBM Blue Gene/P, and JuRoPA, a Bull/Nehalem cluster). The course consists of lectures and practical hands-on sessions. Each student is then assigned to a scientific core group, where they work on a project within the context of the group's ongoing research interests; ideally, applicants already indicate in their application which group they would like to join. Throughout the programme each student is supervised by a researcher. At the end of the programme a two-day seminar is organized in which each student gives a presentation of about 30 minutes on their work. In addition, each student prepares a final report (~10 pages) which is published as a technical report [8-18].
Johannes Grotendorst (Forschungszentrum Jülich) - Organiser
Godehard Sutmann (Forschungszentrum Jülich) - Organiser
Mathias Winkel (Forschungszentrum Jülich) - Organiser