
## Extended Software Development Workshop: Mesoscopic simulation models and High-Performance Computing

#### Location: CECAM-FI

#### Description

In Discrete Element Methods (DEM) the equations of motion of a large number of particles are numerically integrated to obtain the trajectory of each particle [1]. The collective movement of the particles very often endows the system with unpredictable, complex dynamics that are inaccessible to any mean-field approach. Such phenomenology is present, for instance, in seemingly simple systems such as the hopper/silo, where intermittent flow is accompanied by random clogging [2]. With the development of computing power, alongside that of numerical algorithms, it has become possible to simulate such scenarios, involving the trajectories of millions of spherical particles, for a limited simulation time. Incorporating more complex particle shapes [3] or the influence of the interstitial medium [4] rapidly decreases the accessible number of particles.
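
To make the integration step concrete, the following is a minimal sketch (not taken from the workshop material) of a 2D DEM with a linear-spring contact model and velocity-Verlet time integration; the `dem_step` helper, the spring stiffness `k`, and the particle parameters are illustrative assumptions:

```python
import numpy as np

def dem_step(pos, vel, radii, dt, k=1000.0, mass=1.0):
    """One velocity-Verlet step of a minimal 2D DEM with linear-spring contacts."""
    def forces(pos):
        f = np.zeros_like(pos)
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                rij = pos[j] - pos[i]
                dist = np.linalg.norm(rij)
                overlap = radii[i] + radii[j] - dist
                if overlap > 0:                     # particles in contact
                    normal = rij / dist
                    f[i] -= k * overlap * normal    # repulsive spring force
                    f[j] += k * overlap * normal
        return f

    f0 = forces(pos)
    pos = pos + vel * dt + 0.5 * (f0 / mass) * dt**2   # position update
    f1 = forces(pos)
    vel = vel + 0.5 * (f0 + f1) / mass * dt            # velocity update
    return pos, vel

# Two particles on a head-on collision course.
pos = np.array([[0.0, 0.0], [3.0, 0.0]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
radii = np.array([1.0, 1.0])
for _ in range(2000):
    pos, vel = dem_step(pos, vel, radii, dt=1e-3)
```

The all-pairs contact search here is O(N²); production DEM codes replace it with neighbour lists or cell-based searches, which is also where parallelization effort concentrates.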

Another class of computer simulations enjoying huge popularity in the science and engineering community is Computational Fluid Dynamics (CFD). A tractable approach to such simulations is the family of Lattice Boltzmann Methods (LBMs) [5]. There, instead of directly solving the strongly non-linear Navier-Stokes equations, the discrete Boltzmann equation is solved to simulate the flow of Newtonian or non-Newtonian fluids with appropriate collision models [6,7]. The method closely resembles DEM, as it simulates streaming and collision processes for a limited number of fictitious particles whose collective behaviour reproduces viscous flow across the greater mass.
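
As an illustration of the streaming-and-collision idea, a single D2Q9 BGK lattice Boltzmann step might look like the sketch below; the `lbm_step` helper and the relaxation time `tau` are illustrative assumptions, not code from the workshop:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """BGK equilibrium distribution for each of the 9 directions."""
    cu = np.einsum('id,xyd->ixy', c, u)        # c_i . u at every node
    usq = np.einsum('xyd,xyd->xy', u, u)       # |u|^2 at every node
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.6):
    """One collision + streaming step on a periodic lattice."""
    rho = f.sum(axis=0)                                  # local density
    u = np.einsum('id,ixy->xyd', c, f) / rho[..., None]  # local velocity
    f = f - (f - equilibrium(rho, u)) / tau              # BGK collision
    for i, ci in enumerate(c):                           # streaming
        f[i] = np.roll(np.roll(f[i], ci[0], axis=0), ci[1], axis=1)
    return f

# Uniform fluid at rest: one step must leave it unchanged (sanity check).
nx = ny = 16
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
f2 = lbm_step(f)
```

Because collision is purely local and streaming touches only nearest neighbours, each lattice node can be updated independently, which is precisely the data-parallel structure that maps well onto GPUs.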

As both methods have gained popularity in solving engineering problems, and scientists have become more aware of finite-size effects, the size and time requirements of simulating practically relevant systems have grown beyond the capabilities of even the most modern CPUs [8,9]. Massive parallelization is thus becoming a necessity. This is naturally offered by graphics processing units (GPUs), making them an attractive alternative for running these simulations, which consist of a large number of relatively simple mathematical operations readily implemented on a GPU [8,9].

The objectives of the workshop are to extend awareness of the possibilities of GPU architectures in high-throughput scientific computing, specifically discrete-element-based simulations, and to assist the participants in capitalizing efficiently on these resources. This will be achieved by a combination of extensive lectures on GPU programming and a practical learning-by-doing approach, in which the participants work on their own projects under the guidance of experts in the field.

#### Organisers

**Finland**

Mikko Alava (Aalto University, NOMATEN CoE (Poland)) - Organiser

Jan Astrom (CSC – IT Center for Science) - Organiser

Antti Puisto (Aalto University) - Organiser

**Netherlands**

Brian Tighe (TU Delft) - Organiser