Numerical simulations of biomolecules such as proteins and nucleic acids (DNA and RNA) constitute a prominent research area, whose progress has been made possible by the massive deployment of computational resources with sophisticated parallel architectures. At the same time, these 'super-computers' have created the general impression that any significant advancement in the field requires such computational endeavors.
Super-computers are not always at our disposal: in most cases, access is regulated by calls for proposals that may take as long as six months before computational time is actually allocated. In many cases this delay is not acceptable, in particular when the results of a wet-lab experiment demand quick and reliable insight from the modeling counterpart.
In addition, it is not always clear whether a massive number of degrees of freedom is necessary to describe a biological system.
It is true that biology is grounded in the atomistic details of chemical reactions, but all these glorious tiny details limit the size and time scales of our investigations. Do the mechanical properties of a virus depend crucially on all the atoms of each of its amino acid residues? Must we always include millions of atoms in order to study any biological system that is amenable to comparison with what is actually measured in a wet-lab experiment? Is it possible to study the interaction of DNA with ions and electric fields without resorting to an all-atom model that limits the size of the investigated sequence?
In this direction, there has been a recent surge of alternative numerical strategies (coarse-graining methods, adaptive resolution methods, simplified force fields, variational approaches) pursuing the goal of studying many different systems at varying levels of accuracy with limited computational resources. These strategies were not always intended for biomolecules: some tools were derived in hydrodynamics studies, others in polymer physics and/or soft matter in general. However, it is important to establish a cross-fertilization between these fields and computational biophysics.
This is exactly the idea behind this workshop: to bring together prominent scientists in this exciting research area, with the aim of setting up a collaborative effort towards the development of computational biophysics on a single desktop computer. Topics will include: large-scale methods for polymers and biopolymers, protein structure and function, protein assemblies, drug design, protein-DNA interactions, and RNA folding.
The systematic improvement of these tools may open the way to investigations of biomolecular systems that provide fast and reliable answers to experimental questions, overcoming the limits imposed by the high costs of developing and maintaining large super-computers.
The field of atomistic MD simulations has witnessed constant improvement in the development of more efficient models and algorithms. However, the problems posed by biology require new paradigms for computer simulations: these include the development of theoretical and methodological concepts based on statistical mechanics. In this workshop, we propose to discuss the major issues related to such approaches, with a clear focus on three main topics: coarse graining strategies, adaptive resolution methods, and variational approaches.
1. Coarse graining strategies
We will explore the currently employed coarse graining strategies: in general, groups of atoms are clustered into a single CG bead, and effective interaction potentials between the CG beads are then employed to define coarse grained force fields. These force fields are, by construction, over-simplified: integrating out a number of degrees of freedom into a potential of mean force implies non-local, many-body interaction terms that cannot be handled efficiently by computer simulations. Simplified functional forms (like, for instance, the MARTINI force field) have therefore been employed and parameterized either by direct comparison with results from all-atom simulations or by a more empirical approach based on a fit to experimental data.
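As a concrete illustration of the mapping step described above, the sketch below computes each CG bead as the mass-weighted center of a group of atoms. The 4-atoms-per-bead grouping and the function name `cg_map` are arbitrary choices for illustration, not a prescription from any specific force field:

```python
import numpy as np

def cg_map(positions, masses, groups):
    """Map an all-atom configuration onto CG beads: one bead per
    atom group, placed at the group's center of mass."""
    beads = []
    for group in groups:
        m = masses[group]            # masses of the atoms in this group
        r = positions[group]         # their coordinates, shape (n, 3)
        beads.append((m[:, None] * r).sum(axis=0) / m.sum())
    return np.array(beads)

rng = np.random.default_rng(0)
positions = rng.random((8, 3))       # 8 atoms in 3D (toy data)
masses = np.ones(8)                  # uniform masses for simplicity
groups = [[0, 1, 2, 3], [4, 5, 6, 7]]  # illustrative 4-to-1 mapping

beads = cg_map(positions, masses, groups)
print(beads.shape)                   # (2, 3): two beads in 3D
```

With uniform masses each bead reduces to the geometric center of its group; a real mapping would use chemically motivated groupings (e.g. backbone vs. side-chain atoms).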
We will focus on these and similar efforts related to proteins, membrane lipids, nucleic acids, solvents and polymers, in general. The idea is to open a discussion on the portability of these force fields in order to find a suitable protocol to tackle challenging systems in computational biophysics.
2. Adaptive resolution methods
The reduction in the number of degrees of freedom through coarse graining is achieved at the expense of atomistic detail. Although this might be desirable for large portions of the biological systems under investigation, it is certainly true that biology depends on tiny details that cannot be sacrificed without losing accuracy and predictive power. In recent years, several adaptive resolution methods have been proposed. The idea is to deal with different resolutions in the same simulation setup, employing coarse grained representations in regions where only the large-scale biomolecular mechanics matters and atomistic representations where the chemical detail cannot be neglected. We plan to compare the different approaches available and assess the validity of these methods for different biological systems and interfaces.
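A minimal sketch of the interpolation idea behind such methods, assuming an AdResS-style smooth switching function between the atomistic and coarse-grained regions. The region widths and function names below are illustrative assumptions, not values from any specific published scheme:

```python
import numpy as np

def switching(x, x_at=2.0, d_hy=1.0):
    """Resolution weight w(x): 1 inside the atomistic region
    (|x| < x_at), 0 in the coarse-grained region, and a smooth
    cosine-squared interpolation across the hybrid layer of width d_hy.
    Region widths are illustrative, not from a published setup."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(
        x < x_at, 1.0,
        np.where(x > x_at + d_hy, 0.0,
                 np.cos(np.pi * (x - x_at) / (2.0 * d_hy)) ** 2))

def pair_force(f_atomistic, f_cg, w_i, w_j):
    """Interpolated pair force: fully atomistic only when both
    particles sit in the atomistic region (w_i = w_j = 1)."""
    lam = w_i * w_j
    return lam * f_atomistic + (1.0 - lam) * f_cg

print(switching([0.0, 2.5, 4.0]))    # [1.0, 0.5, 0.0]
```

The key design point is that each particle changes its effective resolution on the fly as it diffuses through the hybrid layer, so no particle bookkeeping at a hard boundary is needed.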
3. Variational approaches
Other interesting approaches bear some similarities with the force-matching method that is widely employed in the parameterization of new compounds and molecules. The idea is to obtain force data from detailed ab initio simulations and map these data onto empirical force fields at the classical level. This can be achieved by employing residual functions that are minimized through a variational principle. This approach, in particular, has the potential to stretch the limits of coarse grained approaches beyond the discretization imposed by finite-size beads and match the continuous description of finite elements.
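A toy sketch of the force-matching idea: fit the parameters of a simple pair force to reference forces by least-squares minimization of the residual. The linear basis f(r) = a/r² + b·r and the synthetic data standing in for ab initio forces are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(1.0, 3.0, size=200)   # sampled pair distances (toy data)

# Synthetic "reference" forces standing in for ab initio results,
# generated from a known ground truth plus small noise.
a_true, b_true = 1.5, -0.3
f_ref = a_true / r**2 + b_true * r
f_ref += rng.normal(scale=0.01, size=r.size)

# Design matrix: each column is one basis function evaluated at r.
# Minimizing ||A p - f_ref||^2 over p is the variational step.
A = np.column_stack([1.0 / r**2, r])
params, *_ = np.linalg.lstsq(A, f_ref, rcond=None)

print(params)                          # close to [1.5, -0.3]
```

In practice the basis would be a spline or tabulated potential rather than two hand-picked terms, but the structure of the problem (a linear least-squares fit of force-field parameters to reference forces) is the same.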