Active Matter and Artificial Intelligence
- Giovanni Volpe (University of Gothenburg, Sweden)
- Fernando Peruani (Université Nice Sophia Antipolis, France)
- Klaus Kroy (Universität Leipzig, Germany)
- Frank Cichos (Universität Leipzig, Germany)
Biological active matter is composed of self-propelled agents, such as molecular motors, cells, bacteria and animals [1,2], which can perform tasks and display emergent collective behaviors thanks to their ability to sense their environment, process this information and exploit it through feedback cycles. These processes are intrinsically noisy, both at the microscale (e.g. thermal noise) and at the macroscale (e.g. turbulence). Over millions of years, biological systems have therefore evolved powerful strategies to accomplish specific tasks and thrive in their environment – strategies that are encoded in their shape, biophysical properties, and signal-processing networks.
Artificial active matter is now being explored as a powerful means to address the big challenges that our society is facing: from new strategies for targeted drug delivery, to the decontamination of polluted soils, to the extraction of energy from naturally occurring out-of-equilibrium conditions. In this context, biological active matter provides an ideal source of tested ideas and approaches [8,9], which we are now trying to exploit to develop artificial systems [10,11].
However, in biological systems there is only limited scope to reduce complexity and to introduce controllable perturbations. The development of computational models and of proof-of-principle experiments therefore provides an ideal test bench to explore the origin of complexity in biological systems and to harness it for the development of new applications. For example, tuning the sensorial delay yields different behaviors in gradient fields relevant for cellular systems, and, inspired by neuronal networks, relevant past experience can be harnessed to predict the evolution of complex systems.
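The effect of a sensorial delay can be illustrated with a minimal simulation: an active particle whose swimming speed depends on the light intensity it measured some time steps in the past. The intensity profile, parameter values and update rule below are illustrative assumptions for a sketch in this spirit, not the model of any specific cited study.

```python
import numpy as np

def simulate_delayed_phototaxis(delay_steps, n_steps=5000, dt=0.01,
                                v0=1.0, D_r=1.0, seed=0):
    """Simulate a 2D active particle whose speed tracks the light
    intensity it measured `delay_steps` time steps in the past.

    Assumed intensity profile: I(r) = exp(-|r|^2), peaked at the origin.
    Assumed speed rule: v(t) = v0 * I(x(t - delay)), so the particle
    swims fast where the (delayed) intensity reading is high.
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps + 1, 2))
    pos[0] = [2.0, 0.0]                    # start away from the light peak
    theta = rng.uniform(0, 2 * np.pi)      # initial swimming direction
    # Pad the intensity history so early steps read the initial value.
    intensity_hist = [np.exp(-pos[0] @ pos[0])] * (delay_steps + 1)
    for t in range(n_steps):
        v = v0 * intensity_hist[-(delay_steps + 1)]     # delayed reading
        theta += np.sqrt(2 * D_r * dt) * rng.standard_normal()  # rotational diffusion
        pos[t + 1] = pos[t] + v * dt * np.array([np.cos(theta), np.sin(theta)])
        intensity_hist.append(np.exp(-pos[t + 1] @ pos[t + 1]))
    return pos

traj = simulate_delayed_phototaxis(delay_steps=50)
print(np.linalg.norm(traj[-1]))  # final distance from the light source
```

Comparing trajectories for different values of `delay_steps` (including zero) shows how the delay alone reshapes the particle's response to the gradient, which is the knob discussed above.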
In this process, we have been led to the application of machine learning to active matter. Machine learning is an abstraction of the adaptation processes found in biological active matter, and researchers have recently started to explore such algorithms in active matter in some pioneering works. For example, reinforcement learning, a type of learning based on rewards, has been used to steer the motion of microscopic particles [15,16], to understand how birds can exploit turbulent thermal air flows to soar, and to control the motion of artificial microswimmers in complex flow patterns as well as in collective field taxis.
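The reward-based learning loop mentioned above can be sketched with tabular Q-learning in a toy setting: a "microswimmer" on a one-dimensional track of cells learns, by trial and error, to reach a target cell. The environment, reward values and hyperparameters are illustrative assumptions, not the setup of any of the cited studies.

```python
import numpy as np

N, TARGET = 10, 9                       # track length and target cell (assumed)
ACTIONS = (-1, +1)                      # swim left / swim right

def step(state, action):
    """Toy environment: move one cell, reward 1 at the target, small cost otherwise."""
    nxt = min(max(state + ACTIONS[action], 0), N - 1)
    reward = 1.0 if nxt == TARGET else -0.01
    return nxt, reward, nxt == TARGET

def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy exploration policy."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((N, len(ACTIONS)))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability eps, otherwise act greedily.
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
            s = s2
    return Q

Q = train()
policy = Q.argmax(axis=1)   # greedy action per cell: should point toward the target
print(policy)
```

The same update rule underlies the navigation studies cited above, with the toy track replaced by a physical environment such as a chaotic flow field.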
We are now at a critical crossroads in the development of active matter research, where biological and artificial active matter are meeting machine learning. The specific aim of this workshop is to bring together researchers from the fields of physics, biology, mathematics and machine learning to lay the groundwork of a scientific network to address the following pressing questions:
1. What can machine learning do for biological active matter? Can we gain new insight into how powerful strategies have evolved? Can we understand the role of information processing, feedback cycles and sensorial delay in these strategies?
2. What can machine learning do for artificial active matter? Can we learn new approaches towards high-impact applications? For example, how can signaling and feedback be introduced into artificial active matter?
3. What insights can active matter provide for machine learning? Can machine learning algorithms benefit from the natural strategies optimized by evolution?
[1] Ramaswamy, S., The mechanics and statistics of active matter. Annu. Rev. Condens. Matter Phys. 1, 323–345 (2010).
[2] Marchetti, M. C. et al., Hydrodynamics of soft active matter. Rev. Mod. Phys. 85, 1143–1189 (2013).
[3] Katz, Y., Tunstrøm, K., Ioannou, C. C., Huepe, C., Couzin, I. D., Inferring the structure and dynamics of interactions in schooling fish. Proc. Natl. Acad. Sci. USA 108, 18720–18725 (2011).
[4] Yates, C. A. et al., Inherent noise can facilitate coherence in collective swarm motion. Proc. Natl. Acad. Sci. USA 106, 5464–5469 (2009).
[5] Kromer, J. A., Märcker, S., Lange, S., Baier, C., Friedrich, B. M., Decision making improves sperm chemotaxis in the presence of noise. PLoS Comput. Biol. 14, e1006109 (2018).
[6] Reddy, G., Celani, A., Sejnowski, T. J., Vergassola, M., Learning to soar in turbulent environments. Proc. Natl. Acad. Sci. USA 113, E4877–E4884 (2016).
[7] Bechinger, C. et al., Active particles in complex and crowded environments. Rev. Mod. Phys. 88, 045006 (2016).
[8] Pearce, D. J. G., Miller, A. M., Rowlands, G., Turner, M. S., Role of projection in the control of bird flocks. Proc. Natl. Acad. Sci. USA 111, 10422–10426 (2014).
[9] Bierbach, D. et al., Insights into the social behavior of surface and cave-dwelling fish (Poecilia mexicana) in light and darkness through the use of a biomimetic robot. Front. Robot. AI 5, 15 (2018).
[10] Buttinoni, I. et al., Dynamical clustering and phase separation in suspensions of self-propelled colloidal particles. Phys. Rev. Lett. 110, 238301 (2013).
[11] Qian, B., Montiel, D., Bregulla, A., Cichos, F., Yang, H., Harnessing thermal fluctuations for purposeful activities: the manipulation of single micro-swimmers by adaptive photon nudging. Chem. Sci. 4, 1420–1429 (2013).
[12] Mijalkov, M., McDaniel, A., Wehr, J., Volpe, G., Engineering sensorial delay to control phototaxis and emergent collective behaviors. Phys. Rev. X 6, 011008 (2016).
[13] Palmer, S. E., Marre, O., Berry, M. J., Bialek, W., Predictive information in a sensory population. Proc. Natl. Acad. Sci. USA 112, 6908–6913 (2015).
[14] Sutton, R. S., Barto, A. G., Reinforcement learning: an introduction. MIT Press, Cambridge (1998).
[15] Colabrese, S., Gustavsson, K., Celani, A., Biferale, L., Flow navigation by smart microswimmers via reinforcement learning. Phys. Rev. Lett. 118, 158004 (2017).
[16] Muiños-Landin, S., Ghazi-Zahedi, K., Cichos, F., Reinforcement learning of artificial microswimmers. arXiv:1803.06425 (2018).
[17] Gustavsson, K., Biferale, L., Celani, A., Colabrese, S., Finding efficient swimming strategies in a three-dimensional chaotic flow by reinforcement learning. Eur. Phys. J. E 40, 110 (2017).
[18] Palmer, G., Yaida, S., Optimizing collective fieldtaxis of swarming agents through reinforcement learning. arXiv:1709.02379 (2017).