... if you are a researcher in physics or astrophysics and you are working with machine learning, that is.

Between September 23 and 25 - just when summer is over - we will meet in Valencia, Spain, to discuss the latest developments in deep learning applications to the optimization of experiments in fundamental science. This is the fourth workshop of the MODE Collaboration, which focuses on a new frontier in the application of deep learning: co-design and high-level optimization, and the tools to pull it off.

If you have been doing research in fundamental science for less than a decade or so, you may not have witnessed just how dramatically the field changed after 2012, when two coincident breakthroughs took place in computer science and particle physics. The former was the advent of deep learning, which first achieved super-human performance on image classification; the latter was the discovery of the Higgs boson, which was powered by machine learning technology. From your limited perspective you may think that machine learning has always been a tool keenly adopted by researchers. But it wasn't: while a few did play with neural networks as early as forty years ago, they were marginalized and not really esteemed for their work, which was considered odd and out of the scope of fundamental research.

Today the opposite is true, and for a good reason: we cannot ignore the power that these algorithms have put in our hands. However, power always comes with the responsibility of putting it to good use. And we have been trying to do that in good faith. Indeed, since 2012 we have been using deep learning for every possible kind of supervised learning task we face in the analysis of our data: classification of different physics processes or flavor tagging of energetic jets at a particle collider, of stars or galaxies in astronomical data, or of interaction signatures in neutrino telescopes; regression of parameters of interest in multidimensional data; and everything in between. More recently, semi-supervised and unsupervised learning tasks have begun to be exploited for more complex applications, including generative models used for fast simulation, and anomaly detection. Each of these topics has in fact become a sub-field of its own, with dedicated workshops on its application to fundamental science research problems.

But using deep learning only for analysis tasks is very restrictive - it is a bit like owning a dune buggy and only using it to commute to work every morning. Our problem is that the sheer scale of our experiments today is intimidating. The idea that it has become theoretically possible to study the interplay of detection hardware and reconstruction software in a systematic, continuous way - as a function of the thousands of parameters that determine how our experiments are built and how we extract information from their output - is far out of our comfort zone. But that is precisely where innovations are possible!
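To make the idea a bit more concrete, here is a minimal, purely illustrative sketch (in JAX; the "detector" surrogate, the parameter names, and all numbers are invented) of what it could look like to optimize a single design parameter by differentiating through a surrogate of the simulation and reconstruction chain. A real co-design problem would involve thousands of such parameters and a far more realistic, possibly learned, surrogate.

```python
import jax
import jax.numpy as jnp

# Toy, invented surrogate of a simulation + reconstruction chain:
# thicker sampling layers improve the stochastic term of the energy
# resolution but add a smearing term; all numbers are made up.
def resolution(thickness, energies):
    stochastic = 0.1 / jnp.sqrt(thickness * energies)
    smearing = 0.02 * thickness
    return jnp.mean(stochastic + smearing)

energies = jnp.linspace(5.0, 100.0, 50)   # toy beam energies (GeV)
grad_fn = jax.grad(resolution)            # d(resolution)/d(thickness)

thickness = 1.0
for _ in range(200):                      # plain gradient descent
    thickness = thickness - 1.0 * grad_fn(thickness, energies)

print("toy optimal thickness:", float(thickness))
```

The point is not the toy numbers, of course, but that the gradient of the figure of merit with respect to the design parameter comes for free once the whole chain is written as a differentiable program.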

Co-design of hardware and software is increasingly happening in market-driven applications. Of course, there are large resources there, and large profits to be made. In fundamental science there are similarly large potential profits, but it is not money that flows into our pockets - rather, it is a much more evanescent reward, one that is much more loosely correlated with the possibility of deploying resources for it. That is the challenge we are facing today, and it is the reason for the existence of the MODE Collaboration - an effort toward a more effective, general use of deep learning in fundamental science.

The MODE workshop will bring together computer scientists, physicists, and mathematicians. In fact, one of our keynote lecturers is a renowned applied mathematician, Prof. Andrea Walther, who back in 2008 published, together with Andreas Griewank, the book "Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation". This is a main reference for the engine under the hood of most deep learning algorithms, the technique that allows us to consider and solve optimization problems featuring hundreds or thousands of free parameters.
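As a flavor of what algorithmic differentiation buys us, here is a tiny sketch (again in JAX; the toy least-squares objective and parameter count are invented for illustration, not taken from the book) showing reverse-mode differentiation delivering the exact gradient of a many-parameter objective in a single backward pass.

```python
import jax
import jax.numpy as jnp

# Illustration of what algorithmic differentiation provides: the exact
# gradient of an arbitrary composed program - here a toy least-squares
# objective - with respect to all of its parameters.
def objective(params, x, y):
    pred = jnp.polyval(params, x)         # toy model: a polynomial fit
    return jnp.sum((pred - y) ** 2)

params = jax.random.normal(jax.random.PRNGKey(0), (8,))  # 8 free parameters
x = jnp.linspace(-1.0, 1.0, 100)
y = jnp.sin(3.0 * x)

# Reverse-mode AD: one backward pass yields the full gradient vector at a
# cost comparable to a few evaluations of the objective itself.
grad = jax.grad(objective)(params, x, y)
print(grad.shape)                          # -> (8,)
```

The same mechanics scale from the eight toy parameters above to the millions of weights in a neural network, or to the design parameters of an experiment.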

The other two keynote lecturers come from computer science and physics. The first is Danilo Rezende, who leads generative models research at DeepMind; in his talk we will hear the latest developments from the forefront of that research. And then there is Prof. Riccardo Zecchina, a theoretical physicist at Bocconi University in Milan, whose research also sits at the crossroads of physics and computer science.

The workshop will feature sessions on several areas of scientific research, and for each of them we will look at the latest developments in optimization tools and their use cases. So, besides the obvious session on high-energy physics applications, we will discuss astroparticle and neutrino physics and nuclear physics, but also muography and medical applications. A session dedicated to computer science developments will go into the technical aspects of the tools we use for these optimization problems.

We will also have a poster session where younger participants may display their recent research; the best posters will receive prizes and corresponding certificates. So please submit an abstract for a poster if you plan to come! I do hope I will see many young researchers in Valencia - these workshops are a great place to establish connections that enrich our research networks and prepare us for larger-scale projects!