Fast Chain Friday

This is Day 5 of my outreach week challenge: between state-of-the-art computing and that’s-good-enough-for-me physicists, how do we plan on running LHC analyses in the future?

Future challenges

On Monday I explained why Monte Carlo samples are essential to the kind of physics analyses we do at the Large Hadron Collider. The need for computing power in producing MC samples grows with two factors: the use of more precise calculations and algorithms, and the luminosity of the recorded data. By luminosity, particle physicists mean the number of interactions that take place at any given time in some area (say, right around the collision points of the LHC).
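To put a formula on it (this is the standard textbook relation, nothing specific to ATLAS): the instantaneous luminosity L converts the cross-section σ of a given process into an event rate, and integrating over time tells you how many events of that process you expect to collect.

```latex
\frac{\mathrm{d}N}{\mathrm{d}t} = L\,\sigma
\qquad\Longrightarrow\qquad
N_{\text{events}} = \sigma \int L\,\mathrm{d}t
```

So more luminosity means more recorded events, and, crucially, more Monte Carlo events needed to match them.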

[Figure: LHC schedule]

The LHC was built with a lifetime of at least 30 years (after which it could be used as a booster to another, gargantuan collider), ultimately hoping to record hundreds of times more data than it already has. But of course, like any other machine, it needs maintenance - especially when you consider that the purpose of a detector is to be damaged, so as to record the interactions of the produced particles with the matter it’s made of! Accordingly, you see regular shut-down periods where maintenance checks are performed. Longer dedicated periods also happen during winter, when the electricity consumption of the LHC would drive the prices wild in nearby Geneva…

On top of that, there are so-called long shut-downs (LS1, LS2, etc.) during which parts of the detectors, the accelerator and the magnets are completely revamped and upgraded. The goal: more data. Colliding ever more particles, at higher energies. This translates into an increase in luminosity from one Run to the next:

[Figure: LHC luminosity]

I’ve discussed the concept of finiteness in building ever larger circular colliders before, but the problem is even more acute when it comes to producing MC samples with the available (or near-future) computing resources: more luminosity not only means more collisions happening as the main event, but also more collisions happening in the background (what we call pile-up), all of which need to be simulated appropriately to render an accurate event that can be used in conjunction with data. This turns into an exponential (and explosive) increase in CPU needs, one we simply can’t handle just by buying more, and more powerful, machines.
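To get a feel for why pile-up hurts so much, here is a deliberately crude toy model (the timing numbers are placeholders I picked for illustration, not ATLAS measurements): each simulated event carries a fixed cost for the interesting hard-scatter collision plus a cost for every pile-up interaction overlaid on top of it, so the per-event CPU bill climbs quickly as the average pile-up ⟨μ⟩ rises from Run to Run.

```python
# Back-of-the-envelope toy model of per-event simulation cost vs. pile-up.
# The timing constants are illustrative assumptions, not official ATLAS figures.

T_HARD_SCATTER_S = 300.0   # assumed CPU seconds to fully simulate the hard-scatter interaction
T_PER_PILEUP_S = 30.0      # assumed CPU seconds per additional pile-up interaction

def cpu_seconds_per_event(mean_pileup: float) -> float:
    """Rough CPU cost of one fully simulated event with <mu> pile-up interactions overlaid."""
    return T_HARD_SCATTER_S + mean_pileup * T_PER_PILEUP_S

# Approximate average pile-up levels per running period (illustrative values).
for label, mu in [("Run 2", 35), ("Run 3", 60), ("HL-LHC", 200)]:
    per_event = cpu_seconds_per_event(mu)
    print(f"{label:7s} <mu>={mu:3d}  ->  ~{per_event / 60:.0f} CPU-minutes per event")
```

And that is only the per-event cost: the number of MC events we need also grows with the amount of recorded data, so the two factors compound.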

Enter the ATLAS FastChain

What the good folks in the ATLAS Simulation Group (including myself, recently) have been working on over the past few years is a brand-new way of doing those simulations. By using a clever mix of data-driven techniques (we now have a few years’ worth of data, why not use that as a benchmark?) and physics-motivated approximations (both at the interaction and detection level), they are able to bring the simulation time down to a few seconds per event (vs. the current few minutes), achieving speed-up factors sometimes close to 3000!

On top of that, the creation of the Integrated Simulation Framework (ISF) allows the user to choose any combination of the various components that go into producing a Monte Carlo sample. Then, depending on where the accuracy is needed, and where shortcuts can be taken, full vs. fast simulation is appropriately selected. All in all, every user should see some improvement - a little for very precise samples (Higgs, BSM, etc.), a lot for more common or well-known physics (ttbar, muons, etc.).
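To picture what “mix and match” means in practice, here is a purely illustrative sketch: the configuration interface below is invented for this post (it is not the real ISF API), and the speed-up numbers are placeholders, although FastCaloSim and Geant4 are genuine ATLAS simulation tools.

```python
# Invented, illustrative sketch of the ISF "mix and match" idea: pick, per
# sub-detector, either the full (slow) simulator or a fast parametrised one.
# This is NOT the real ATLAS ISF configuration interface.

from dataclasses import dataclass

@dataclass
class SimulatorChoice:
    subdetector: str
    simulator: str          # e.g. full Geant4 or a fast parametrisation
    rough_speedup: float    # assumed speed-up factor relative to full simulation

# A hypothetical "fast chain" recipe: keep full precision only where the analysis needs it.
fast_chain = [
    SimulatorChoice("inner tracker",     "fast track simulation",              10.0),
    SimulatorChoice("calorimeters",      "FastCaloSim (parametrised showers)", 100.0),
    SimulatorChoice("muon spectrometer", "Geant4 (full simulation)",           1.0),
]

for choice in fast_chain:
    print(f"{choice.subdetector:17s} -> {choice.simulator}  (~x{choice.rough_speedup:.0f} vs. full sim)")
```

The point, as above, is that precision is spent where the physics demands it and saved everywhere else.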

Here’s a poster I made about it for the 2017 IOP annual HEPP/APP conference:

[Embedded PDF poster: available for download.]