My very first hackathon!

Today, as part of the ATLAS Software & Computing Week - a long series of meetings where virtually every working group involved to some degree in the computing affairs of ATLAS presents its latest developments, brainstorms, sets deadlines and consumes gargantuan amounts of coffee - I took part in a hackathon. Which I think is pretty cool (but maybe that's just me).

As part of my qualification task, a side project I have to work on during the first year of my PhD in order to qualify for ATLAS authorship, I help (in modest ways…) with the development of a piece of software used by the Simulation group. The Simulation group's task is simple: simulate Monte Carlo events. A lot of them. These simulated events are crucial to the good running of the experiment on many levels, from making sure the data is consistent and the detector is properly calibrated, to generating the Standard Model (and beyond) predictions we're trying to verify experimentally.

[Image: the ATLAS detector layers - not your average detector to simulate!]

With the increase in luminosity and data-taking rate of the LHC, it's only natural that the generation of Monte Carlo events should also be scaled up. Unfortunately, that's where we run into a lot of problems: we are obviously limited in available computing power, disk space, CPU time, etc. What to do? Well, the people I work with for my qualification task are developing a new way of looking at simulation. Instead of generating everything from scratch (particles, interactions, reconstruction, simulation of the detector geometry, tracking…), they've decided to make a number of smart simplifications, relying on the data that has already been collected (and the events that have already been simulated).

For example, instead of computing in unnecessary detail the path of a low-energy electron meandering through the calorimeter, why not use (something like) the average of the many such events that have already been simulated? After all, these are well-understood processes that occur all the time. Once you start thinking like this about every component of the detector (but also, of course, implementing new, more efficient code!) you end up with what is called the Fast Chain. Chain, because it groups several such individual improvements; Fast, well, that's the point, isn't it?
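To give a flavour of the idea, here is a toy sketch in Python - entirely made up for illustration, not the actual ATLAS code, and with invented numbers for the shower library: rather than tracking every individual interaction of a low-energy electron, you look up a pre-computed average shower profile binned in energy, apply a small fluctuation, and only fall back to the expensive full simulation when no suitable profile exists.

```python
import random

# Toy "shower library": average fractions of the electron's energy deposited
# in calorimeter layers 0..3, binned by incident energy. In a real fast
# simulation these profiles would be derived from fully simulated events;
# here the numbers are invented purely for illustration.
SHOWER_LIBRARY = {
    # (energy bin in GeV): [fraction per layer]
    (0, 1):  [0.10, 0.60, 0.25, 0.05],
    (1, 5):  [0.05, 0.55, 0.32, 0.08],
    (5, 20): [0.03, 0.50, 0.37, 0.10],
}

def full_simulation(energy_gev):
    """Stand-in for the slow, step-by-step simulation of every interaction."""
    raise NotImplementedError("far too expensive for this toy example!")

def fast_simulation(energy_gev, smear=0.05):
    """Deposit energy using a pre-computed average profile plus a small
    random fluctuation, instead of simulating each interaction."""
    for (lo, hi), fractions in SHOWER_LIBRARY.items():
        if lo <= energy_gev < hi:
            return [energy_gev * f * random.gauss(1.0, smear) for f in fractions]
    # No profile available for this energy: fall back to the full simulation.
    return full_simulation(energy_gev)

if __name__ == "__main__":
    deposits = fast_simulation(2.3)  # a 2.3 GeV electron
    print("Energy per layer (GeV):", [round(e, 3) for e in deposits])
```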

As a side note, I have a poster presentation on this very topic at the upcoming national HEP IOP meeting. I’ll upload it a bit later.

Back to the hackathon. A group of twenty or so of us gathered at 10am in a conference room in a somewhat remote location - behind the Antiproton Decelerator, of all places - and, after a couple of updates on recent developments, we got our marching orders. We split up into smaller groups, each with our own tasks, and started coding away. Sitting next to experts and working with them, in real time, towards a common goal was a truly enriching experience. All the while, the group leaders kept an eye on our progress to make sure everyone's changes to the code stayed coordinated.

I'm particularly pleased I was able to meet my objectives in the allocated time (about 6 hours) - even though the problem I was working on is a "long-standing" one that will probably take some more effort over the coming weeks to solve completely. But at least we fixed a number of issues, created a new simulator (fancy), ran some simulations to validate it, and got new insight into a disastrous memory leak that we've been struggling to plug.

Good times.
