Firing a Particle Beam into a Vertical Slice of HGCAL (Or into a slice of cake?)

Guest post by: Miloš Vojinović

So, what on earth is HGCAL anyway? 

Above: digital representation of the HGCAL

Buckle up, because I've got a story for you. Imagine being told, right as you step into this grand experiment, that you're diving into "one of the most ambitious projects of its kind... ever undertaken". That's what the smooth-talking recruiters fed me, and let me tell you, it's been about four years since then. Yet, despite my earnest attempts, I've yet to stumble upon anything that even comes close to dethroning that claim. But hey, don't bail on me just yet.

Now that I've got your attention (hopefully), let's dive in. Imagine gearing up for the High-Luminosity era of the LHC, where the game's about to change. The CMS experiment, that behemoth of science, is shaking things up by tossing out the old calorimeter endcaps and swapping in a fresh face: the High Granularity Calorimeter, or HGCAL for short. And oh, it's not just a fancy name. This thing's about to pull off some serious magic. Picture this: bunches of protons colliding every 25 nanoseconds, producing a whopping 200 simultaneous proton-proton collisions on average. And you know what the HGCAL's going to do? It's going to snoop around and figure out where the collision products went, when they arrived, and how much energy they deposited. We're talking mind-blowing precision here: positions to ~1 mm, times to within ~10 picoseconds, and energy deposits as delicate as ~1 Minimum Ionising Particle (MIP). Impressive, right?

But here's the kicker. This 200-tonne gadget won't be lounging around in some comfy lab. Oh no, it will be resting at -30 °C in a downright hostile radiation environment, and it's in it for the long haul: about a decade's worth of operation. Now that, my friends, is some gutsy science.
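To put those numbers in perspective, here is a one-line back-of-envelope calculation, a minimal Python sketch using only the two figures quoted above:

```python
# Scale of the problem: average pileup times the bunch-crossing rate
# gives the number of proton-proton interactions HGCAL sees per second.
PILEUP = 200            # average simultaneous pp collisions per crossing
BX_RATE_HZ = 40e6       # one bunch crossing every 25 ns

print(f"pp interactions per second: {PILEUP * BX_RATE_HZ:.1e}")  # 8.0e+09
```

Eight billion interactions a second, each needing to be disentangled in space, time and energy.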

Above: a photograph of Miloš

 

HGCAL Vertical Test Systems: Slices of a Cake!  

Picture this: HGCAL electronics - highly specialised and complex, involving multiple layers of data transfer, flaunting an impressive six million readout channels - demands meticulous testing. The approach? Divide and conquer: vertical (start-to-end) and horizontal (parallelisation) test systems. Now, imagine this: crafting a prototype vertical slice, a window into the future endcap electronics. That is what I have been working towards. It's like baking a tiny cake, one ten-thousandth the size, with all the final ingredients: a taste test on a small scale before unleashing the full culinary masterpiece. Below are the constituents of our HGCAL cake slice, back when it was still in the pristine embrace of our lab's controlled chaos.

Above: Constituents of HGCAL


As in the final HGCAL detector, the test system consisted of the front-end (on-detector) electronics hardware (items on the blue mat), with prototype custom chips (or their emulators), and a custom back-end (off-detector) board (Serenity, the board in the bottom right) running custom firmware and software. By design, the back-end controls the front-end through the high-speed optical link and acquires coarse (trigger) data at a rate of 40 MHz and full-resolution (event) data at a rate of 750 kHz from the front-end, closing the loop.
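To make the two data paths concrete, here is a minimal back-of-envelope sketch in Python. The 40 MHz and 750 kHz rates are the real design figures from above; the payload sizes and the helper function are purely illustrative assumptions, not HGCAL specifications:

```python
# Back-of-envelope sketch of the two vertical-slice data paths.
# Rates are from the design; payload sizes are invented placeholders.

TRIGGER_RATE_HZ = 40e6       # coarse (trigger) data, every bunch crossing
EVENT_RATE_HZ = 750e3        # full-resolution (event) data, accepted triggers

TRIGGER_WORDS_PER_BX = 4     # assumed: a few summary words per crossing
EVENT_WORDS_PER_EVENT = 500  # assumed: full readout of the slice's channels
BYTES_PER_WORD = 4           # assumed 32-bit words

def path_bandwidth_gbps(rate_hz: float, words: int) -> float:
    """Payload bandwidth of one path in Gb/s, ignoring protocol overhead."""
    return rate_hz * words * BYTES_PER_WORD * 8 / 1e9

print(f"trigger path: {path_bandwidth_gbps(TRIGGER_RATE_HZ, TRIGGER_WORDS_PER_BX):.1f} Gb/s")
print(f"event path:   {path_bandwidth_gbps(EVENT_RATE_HZ, EVENT_WORDS_PER_EVENT):.1f} Gb/s")
```

The point of the exercise: the coarse path earns its bandwidth through sheer rate, the full-resolution path through event size, and the back-end has to sustain both at once.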

Next up - testing the system in a realistic environment (the beam test), where we would fire real particles into our little detector and use it to measure them.

Countdown to the Beam Test: A World United

As the days ticked closer to the beam test, a gathering of more than 20 minds was in full swing. Imagine the scene: three continents, over ten countries, all rallying together, each with a distinct set of skills. A united front eager to redefine the boundaries of our shared understanding. Quite the spectacle, indeed.

Above: members of the team working on the preparation for the beam test

We pieced together a test rig that was practically a mirror image of the one that would go into the beam area. In doing so, we swapped out those naked hexagonal PCBs for hexagonal silicon modules - a high-tech facelift, if you will. These were mounted on copper baseplates with a water-cooling system (just cold enough to avoid overheating). The front-end and the back-end of the setup were separated by an 80 m long optical fibre cable, to model the realistic latencies between them. In contrast to the setup previously laid out on the bench, the two hexagonal modules were also placed in a dark box with dry air, one behind the other, with ~10 cm of absorber material in front of them.
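Why bother with 80 m of fibre? A quick sketch of the delay it imposes, in units of the 40 MHz bunch-crossing clock (the refractive index is a typical value I'm assuming for standard optical fibre, not a measured property of our cable):

```python
# Latency introduced by the 80 m front-end/back-end fibre, expressed
# in 25 ns bunch crossings.
C_M_PER_S = 299_792_458
FIBRE_INDEX = 1.47           # assumed effective refractive index
FIBRE_LENGTH_M = 80
BX_PERIOD_NS = 25            # period of the 40 MHz clock

one_way_ns = FIBRE_LENGTH_M * FIBRE_INDEX / C_M_PER_S * 1e9
print(f"one-way delay: {one_way_ns:.0f} ns "
      f"(~{one_way_ns / BX_PERIOD_NS:.0f} bunch crossings)")
```

Roughly 400 ns each way, i.e. about 16 bunch crossings of control-loop latency that the final system will also have to live with, which is exactly what the long cable is there to reproduce.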

Above: component parts of the set up

At this point, we had a handle on controlling all the front-end devices from the Serenity back-end, and on bringing both the trigger and event data from the front-end to the Serenity back-end. An external scintillator trigger system was added, and we were able to trigger the event data readout on cosmic signals. But wait, not all was figured out just yet! A colossal challenge loomed: transferring the precious data from the Serenity to the DAQ PC for permanent storage was a chaotic and turbulent journey. This part of the readout was far from what one could call reliable.

Diving In: Installing the Experimental Setup in the Beam Area

We were all well aware that unless we could securely archive the data, our little experiment was headed towards a gloomy dead end. But, hey, we couldn't back down. The goal? Move everything into the beam area and reproduce what we previously had in the lab. Once done, we'd tackle the readout system glitches head-on. So, we meticulously labelled all the equipment, packed it up and, with a stiff upper lip, went right for it. One of the more senior colleagues walked in with just the right T-shirt to inspire us on the day. The situation, though well hidden behind everyone's enthusiasm, was nothing other than profoundly grim.

Above: moments from the beam test

A Crescendo of Happiness Amidst the White Nights of the Control Room

Dostoevsky’s "White Nights" takes its title from the phenomenon of extended daylight hours that occurs in certain northern regions during the summer months, creating an atmosphere of suspended reality, where anything seems possible and the boundaries between day and night, reality and imagination, become blurred. This was the ambiance in the control room. After three days in the trenches of our daring experiment, there was a glimmer of hope, a breakthrough: one module's readout was up and running reliably. Suddenly, a booming voice echoed through the control room: "BEAM!!!".

Above: the moment we first read out a module hit by a particle beam

That was 50% of what our crew set out to achieve. Next up: the readout of both modules. And let me tell you, it was brutal. We started doubting everything. Could it be that during beam tests 1+1≠2? However, our efforts were not in vain, for on the fifth day, fuelled by caffeine and robbed of sleep, with a symphony of rock’n’roll playing in the background, we cracked the code: the readout of both modules triumphantly commenced. Come day six, we were able to do so reliably, with event rates during a Super Proton Synchrotron (SPS) particle spill peaking at 30 kHz. On the final day of our journey, we amped it up even further, reaching a solid and rather respectable rate of 100 kHz.

Up to this point, we had achieved most of what we originally set out to do. However, there was one last item that we wanted to check: the so-called “MIP peak”. When no particles were hitting our setup, we would read out a baseline Analog-to-Digital Converter (ADC) count level (proportional to the deposited energy) from the silicon sensors. That corresponds to the horizontal yellowish line in the ADC versus time plot below. However, when shooting a beam of pions into our detector, another shape is expected to emerge: the so-called “MIP peak”, roughly 20 ADC counts above the noise level. Why does its height vary? The detector is synchronous to a 40 MHz clock, and the beam particles can arrive at any random time with respect to it. Hence, the particle signal is maximal when sampled at the right phase, and smaller whenever the particles are not in coincidence with the sampling clock.
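To see how the clock phase matters, here is a toy simulation in Python. The pulse shape, the shaping time and the 20-count amplitude are invented placeholders for illustration; only the 25 ns sampling period comes from the text:

```python
# Toy model: a shaped signal pulse sampled on a fixed 40 MHz clock.
# Particles arrive at a random phase with respect to the clock, so the
# sample nearest the pulse maximum can land anywhere on its slope.
import numpy as np

BX_NS = 25.0                 # 40 MHz sampling period
MIP_AMPLITUDE_ADC = 20.0     # assumed MIP peak height above noise
SHAPING_TIME_NS = 20.0       # assumed shaping time of the toy pulse

def pulse(t_ns: np.ndarray) -> np.ndarray:
    """Idealised CR-RC pulse, unit amplitude, peaking at SHAPING_TIME_NS."""
    t = np.clip(t_ns, 0.0, None)
    return (t / SHAPING_TIME_NS) * np.exp(1.0 - t / SHAPING_TIME_NS)

rng = np.random.default_rng(0)
arrival = rng.uniform(0.0, BX_NS, size=10_000)   # random arrival phases
clock_ticks = np.arange(0.0, 100.0, BX_NS)       # a few sampling instants

# For each particle, take the largest sample the clock happens to catch:
best = MIP_AMPLITUDE_ADC * pulse(clock_ticks[None, :] - arrival[:, None]).max(axis=1)
print(f"in-phase particles reach:  {best.max():.1f} ADC counts")
print(f"worst-phase particles see: {best.min():.1f} ADC counts")
```

With these toy numbers, off-phase particles lose a few ADC counts relative to in-phase ones, which is qualitatively why the signal band in the plot below is smeared rather than being a single sharp line.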

Above: graph showing beam particle signals

Parting Thoughts

Over the past years, we've already danced with silicon modules in beam tests, so we've developed a solid grasp of their standalone performance. But what sets this beam test apart, in a league of its own, is the debut of something truly remarkable. Ladies and gentlemen, it is the first time ever that the HGCAL collaboration used a complete vertical slice of the electronics readout (and control) chain in a beam test. From start to end. This introduced a variety of additional layers of complexity that will exist in the final system but did not exist in previous beam tests (e.g. distributing a timing reference from the back-end, orchestrating slow control exclusively from the back-end, front-end data concentration, optical readout and control, back-end data unpacking, reading out data off the Serenity board to a PC, etc.).

Above: some of the HGCAL team

And as we wrap up this chapter of our journey, let us remember that in the grand symphony of progress, each tiny layer of complexity added contributes to the harmonious crescendo of discovery and innovation. Together, we venture onward, fuelled by the pursuit of knowledge and the excitement of the unknown awaiting us.

 

How about a slice of that cake?

Above: graphic of a slice of cake

Disclaimer. While this beam test's findings pack a monumental punch for our collaboration, don't think the journey ends here. The data concentrator ASICs are not yet available on a single PCB and were emulated in an FPGA development board. The silicon modules come in two flavours of spatial granularity, high and low density, and this beam test did not include the latter variant. Furthermore, in some regions of the detector we won’t have complete hexagons but only partial modules, which are also not tested yet. HGCAL will use a different sensor technology in the hadronic section of the calorimeter, where plastic scintillators will be used instead of the silicon intended for the electromagnetic section; this also needs to be tested with a full readout chain. On another note, in the final system the readout of high-resolution data needs to be triggered by looking at coarse-detail pictures of events, whereas here we were still using an external trigger. Lastly, the event rate CMS needs to cope with during the High-Luminosity era of the LHC is 750 kHz on average, which is almost an order of magnitude beyond the currently tested ~100 kHz. One could say that we may not have gotten everything we ultimately want for HGCAL just yet, but we certainly got what we needed out of this beam test - and even more so.

 


Disclaimer: The views expressed in CMS blogs are personal views of the author and do not necessarily represent official views of the CMS collaboration.