How Open Source Helped CERN Find the Higgs Boson
Boson Buddies
CERN is on a voyage to uncover the secrets of our universe. One of the biggest discoveries at CERN, besides the creation of the web, was the Higgs boson [1]. That discovery was no easy feat, however. The Higgs boson exists for only about 1 zeptosecond (zs), the smallest division of time ever measured by humankind [2]: that's a trillionth of a billionth of a second (i.e., 10^-21 seconds). The discovery required extremely advanced sensors to collect data from particle collisions, as well as a massive infrastructure to store and process the continuous stream of data those sensors generate.
CERN has built massive machines like the Large Hadron Collider (LHC), which weighs more than 7,000 tons and whose ring spans 27 kilometers beneath the Franco-Swiss border. More than 9,500 magnets in the collider accelerate the particle beams to almost the speed of light. The LHC is capable of creating up to 1 billion collisions per second – and it's going to get bigger. CERN is planning a massive upgrade, the High-Luminosity Large Hadron Collider (HL-LHC) project, which is expected to increase the amount of data tenfold between 2026 and 2036.
How much data is that in numbers? The sensors of a single cathedral-sized detector generate more than 1PB of data per second, and that figure is only going to increase. Collecting, storing, and processing this amount of data creates unique challenges for CERN not seen anywhere else on Earth.
To better understand these challenges, I sat down with Tim Bell, the group leader of the Compute and Monitoring group within CERN's IT department. Bell leads a team that manages the computing and monitoring infrastructure at the data center. He is also an elected member of the OpenStack Foundation board, a role that helps CERN keep up with what's going on in the industry.
"One of the aims of the CERN collaboration with industry is to make sure that the