The IT team building the systems supporting the Large Hadron Collider (LHC) have a lot of work to do in the next two years, as CERN prepares for the next big batch of experiments in 2015.
The LHC will not be in use again until then. Meanwhile, CERN boffins are analysing the most recent proton collisions, which they believe have proven the existence of the Higgs boson, the so-called “God particle” that explains why things have mass.
Fresh collisions will be launched in two years’ time as CERN looks into other phenomena, such as antimatter. Those collisions will create a significant amount of additional data for the IT team to deal with.
When they need to, each of the site’s four particle detector hubs – ATLAS, CMS, ALICE and LHCb – takes what amounts to 40 million pictures a second, producing 40 petabytes of information. Not all of that information is kept – much of it relates to already understood physics rather than interesting new particles – but there is still 25GB a second that has to be written to standard disks and tapes. That’s real Big Data.
When the giant underground tube starts seeing particles smashing into one another again, the backend systems need to be ready for the extraordinary amount of data that will come through, whilst supporting the bespoke code that picks out the interesting collisions. Or as Tim Bell, infrastructure manager at CERN’s IT department, tells TechWeekEurope, it’s like helping to “find a needle in a haystack when you don’t know what a needle is”.
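To give a flavour of what that filtering means in practice, here is a toy sketch in Python. It is illustrative only, not CERN’s actual trigger code: the event structure, the “energy” score and the cut threshold are all invented for the example, standing in for the bespoke selection logic Bell describes.

```python
import random

# Toy "trigger": keep only a tiny fraction of simulated collision events.
# The event fields and the energy threshold are hypothetical, not CERN's
# real selection criteria.

EVENTS_PER_BATCH = 1_000_000   # stand-in for the ~40 million pictures a second
ENERGY_THRESHOLD = 99.99       # arbitrary cut: keep only the rarest, highest-energy events

def simulate_event():
    """Return a fake event: just a random 'energy' score between 0 and 100."""
    return {"energy": random.uniform(0, 100)}

def is_interesting(event):
    """Keep an event only if it passes the (made-up) energy cut."""
    return event["energy"] > ENERGY_THRESHOLD

kept = [e for e in (simulate_event() for _ in range(EVENTS_PER_BATCH)) if is_interesting(e)]
print(f"Kept {len(kept)} of {EVENTS_PER_BATCH} simulated events "
      f"({100 * len(kept) / EVENTS_PER_BATCH:.4f}%)")
```

Run on a batch of a million fake events, the cut keeps only a handful – the same principle, at vastly smaller scale, that turns the detectors’ torrent of pictures into the 25GB a second CERN actually stores.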
But CERN does not have the budget to build as many systems as it would like, partly thanks to the current European economic climate. Nor has it been able to bring in new workers, even where additional systems have been installed.
The organisation already runs one significant private cloud, built across two data centres, whilst another two are operated by the experiment teams on server farms close to the detectors, which need to scale up and down as the LHC is switched on and off.
It needs more compute power, though. CERN has just opened a new data centre in Budapest, yet has not been given extra manpower to run the additional hardware. The team is now considering how to store and use its massive data sets more efficiently in public clouds.
As part of this effort, it is working with Rackspace through CERN OpenLab. (In certain areas, CERN already uses OpenStack, the cloud orchestration software Rackspace developed with NASA.) Working alongside a host of tech partners, the lab is a testing ground for new technologies, where the companies benefit from testing extreme workloads on their systems and CERN gets to try out kit for free. Rackspace signed a deal with CERN in July to run a hybrid cloud based on OpenStack, to see whether such infrastructure would be suitable for future use.
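For readers unfamiliar with OpenStack, the appeal is that capacity can be requested programmatically rather than by racking servers. The sketch below, using the openstacksdk Python library, shows roughly how a single batch worker might be booted on an OpenStack cloud; the cloud name, image, flavour and network names are placeholders, not CERN’s real configuration.

```python
import openstack

# Minimal sketch: boot one compute instance on an OpenStack cloud via openstacksdk.
# "my-openstack-cloud", the image, flavour and network names are placeholders.

conn = openstack.connect(cloud="my-openstack-cloud")  # credentials come from clouds.yaml

image = conn.compute.find_image("batch-worker-image")
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="lhc-batch-worker-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE, then report its status.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The same request works whether the cloud behind it is private or public, which is exactly why a hybrid OpenStack setup is attractive to a team that cannot hire more people to manage extra hardware.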
“We use the extreme computing challenges of the LHC to identify areas where commercial companies and CERN are interested in research knowing sooner or later we’re going to need it,” Bell tells TechWeek.
Another option is to use the Helix Nebula, the European Space Agency-led initiative to create a dedicated science cloud. Indeed, CERN is already trialling workloads from ATLAS on the big public servers, which run in data centres owned by Atos, CloudSigma and T-Systems. Somehow, the Helix Nebula team, backed by the European Commission, managed to convince a host of cloud players to sign up to the same contract with the same service-level agreements, allowing users to run whatever they want on whichever provider’s infrastructure they choose. In the cloud industry, that’s rare.
Whilst the framework was chosen because there was no single cloud provider in Europe capable of running Helix Nebula, it is an innovative design, which should encourage some healthy competition between participants.
It is an intriguing prospect. Researchers will simply see a “blue box” – an interface for moving projects between clouds. Recent pilots have shown it working, with data moving seamlessly between providers, Maryline Lengert, senior adviser at the ESA who leads the Helix Nebula project, tells TechWeek.
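The idea behind the “blue box” can be sketched in a few lines of Python: researchers code against one interface, and per-provider adapters hide the differences. The class and method names below are invented for illustration; they are not Helix Nebula’s real API.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of a federation layer: one interface, many providers.
# All names here are hypothetical, not the actual Helix Nebula software.

class CloudProvider(ABC):
    @abstractmethod
    def submit_job(self, job_name: str, image: str) -> str:
        """Launch a workload and return a provider-specific job identifier."""

class AtosCloud(CloudProvider):
    def submit_job(self, job_name: str, image: str) -> str:
        return f"atos-{job_name}"        # a real adapter would call the provider's API

class CloudSigmaCloud(CloudProvider):
    def submit_job(self, job_name: str, image: str) -> str:
        return f"cloudsigma-{job_name}"

def run_everywhere(providers, job_name, image):
    """The 'blue box' view: one call, any participating cloud."""
    return {type(p).__name__: p.submit_job(job_name, image) for p in providers}

print(run_everywhere([AtosCloud(), CloudSigmaCloud()], "atlas-simulation", "atlas-sim-image"))
```

The shared contract and service-level agreements are what make such an abstraction workable: the researcher never needs to care which company’s data centre the job lands in.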
“We’ve created this competitive environment but at the same time they’re working together. They know they have to compete but they share enough to be able to arrive at the level we want them to,” says Lengert. She notes that even telecoms companies are playing nicely with each other, just so they can take part in this massive project.
There’s a chance Helix Nebula won’t be working well enough by the time CERN’s collider is restarted. Lengert says she is hopeful it will be running properly when she retires – that’s in seven years’ time. The LHC switch-on is less than two years away.
“It’s not as complete as we would like it to be. But in principle, it’s working,” Lengert adds.
Despite all these innovative approaches to the cloud, there remains uncertainty for those recording the LHC’s findings. But for CERN, a cloudy future is in store. “It’s not clear where things will go, but what we’re certain about is that having a strong production cloud is going to be part of the solution,” adds Bell.
“Knowing that we’ve got to go that way, we might as well get started now before the LHC starts up again. Once we start getting 35-40PB a year coming in again, there’s only so much you can change. That’s why we need to move quickly.”