However, cloud is not the answer to everything, warned Hengeveld. There are still plenty of drawbacks to cloud computing, and he urges organisations to be wary of opting for a cloud model that excludes the possibility of in-house architecture in the long term.
“It’s absolutely 100 percent true that the performance of a cluster that somebody owns is going to be better than the performance in the cloud,” he said. This is largely due to data locality. “Data has to go to where the compute is in order for the data to be meaningful, and there is always a performance hit when you try to do that.”
Cloud architectures, of course, also introduce a number of security concerns, many of which can be overcome by owning your own systems. Hengeveld said cloud is an important vehicle for carrying out trials and removing the cost barriers to usage, but that companies should take a broader view in the long term.
“The picture we see for this is that people will use it, will try it, will engage with it, will benefit from it – and then at some point, when it makes sense for them from an ROI perspective, they’ll go ahead and buy a cluster,” said Hengeveld.
One important consideration with the move to exascale computing is the question of energy efficiency. For the current fastest supercomputer in the world, China’s Tianhe-1A, to achieve exascale performance, it would require more than 1.6 GW of power – an amount large enough to supply electricity to 2 million homes.
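The 1.6 GW figure can be sanity-checked with a back-of-envelope calculation, assuming Tianhe-1A’s approximate published Top500 numbers (around 2.57 petaflops sustained at roughly 4 MW) and naive linear scaling to one exaflop:

```python
# Rough check of the exascale power figure. The Tianhe-1A numbers below are
# approximate published Top500 values, assumed here for illustration.
TIANHE_1A_PFLOPS = 2.57    # sustained Linpack performance, petaflops
TIANHE_1A_MW = 4.04        # measured power draw, megawatts
EXASCALE_PFLOPS = 1000.0   # 1 exaflop = 1,000 petaflops

scale_factor = EXASCALE_PFLOPS / TIANHE_1A_PFLOPS   # how many times bigger
power_gw = scale_factor * TIANHE_1A_MW / 1000.0     # megawatts -> gigawatts
print(f"Naive linear scaling to exascale: {power_gw:.2f} GW")
```

Linear scaling yields on the order of 1.6 GW – which is the point: without a step change in efficiency, exascale at today’s performance per watt is impractical.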
“Intel is investing a great deal in creating more energy efficient computation, so one of the key things you saw out of the exascale declaration is 100x the performance at 2x the power – that’s a big jump in energy efficiency, and so a big chunk of that comes from improving the energy efficiency of computation,” he said.
“A lot of that comes from process and a lot of that comes from architecture. So we’re investing a lot in studying and understanding the problem and making sure data centres are more power efficient as well as more powerful, because again, the more you can aggregate computation, the more social benefits you’ll get.”
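The “100x the performance at 2x the power” goal implies a 50-fold improvement in performance per watt, as a quick calculation shows (this is just the arithmetic of the stated target, not an official projection):

```python
# Performance-per-watt implication of the "100x performance at 2x power" goal.
perf_gain = 100.0    # target performance multiplier
power_gain = 2.0     # allowed power-draw multiplier
efficiency_gain = perf_gain / power_gain
print(f"Implied performance-per-watt improvement: {efficiency_gain:.0f}x")
```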
While data centre managers and enterprises face different security and policy challenges with regard to HPC, both need to be able to manage their data, and make sure that it is not vulnerable – whether that data resides on in-house servers or in the cloud. If companies hope to succeed in attracting the ‘missing middle’ to high performance computing, they will have to make sure that their management policies are water-tight.
“Intel’s working very hard on improving the reliability and security of enterprise applications across the board, and their ability to work with cloud across the board. And that’s something that’s an important initiative for all of this. But in high performance computing the challenges are multiplied,” said Hengeveld.