Data Centres Need Green Switches

As the proportion of power used by network switches in virtualised data centres increases, managers need to think about going green, says Stephen Garrison

“We are agnostic about hypervisors,” said Garrison. The switches support VMware, Xen, and Microsoft Hyper-V, but they are not tailored to any particular solution: “Most users have at least two hypervisors in use.”

Although multiple hypervisors are the norm, vendors like HP and Cisco are trying to make a “closed system” where they can, he said. But they are going against the tide, because an ever-increasing proportion of the data centre is simply “rip and replace”.

“There is no loyalty,” said Garrison. “If email servers fail, you just rip and replace, and bin the server.” The new servers come from whoever offers the best deal.

Network switches could go the same way, of course, but he thinks there are still features that distinguish them in the data centre – though the main one is simply the ability to support whatever the IT guys want to do on top of the network. “Data centres are planned at the application level, not the network level,” he said.

He sees a three-to-five-year shift away from the incumbent (Cisco) model of a centrally planned network for the data centre, towards a more flexible version that fits virtualisation better. He also thinks that CIOs going into the cloud will keep their wits about them and avoid lock-in.

“The cloud could be a virtual mainframe. Do we want a closed system, or do we want to figure it out and work out what we want?”

Data centres only need 40G for now

Although vendors like Cisco, Juniper and Brocade are very proud of their 100Gbps Ethernet products, Garrison reckons they are just “chest beating” and more or less irrelevant in the data centre for now.

“You will pay £200,000 for a 100Gbps line card,” he said. “A 40Gbps line card costs around £15,000.”
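
Taken at face value (and ignoring port density, optics and power draw, about which the quoted figures say nothing), those list prices put 100Gbps at several times the cost per gigabit. A minimal sketch of the arithmetic, using only the numbers above:

    # Cost per gigabit from the list prices quoted above.
    # Port density, optics and power draw are ignored; purely illustrative.
    line_cards = {
        "100Gbps line card": (200_000, 100),  # (price in GBP, capacity in Gbps)
        "40Gbps line card": (15_000, 40),
    }
    for name, (price_gbp, capacity_gbps) in line_cards.items():
        print(f"{name}: £{price_gbp / capacity_gbps:,.0f} per Gbps")
    # 100Gbps line card: £2,000 per Gbps
    # 40Gbps line card: £375 per Gbps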

100Gbps Ethernet is useful for service providers, whose bandwidth appetite is insatiable, but it is built on silicon that has not yet fallen far enough in price for the data centre market.

Add to this the fact that server speeds tend to go up by doubling, while Ethernet speeds go up by a factor of ten, and it’s clear that 40Gbps is plenty for now. “You want 10Gbps to the top of the rack, and a 40Gbps uplink,” said Garrison.
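
As a rough illustration of that sizing advice, here is a minimal sketch of the top-of-rack arithmetic; the rack of 40 servers and the four uplinks are assumptions for the example, not figures Garrison gave:

    # Hypothetical top-of-rack design: 10Gbps to each server, 40Gbps uplinks.
    servers_per_rack = 40   # assumed rack density
    server_nic_gbps = 10    # "10Gbps to the top of the rack"
    uplink_count = 4        # assumed number of uplinks
    uplink_gbps = 40        # "a 40Gbps uplink"

    downlink_gbps = servers_per_rack * server_nic_gbps  # 400 Gbps of server capacity
    uplink_total_gbps = uplink_count * uplink_gbps      # 160 Gbps leaving the rack
    oversubscription = downlink_gbps / uplink_total_gbps

    print(f"Oversubscription: {oversubscription:.1f}:1")  # 2.5:1

On those assumed numbers the uplinks are oversubscribed by 2.5:1; whether that is acceptable depends on the traffic, but it shows how a handful of 40Gbps uplinks can keep pace with a rack full of 10Gbps servers.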

Stephen Garrison is speaking at the 360IT event in London on 22 September