AWS Uses Low-Density Containers To Cut Cooling Costs
Amazon Web Services says its custom data centre containers are cost-effective, greener building blocks
Amazon Web Services is a growing business with a constant need to expand its data centres. There are currently three sites under development: in Virginia and Oregon in the US, and in Ireland for Europe.
As a measure of this growth, James Hamilton, vice president and distinguished engineer for Amazon Web Services (AWS), has said that every day the company brings on stream as much compute power as it used in the first five years of its existence as the Amazon.com online book store.
The Mysterious Perdix Containers
AWS uses containerised units as a flexible and rapid means of expanding its resources. Microsoft is another supporter of the box-building approach, and this is no coincidence: Hamilton was instrumental in evangelising the concept when he worked at Microsoft.
The Amazon containers are surrounded by mystery. Amazon has not revealed what they house, and even the manufacturing of the Perdix units is a secret. The company responsible for supplying them, and for building the data centres, is known as Vadata, or more accurately Amazon Vadata. The containers take their Perdix name from a character in Greek mythology who was skilled at inventing tools.
Hamilton is a pragmatic designer who bases his plans on his own experience rather than the figures produced by vendors and analysts, which are largely influenced by the numbers from server manufacturers. Consequently, AWS does not use high-density racks and limits the chassis count to 30 servers per rack, with a cost per server of $1,450 (£880) or less.
Power Is Not The Main Cost Centre
This may sound like a cheap-and-cheerful solution, at odds with Amazon’s green credentials, but Hamilton argues otherwise with his own figures (see below). The conventional argument is that power is the main cost in a data centre, so vendors advise packing servers in tightly: this raises the unit server cost but reduces the power bill.
Hamilton does not accept that argument. According to his own calculations, servers are the largest cost, accounting for over half of the total monthly cost of a data centre. Spacing the servers out means that less expensive units can be used. It also has green benefits, because the containers can be vented to the outside to allow ambient air-cooling.
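Hamilton’s reasoning is easy to illustrate with a rough, back-of-the-envelope calculation. The sketch below uses the article’s $1,450 server price, but every other figure (fleet size, amortisation period, power draw, electricity rate) is an assumption for demonstration; it is not AWS’s actual cost model, and it ignores networking and facility costs.

```python
# Illustrative back-of-the-envelope monthly cost split for a data centre.
# Only the $1,450 server price comes from the article; every other figure
# is an assumption, and networking/facility costs are ignored.

SERVER_COST_USD = 1450            # per-server ceiling quoted in the article
AMORTISATION_MONTHS = 36          # assumed three-year server lifetime
SERVER_COUNT = 45_000             # assumed fleet size for one facility

POWER_KW_PER_SERVER = 0.25        # assumed average draw, including overhead
POWER_PRICE_PER_KWH = 0.07        # assumed utility rate in USD
HOURS_PER_MONTH = 730

# Amortised monthly spend on the servers themselves
server_monthly = SERVER_COUNT * SERVER_COST_USD / AMORTISATION_MONTHS

# Monthly electricity bill for the whole fleet
power_monthly = (SERVER_COUNT * POWER_KW_PER_SERVER
                 * HOURS_PER_MONTH * POWER_PRICE_PER_KWH)

total = server_monthly + power_monthly
print(f"Servers: ${server_monthly:,.0f}/month ({server_monthly / total:.0%})")
print(f"Power:   ${power_monthly:,.0f}/month ({power_monthly / total:.0%})")
```

With these assumed numbers the amortised server spend comes to roughly three-quarters of the combined total, which is consistent with Hamilton’s claim that servers, not power, dominate the monthly bill.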
Conventional, chiller-based air-conditioning, he argued, is inefficient and adds a lot of power expense. He therefore favours “air economisation”, or recognising when external air can be blown across the servers without chilling it first.
Air-side economisation is already used quite widely by data centres, but Hamilton believes that using the lower-cost servers lets ambient air-cooling work even better. Even during hot weather, the exhaust from a data centre at 115 degrees Fahrenheit (46°C) is substantially hotter than the ambient air, so outside air can still carry heat away.
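The decision behind air-side economisation is straightforward to sketch. The toy controller below is not Amazon’s system; the 27°C inlet setpoint and humidity limit are assumptions, chosen only to show how raising the permitted temperature widens the free-cooling window.

```python
# Toy air-side economiser decision: use outside air whenever it is cool and
# dry enough to hold the server inlet at the chosen setpoint. The 27 C
# setpoint and 80% humidity ceiling are assumptions, not Amazon's values.

def cooling_mode(outside_temp_c: float,
                 outside_humidity_pct: float,
                 inlet_setpoint_c: float = 27.0,
                 max_humidity_pct: float = 80.0) -> str:
    """Pick a cooling mode for a simplified, temperature-only controller."""
    if outside_temp_c <= inlet_setpoint_c and outside_humidity_pct <= max_humidity_pct:
        return "free cooling: vent outside air straight across the racks"
    return "mechanical cooling: chill the intake air first"

# Raising the permitted inlet temperature widens the free-cooling window.
print(cooling_mode(outside_temp_c=24.0, outside_humidity_pct=60.0))
print(cooling_mode(outside_temp_c=35.0, outside_humidity_pct=60.0))
```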
Too Hot To Trot Or Not
“If you combine two tricks – one is air-side economisation and the other is raise the data centre temperature – in some climates, you can get away without any process-based cooling at all. None. So there’s no air-conditioning systems in some modern data centres. Huge savings in cost of power consumption,” Hamilton said.
The trick is finding the right balance of servers, data collection and a degree of courage, he added. Containerising the data centre pods also means they are housed in their own corridor, which is easier to cool.
Containerisation is becoming very popular with manufacturers for many other reasons. As pre-configured installations, containers maximise the amount of hardware sold into a deal and, from the customer’s viewpoint, much of the configuration can be done before installation, at the factory stage.