Amazon Web Services is a growing business with a constant need to expand its data centres. There are currently three sites under development: in Virginia and Oregon in the US, and in Ireland for Europe.
As a measure of this growth, James Hamilton, vice president and distinguished engineer for Amazon Web Services (AWS), has said that every day the company brings on stream as much compute capacity as it used in its first five years, when Amazon.com was still an online book store.
Amazon's containers are surrounded by mystery. The company has not revealed what they house, and even the manufacture of the units, known as Perdix, is kept secret. The firm responsible for supplying them, and for building the data centres, is known as Vadata – or, more accurately, Amazon Vadata. The Perdix name derives from a character in Greek mythology who was skilled at inventing tools.
Hamilton is a pragmatic designer who bases his plans on his own experience rather than on figures from vendors and analysts, which are largely shaped by the numbers server manufacturers publish. Consequently, AWS does not use high-density racks: it limits the chassis count to 30 servers per rack, at a cost per server of $1,450 (£880) or less.
This may sound like a cheap and cheerful solution, and at odds with Amazon's green credentials, but Hamilton argues against that view with his own figures (see below). The conventional argument is that power is the main cost in a data centre, so vendors advise packing servers in tightly: this raises the cost per server but reduces the power bill.
Hamilton does not accept that argument. By his own calculations, servers are the largest cost, accounting for over half of the total monthly cost of a data centre. Spacing them out means that less expensive units can be used. It also has green benefits, because the containers can be vented to the outside to allow ambient air-cooling.
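A rough way to see the shape of this argument is to amortise each capital outlay into a monthly figure and compare it with the power bill. The Python sketch below does exactly that; apart from the $1,450 server price quoted above, every number in it (fleet size, lifetimes, facility cost, power draw and tariff) is an illustrative assumption, not an AWS or Hamilton figure.

# Illustrative monthly cost model for a data centre, in the spirit of
# Hamilton's argument that servers dominate total cost. All figures
# except the $1,450 server price are assumptions.

def monthly_capital(cost, lifetime_years):
    """Amortise a capital cost into a simple monthly figure (no interest)."""
    return cost / (lifetime_years * 12)

NUM_SERVERS = 45_000          # assumed fleet size
SERVER_PRICE = 1_450          # $ per server, as quoted above
SERVER_LIFETIME = 3           # years, assumed

FACILITY_COST = 80_000_000    # $ for building plus power/cooling plant, assumed
FACILITY_LIFETIME = 15        # years, assumed

WATTS_PER_SERVER = 200        # assumed average draw per server
PUE = 1.45                    # power usage effectiveness, assumed
PRICE_PER_KWH = 0.07          # $ per kWh, assumed

servers = NUM_SERVERS * monthly_capital(SERVER_PRICE, SERVER_LIFETIME)
facility = monthly_capital(FACILITY_COST, FACILITY_LIFETIME)
kwh_per_month = NUM_SERVERS * WATTS_PER_SERVER * PUE * 24 * 30 / 1000
power = kwh_per_month * PRICE_PER_KWH

total = servers + facility + power
for name, cost in [("servers", servers), ("facility", facility), ("power", power)]:
    print(f"{name:10s} ${cost:>12,.0f}/month  ({cost / total:5.1%})")

With these assumptions the servers come out at roughly 60 per cent of the monthly total, which is consistent with the "over half" claim; the point survives fairly wide changes to the assumed inputs.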
Air-conditioning, he argued, is inefficient and adds a lot of power expense. He therefore favours "air economisation" – recognising when external air can be blown across the servers without chilling it first.
This technique is being used quite widely by data centres, but Hamilton believes that using lower-cost servers lets ambient air-cooling work even better. Even during hot weather, the exhaust from a data centre, at 115 degrees Fahrenheit (46°C), is substantially hotter than the ambient air.
“If you combine two tricks – one is air-side economisation and the other is raise the data centre temperature – in some climates, you can get away without any process-based cooling at all. None. So there’s no air-conditioning systems in some modern data centres. Huge savings in cost of power consumption,” Hamilton said.
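As an illustration of the decision those two tricks imply, the sketch below picks a cooling mode from the ambient temperature: untreated outside air whenever it is cool enough for a deliberately raised intake set point, and mechanical cooling only as a last resort. The 35°C intake limit and the intermediate evaporative-assist band are assumptions for illustration; only the 46°C exhaust figure comes from the text above.

# A minimal sketch of the air-side economisation decision Hamilton
# describes. Set points are illustrative assumptions, not AWS figures.

MAX_INTAKE_C = 35.0   # assumed raised intake limit, well above the
                      # traditional ~20 C chilled-air set point
EXHAUST_C = 46.0      # ~115 F exhaust temperature, as quoted above

def cooling_mode(ambient_c: float) -> str:
    if ambient_c <= MAX_INTAKE_C:
        # Outside air alone keeps intake within limits: open the dampers,
        # no process-based cooling needed.
        return "air-side economiser"
    if ambient_c < EXHAUST_C:
        # Outside air is warm but still cooler than the exhaust, so it can
        # still carry heat away, perhaps with evaporative assistance.
        return "economiser + evaporative assist"
    return "mechanical cooling"

for t in (10, 25, 38, 47):
    print(f"ambient {t:2d} C -> {cooling_mode(t)}")

In a temperate climate almost every hour of the year falls into the first branch, which is why, as Hamilton says, some modern data centres can dispense with air-conditioning plant entirely.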
The trick, he added, is finding the right balance of servers, data collection and a degree of courage. Containerising the data centre pods also means each is housed in its own corridor, which is easier to cool.
Containerisation is becoming popular with manufacturers for other reasons too. As a pre-configured installation, a container maximises the amount of hardware sold into a deal and, from the customer's viewpoint, allows much of the configuration to be done before installation, at the factory stage.