Greening The Data Centre – What You Need To Know
From HVAC to rack density to hot/cold aisles, eWEEK looks at the computing models and energy-saving practices to focus on for the biggest rewards
A lot of attention these days is being devoted to going green: Save the planet, buy a hybrid, recycle, put lights on timers, don’t waste paper and so on. All of these things will help the environment, but let’s come right out and say it: Going green makes sense when a business saves capital and resources by doing so. A warm feeling at night is not a compelling business reason for going green, but saving millions of dollars on power and HVAC sure is.
Indeed, many businesses have saved significantly by implementing environmentally friendly practices and trimming power consumption.
In 2009, organisations including IBM, Sun, the National Security Agency, Microsoft and Google announced that they were building green data centres.
The most recent announcement comes from IBM, regarding what it claims is the world’s greenest data centre – a project jointly funded by IBM, New York state and Syracuse University. Announced in May 2009 and constructed in just over six months, the $12.4 million (£7.6m), 12,000-square-foot facility (6,000 square feet of infrastructure space and 6,000 square feet of raised-floor data centre space) uses an on-site power generation system for electricity, heating and cooling, and incorporates IBM’s latest energy-efficient servers, computer-cooling technology and system management software.
The press release is filled with all sorts of flowery language about saving the planet and setting an example for others to follow, but about three-fourths of the way through we get to the bottom line: “This is a smart investment … that will provide much needed resources for companies and organisations who are looking to reduce both IT costs and their carbon footprint.”
How can you separate the wheat from the chaff when it comes to designing a green data centre? Where does the green-washing end and the true business case begin?
The first thing to do is to understand several key principles of data centre design. This ensures that you maintain a focus on building a facility that serves your organisation’s needs today and tomorrow.
Build for today and for the future. Of course, you don’t know exactly which hardware and software you’ll be running in your data centre five years from now. For this reason, you need a flexible, modular and scalable design. Simply building a big room full of racks waiting to be populated doesn’t cut it anymore.
Types of equipment – such as storage or application servers – should be grouped together for easier management. In addition, instead of cooling one huge area that is only 25 percent full, divide the facility into isolated zones that get populated and cooled one at a time.
Most data centres incorporate a hot aisle/cold aisle configuration, in which equipment racks are arranged in alternating rows so that the fronts of the racks face a cold aisle and the backs face a hot aisle. Cool supply air from the cold aisle washes over the equipment and is expelled into the hot aisle, where an exhaust vent pulls the hot air out of the data centre.
It’s important to measure both energy consumption and HVAC performance. Not only will this help you understand how efficient your data centre is (and give you ideas for improving efficiency), but it will also help control costs in an environment of ever-increasing electricity prices and put you in a better position to meet the growing reporting requirements of a carbon reduction policy.
There are currently two widely used metrics for measuring data centre energy efficiency.
CADE (Corporate Average Datacenter Efficiency), developed by McKinsey & Company together with the Uptime Institute (now part of The 451 Group), multiplies IT asset efficiency (IT utilisation times the energy efficiency of those assets) by facility efficiency (facility utilisation times the energy efficiency of the building). By this measure, larger numbers are better.
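To make the arithmetic concrete, here is a minimal sketch of a CADE calculation in Python. All of the utilisation and efficiency figures are hypothetical, chosen only to show how the four factors multiply together.

```python
# Minimal sketch of the CADE calculation, using hypothetical figures.
# CADE = facility efficiency x IT asset efficiency, where each factor is
# itself a utilisation figure multiplied by an energy-efficiency figure.

facility_utilisation = 0.50        # hypothetical: share of facility capacity actually in use
facility_energy_efficiency = 0.55  # hypothetical: share of incoming power that reaches IT equipment
it_utilisation = 0.20              # hypothetical: average utilisation of the IT assets
it_energy_efficiency = 0.60        # hypothetical: energy efficiency of those assets

facility_efficiency = facility_utilisation * facility_energy_efficiency  # 0.275
it_asset_efficiency = it_utilisation * it_energy_efficiency              # 0.12

cade = facility_efficiency * it_asset_efficiency
print(f"CADE = {cade:.1%}")  # about 3.3% here -- higher is better
```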
The measure I prefer to use is PUE (Power Usage Effectiveness), developed by The Green Grid. PUE is calculated by dividing the total power drawn by the facility by the power delivered to the IT equipment. In this case, a lower number is better. Older data centres typically have a PUE of about 3 or 4, while newer data centres can achieve a PUE of 1.5 or less.
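For comparison, here is a minimal sketch of a PUE calculation, again with hypothetical load figures, showing why a lower result indicates a more efficient facility.

```python
# Minimal sketch of a PUE calculation, using hypothetical load figures.
# PUE = total facility power / power delivered to IT equipment; lower is better.

it_equipment_load_kw = 500.0       # hypothetical: servers, storage, network gear
cooling_load_kw = 250.0            # hypothetical: CRAC units, chillers, pumps
power_distribution_loss_kw = 60.0  # hypothetical: UPS and distribution losses
lighting_and_other_kw = 15.0       # hypothetical: lighting and other overhead

total_facility_load_kw = (it_equipment_load_kw + cooling_load_kw
                          + power_distribution_loss_kw + lighting_and_other_kw)

pue = total_facility_load_kw / it_equipment_load_kw
print(f"PUE = {pue:.2f}")  # 1.65 here; an older facility might be 3.0 or higher
```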