Researchers from both the private and public sectors are conducting an experiment to see whether they can cut power costs in large computing environments such as data centres by removing alternating current (AC) from the distribution chain.
According to the University of California, San Diego, engineers have installed a set of servers in the campus data centre to operate continuously on 380-volt direct current (DC).
They hope to be able to track the energy savings that can be achieved via a number of architectural changes and other efficiencies, including the use of DC power.
The use of DC power to save energy is not a new technique. In May 2009, for example, the Met Office in the UK announced that it was planning to upgrade its high-performance computing systems, and one of the techniques it hit on to make its supercomputers more energy efficient was to use direct current (DC) to power its servers rather than AC, in order to avoid the large losses incurred when converting from AC to DC.
“We take the power off the mains, put it through the UPS so it goes to DC, convert it back to AC, step it up, step it down, move it around a bit, and then we take it down into the machines for the current required,” is how IT chief Steve Foreman described the normal processes for handling power.
Meanwhile, the Californian experiment is making use of a modular data centre on campus with sensors and other instruments to measure the energy efficiency of information and communication technologies, as part of the National Science Foundation-funded initiative known as Project GreenLight.
Indeed, the group estimates that companies could save billions of dollars each year in capital costs and ongoing energy costs by using all-DC distribution in their data centres.
Essentially, this is because traditional data centres are fed AC power at a high voltage, which is converted to DC in the UPS system in order to charge batteries and condition the power. The power is then converted back to AC in order to run the power supplies of the servers, which convert it to DC once more. The researchers believe that skipping or consolidating these conversion steps would save considerable electricity overall, both in the power distribution chain and in cooling.
“Each conversion loses power and generates additional heat, both of which reduce the overall power and cooling efficiency of the server facility,” said William Tschudi, a DC power researcher at Lawrence Berkeley National Laboratory. “By providing DC power directly to the server facility, many conversion steps are bypassed and less heat is generated, leading to overall higher efficiency.”
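The compounding effect of those conversion losses can be sketched with simple arithmetic: the overall efficiency of a distribution chain is the product of each stage's efficiency. The per-stage figures below are illustrative assumptions for the sake of the example, not numbers from the researchers.

```python
def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a power-distribution chain:
    the product of each conversion stage's efficiency."""
    overall = 1.0
    for eff in stage_efficiencies:
        overall *= eff
    return overall

# Assumed per-stage efficiencies (hypothetical, for illustration only):
# traditional path: AC->DC in the UPS, DC->AC back out, then AC->DC in each server PSU
ac_path = chain_efficiency([0.90, 0.90, 0.90])
# all-DC path: a single rectification step feeding 380 V DC to the servers
dc_path = chain_efficiency([0.92])

print(f"AC distribution chain: {ac_path:.1%}")  # three cascaded conversions
print(f"DC distribution chain: {dc_path:.1%}")  # one conversion
```

Even with each stage at 90% efficiency, three cascaded conversions deliver only about 73% of the input power to the servers, with the rest dissipated as heat that the cooling plant must then remove.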
Research like this is important given the pressure data centres are under nowadays to save energy and cooling costs and to improve efficiency. Indeed, back in May, analyst house Gartner warned that owners and operators of data centres will face growing problems with power, cooling and space, given a predicted significant increase in server sales over the next two years.
Indeed, Capgemini recently said that putting energy efficiency at the forefront of business will give long-term cost benefits to organisations.