If Facebook officials have their way, their Open Compute Project will go beyond servers and power supplies, touching on every aspect of a data centre’s infrastructure.

The initiative kicked off in April when Facebook open-sourced the server and data centre specifications the social networking giant employed in building its data centre in Prineville in Oregon. The project has since enrolled an impressive array of members, from Intel, Asus and Rackspace to Mellanox, Huawei and Red Hat, not to mention a few research and education institutions.

Spreading the initiative

The growing membership is an indication of the various directions in which the project is rapidly moving, Amir Michael, hardware design manager at Facebook, said in an interview with eWEEK during the recently concluded SC 11 supercomputing show in Seattle. Facebook is already moving forward with the next generation of the custom servers it has designed, Michael said.

At the same time, project members also are looking to tackle other aspects of the data centre, including systems management, storage and I/O. The push in these directions should help build the momentum to solve the key issue Facebook officials saw when looking at data centre technology: proprietary products from large and small vendors alike could, in a broad way, address the mainstream needs present in most enterprises, but often did not meet the unique demands of a particular business.

“One of the things we saw as a problem [with server makers] was understanding what a customer’s requirements are,” said Michael, who presented a talk on Facebook’s data centre work and the Open Compute Project at the SC 11 show. “So we said, ‘Well, here it is, these are our requirements’.”

Growing green roots

About two years ago, Facebook engineers set out to design their own servers using standard off-the-shelf technologies. Up to that point, the company had been using systems from traditional OEMs. Facebook worked with chip makers Intel and Advanced Micro Devices, as well as systems makers Hewlett-Packard and Dell, to create the custom servers.

The aim was to build systems that offer the performance needed to run a fast-growing social network with 800 million-plus members while keeping down capital, power and cooling costs in densely populated data centres. The Facebook-developed systems are 1.5U (about 2.6 inches) tall – rather than the more traditional 1U (1.75 inches) servers – which, among other benefits, makes for better air flow and lower cooling costs, Michael said.

The systems carry none of the paint or logos found on servers from OEMs – which not only reduces capital costs but also makes them lighter. They use a more energy-efficient power supply, and they are easier to service, with tool-less components from fans to power supplies.

The Oregon facility also uses outside air to keep the systems cool, rather than running expensive chiller units, Michael said.

Energy efficiency benefits

The result of the work was a 38 percent increase in energy efficiency at the Oregon facility, at a cost 24 percent lower than that of Facebook's other data centres, he said. The data centre also has a power usage effectiveness (PUE) ratio of 1.07. The PUE ratio measures how efficiently a facility uses its energy; the closer to 1.0, the better. The US Environmental Protection Agency considers 1.5 a standard PUE.
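For context, PUE as defined by The Green Grid is the ratio of a facility's total energy consumption to the energy consumed by its IT equipment alone:

PUE = total facility energy / IT equipment energy

A PUE of 1.07 therefore means the Prineville facility consumes only about 7 percent more energy than its servers and other IT gear require, with that margin covering overheads such as cooling and power distribution.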

Facebook expects to get similar results as it builds new data centres, Michael said. Last month, company executives said they plan to build their next data centre in Lulea, Sweden, just on the edge of the Arctic Circle, to serve users in Europe and other regions. The site was chosen for its cold air and access to hydroelectric power.

The company also is working on its next generation of servers, which will include such technologies as an Intelligent Platform Management Interface (IPMI) and the ability to reboot over the LAN. They also will continue to be powered by Intel and AMD chips, though Michael said the company is keeping an eye on other chips, including those from ARM Holdings. ARM-designed chips from the likes of Nvidia, Qualcomm and Samsung are found in most smartphones, tablets and other mobile devices, but ARM also is looking to move up the ladder and into low-power servers.

“We’re always interested in whatever CPU works best,” he said.

Working group developments

Facebook officials are also interested in building upon what comes out of the various Open Compute Project working groups, which will focus on storage, systems management and interconnect technologies, Michael said. The company has never intended to run the project, he said; instead, the hope is that the community will evolve to the point where Facebook is just another participant that can take advantage of the open technologies that come out of it.

Facebook’s decision to open up its hardware specifications in April was a significant change for an industry where other businesses, such as Google and Amazon, have closely guarded their data centre specs, using them instead as a competitive advantage. However, Facebook officials saw an open community as the way to faster innovation and more product options.

On 27 October, the Open Compute Project announced it was forming a foundation to lead the effort, with directors and advisers coming from such places as Arista Networks, Facebook, Rackspace and Intel, as well as a mission statement and guiding principles. In a blog post on the project’s Website, Frank Frankovsky, director of hardware design and supply chain at Facebook, said he was surprised at the level of enthusiasm for the idea since it was announced in April.

“A great deal of work remains to be done,” Frankovsky said. “We need to continue to grow the community and enable it to take on new challenges. We need to ensure that, as the community evolves, it retains its flat structure and its merit-based approach to evaluating potential projects. And we need to keep the community focused on delivering tangible results. What began a few short months ago as an audacious idea – what if hardware were open? – is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum. We are officially on our way.”

Jeffrey Burt

Jeffrey Burt is a senior editor for eWEEK and contributor to TechWeekEurope
