An Intel executive has reportedly confirmed that the chip giant is not interested in pushing its energy-efficient Atom processor into the server space.
Kirk Skaugen, vice president and general manager of Intel’s Data Centre Group, said in an interview with IDG News that while some vendors are using Atom chips in server designs, and chip designer ARM is looking to push its processor designs into the data centre, most businesses are looking for systems with the power and energy efficiency of the latest Xeon chips.
At its developer forum last month, Intel showed off its upcoming “Sandy Bridge” microarchitecture, which promises to ramp up both the performance and energy efficiency of Intel’s processor offerings.
However, while Intel may not be looking to position Atom in the mainstream server market, researchers at Intel Labs are working on creating compute clusters of smaller, Atom-based devices that can run some workloads while driving down power consumption.
“Power is becoming a significant burden,” Intel researcher Michael Kaminsky said in an interview with eWEEK during an Intel Labs open house. Through Project FAWN, Intel is trying to “reduce energy consumption two or three times for data-intensive workloads.”
FAWN holds out the promise of clusters that could run particular web 2.0-style workloads in a far more energy-efficient way.
During the event, Kaminsky showed a line of system boards that could be networked together to create a compute cluster. Each board included an Atom chip and an Intel SSD (solid-state disk) for local storage, items that he noted anyone can buy and put together.
The key is getting the cluster to work in the most efficient way and developing the techniques that will enable software to work well in such a highly parallel environment, Kaminsky said.
There are several areas of exploration within the FAWN project. One is load balancing, key to ensuring that performance scales across the cluster. The FAWN-KV (key-value) storage system uses one or more fast front-end nodes that route requests to back-end nodes, according to Intel Labs. Research results indicate that a fairly small cache can ensure proper load balancing and performance scalability.
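That front-end/back-end split can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, not Intel’s FAWN-KV implementation: a front-end hashes each key to one back-end node, and a small LRU cache at the front-end absorbs requests for hot keys so no single wimpy node becomes a bottleneck. The class and parameter names (FrontEnd, BackEnd, cache_size) are invented for the example.

```python
import hashlib
from collections import OrderedDict

class BackEnd:
    """Hypothetical stand-in for a wimpy Atom/SSD node holding key-value pairs."""
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def put(self, key, value):
        self.store[key] = value

class FrontEnd:
    """Front-end node that routes requests to back-ends by key hash and
    keeps a small cache to absorb skewed (hot-key) load."""
    def __init__(self, backends, cache_size=128):
        self.backends = backends
        self.cache = OrderedDict()        # small LRU cache
        self.cache_size = cache_size

    def _owner(self, key):
        # Map each key to one back-end node via a stable hash.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.backends[h % len(self.backends)]

    def get(self, key):
        if key in self.cache:             # hot keys are served from the cache
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self._owner(key).get(key)
        if value is not None:
            self.cache[key] = value
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)   # evict least-recently used entry
        return value

    def put(self, key, value):
        self._owner(key).put(key, value)
        self.cache.pop(key, None)         # drop stale cached copy on writes

# Usage: route reads and writes across four simulated back-end nodes.
front = FrontEnd([BackEnd() for _ in range(4)])
front.put("user:42", "alice")
print(front.get("user:42"))   # first read hits a back-end; repeats hit the cache
```

Even this toy version shows why a modest cache suffices: only the most frequently requested keys need to live at the front-end, while the bulk of the data stays spread across the back-end nodes.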
Another area of research, dubbed WideKV, is looking at how to replicate data between multiple data centres more efficiently and consistently, according to Intel. In addition, Intel Labs is looking at algorithms that would improve the performance on FAWN nodes of the Map-Reduce paradigm of parallel programming common in cloud computing environments, taking advantage of the strong random-read performance of SSDs to further increase energy efficiency.
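To illustrate the Map-Reduce paradigm the researchers are targeting (this is not Intel’s code), the sketch below runs a single-process word count: a map step emits (word, 1) pairs and a reduce step groups and sums them per key. On a FAWN cluster the same pattern would be spread across many Atom/SSD nodes; the sample documents and function names here are invented for the example.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit an intermediate (word, 1) pair for every word in a document.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle/Reduce: group intermediate pairs by key and sum the counts.
    grouped = defaultdict(int)
    for word, count in pairs:
        grouped[word] += count
    return dict(grouped)

documents = ["fawn nodes are wimpy", "wimpy nodes save power"]
intermediate = chain.from_iterable(map_phase(d) for d in documents)
print(reduce_phase(intermediate))
# {'fawn': 1, 'nodes': 2, 'are': 1, 'wimpy': 2, 'save': 1, 'power': 1}
```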
The FAWN project also is looking at ways to reduce the memory footprint in the cluster.
Much of the effort now is around software, Kaminsky said. Most current applications are not designed to run in such resource-constrained, energy- and memory-efficient, and highly parallel environments.
Intel Labs and Carnegie Mellon researchers are looking at techniques, such as reducing the memory footprint of the software and operating systems, that would allow applications to take advantage of clusters like those in the FAWN project, Kaminsky said.