Stanford Researchers Use GPUs To Create The World’s Largest ‘Virtual Brain’

Nvidia and researchers from Stanford University have created the world's largest artificial neural network – a system designed to replicate the inner workings of the human brain.

Based on just 16 servers stacked with Nvidia’s high-performance GPUs, the Stanford project can handle about 6.5 times more parameters than the previous record-setting network – a 1,000 server, 16,000 core machine developed by Google in 2012.

The announcement was made at the International Supercomputing Conference (ISC) in Leipzig, Germany. At the same event, Nvidia revealed CUDA 5.5, an update to its parallel programming and computing model that, for the first time ever, features native support for ARM CPUs.

Intelligent machines

A ‘neural network’ is a computational model of the connections between billions of neurons in the brain. In most cases it is an adaptive system that changes its structure as it ‘learns’. Such networks are used to study the processes behind recognising objects, characters, voices and sounds.

They can help improve machine learning algorithms, allowing computers to act without being explicitly programmed for each task and moving us closer to true artificial intelligence.
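To make the idea concrete, here is a minimal sketch of a neural network learning a task (XOR) by adjusting its weights – the ‘parameters’ the article counts. This toy has just 24 weights, against the 11.2 billion in the Stanford network, and runs on a CPU rather than GPUs; it is purely illustrative and bears no relation to the Stanford or Google implementations.

```python
import numpy as np

# Toy two-layer network learning XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights (16 parameters)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights (8 parameters)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):          # the network 'learns' by nudging its weights
    h = sigmoid(X @ W1)        # hidden-layer activations
    out = sigmoid(h @ W2)      # network prediction
    # Backpropagate the prediction error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(out).ravel())   # predictions for the four XOR inputs
```

Scaling this same training loop to billions of parameters is exactly where GPU acceleration pays off: the matrix multiplications dominate the cost and parallelise well.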

Using methods radically different from those employed by Google engineers, the team led by Professor Andrew Ng of the Stanford Artificial Intelligence Lab based its neural network on just 16 servers packed with GPUs. This network could handle 11.2 billion parameters, as opposed to Google’s 1.7 billion.

According to Nvidia, the bigger and more powerful the neural network, the more accurate it is likely to be in tasks such as object recognition, enabling computers to model “more human-like behaviour”.

This might sound like science fiction, but the technology has clear business uses. For example, Nuance, the developer of popular language recognition solutions such as Dragon NaturallySpeaking, has been using GPU-accelerated artificial neural networks to “train” its software products to understand users’ speech by processing terabytes of audio data.

“Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modelling to the masses,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at Nvidia.  “Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.”


Max Smolaks

Max 'Beast from the East' Smolaks covers open source, public sector, startups and technology of the future at TechWeekEurope. If you find him looking lost on the streets of London, feed him coffee and sugar.
