Stanford Researchers Use GPUs To Create The World’s Largest ‘Virtual Brain’

Nvidia and researchers from Stanford University have created the world’s largest artificial neural network – a collection of processors designed to replicate the inner workings of a human brain.

Based on just 16 servers stacked with Nvidia’s high-performance GPUs, the Stanford project can handle about 6.5 times more parameters than the previous record-setting network – a 1,000 server, 16,000 core machine developed by Google in 2012.

The announcement was made at the International Supercomputing Conference (ISC) in Leipzig, Germany. At the same event, Nvidia revealed CUDA 5.5, an update to its parallel programming and computing model that, for the first time ever, features native support for ARM CPUs.

Intelligent machines

A ‘neural network’ is a virtual representation of the connections between billions of neurons in the brain. In most cases it is an adaptive system that changes its structure as it ‘learns’. Such networks are used to study the processes responsible for recognising objects, characters, voices and sounds.

They can help improve machine learning algorithms and enable computers to act without the need for a specific program, moving us closer to the creation of true artificial intelligence.
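The ‘learning’ described above can be sketched in miniature. The following is an illustrative toy example only, nothing like the Stanford system: a single artificial neuron that adjusts the strength of its connections (its weights) until it reproduces the logical AND function, using the classic perceptron learning rule. All names and values are assumptions chosen for the sketch.

```python
# Minimal sketch: one artificial neuron "learning" logical AND
# by repeatedly adjusting its connection weights (perceptron rule).

def train_neuron(data, epochs=20, lr=0.1):
    """Nudge weights toward correct outputs on each training example."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            # Fire (output 1) if the weighted sum of inputs exceeds zero
            output = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
            error = target - output
            # Adapt the connection strengths -- this is the "learning"
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

def predict(w, bias, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0

# The four input/output pairs of logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias = train_neuron(data)
```

After training, the neuron answers 1 only for the input (1, 1). Real networks such as the Stanford one stack billions of these adjustable connections, which is why their size is measured in ‘parameters’.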

Using methods radically different from those employed by Google engineers, the team led by Professor Andrew Ng of the Stanford Artificial Intelligence Lab based its neural network on just 16 servers packed with GPUs. This network was then capable of taking into account 11.2 billion parameters, as opposed to Google’s 1.7 billion.

According to Nvidia, the bigger and more powerful the neural network, the more accurate it is likely to be in tasks such as object recognition, enabling computers to model “more human-like behaviour”.

This might sound like science fiction, but the technology has clear business uses. For example, Nuance, the developer of popular language recognition solutions such as Dragon NaturallySpeaking, has been using GPU-accelerated artificial neural networks to “train” its software products to understand users’ speech by processing terabytes of audio data.

“Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modelling to the masses,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at Nvidia.  “Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.”


Max Smolaks

Max 'Beast from the East' Smolaks covers open source, public sector, startups and technology of the future at TechWeekEurope. If you find him looking lost on the streets of London, feed him coffee and sugar.
