Startup Showcase: Audio Analytic
Audio Analytic CEO Chris Mitchell talks about his firm’s vision for ‘Audio Intelligence’
What is your company and what do you do?
Audio Analytic is a sound recognition software company. We are pioneering an exciting area of AI that we call artificial audio intelligence. Simply put, we teach consumer technology to detect and recognise sounds and scenes in the environment so that it can respond intelligently through a better understanding of context.
We have created embeddable sound-sensing technology that can detect, recognise and react to sounds without depending on vast cloud computing resources, and that works with standard MEMS microphone components.
It detects a range of really important sounds, such as windows being broken, smoke and carbon monoxide alarms, baby cries and dog barks. We are constantly mapping the world of sounds around us so that we can help our technology evolve.
Tell us a bit about your career to date
Since childhood I have been fascinated by sounds and the way they are created. However, my formal journey into identifying sound began with a PhD in music genre classification at Anglia Ruskin University in Cambridge.
This consisted of me teaching machines to understand music at quite a fundamental level – tempo, timbre, tone and so on – and then piecing those constituents back together so the machine could tell whether the music was jazz or dance.
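As a rough illustration of that kind of pipeline, here is a minimal genre-classification sketch in Python. It assumes the librosa and scikit-learn libraries, and the file paths and labels are hypothetical; it extracts tempo, timbre (mean MFCCs) and tonal brightness, then trains a simple classifier. This is a generic example of the approach, not the method developed during the PhD.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(path: str) -> np.ndarray:
    """Summarise a clip by tempo, timbre (mean MFCCs) and tonal brightness."""
    y, sr = librosa.load(path)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    return np.hstack([np.atleast_1d(tempo)[0], mfcc, brightness])

# Hypothetical training data: (file path, genre label) pairs.
clips = [("clips/jazz_01.wav", "jazz"), ("clips/dance_01.wav", "dance")]

X = np.vstack([extract_features(path) for path, _ in clips])
labels = [label for _, label in clips]

# Scale the features and fit a simple RBF support vector classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, labels)
print(model.predict([extract_features("clips/unknown.wav")]))
```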
I went on to develop my own technique for sound analysis using AI, which was applicable to any sound, not just music. That led to winning a Kauffman Fellowship to go to the US to help with the commercialisation of technology.
After studying in the Bay Area for a while, I returned to the UK and set about investing my time in Audio Analytic. Money from a defence industry contract with MBDA in Stevenage, who were looking for new technology, meant I could make the company my full-time job.
Investment from Cambridge Angels, along with regional development money, enabled me to get Audio Analytic fully up and running. Once it was on its feet we started looking at the professional security arena. This caught the attention of companies in the smart home marketplace which were creating domestic security applications. At the beginning of 2017 we received further investment from local VC firms, which has enabled us to step up recruitment and investment in the business.
Our current generation of products is focussed on meeting demand in the smart home, but other markets are already opening up, such as hearables, automotive and mobile phones.
What services or products do you offer and how will businesses and/or consumers benefit?
Our sound recognition software gives products the human-like ability to recognise and react to the sounds around us, without the need for a permanent connection to the cloud. Making consumer technology more intelligent, and not just connected, will open up lots of possibilities. That intelligence requires an understanding of context.
We are already working with some of the world’s biggest consumer technology brands to enable products and personal assistants to be more helpful to people, increasing consumer safety, security, wellness and enjoyment through more intelligent and responsive products.
For example, I’ve got a one-year-old girl, and if she cries in the night I have a device that automatically turns on the hallway light, at a low level, so I don’t trip when I go to see her. This technology is about devices in the smart home that make your life a little bit simpler. It’s like your house is taking care of you a bit more.
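As a rough sketch of that kind of automation, the Python snippet below reacts to a detected baby-cry event by switching a hallway light on at low brightness during the night. The event name, the night-time window and the set_hallway_light() helper are all hypothetical placeholders, not Audio Analytic's product API.

```python
from datetime import datetime

def set_hallway_light(brightness: float) -> None:
    # Stand-in for whatever smart-lighting integration the home hub exposes.
    print(f"Hallway light on at {int(brightness * 100)}% brightness")

def on_sound_event(event: str) -> None:
    """Hypothetical handler wired to a sound-recognition callback."""
    if event == "baby_cry":
        hour = datetime.now().hour
        if hour >= 22 or hour < 6:              # only act during the night
            set_hallway_light(brightness=0.2)   # low level, enough to see by

# Example: simulate the detector firing at night.
on_sound_event("baby_cry")
```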
Name one thing your company does that no one else can do
When we started, the biggest issue we had was that the data didn’t exist. You can’t train the technology by having it listen to YouTube videos, as it needs to understand the ‘real’ sound. So we set out to map the world of sounds around consumers.
This meant building a data platform that enables us to tag and interrogate our data, as well as going out and recording sounds that we sometimes take for granted. In practical terms, this meant smashing hundreds of different types of windows, recording countless babies crying, setting off every conceivable smoke alarm and making lots of dogs bark.
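For illustration only, here is a minimal Python sketch of what tagging and interrogating labelled recordings might look like. The record fields, labels and tags are hypothetical, and the real platform is far richer than this.

```python
from dataclasses import dataclass, field

@dataclass
class Recording:
    """A labelled clip plus free-form tags describing how it was captured."""
    clip_id: str
    label: str                               # e.g. "glass_break", "smoke_alarm"
    tags: set = field(default_factory=set)   # capture conditions, room, device, etc.

library = [
    Recording("r001", "glass_break", {"double_glazed", "indoors"}),
    Recording("r002", "smoke_alarm", {"ionisation", "kitchen"}),
    Recording("r003", "glass_break", {"single_pane", "outdoors"}),
]

def query(label: str, required_tags: set) -> list:
    """Return clips with a given label that carry all of the requested tags."""
    return [r for r in library if r.label == label and required_tags <= r.tags]

print([r.clip_id for r in query("glass_break", {"indoors"})])  # ['r001']
```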
Sounds are complicated things to teach computers to recognise. Compare them to voice and speech: words and combinations of words are limited by the types of sound the human mouth can produce, and speech follows a structure that makes it possible to map. As computers were originally designed to process repeatable tasks, teaching a machine how to recognise speech benefits from these pre-defined rules and prior knowledge.
Sounds, on the other hand, can be much more diverse, unbounded and unstructured than speech and music. We had to create our own language to describe sounds so that they could be analysed and understood.
We have been doing this for seven years, so we have unrivalled expertise in understanding audio events, all of which are labelled, organised and analysed in a way that has never been done before.
The individual sound profiles that the machine has learned are then made available to our software platform, called ai3, which our customers embed into their consumer products. Because it was going to be embedded into a wide range of device types, we designed the software to require very little memory and processing power (MIPS). What’s more, we designed the software to run on the device itself, rather than needing to send audio up to the cloud for analysis, which is better from a privacy perspective.
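To make the on-device idea concrete, here is a generic Python sketch of frame-by-frame sound recognition running locally, where only the resulting label (never the audio) would leave the device. Every name in it – the read_frame() stub, the TinySoundModel class, the label set – is a placeholder and not part of ai3, which is proprietary.

```python
import numpy as np

LABELS = ["background", "glass_break", "smoke_alarm", "baby_cry", "dog_bark"]

class TinySoundModel:
    """Stand-in for a compact embedded classifier (small memory/MIPS budget)."""
    def __init__(self, n_features: int, rng: np.random.Generator):
        self.weights = rng.standard_normal((n_features, len(LABELS)))

    def predict(self, features: np.ndarray) -> str:
        return LABELS[int(np.argmax(features @ self.weights))]

def read_frame(rng: np.random.Generator, n_samples: int = 1024) -> np.ndarray:
    # Placeholder for a MEMS-microphone read; here it is just random noise.
    return rng.standard_normal(n_samples)

def features_from_frame(frame: np.ndarray, n_bands: int = 32) -> np.ndarray:
    # Crude log-magnitude spectrum as a stand-in for real acoustic features.
    spectrum = np.abs(np.fft.rfft(frame))[:n_bands]
    return np.log1p(spectrum)

rng = np.random.default_rng(0)
model = TinySoundModel(n_features=32, rng=rng)

for _ in range(3):  # in a real product this loop runs continuously on-device
    label = model.predict(features_from_frame(read_frame(rng)))
    if label != "background":
        print(f"Detected sound event: {label}")  # trigger a local action, no cloud round-trip
```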
Where is the company based and why?
Our headquarters are in Cambridge, and we have offices in Palo Alto because that is where a lot of our customers are. We are constantly expanding and developing and searching worldwide for specialist engineers and AI experts. Cambridge is a great place to find them.
The one thing that holds the company together is our love for sounds. It was the culture of sound that played a part in selecting the location for our office. There’s nothing richer than the sound of an urban environment and our new offices offer just that. We’re a Cambridge company too. We want to be true to our identity.
How big is your company and what are your technology demands?
We employ 40 people and took a new office space in early 2017 that will support our growth trajectory for the next couple of years.
Where do you see your company in five years?
Very quickly we’ve seen demand from the market grow from the smart home to other device types where sounds play an important part. So five years from now I see our software in all manner of devices, from cars to headphones and mobile phones, and by then the smart home will be very different from what it is today, with so many intelligent and connected products helping us.
As well as taking our technology into new markets, we’ll continue to map the world of sounds and teach our technology to recognise them, so within each area you’ll see even more intelligent products come to market that take advantage of this human-like sense of hearing, alongside other sensing capabilities.
Which tech company do you admire and why?
Cambridge is a really dynamic high-tech environment with many really exciting start-ups. There are also notable large local companies such as ARM, and others that no longer exist in name but whose technology lives on, like CSR. No matter what stage these companies are at, they’ve all developed a great technology or a better way of doing things, and have – or will – create fundamental new markets and industries for the UK. I’m influenced and inspired a lot by the local tech community, rather than any one company.