
Google Launches Betas Of New Machine Learning APIs

Looking to sell customers better tools for extracting value from large sets of unstructured data, Google has released beta versions of two new machine learning APIs for its Google Cloud Platform.

The tools, Cloud Natural Language API and Cloud Speech API, are designed for digging into gargantuan text and audio files and pulling out information on specified topics such as people, locations, dates and events.

This means organisations can carry out large-scale analyses of text and audio to produce fine-grained information about customers or users.
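For illustration only, here is a minimal sketch of how the entity extraction Google describes might be called. The endpoint and request shape follow Google's public v1 REST documentation rather than the original beta; the API key and sample text are placeholders.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from the Google Cloud console
ENDPOINT = f"https://language.googleapis.com/v1/documents:analyzeEntities?key={API_KEY}"

# Ask the API to pull people, locations, events and other entities out of a snippet of text
body = {
    "document": {
        "type": "PLAIN_TEXT",
        "content": "Google opened the Cloud Natural Language API beta to Cloud Platform customers in July 2016.",
    },
    "encodingType": "UTF8",
}

response = requests.post(ENDPOINT, json=body)
response.raise_for_status()

# Each entity carries a type (PERSON, LOCATION, EVENT, ...) and a salience score
for entity in response.json().get("entities", []):
    print(entity["type"], entity["name"], round(entity["salience"], 3))
```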

“You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call centre or a messaging app,” explained Google on the Cloud Natural Language API product page.

“You can analyse text uploaded in your request or integrate with your document storage on Google Cloud Storage.”
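The sentiment use case Google highlights looks much the same in practice. The sketch below, again based on the public v1 REST reference with a placeholder key and invented feedback text, asks the analyzeSentiment method to score a short customer comment.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = f"https://language.googleapis.com/v1/documents:analyzeSentiment?key={API_KEY}"

# Invented customer feedback used purely for illustration
body = {
    "document": {
        "type": "PLAIN_TEXT",
        "content": "The new checkout flow is great, but delivery slots sell out far too quickly.",
    },
    "encodingType": "UTF8",
}

result = requests.post(ENDPOINT, json=body).json()

# documentSentiment.score runs from -1.0 (negative) to +1.0 (positive);
# magnitude reflects the overall strength of emotion in the text
sentiment = result["documentSentiment"]
print(f"score={sentiment['score']:.2f} magnitude={sentiment['magnitude']:.2f}")
```

To analyse a document already sitting in Google Cloud Storage, the inline content field can be swapped for a gcsContentUri reference, as the product page notes.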

British online supermarket and technology company Ocado said it is already using the Natural Language API and has found it a viable replacement for its own machine learning language analyser.

“NL API has shown it can accelerate our offering in the natural language understanding area and is a viable alternative to a custom model we had built for our initial use case,” said Ocado’s head of data Dan Nelson.

Speech

Google Cloud Speech API lets developers convert audio to text by applying neural network models in an API. Google said that the API recognises over 80 languages and variants.

“You can transcribe the text of users dictating to an application’s microphone, enable command-and-control through voice, or transcribe audio files, among many other use cases,” said Google.

“Enterprises and developers now have access to speech-to-text conversion in over 80 languages, for both apps and IoT devices. Cloud Speech API uses the voice recognition technology that has been powering your favorite products such as Google Search and Google Now.”
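A synchronous transcription request is similarly compact. The sketch below is an assumption-heavy illustration using the Speech API's current v1 REST endpoint; the audio file, encoding, sample rate and language code are all placeholders.

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = f"https://speech.googleapis.com/v1/speech:recognize?key={API_KEY}"

# Read a short local recording and base64-encode it for the JSON payload
with open("command.wav", "rb") as f:  # hypothetical 16 kHz, LINEAR16 clip
    audio_content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "en-GB",  # one of the 80+ supported languages and variants
    },
    "audio": {"content": audio_content},
}

result = requests.post(ENDPOINT, json=body).json()

# Each result holds one or more alternatives ranked by confidence
for res in result.get("results", []):
    top = res["alternatives"][0]
    print(f"{top['transcript']}  (confidence {top.get('confidence', 0):.2f})")
```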

More than 5,000 companies signed up for Google’s Speech API alpha, including video chat app HyperConnect, which uses the Cloud Speech and Translate APIs to transcribe and translate conversations between people who speak different languages.

The Speech API also supports word hints, meaning context-specific custom words or phrases can be added to API calls to improve recognition accuracy. An example might be a smart TV listening for ‘rewind’ and ‘fast-forward’.
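Word hints are passed as speech contexts inside the request configuration. The fragment below, which reuses the config object from the previous sketch, is a hypothetical example of biasing the recogniser towards the smart TV commands mentioned above.

```python
# Extends the "config" object from the previous Speech API sketch.
# speechContexts.phrases biases the recogniser towards expected vocabulary,
# for example a smart TV listening for playback commands.
config = {
    "encoding": "LINEAR16",
    "sampleRateHertz": 16000,
    "languageCode": "en-GB",
    "speechContexts": [
        {"phrases": ["rewind", "fast-forward", "pause", "resume"]}
    ],
}
```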


Ben Sullivan

Ben covers web and technology giants such as Google, Amazon, and Microsoft and their impact on the cloud computing industry, whilst also writing about data centre players and their increasing importance in Europe. He also covers future technologies such as drones, aerospace, science, and the effect of technology on the environment.
