
Google Launches Betas Of New Machine Learning APIs

Looking to sell customers better tools for extracting value from large sets of unstructured data, Google has released beta versions of two new machine learning APIs for its Google Cloud Platform.

The tools, Cloud Natural Language API and Cloud Speech API, are designed for digging into gargantuan text and audio files and pulling out information on specified topics such as people, locations, dates and events.

This means organisations can carry out large-scale analyses of text and audio to produce fine-grained information on customers or users.

“You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call centre or a messaging app,” explained Google on the Cloud Natural Language API product page.

“You can analyse text uploaded in your request or integrate with your document storage on Google Cloud Storage.”
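As a sketch of what such a call might look like, the snippet below builds the JSON request body for a sentiment-analysis call. The field names follow the current v1 REST method `documents:analyzeSentiment`; the beta endpoint at the time of this article may have differed, so treat the exact names as assumptions.

```python
import json

def build_sentiment_request(text: str) -> dict:
    """Build the JSON body for a Cloud Natural Language analyzeSentiment call.

    Field names follow the v1 REST API; the beta described in this
    article may have used a slightly different version path.
    """
    return {
        "document": {
            "type": "PLAIN_TEXT",   # inline text rather than a Cloud Storage URI
            "content": text,
        },
        "encodingType": "UTF8",     # how character offsets in the response are computed
    }

body = build_sentiment_request("The delivery was fast and the app is great.")
print(json.dumps(body, indent=2))
```

An actual call would POST this body to the Natural Language endpoint with an API key or OAuth token; to analyse a document already stored on Google Cloud Storage, the inline `content` field would be replaced by a `gcsContentUri` reference.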

British online supermarket and tech company Ocado said it is already using the Natural Language API, and that it is a viable alternative to its own machine learning language analyser.

“NL API has shown it can accelerate our offering in the natural language understanding area and is a viable alternative to a custom model we had built for our initial use case,” said Ocado’s head of data Dan Nelson.

Speech

Google Cloud Speech API lets developers convert audio to text by applying neural network models in an API. Google said that the API recognises over 80 languages and variants.

“You can transcribe the text of users dictating to an application’s microphone, enable command-and-control through voice, or transcribe audio files, among many other use cases,” said Google.

“Enterprises and developers now have access to speech-to-text conversion in over 80 languages, for both apps and IoT devices. Cloud Speech API uses the voice recognition technology that has been powering your favorite products such as Google Search and Google Now.”
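A recognition request pairs a configuration (encoding, sample rate, language) with the audio itself. The sketch below builds such a request body; the field names follow the current v1 REST `speech:recognize` method, which postdates the alpha described here, so they are an assumption rather than the beta-era API.

```python
import base64
import json

def build_recognize_request(audio_bytes: bytes, language_code: str = "en-US") -> dict:
    """Build the JSON body for a Cloud Speech recognize call (v1 REST field names)."""
    return {
        "config": {
            "encoding": "LINEAR16",        # raw 16-bit PCM audio
            "sampleRateHertz": 16000,
            "languageCode": language_code, # one of the 80+ supported language codes
        },
        "audio": {
            # Short clips are sent inline, base64-encoded; longer audio
            # is referenced by a Google Cloud Storage URI instead.
            "content": base64.b64encode(audio_bytes).decode("ascii"),
        },
    }

body = build_recognize_request(b"\x00\x00" * 8000)  # half a second of silence
print(json.dumps(body["config"], indent=2))
```

Switching the transcription language is then just a matter of passing a different code, e.g. `build_recognize_request(audio, "fr-FR")`.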

More than 5,000 companies signed up for Google’s Speech API alpha, including video chat app HyperConnect, which uses the Cloud Speech and Translate APIs to transcribe and translate conversations between people who speak different languages.

The Speech API also supports word hints, meaning context-specific custom words or phrases can be added to API calls to improve recognition. An example might be a smart TV listening for ‘rewind’ and ‘fast-forward’.
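In the current v1 REST API, word hints are passed as “speech contexts”: lists of phrases the recogniser should favour. The helper below, a hypothetical illustration using those v1 field names, shows how the smart-TV hints would be attached to a recognition config.

```python
def add_word_hints(config: dict, phrases: list[str]) -> dict:
    """Return a copy of a recognition config with word hints attached.

    The "speechContexts" field name follows the v1 REST API; the beta
    described in this article may have named it differently.
    """
    hinted = dict(config)  # leave the original config untouched
    hinted["speechContexts"] = [{"phrases": phrases}]
    return hinted

config = {"encoding": "LINEAR16", "sampleRateHertz": 16000, "languageCode": "en-US"}
tv_config = add_word_hints(config, ["rewind", "fast-forward"])
print(tv_config["speechContexts"])
```

The hints bias recognition towards the listed phrases rather than restricting it to them, which suits a command-and-control use case where a handful of commands dominate the vocabulary.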


Ben Sullivan

Ben covers web and technology giants such as Google, Amazon, and Microsoft and their impact on the cloud computing industry, whilst also writing about data centre players and their increasing importance in Europe. He also covers future technologies such as drones, aerospace, science, and the effect of technology on the environment.
