
Google Launches Betas Of New Machine Learning APIs

Looking to sell customers better tools for extracting value from large sets of unstructured data, Google has released beta versions of two new machine learning APIs for its Google Cloud Platform.

The tools, Cloud Natural Language API and Cloud Speech API, are designed for digging into gargantuan text and audio files and pulling out information on specified topics such as people, locations, dates and events.

This means organisations can run large-scale analyses of text and audio to produce fine-grained information on customers or users.

“You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call centre or a messaging app,” explained Google on the Cloud Natural Language API product page.

“You can analyse text uploaded in your request or integrate with your document storage on Google Cloud Storage.”
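As a rough sketch of what "text uploaded in your request" looks like in practice, the snippet below builds a JSON request body for a sentiment-analysis call. The field names (`document`, `type`, `content`, `encodingType`) follow the general shape of Google's Natural Language REST API, but treat them as illustrative and check the current API reference before relying on them:

```python
import json

def build_sentiment_request(text):
    """Build a JSON body for a sentiment-analysis request.

    Field names are illustrative of the Natural Language REST API's
    request shape; verify against the current API reference.
    """
    return {
        "document": {
            "type": "PLAIN_TEXT",   # analysing raw text rather than HTML
            "content": text,        # the text uploaded with the request
        },
        "encodingType": "UTF8",     # how character offsets are computed
    }

body = build_sentiment_request("The delivery was fast and the produce was fresh.")
print(json.dumps(body, indent=2))
```

The same body shape would be POSTed to the API endpoint with an authenticated HTTP client; for text stored in Google Cloud Storage, a storage URI would replace the inline `content` field.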

British online supermarket and tech company Ocado said it is already using the Natural Language API and has found it a viable replacement for its own machine-learning language analyser.

“NL API has shown it can accelerate our offering in the natural language understanding area and is a viable alternative to a custom model we had built for our initial use case,” said Ocado’s head of data Dan Nelson.

Speech

Google Cloud Speech API lets developers convert audio to text by applying neural network models in an API. Google said that the API recognises over 80 languages and variants.

“You can transcribe the text of users dictating to an application’s microphone, enable command-and-control through voice, or transcribe audio files, among many other use cases,” said Google.

“Enterprises and developers now have access to speech-to-text conversion in over 80 languages, for both apps and IoT devices. Cloud Speech API uses the voice recognition technology that has been powering your favorite products such as Google Search and Google Now.”
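To make the transcription workflow concrete, here is a sketch of a speech-to-text request body. The `config`/`audio` split and the base64-encoded audio payload mirror the general shape of Google's Speech REST API, but the exact field names are assumptions to be checked against the current reference:

```python
import base64
import json

def build_speech_request(audio_bytes, language="en-US"):
    """Build a JSON body for a speech-to-text recognition request.

    The "config"/"audio" structure is illustrative of the Speech REST
    API's request shape; raw audio must be base64-encoded to travel
    inside a JSON payload.
    """
    return {
        "config": {
            "encoding": "LINEAR16",     # uncompressed 16-bit PCM
            "sampleRateHertz": 16000,   # must match how the audio was recorded
            "languageCode": language,   # one of the 80+ supported codes
        },
        "audio": {
            "content": base64.b64encode(audio_bytes).decode("ascii"),
        },
    }

req = build_speech_request(b"\x00\x01" * 8000, language="en-GB")
print(json.dumps(req["config"], indent=2))
```

The API's response would carry back transcript alternatives with confidence scores; larger files would typically reference audio in Google Cloud Storage rather than embedding it inline.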

More than 5,000 companies signed up for Google’s Speech API alpha, including video chat app HyperConnect that uses Cloud Speech and Translate API to transcribe and translate conversations between people who speak different languages.

The Speech API also supports word hints, meaning custom words or phrases relevant to a given context can be added to API calls to improve recognition accuracy. An example might be a smart TV listening for ‘rewind’ and ‘fast-forward’.
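In request terms, word hints amount to attaching a list of expected phrases to the recognition config. The `speechContexts`/`phrases` field names below match the general shape Google's Speech API uses for hints, though they should be verified against the current reference; the surrounding request structure is illustrative:

```python
def add_word_hints(request_body, phrases):
    """Attach context phrases ("word hints") to a recognition request.

    The "speechContexts" field name is illustrative of how the Speech
    API accepts hint phrases; confirm it against the API reference.
    """
    request_body.setdefault("config", {})["speechContexts"] = [
        {"phrases": list(phrases)}
    ]
    return request_body

# A smart TV might bias recognition toward its remote-control vocabulary:
tv_request = add_word_hints(
    {"config": {"languageCode": "en-US"}},
    ["rewind", "fast-forward", "pause"],
)
print(tv_request["config"]["speechContexts"])
```

Because the hints travel with each call, an application can swap in a different phrase list per screen or per device state, biasing recognition toward whatever vocabulary is currently plausible.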


Ben Sullivan

