The machine learning APIs will give Google Cloud customers a way to extract value from large text and audio files.
Google has released open beta versions of two new machine learning APIs to enable its cloud platform customers to perform data analytics on large text and audio files.
The new Cloud Natural Language API and Cloud Speech API are designed to give enterprises a better way to extract business value from large sets of unstructured data.
According to Google, enterprises can use the Natural Language API to extract information on people, events, locations, dates and other data from text documents including news articles and blog posts.
The text analysis capabilities enabled by the API will allow organizations to perform sentiment analysis on large blocks of text and to gather actionable information on products and customers from social media chatter, email and chat.
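As a rough illustration of the kind of call involved, the sketch below builds a request body for the Natural Language API's `documents:analyzeSentiment` REST method. The field names follow Google's published v1 API surface; the review text is a made-up placeholder, and the actual HTTP call (which needs an API key or OAuth credentials) is shown only as a comment.

```python
import json

# Hypothetical product review to analyze; any UTF-8 string works.
review = "The new headphones sound great, but the battery life is disappointing."

# Request body for the documents:analyzeSentiment method (v1 REST surface).
payload = {
    "document": {
        "type": "PLAIN_TEXT",   # could also be "HTML"
        "content": review,
    },
    "encodingType": "UTF8",
}

# With credentials, this would be POSTed to:
#   https://language.googleapis.com/v1/documents:analyzeSentiment?key=API_KEY
# The response includes a documentSentiment object with a `score`
# (negative to positive) and a `magnitude` (overall emotional strength).
print(json.dumps(payload, indent=2))
```

The same `document` object can be sent to the `documents:analyzeEntities` method to pull out the people, locations and events Google describes.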
"The new API is optimized to meet the scale and performance needs of developers and enterprises in a broad range of industries," a trio of Google product managers wrote on the company's Cloud Platform blog this week. "For example, digital marketers can analyze online product reviews or service centers can determine sentiment from transcribed customer calls."
The new Cloud Speech API, meanwhile, is designed to give developers a way to generate text from audio clips. According to Google, developers can use the API to transcribe users' voice dictation in applications or to enable voice-based command and control.
The API supports speech-to-text conversion for more than 80 languages and is based on the same voice recognition technology as that used in Google Now and Google Search, according to the product managers.
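For a sense of how the speech-to-text conversion is invoked, the sketch below assembles a request body for the Speech API's `speech:recognize` REST method. The field names follow Google's current published v1 surface (the beta surface at launch differed slightly), the audio bytes are a silent placeholder, and the network call itself is only indicated in a comment.

```python
import base64
import json

# Placeholder audio: 0.1 s of 16-bit mono silence at 16 kHz. In practice
# this would be read from a WAV file or captured from a microphone.
audio_bytes = b"\x00\x00" * 1600

# Request body for the speech:recognize method. languageCode accepts
# BCP-47 codes for the 80+ supported languages, e.g. "fr-FR" or "ko-KR".
payload = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "en-US",
    },
    "audio": {
        # Audio content travels base64-encoded inside the JSON body.
        "content": base64.b64encode(audio_bytes).decode("ascii"),
    },
}

# With credentials, this would be POSTed to:
#   https://speech.googleapis.com/v1/speech:recognize
# The response carries results[].alternatives[].transcript strings.
print(json.dumps(payload)[:60])
```

Swapping the `languageCode` is all a developer would change to transcribe a different language, which is the capability HyperConnect is reportedly testing for cross-language chat.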
According to Google, more than 5,000 organizations signed up to participate in early tests of the Speech API, including HyperConnect, a video chat application maker that is testing the viability of using the API to translate and transcribe conversations between people speaking different languages.
Another company using the API is VoiceBase, which is testing the feasibility of predicting customer support outcomes from call recordings, the Google product managers noted.
In addition to introducing beta versions of the new APIs, Google this week also announced a new cloud-computing region in Oregon for West Coast customers. The center will offer Google Compute Engine, Cloud Storage and Container Engine services for customers in cities like San Francisco, Los Angeles, Portland, Seattle and Vancouver.
Enterprises in these cities should see application latency reduced by 30 to 80 percent as a result of the new West Coast region, Google said.
Google's announcement this week builds on the company's efforts to bring more machine learning capabilities to its cloud-computing customers. Earlier this year, at its GCP Next conference, the company outlined a vision under which it plans to combine big data analytics technologies like BigQuery with machine learning capabilities to help cloud customers extract actionable information from large data sets.
At that time, the company had described the effort as combining more than 15 years of Google research and development in areas such as highly distributed computing, data management and machine learning.