Developers building intelligent applications now have more Microsoft Cognitive Services to work with, the software giant announced at its annual Build developer conference in Seattle as part of its push to bring artificial intelligence (AI) into mainstream computing.
Microsoft Cognitive Services is a collection of AI-infused APIs (application programming interfaces) that developers can use to, for example, recognize faces or pick out speech in noisy environments. Microsoft says that since the collection debuted at Build 2015 two years ago, more than half a million developers in over 60 countries have used the APIs.
Now those developers can access four new Cognitive Services, bringing the total to 29. Currently available in public preview, the new additions are Custom Vision Service, Video Indexer, Custom Decision Service and Bing Custom Search.
Built on Microsoft’s neural network-based machine learning technology, Custom Vision Service enables developers to create applications that recognize specific objects, animals and other content in images. Users can train the web service to seek out particular objects, setting the stage for systems that automatically sort pictures or provide product-identification services to retailers, according to the company.
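To give a sense of how a trained model is consumed, here is a minimal Python sketch that posts an image to a Custom Vision prediction endpoint over REST. The URL shape, project ID and prediction key are placeholders of the kind issued by the service’s portal, not values confirmed in Microsoft’s announcement.

```python
import requests

# Placeholder values; the real endpoint, project ID and prediction key come
# from the Custom Vision portal once a project has been trained there.
PREDICTION_URL = ("https://southcentralus.api.cognitive.microsoft.com"
                  "/customvision/v1.0/Prediction/<project-id>/image")
PREDICTION_KEY = "<your-prediction-key>"

def classify_image(path):
    """Send a local image to the trained model and return its predictions."""
    with open(path, "rb") as f:
        response = requests.post(
            PREDICTION_URL,
            headers={
                "Prediction-Key": PREDICTION_KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    response.raise_for_status()
    # The response lists the project's tags with a probability for each,
    # e.g. that a photo contains a particular product or animal.
    return response.json()

print(classify_image("photo.jpg"))
```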
Meanwhile, Video Indexer can pave the way for video services with intelligent search: it detects faces, objects and sentiment, and indexes those insights alongside spoken audio that has been transcribed and translated. Custom Decision Service delivers personalized content using a reinforcement learning approach that adapts over time to keep user experiences engaging. Finally, the code-free Bing Custom Search service lets developers deliver search that is scoped and tailored to the sites and topics they choose.
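As a rough illustration of the last of these, the sketch below queries a Bing Custom Search instance over REST. The subscription key and custom configuration ID are placeholders, and the v7.0 endpoint and response fields are assumptions based on the public Bing Search APIs rather than details from the announcement.

```python
import requests

# Placeholder subscription key and custom configuration ID of the kind
# created in the Bing Custom Search portal.
SEARCH_URL = "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search"
SUBSCRIPTION_KEY = "<your-subscription-key>"
CUSTOM_CONFIG_ID = "<your-custom-config-id>"

def custom_search(query):
    """Run a query against the tailored slice of the web defined in the portal."""
    response = requests.get(
        SEARCH_URL,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        params={"q": query, "customconfig": CUSTOM_CONFIG_ID},
    )
    response.raise_for_status()
    return response.json()

# Print the title and URL of each web page the tailored index returns.
results = custom_search("surface book battery life")
for page in results.get("webPages", {}).get("value", []):
    print(page["name"], page["url"])
```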
“We also launched Microsoft’s Cognitive Services Labs, which allow developers to take part in the research community’s quest to better understand the future of AI,” said Harry Shum, executive vice president of Microsoft AI and Research, in a May 10 announcement. “One of the first AI services available via our Cognitive Services Labs is a gesture API that creates more intuitive and natural experiences by allowing users to control and interact through gestures,” he noted.
That gesture API is part of Project Prague, a software development kit (SDK) that is available to select developers who are part of a private preview. To detect hand gestures, Project Prague requires an Intel RealSense SR300 camera.
“The SDK enables you to define your desired hand poses using simple constraints built with plain language. Once a gesture is defined and registered in your code, you will get a notification when your user does the gesture, and can select an action to assign in response,” explains the experimental SDK’s homepage.
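Project Prague’s actual SDK is limited to the private preview, so the following Python sketch only illustrates the define-register-notify pattern the quote describes; every class and method name in it is hypothetical and not part of the real SDK.

```python
# Hypothetical sketch of the pose-definition / notification pattern described
# above; none of these names exist in the real Project Prague SDK.
class HandPose:
    def __init__(self, name, constraints):
        self.name = name
        # Plain-language constraints, e.g. "thumb tip touching index tip".
        self.constraints = constraints

class GestureService:
    def __init__(self):
        self._handlers = {}

    def register(self, pose, callback):
        # In the real SDK, registration would wire the pose up to the
        # RealSense camera pipeline; here we just record the callback.
        self._handlers[pose.name] = callback

    def simulate_detection(self, pose_name):
        # Stand-in for the camera detecting the gesture and notifying the app.
        self._handlers[pose_name]()

pinch = HandPose("pinch", ["thumb tip touching index tip"])
service = GestureService()
service.register(pinch, lambda: print("pinch detected: muting audio"))
service.simulate_detection("pinch")
```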
Microsoft has also upgraded its Language Understanding Intelligent Service (LUIS), which developers can use to create applications that accept natural-language input. In addition to new developer tools, LUIS now offers more accurate speech recognition with Microsoft’s Bot Framework and supports an increased number of entities and intents.
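As an illustration, a published LUIS app can be queried over REST roughly as in the sketch below. The app ID, subscription key and region are placeholders, and the v2.0 endpoint and response fields are assumptions about the service’s public API rather than details from Microsoft’s announcement.

```python
import requests

# Placeholder app ID, key and region; a LUIS app with its intents and
# entities is defined and published separately at luis.ai before querying.
LUIS_URL = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"
LUIS_KEY = "<your-subscription-key>"

def understand(utterance):
    """Send a natural-language utterance to LUIS and return the parsed result."""
    response = requests.get(
        LUIS_URL,
        params={"subscription-key": LUIS_KEY, "q": utterance},
    )
    response.raise_for_status()
    return response.json()

result = understand("book me a flight to Seattle next Tuesday")
print(result.get("topScoringIntent"))  # e.g. a "BookFlight" intent with a confidence score
print(result.get("entities"))          # e.g. the destination and date picked out of the sentence
```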