Microsoft Simplifies Embedding Image Recognition Into Mobile Apps

Developers can now use Microsoft's Custom Vision Service to quickly add image recognition capabilities to iOS and Android apps.


Key to stoking demand for Microsoft's intelligent services is making it easier for developers to tap into the software maker's growing catalog of cloud-based artificial intelligence technologies. Accordingly, the company has added a code-free way of embedding Custom Vision Service models into applications.

Part of the Azure Cognitive Services suite, Custom Vision Service is a tool for training, deploying and optimizing image classifiers, enabling computer vision capabilities that can identify objects in applications. In September 2017, Microsoft added mobile model support, allowing app developers to add real-time image classification functionality to iOS apps using Apple's Core ML format. Now, Android developers can get in on the act using Google's TensorFlow format.

"Once you have created and trained your custom vision model through the service, it's a matter of a few clicks to get your model exported from the service. This allows developers a quick way to take their custom model with them to any environment whether their scenario requires that the model run on-premises, in the cloud, or on mobile and edge devices," blogged Joseph Sirosh, corporate vice president of Artificial Intelligence and Research at Microsoft. "This provides the most flexible and easy way for developers to export and embed custom vision models in minutes with 'zero' coding."
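As a rough illustration of what an app does with an exported classifier once it is embedded, the sketch below mimics the final inference step in plain NumPy. The label list and raw scores are hypothetical stand-ins, not the service's actual export format, which ships a trained model file alongside its labels.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over raw class scores.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

def top_prediction(logits, labels):
    # Pair each label with its probability and return the best match.
    probs = softmax(np.asarray(logits, dtype=np.float64))
    best = int(np.argmax(probs))
    return labels[best], float(probs[best])

# Hypothetical scores an exported model might emit for one photo.
labels = ["cat", "dog", "bird"]
label, confidence = top_prediction([2.0, 0.5, -1.0], labels)
print(label, round(confidence, 2))  # → cat 0.79
```

On a device, the scores would come from running the exported Core ML or TensorFlow model on a camera frame; the thresholding and label lookup shown here are the same either way.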

In addition, developers can use Custom Vision Service to create capable image classifiers from small-scale datasets, and to create compact models that can run offline on smartphones and other mobile devices, claimed Sirosh. Microsoft is also working on expanding the service's mix of supported devices and export formats, he said.

Microsoft is no stranger to apps that can "see" the world around them.

In July 2017, the company released Seeing AI, an iOS app for the visually impaired that uses an iPhone's camera to describe a user's surroundings, including the people, objects and printed text that appear in signs and menus. Recently, Microsoft announced that it had come full circle, so to speak, having created a bot that can turn descriptive text into drawings.

Microsoft's cloud-computing rivals are working on vision-capable machines as well.

Google open-sourced more of its machine learning computer vision technologies in June 2017, making them available to developers via its TensorFlow Object Detection API. The API helps developers and researchers create systems that automatically detect and identify multiple objects contained in a single image. In August 2017, Box announced plans to integrate Google Cloud Vision into its online file storage and collaboration platform, enabling the automatic categorization of images uploaded to the service.
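Detection APIs of this kind typically return, per image, parallel arrays of bounding boxes, class labels and confidence scores, and a common client-side step is filtering that output by a score threshold. The sketch below does this with NumPy on made-up values; the array shapes follow typical detection-API conventions and are not a verbatim response from Google's API.

```python
import numpy as np

def filter_detections(boxes, scores, classes, threshold=0.5):
    # Keep only detections whose confidence clears the threshold.
    keep = np.asarray(scores) >= threshold
    return [
        {"box": list(box), "class": cls, "score": float(score)}
        for box, score, cls in zip(np.asarray(boxes)[keep],
                                   np.asarray(scores)[keep],
                                   np.asarray(classes)[keep])
    ]

# Hypothetical output for one image: two confident hits, one weak one.
boxes = [[0.1, 0.1, 0.4, 0.5], [0.5, 0.2, 0.9, 0.8], [0.0, 0.0, 0.1, 0.1]]
scores = [0.92, 0.76, 0.12]
classes = ["person", "bicycle", "kite"]
detections = filter_detections(boxes, scores, classes)
print(len(detections))  # → 2
```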

Meanwhile, Amazon is using its computer vision technologies to eliminate checkout lines at brick-and-mortar stores.

On Jan. 22, the e-commerce and cloud-computing giant officially opened Amazon Go, a cashier-less grocery store, a year later than originally planned. Located at 2131 7th Ave. in Seattle, the store uses a mobile app, sensors and cameras to track the items that shoppers place into their bags, automatically charging purchases to their Amazon accounts.

Pedro Hernandez

Pedro Hernandez is a contributor to eWEEK and the IT Business Edge Network, the network for technology professionals. Previously, he served as a managing editor for the Internet.com network of...