Nearly two months after the release candidate of Microsoft’s open-source deep learning toolkit made the rounds, the company this week announced the general availability of Cognitive Toolkit version 2.0 under an open-source license.
Formerly known as the Computational Network Toolkit (CNTK), this artificial intelligence (AI) toolkit includes a deep learning system based on Microsoft’s research on image and speech recognition. The software can also be used to improve search relevance, using conventional CPUs or GPUs (graphics processing units) from Nvidia.
According to a June 1 announcement, Microsoft Cognitive Toolkit 2.0 is suitable for production-grade AI workloads. New in this release is support for the Keras neural network library in preview. Designed for rapid prototyping, the Keras API (application programming interface) takes a user-centric approach to developing AI-enabled applications, helping users with little to no machine learning experience get results.
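To illustrate how the Keras integration is typically wired up: with the standalone Keras library, the compute backend is selected before Keras is first imported, either in the `keras.json` config file or via the `KERAS_BACKEND` environment variable. A minimal sketch, assuming the `keras` and `cntk` Python packages are installed (the model lines are shown as illustrative comments, since they require those packages at runtime):

```python
# Sketch: point standalone Keras at the CNTK backend. This must happen
# before `import keras`, which reads the variable once at import time.
import os
os.environ["KERAS_BACKEND"] = "cntk"

# With the backend selected, ordinary Keras code runs on top of CNTK
# unchanged, e.g. (requires the keras and cntk packages):
#   from keras.models import Sequential
#   from keras.layers import Dense
#   model = Sequential([Dense(1, input_shape=(4,), activation="sigmoid")])
#   model.compile(optimizer="sgd", loss="binary_crossentropy")

print(os.environ["KERAS_BACKEND"])
```

Because the same Keras model definition works across backends, code prototyped against TensorFlow can, in principle, be retargeted to Cognitive Toolkit by changing only this setting.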
Also new are model compression extensions enabling image recognition and other Cognitive Toolkit models to run faster on standard servers, smartphones and embedded devices with more modest processing capabilities. Additionally, the toolkit includes Java language bindings used for evaluating models and support for Nvidia’s latest Deep Learning SDK (software development kit) and the company’s seventh-generation Volta GPU architecture, which packs 21 billion transistors providing the equivalent of 100 CPUs in deep learning workloads.
“The toolkit is part of Microsoft’s broader initiative to make AI technology accessible to everyone, everywhere,” stated the Redmond, Wash., software giant in its announcement. “In addition to the Cognitive Toolkit, developers can access a suite of cloud computing applications via Microsoft Azure such as easy to use and deploy machine-learning application programming interfaces, or APIs, via Microsoft Cognitive Services.”
Last fall, Microsoft embarked on an ambitious effort to democratize AI for the benefit of the world’s economies and society, said CEO Satya Nadella during Microsoft Ignite 2016. In addition to providing tools that developers can use to build their own AI applications, the company pledged to infuse practically its entire product portfolio, including Office 365 and Dynamics 365, with intelligence.
Naturally, Microsoft isn’t the only technology heavyweight that’s pushing AI into the IT mainstream.
Adding momentum to the role of AI in research, Google last month announced its TensorFlow Research Cloud (TFRC). Available free of charge to research projects, TFRC offers access to a cluster of 1,000 Cloud TPUs (tensor processing units) to accelerate large-scale, computationally intensive machine learning workloads.
TFRC’s use won’t be restricted to university research labs, assured Zak Stone, product manager of TensorFlow at Google.
“The TensorFlow Research Cloud program is not limited to academia—we recognize that people with a wide range of affiliations, roles, and expertise are making major machine learning research contributions, and we especially encourage those with non-traditional backgrounds to apply,” wrote Stone in a May 17 blog post. “Access will be granted to selected individuals for limited amounts of compute time, and researchers are welcome to apply multiple times with multiple projects.”
Google offers a separate Cloud TPU Alpha program for businesses looking to use Google’s Cloud TPUs for proprietary or commercial purposes, added Stone.