Microsoft’s Computational Network Toolkit (CNTK) 1.5, announced June 10, features a new method of wrangling the parallel-processing power of multiple graphics processing units (GPUs), such as those from Nvidia, to improve the software’s deep-learning capabilities.
Microsoft originally released CNTK in late January on GitHub, the popular online code repository. CNTK is an open-source software toolkit intended to help developers create deep-learning models that they can apply to artificial intelligence (AI) systems that understand speech and interpret images.
The new release delivers a major performance boost.
CNTK 1.5 features a new parallel-processing technique called Block Momentum that improves how training scales across many GPUs while maintaining accuracy. According to a chart in Microsoft’s blog post announcing the release, Block Momentum delivers a speedup of more than 50 times on a 64-GPU cluster.
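Microsoft’s post does not spell out the mechanics, but Block Momentum follows the general blockwise model-update-filtering idea: each GPU trains its own copy of the model on a slice of a data block, the resulting models are averaged, and that averaged update is smoothed with a momentum term before the next block begins. The NumPy sketch below is a conceptual illustration only, not CNTK’s actual code; the function and parameter names are ours.

```python
import numpy as np

def block_momentum_update(w_prev, worker_weights, delta_prev,
                          block_momentum=0.9, block_lr=1.0):
    """One synchronization step of a block-momentum-style update.

    w_prev         -- global model parameters before this data block
    worker_weights -- list of per-GPU models after local training on the block
    delta_prev     -- smoothed global update carried over from the last block
    """
    # Average the locally trained models to get this block's proposal.
    w_avg = np.mean(worker_weights, axis=0)

    # The "block gradient": how far the averaged model moved during the block.
    g_block = w_avg - w_prev

    # Filter the block gradient with momentum so noisy per-block updates are
    # smoothed out, which is what helps accuracy hold up as GPUs are added.
    delta = block_momentum * delta_prev + block_lr * g_block

    # New global model; every GPU resumes local training from this point.
    w_new = w_prev + delta
    return w_new, delta
```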
In the post, Chris Basoglu, partner engineering manager at Microsoft Technology and Research, noted that CNTK 1.5 also “includes a revamped I/O architecture, including more flexible readers for text and speech, making it easier to input popular formats into the toolkit for deep-learning training. This saves users from having to write their own code to parse these formats themselves.”
CNTK’s library of standard components has been expanded to include Deep Residual Nets for Image Recognition and Sequence-to-Sequence with Attention, he added. Finally, the software’s network description language, BrainScript, sports new features designed to make CNTK easier to program. BrainScript now “supports infix operators, nested variables and function definitions, recursive function calls, arrays and even lambdas,” a move that lifts some of the burden of dealing with complex structures, said Frank Seide, principal researcher and CNTK architect, in the blog post.
In its pursuit of intelligent cloud services and ambient computing technologies, Microsoft has been increasingly focused on AI. Sometimes that pursuit goes in unconventional directions.
In March, the software giant’s research arm unveiled Project AIX, a platform that uses the Lego-like virtual world of Minecraft to train AI agents. (Microsoft acquired Minecraft maker Mojang in 2014 for $2.5 billion.) The “mod”—short for a “modification” that alters the game world—can be used to create AI characters that learn how to navigate Minecraft’s game environment.
AIX is currently available to academic researchers invited to a private beta. Microsoft hopes to release AIX this summer under an open-source license.
Meanwhile, Nvidia is busy making overtures to AI startups. This week, the graphics card maker announced the Nvidia Inception Program, which gives entrepreneurs access to the company’s latest GPU hardware, Deep Learning SDK (software development kit) and DIGITS deep-learning GPU training system, among other perks and product discounts.
“Startups worldwide are taking advantage of deep learning for its superhuman speed and accuracy in applications like radiology, fraud detection and self-driving cars. We’re committed to helping the world’s most innovative companies break new ground with AI and revolutionize every industry,” Kimberly Powell, senior director of industry business development at Nvidia, said in a statement.