Open Neural Network Exchange (ONNX), the open-source format for artificial intelligence framework interoperability, is now production-ready, according to project partners Microsoft, Facebook and Amazon Web Services.
The version 1.0 release is a major milestone in helping to make the AI software ecosystem more accessible to developers and to the enterprises that ultimately deploy AI systems.
Originally co-developed by Microsoft and Facebook, ONNX resolves the differences among AI frameworks, giving developers quick access to the framework that best suits their application requirements while sparing them time-consuming framework-integration and optimization work.
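In practice, that interoperability amounts to a one-line export from a supported framework. Here is a minimal sketch using PyTorch’s built-in exporter; the specific model and input shape are illustrative assumptions, not details from the announcement:

```python
import torch
import torchvision

# Any trained PyTorch model will do; resnet18 is just a stand-in.
model = torchvision.models.resnet18(pretrained=True).eval()

# The exporter traces the model with a dummy input of the expected shape,
# then writes the resulting graph to an ONNX file.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```

The resulting file is framework-neutral: any runtime that reads ONNX can load it without knowing it came from PyTorch.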
Now, a few short months after its unveiling in September, ONNX is ready to help businesses fast-track their AI development efforts.
“We have incorporated many updates and enhancements, including feedback from the community since the initial announcement to make it work for many AI applications, including vision,” said Microsoft principal program manager Prasanth Pulavarthi, in a Dec. 6 announcement. ONNX also includes the ONNX-ML profile, which supports classic machine learning in addition to deep learning, Pulavarthi stated.
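ONNX-ML extends the base operator set to traditional models such as tree ensembles and linear classifiers. As a hedged illustration, the community skl2onnx converter, a separate tool not named in the announcement, targets this profile when serializing scikit-learn models:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a classic (non-deep-learning) model.
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10).fit(X, y)

# Convert to ONNX; the input name and shape here are illustrative.
onnx_model = convert_sklearn(
    clf, initial_types=[("float_input", FloatTensorType([None, 4]))]
)
with open("rf_iris.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```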
Meanwhile, Facebook and AWS are releasing ONNX support for the Caffe2, PyTorch and MXNet frameworks, added Pulavarthi.
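Once exported, the same file can be consumed from another supported framework. A minimal sketch of running the model from the earlier export through Caffe2’s ONNX backend (the file name simply carries over from that sketch):

```python
import numpy as np
import onnx
import caffe2.python.onnx.backend as backend

# Load the framework-neutral model and verify it is well-formed.
model = onnx.load("resnet18.onnx")
onnx.checker.check_model(model)

# Prepare a Caffe2 representation and run inference on random data.
rep = backend.prepare(model, device="CPU")
outputs = rep.run(np.random.randn(1, 3, 224, 224).astype(np.float32))
print(outputs[0].shape)
```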
Currently, ONNX is best suited for AI systems that interpret visual information. But that will soon change, according to Facebook.
“For future versions, we are working together with ONNX partners and community to expand ONNX to represent scenarios beyond vision, including more dynamic models that occur in areas like language modeling,” wrote Facebook’s Sarah Bird, a technical program manager, and Dmytro Dzhulgakov, a software engineer, in a Facebook Research blog post.
Cloud computing service providers aren’t the only ones that are banking on ONNX. The format is also taking off in the hardware space.
“Numerous hardware partners including Qualcomm, Huawei and Intel announced support of the ONNX format for their hardware platforms, making it easier for users to run models on different hardware platforms,” noted Sukwon Kim, a senior product manager at AWS Deep Learning, in a separate announcement.
AMD, Arm and IBM are also among the major IT companies lending their support since ONNX’s September debut. On Dec. 4, Nvidia announced ONNX compatibility on its TensorRT development container within the Nvidia GPU Cloud (NGC) services suite. TensorRT is a deep learning inference runtime system used to optimize and deploy neural networks.
ONNX backers IBM and Nvidia made waves this week with the introduction of the IBM Power System AC922 system for AI and deep learning workloads. It includes the new Power9 processor, Nvidia GPUs and a high-performance interconnect subsystem that can keep pace with the demands of resource-intensive and speed-sensitive AI workloads.
The subsystem combines PCIe 4.0, the OpenCAPI architecture and Nvidia’s latest NVLink interconnect technology, which accelerates data movement between GPUs and CPUs. IBM claims the setup can push data 9.5 times faster than PCIe 3.0-based x86 systems powered by Intel Xeon server chips.