IT Science Case Study: How Monash University Moved to OpenStack

Australia's Monash University needed to move its legacy data center to a Linux-based network operating system. The university explains the process in this exclusive case study.


Here’s the latest example of a new occasional feature in eWEEK called IT Science, in which we look at what really happens at the intersection of new-gen IT and legacy systems.

These articles will describe industry solutions. The idea is to look at real-world examples of how new-gen IT products and services are making a difference in production each day. Most of them will be success stories, but there will also be others about projects that blew up. We’ll have IT integrators, system consultants, analysts and other experts helping us with these as needed.

We’ve published similar articles to these in the past, but the format is evolving. We’ll keep them short and clean, and we’ll add relevant links to other eWEEK articles, whitepapers, video interviews and occasionally some outside expertise as we need it in order to tell the story.

An important feature, however, is this: We will report ROI of some kind in each article, whether it’s income on the bottom line, labor hours saved, or some other valued business asset.

Today: Monash University of Australia

This article is about Monash University, which wanted to switch to Linux as its network operating system. The university answered eWEEK’s questions in its own words here:

Name the problem to be solved: Monash University, one of the top eight research-intensive universities in Australia, needed to overhaul its legacy infrastructure to support its MASSIVE-3 supercomputer, one of the most powerful in Australia. Designed to support more than 1,000 researchers daily processing complex imaging data, including 3D X-rays and MRI scans, the Monash team realized it needed to build a scalable, web-scale architecture to deliver MASSIVE-3’s capabilities to end users, and that this couldn’t be achieved by buying off-the-shelf solutions, especially for its networking. At the same time, the university retired the aging on-campus data center that housed the majority of its research computing systems.

Describe the strategy that went into finding the solution: As a relatively young university trying to make its mark on the world stage, Monash made building MASSIVE-3 a strategic priority. The innovation was to adopt a single software-defined network solution for high-performance computing, out-of-band management, storage and an OpenStack private cloud. The Monash team set to work on a complete modernization, ripping out old commoditized IT and replacing it with an OpenStack-based infrastructure, including deploying Cumulus Networks’ disaggregated OS throughout its network infrastructure.

List the key components in the solution:

  • New network operating system: Cumulus Linux, an open network operating system for bare-metal switches, allowed Monash to automate, customize and scale using the same web-scale principles as the world's largest data centers, while still meeting the key high-performance computing and data-processing requirements driven by the science.
  • The bulk of Monash research infrastructure is now OpenStack and the fabric is largely Mellanox-based.

Describe how the deployment went, perhaps how long it took, and if it came off as planned: The Monash eResearch infrastructure that underpins MASSIVE-3 comprised almost 50 new network devices and 200 servers, plus associated services, all of which had to be relocated and connected into a brand-new network. The team had to run high-performance computing applications that cross the fabric and support analysis of very large data sets for medical imaging, and had two to three weeks to turn down the old data center, migrate all of the servers over and turn up the new one. It required an incredible amount of planning to deliver this gigantic data center on such a tight timeline and with minimal downtime.

To complete this process on demand as new switches were installed in the new data center, the team USB-deployed Cumulus Linux to the switches and treated each switch as an island. Cumulus Networks helped come up with a custom network design, deployed it using automation against the network, and developed zero-touch provisioning (ZTP) scripting to bootstrap the switches into their fabrics as they were deployed. All of this was provided as part of Cumulus Networks’ professional services engagement with Monash.
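The article doesn't publish Monash's actual ZTP scripts, but a minimal Cumulus Linux ZTP bootstrap script generally looks like the sketch below. The `CUMULUS-AUTOPROVISIONING` marker comment is required by the ZTP agent; the hostname scheme, mirror URL and `automation` user are illustrative placeholders, not details from the Monash deployment.

```shell
#!/bin/bash
# Hypothetical ZTP bootstrap sketch for a Cumulus Linux switch.
# The ZTP agent only runs scripts containing the marker on the next line:
# CUMULUS-AUTOPROVISIONING

set -e

# Derive a hostname from the switch serial number (naming scheme is an assumption)
SERIAL=$(decode-syseeprom -e 2>/dev/null || echo "unknown")
hostnamectl set-hostname "leaf-${SERIAL}"

# Create an automation user and pull its SSH key from an internal server
# (mirror.example.edu is a placeholder URL)
useradd -m -s /bin/bash automation || true
mkdir -p /home/automation/.ssh
curl -fso /home/automation/.ssh/authorized_keys \
    http://mirror.example.edu/keys/automation.pub
chown -R automation:automation /home/automation/.ssh

exit 0
```

A script like this can be served over DHCP or, as in Monash's case, staged on a USB drive so each switch bootstraps itself into a known state the moment it is powered on, ready for fleet-wide configuration management.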

Describe the result, new efficiencies gained, and what was learned from the project: With Cumulus Linux, Monash is more readily able to accelerate research by democratizing access, security and innovation. Cumulus’ Linux heritage also helped bring a DevOps approach to network management; in particular, some basic network support and operations functions can now be performed by research DevOps engineers.

Because of this, Monash has received noteworthy recognition that is widely cited in the OpenStack community.

Describe ROI, carbon footprint savings, and staff time savings, if any: The major difference for Monash following the Cumulus Networks deployment is the staff time saved as a result. Since deployment, it has been much easier for admins to service the increasing needs of researchers.


Editor’s note:  If you have an IT Science story you’d like to share, email the author at cpreimesberger@eweek.com.

Chris J. Preimesberger

Chris J. Preimesberger is Editor of Features & Analysis at eWEEK, responsible in large part for the publication's coverage areas. In his 12 years and more than 3,900 stories at eWEEK, he...