In the 1960s, Union Pacific became the first railroad in the world to schedule each shipment at the car level and automate the tracking of the cars when it brought in an IBM mainframe.
Forty years later, the railroad is in the midst of a multiyear project that will see a new distributed network, comprising thousands of x86 servers (primarily blades) running Linux, replace the mainframe.
After four decades and 11 million lines of macro assembler code, it was time to rethink how the railroad was operating, Lynden Tennison, senior vice president and CIO at Union Pacific, said in an interview.
“It’s been very effective running the railroad for a long time,” Tennison said of the IBM mainframe. “But things change.”
New business demands, the move to open-source technology, years of acquiring and absorbing other companies and their IT infrastructures, and the aging of the mainframe programmer population are among the key drivers for the change, he said.
At the end of the project, which will cost the railroad between $150 million and $200 million, Union Pacific, with its 8,400 locomotives and 32,000 miles of track running through 23 states, will have an IT platform that is event-driven, built via SOA (service-oriented architecture) and running Linux as the primary OS. Tennison said he hopes to shut down the mainframe in 2014.
There won’t be any big event signaling the end of the mainframe’s life, he said. Rather than make a massive shift over a weekend from the mainframe to the distributed platform, Union Pacific will spend the next five years adding and turning on new x86 systems, so that by the time the mainframe is switched off, the completed infrastructure will already be up and running.
The platform is being built in two data centers in Omaha, Neb., on blade systems primarily from Dell, though the railroad does have some systems from Hewlett-Packard.
“We intentionally want to keep it that way so that [the servers are not from] a single vendor,” he said. “We wanted to be a commodity-based infrastructure.”
The systems are running Red Hat Linux, with middleware from Oracle, Tibco Software and BEA Systems. Union Pacific had already been using applications from these vendors. Tennison said the middleware already worked well, and that given the size of the project, it was better not to introduce products from new vendors if they weren’t needed.
Union Pacific isn’t using much server virtualization, Tennison said. The IT platform is being built on blade servers that cost about $3,000, so it isn’t a challenge financially to add more servers as needed. The railroad also saw some performance problems several years ago when it started using VMware virtualization on large, eight-socket systems that cost $100,000 or more.
Now Union Pacific is focusing on smaller servers, many of them running Opteron processors from Advanced Micro Devices, though the railroad will use Intel-powered systems as well. Currently, most of the chips are quad-cores, though there are plans to begin using AMD’s six-core “Istanbul” Opterons.
In addition, whatever virtualization is being used is coming from the operating systems themselves, in particular Red Hat’s Linux virtualization capability and Microsoft’s Windows Server 2008 Hyper-V for Windows.
A key part of the infrastructure is a transportation system called NetControl, which will replace the mainframe-based Train Control System. Tasks such as taking orders, monitoring and scheduling shipments and trains, and dealing with service interruptions will fall to NetControl.
NetControl is about a third complete, and already is handling some transaction duties, such as bill-of-lading processing. The new platform is more forgiving of errors such as misspellings than the mainframe was, making the process faster and more efficient. Eventually Union Pacific will tie NetControl into a nationwide automated system, called Positive Train Control, which is designed to increase track safety and reduce collisions. The federal government wants that system in place by 2015.
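Tennison did not describe how NetControl tolerates misspelled entries, but the general idea can be sketched with fuzzy string matching. The short Python illustration below uses the standard library's difflib to resolve a misspelled station name against a list of known values; the station names and function are hypothetical, not drawn from Union Pacific's system.

```python
# Illustrative sketch only: tolerant lookup of a possibly misspelled entry.
# The station list and resolve_station() are hypothetical examples, not
# Union Pacific data or NetControl's actual method.
from difflib import get_close_matches

STATIONS = ["NORTH PLATTE", "OMAHA", "CHEYENNE", "OGDEN", "ROSEVILLE"]

def resolve_station(entry):
    """Return the closest known station for an entry, or None if no good match."""
    matches = get_close_matches(entry.upper(), STATIONS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(resolve_station("Omahaa"))    # close misspelling resolves to OMAHA
print(resolve_station("Cheyene"))   # resolves to CHEYENNE
print(resolve_station("XYZ"))       # no plausible match: None
```

A strict exact-match lookup, typical of older transaction screens, would simply reject both misspellings and force the clerk to re-enter the data.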
Union Pacific also has switched to a new SAP ERP (enterprise resource planning) system that is now running on the distributed platform.
Union Pacific Has History of Being Technologically Advanced
Tennison said Union Pacific has a history of being technologically advanced. Not only did it blaze trails with the mainframe four decades ago, but it also is bringing technology into the engines, which now are equipped with onboard computers, GPS and satellite communications capabilities, he said.
So it made sense that when officials saw that technological demands were changing, they were prepared to invest significantly in such a shift in their data centers.
For example, Union Pacific officials saw that their mainframe programmers were getting older and that the vast majority of students coming out of college were trained on newer technologies and languages.
“When we brought the [mainframe] in here in the 1960s, a lot of the people [working on it now] came in with it,” Tennison said. “It’s getting harder and harder to find the right people to work on it. It’s a whole lot easier to find people skilled in new languages.”
IBM, CA, BMC Software and Unisys, among others, have worked hard over the past several years to attract younger programmers to the mainframe platform. In addition, IBM in particular has been aggressive in bringing new workloads, including Linux and Java, onto its big iron systems.
For example, IBM on Aug. 14 rolled out a new System z offering that includes seven integrated hardware, software and services packages for deployment of such enterprise workloads as data warehousing, risk mitigation and disaster recovery. Big Blue also unveiled programs designed to entice Sun Microsystems and Hewlett-Packard customers to IBM’s Linux mainframe platform.
Tennison said Union Pacific officials first began talking about moving to a distributed system four years ago, and built an engineering prototype designed to prove that the infrastructure could scale and handle the transaction workloads it would need to run.
Tennison and his staff also needed to convince Union Pacific officials that such a major project made sense for the railroad, a process that “took quite a while,” he said.
Many data center technology vendors have services programs designed to help businesses plan such major projects. At an event in HP’s Marlborough, Mass., offices in July, officials with the company’s data center transformation solutions group spoke about the steps they take with clients as they plan out major data center projects. A key one is getting buy-in from both business executives and internal IT staff members.
Eventually both sides fell in step, he said. For executives, “the biggest thing was really showing them where we wanted to get,” Tennison said. A key for the IT staff was getting them the training they needed to be able to work effectively in the new environment, and to put in place recurring training programs to keep the IT folks up to speed.
He said he is designing the distributed environment to ensure that such issues as management and security don’t hamper the new platform. For example, Tennison is insisting that all IT people use the same tools and software.
“That eliminates a ton of management headaches that typically come with distributed environments,” he said.
Tennison said he and others in IT saw the promise of such infrastructures several years ago, and much of the technology already was in place. However, it wasn’t until companies like Google, Yahoo and Microsoft began building out their massive server farms that companies began moving in that direction in larger numbers. It also took SOA vendors a while to move from talking about the model to actually offering solid products in that area.
“We had understood that we could do this,” Tennison said.
Now the railroad is five years away from completing that vision.