Streamlining Information

By Gary Bolles  |  Posted 2002-08-19

At Maimonides University Medical Center in Brooklyn, N.Y., a 750-bed hospital with more than $500 million in medical billings a year, the storage problem involved making the right information available to doctors—fast. "Our goal is to use technology as a focal point for improving healthcare and reducing errors," says Mark Moroses, senior director of technology services for the hospital.

Before Maimonides' storage overhaul, medical records for, say, ambulatory care weren't in the same place as the records for generalized patient care information. In fact, much of the data wasn't even digital. The result: Wasted time at the point of care as medical personnel tried to pull records together, and increased risk of error. Worse, system uptime hovered between 90 and 95 percent—far below the "five 9s," or 99.999 percent uptime considered acceptable in many businesses, and absolutely critical in a hospital.
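
To put those figures in perspective, a quick back-of-the-envelope conversion (our arithmetic, not the hospital's) shows what each availability level means in annual downtime: 90 percent uptime permits more than five weeks of outages a year, while five 9s permits only about five minutes.

```python
# Convert availability percentages into annual downtime.
# Illustrative arithmetic only; the uptime figures come from the article.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.90, 0.95, 0.99, 0.99999):
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} uptime -> ~{downtime_minutes:,.0f} minutes of downtime per year")
```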

To centralize its data, the hospital decided to uncouple the traditional links between applications, operating system and storage. And because Moroses didn't want the hospital to be limited to one vendor's storage devices, he looked at using inexpensive industry-standard hardware. The result: a storage area network that combines IBM SSA hardware with DataCore's SANsymphony. Maimonides brought all of its records for ambulatory care into one place, speeding the ability of medical personnel to search and update patient information.
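
The sketch below illustrates, in deliberately simplified form, the idea behind that kind of block-level virtualization: applications see uniform virtual volumes while a software layer decides which physical disks, from any vendor, actually hold the data. The class names and allocation logic here are hypothetical and are not DataCore's API.

```python
# Simplified, illustrative-only model of storage virtualization:
# heterogeneous physical disks join one pool, and virtual volumes are
# carved out of whatever capacity is free, regardless of vendor.
from dataclasses import dataclass, field

@dataclass
class PhysicalDisk:
    vendor: str
    capacity_gb: int
    free_gb: int

@dataclass
class StoragePool:
    disks: list = field(default_factory=list)

    def add_disk(self, disk):
        # Any vendor's hardware can join the pool.
        self.disks.append(disk)

    def create_virtual_volume(self, size_gb):
        # Carve the requested capacity out of whichever disks have room;
        # the consumer never sees whose spindles are used.
        allocation, remaining = [], size_gb
        for disk in self.disks:
            if remaining == 0:
                break
            chunk = min(disk.free_gb, remaining)
            if chunk:
                disk.free_gb -= chunk
                remaining -= chunk
                allocation.append((disk.vendor, chunk))
        if remaining:
            raise RuntimeError("pool has insufficient free capacity")
        return {"size_gb": size_gb, "extents": allocation}

pool = StoragePool()
pool.add_disk(PhysicalDisk("IBM", 500, 500))
pool.add_disk(PhysicalDisk("OtherVendor", 300, 300))
print(pool.create_virtual_volume(600))
```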

The payoff to the hospital for the effort so far can be measured in one simple metric: uptime. "Since we've been up, there have been four hard drive failures," says Moroses, "but we had zero downtime because of it." In fact, uptime for the past year has been over 99 percent, "a pretty huge increase for a hospital."

Real-Time Data

Edward Jones, a $2 billion financial services subsidiary of The Jones Financial Companies, had a different storage headache. The firm's biggest concern is keeping always-on links to its 8,500 branch offices in three countries so it can handle some 25 million to 30 million transactions a day. "Our product is information," says Rich Malone, Edward Jones' CIO. "For us, the information about our customers and our products is crucial to our business. The only profit center is our branches."

Edward Jones moves and stores a massive amount of data on its IBM systems—up to 120 terabytes, at last count—and that amount has been increasing at a rate of 50 percent to 75 percent a year for the past several years. Since 1987, the company has used a nationwide satellite network to move data between the company's branch offices and its main data center in St. Louis. But satellite links have one notorious Achilles heel: clouds. Heavy storm activity can knock out a data connection, sometimes for hours at a time. "We had a lot of inclement weather that caused outages in the past," says Larry Steele, Edward Jones' chief technology officer. "If there was a storm in Paducah, Ky., we'd lose one office. If it was in St. Louis, we'd lose all the branches."
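
Growth at that pace compounds quickly. A rough projection, using the 120-terabyte figure and the 50 to 75 percent growth range cited above (our extrapolation, not Edward Jones' own forecast):

```python
# Compound-growth projection of storage demand.
# Starting capacity and growth rates come from the article; the three-year
# horizon is an arbitrary illustration.
start_tb = 120

for annual_growth in (0.50, 0.75):
    capacity_tb = start_tb
    for year in range(1, 4):
        capacity_tb *= 1 + annual_growth
        print(f"{annual_growth:.0%} growth, year {year}: ~{capacity_tb:,.0f} TB")
```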

Four years ago, Malone began designing a redundant data center for his company that could remove some of the vulnerabilities of the old architecture. In October 2001, the company opened the center, in Tempe, Ariz. The site, which triples the company's information storage capacity, is equipped with a complete backup computing system that maintains storage from a number of vendors, including EMC and Network Appliance. "Our goal here is to get a fully redundant environment, where each data center could run the whole system at once" if it had to, says Malone. Ever since the site in Tempe went live, his team reports, there hasn't been a single major outage affecting the branches. Whenever heavy storms blanket St. Louis, the less-tempestuous Tempe facility takes over the centralized computing chores without a hiccup. To allow the data centers to work in lockstep, the company now replicates 25 terabytes of data over redundant, high-speed OC-3 communications lines between the two centers, and a good portion of that is done continuously. "We're working it down to as close to real-time as possible," reports Hayden.
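
Some simple line-rate arithmetic shows why spreading replication across the day beats shipping changes in one nightly batch. An OC-3 circuit runs at roughly 155 megabits per second; the one-terabyte daily change volume below is a hypothetical figure for illustration, not a number from Edward Jones.

```python
# Why continuous replication beats a nightly bulk copy over OC-3.
# OC-3 line rate is ~155.52 Mbit/s; the 1 TB daily delta is a hypothetical
# illustration, not a figure from the article.
OC3_MBITS_PER_SEC = 155.52
daily_delta_tb = 1.0

bits_to_move = daily_delta_tb * 1e12 * 8                # decimal TB -> bits
hours_per_link = bits_to_move / (OC3_MBITS_PER_SEC * 1e6) / 3600
avg_mbits_per_sec = bits_to_move / 86400 / 1e6          # spread over 24 hours

print(f"~{hours_per_link:.1f} hours to push {daily_delta_tb} TB over a single OC-3")
print(f"~{avg_mbits_per_sec:.0f} Mbit/s sustained if the same delta replicates continuously")
```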

Centralization has also helped the Aviation Weather Center to solve its reliability problems. By installing a centralized storage server from Auspex, Wallace was able to use industry-standard disks that can be easily added as storage grows. The result: zero downtime and a flexible storage service that can scale as the tide of data rises.

What's In Store?

As companies face the need to store and move ever more data—AWC's Wallace, for example, predicts the weather center will have to handle some 100 gigabytes of data per day within a few years, up from today's 13 gigabytes—some are turning to software that treats storage as a centrally managed resource.
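
Those daily volumes translate into storage requirements quickly. Below is a quick capacity check using the article's 13-gigabyte and 100-gigabyte daily figures; the 90-day retention window is a hypothetical planning assumption.

```python
# Translate daily ingest into yearly volume and a retention-window footprint.
# Daily figures come from the article; the 90-day retention window is assumed.
RETENTION_DAYS = 90

for label, gb_per_day in (("today", 13), ("projected", 100)):
    yearly_tb = gb_per_day * 365 / 1000
    window_tb = gb_per_day * RETENTION_DAYS / 1000
    print(f"{label}: {gb_per_day} GB/day -> ~{yearly_tb:.1f} TB/year, "
          f"~{window_tb:.1f} TB kept over {RETENTION_DAYS} days")
```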

Today's distributed storage creates inevitable business frictions. Workers who need data that's stored in separate systems are often frustrated in their efforts to gain rapid access to critical information, hampering business initiatives and slowing the pace of decision-making. Imagine a company trying to efficiently move products between dozens of warehouses rather than coordinating goods from a single location, and you'll see the challenge.

These new approaches, loosely grouped under the phrase Storage Area Management, or SAM, use software to stitch together disparate storage devices so they can be used like one big virtual pool. The goal: more flexible businesses unencumbered by balkanized data. Rather than forcing users to look for a spreadsheet "needle" in a series of server "haystacks," SAM is intended to provide flexible software services to help users and applications easily store and locate files anywhere in the network.
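
In code, the core of that pooled approach boils down to a single shared catalog that maps logical names to whichever device actually holds the data. The toy class below is purely illustrative; it is not any vendor's SAM product or API.

```python
# Toy illustration of a pooled storage namespace: users ask for a file by
# name and the manager answers with the device that holds it, regardless of
# which workgroup contributed that device. Names are invented for this sketch.
class StorageAreaManager:
    def __init__(self):
        self._catalog = {}  # logical path -> device id

    def register(self, device_id, paths):
        # Each workgroup's device contributes its files to the shared namespace.
        for path in paths:
            self._catalog[path] = device_id

    def locate(self, path):
        # Callers no longer care which "haystack" holds the file.
        try:
            return self._catalog[path]
        except KeyError:
            raise FileNotFoundError(path) from None


sam = StorageAreaManager()
sam.register("finance-nas", ["/finance/q3-forecast.xls"])
sam.register("ops-san-volume-7", ["/ops/shift-schedule.xls"])
print(sam.locate("/finance/q3-forecast.xls"))  # -> finance-nas
```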

This approach radically increases flexibility, but it potentially creates new frictions as workgroups are required to toss their storage resources into the pool. It also means that IT needs to learn a new skillset, tracking and managing storage resources to create a rock-solid utility that users and applications can rely on without thinking.

Gary A. Bolles is the Editorial Director for Ziff Davis Media's Custom Conference Group. He is responsible for directing the group's editorial efforts, ensuring the quality of the content it delivers, and moderating and speaking at client events. A frequent lecturer and keynote speaker on a variety of technology topics, he has hosted more than 50 events in the past year alone.

Bolles was the founding Editor-in-Chief of Interactive Week, developing its unique vision, the founding editorial director of Sm@rt Reseller magazine, creating the publication from initial research, and the founding Editorial Director of Yahoo! Internet Life, managing its successful launch. Bolles was also the Editor-in-Chief of Network Computing Magazine, and for one year was the host of 'Working the Web' for TechTV, covering a wide variety of technology-related topics. Until recently, he was a contributing editor to CIO Insight, writing on a broad range of technology subjects, and assisting in the coordination of the publication's research efforts.

Bolles is the former Chief Operating Officer of Evolve Software, Inc., and the former VP of Marketing for Network Products Corporation. He has served as a marketing consultant to a variety of organizations, and has advised a number of software startup companies in arenas such as online marketing and data mining.