When Hurricane Rita ripped through the Houston area in September 2005, it forced Gary Bailey, director of IT for Penn Virginia, to take notice.
What if a large-scale weather event like Rita forced the company to shut down its Houston offices for an extended period of time, Bailey wondered? Would the company be able to recover its backup data and set up seamlessly in another location without losing significant time?
Penn Virginia, an oil and gas exploration company based in Radnor, Pa., with offices in Houston, Dallas and Kingsport, Tenn., deals in large amounts of data, as much as 2TB to 3TB. And although Bailey backs up that data religiously, he suspected it might take too long to get workers set up in another office while restoring so much data from tape backup in the event of a natural disaster such as a Category 5 hurricane.
It was at this point that Bailey decided to look for a new, more modern storage solution, one he would eventually purchase from Isilon Systems, of Seattle.
Penn Virginia purchases raw geological data on CDs or DLT (digital linear tape), which can vary in size from 4GB up to as much as 60GB, according to Bailey.
Twelve geologists and geophysicists in the Houston office analyze this data to determine the most likely location of oil and gas deposits. They use the Kingdom Seismic Software Suite from Seismic Micro-Technology, also in Houston, to help them find this information.
The resulting project files are often between 40GB and 60GB, and each project file comprises hundreds of sub-files, Bailey said. These files were being stored on Hewlett-Packard's HP StorageWorks 1000 Modular Smart Array with two 1.5TB volumes and then backed up to tape every week, with the tapes sent to Iron Mountain for off-site storage.
“We have two to three terabytes of data, somewhere in that area code, and it gets to be pretty cumbersome to back up,” Bailey said. After Hurricane Rita, he began to take a closer look at the backup systems he had in place, he said. Even though his department does daily backups, he was concerned about what a major hurricane like Rita could do.
With that in mind, Bailey said, he began to look for a high-performance solution that would enable him to replicate data between the Houston and Kingsport locations.
For the previous two years, Bailey said, he had been working with Royce Landman, founder of RCL Systems, a five-person consulting company in the Houston suburb of Stafford. Landman has more than 15 years of experience in the oil and gas industry, providing support for SMT software and designing custom, high-end workstations to run it.
Landman had been working with Penn Virginia scientists, building workstations and providing SMT software support. During a conversation in December 2005, Bailey explained, Landman suggested that he take a look at Isilon to solve his storage problem.
Isilon uses a cluster-based storage solution—the grouping or “clustering” of storage nodes (industry-standard boxes of hardware including Hitachi or Maxtor drives and Gigabit Ethernet or InfiniBand back-end switches)—to produce a single, shared pool of data that can be scaled to meet capacity and performance needs simply by adding or removing nodes.
SAN and Throughput Problems
“We discovered Isilon at the SEG [The Society of Exploration Geophysicists] trade show and looked at their product offerings. [We thought] they had some scalable technology that might be ripe for some of our customers,” Landman said.
Around this time, Landman said, he also was hearing performance complaints from scientists at Penn Virginia. After analyzing the network and looking at software and anti-virus issues, he said, he decided to take a closer look at the SAN (storage area network) and discovered it was part of the problem.
Landman explained that the scientists had these large files sitting on their workstations, and, over time, the transfer rate between the workstations and the SAN had deteriorated.
“We started tweaking and looked harder at storage … and we started pounding on that to see if we [could] improve the throughput,” Landman said. He was hoping the Isilon system might solve this issue, too. “[With Isilon], we can get replication work in, we can use this as disaster recovery, and, if we get a performance boost out of it, [that would be] great,” he said.
Landman contacted Isilon, and the vendor agreed to put a test system in place at Penn Virginia's Houston office.
“Isilon came in with the point of view that we have a lot of customers accessing large video files, and [moving large amounts of data] is something that we do well,” Landman said.
Landman said he believed the Isilon solution would resolve the replication issue, but he wanted to bring in a test system to see if he would also get improved performance over the existing HP solution.
Penn Virginia purchased two Isilon 1920i three-node clusters and placed one in Kingsport and one in Houston. Each cluster provides 5.7TB of raw capacity, with 12 Hitachi or Maxtor hard drives in each node and 12GB of memory, in 10.5 inches of rack space. The clusters use InfiniBand switches for intra-cluster communication.
Bailey put Landman in charge of the implementation, which took several weeks. One of the implementation challenges Landman faced was a communication issue involving several domain controllers.
Landman said he realized the cluster could not reach these controllers over the company VPN line. He discovered they were using a class of IP addresses not allowed by Qwest Communications, which operates the VPN line. After identifying the problem, he asked Qwest to allow these IP addresses, and Qwest agreed.
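The article does not say which address ranges were involved, so the following is only an illustrative sketch, in Python, of the kind of check behind that troubleshooting step: confirming whether each domain controller and cluster node sits inside a prefix the VPN carrier actually routes between sites. All addresses, prefixes and host names here are hypothetical.

```python
# Illustrative sketch only: hypothetical prefixes and hosts standing in for the
# kind of reachability audit run when the Isilon cluster could not contact the
# domain controllers over the carrier-managed VPN.
import ipaddress

# Hypothetical: prefixes the VPN carrier agrees to route between offices.
VPN_ROUTED_PREFIXES = [
    ipaddress.ip_network("10.10.0.0/16"),   # Houston office
    ipaddress.ip_network("10.20.0.0/16"),   # Kingsport office
]

# Hypothetical addresses of domain controllers and cluster nodes.
HOSTS = {
    "houston-dc1": "192.168.5.10",   # outside the routed prefixes -> unreachable
    "kingsport-dc1": "10.20.1.10",
    "isilon-node1": "10.10.8.21",
}

def reachable_over_vpn(addr: str) -> bool:
    """Return True if the address falls inside a prefix the carrier routes."""
    ip = ipaddress.ip_address(addr)
    return any(ip in prefix for prefix in VPN_ROUTED_PREFIXES)

for name, addr in HOSTS.items():
    status = "OK" if reachable_over_vpn(addr) else "NOT ROUTED - ask carrier to add prefix"
    print(f"{name:14s} {addr:15s} {status}")
```

In this sketch, any host that falls outside the routed prefixes is flagged, which mirrors the fix described above: asking the carrier to allow the offending address range.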
Bailey said the system has been in place since spring and, after Landman resolved a few implementation glitches, has run smoothly. Not only did Penn Virginia get the disaster recovery and replication capability it was looking for, Landman reports, but scientists also have been seeing a performance boost of 10 to 20 percent with the new system, with the extent of the boost depending on what they are doing with the SMT software.
Bailey said he has measured the return on investment on this project less in actual dollars than in peace of mind: if something goes wrong, he is confident he can move his scientists to the Kingsport office and have them up and running with very little pain.
“With the threat of disaster in the Houston area with hurricanes we've had in the last year or so, you just never know when you can have a long-term outage that can impact the company,” Bailey said. He also said he wonders how much the company could lose if his office were down for a month.
“We decided we didn't want to take that risk. We know we will get payback. We hope we never have a disaster, but we feel more comfortable that all of our data assets are protected,” Bailey said.
Having Landman on his side was invaluable, Bailey said. “He introduced us to Isilon. It's highly possible if Royce hadn't been here as a consultant, we might not have looked at that as a solution.”
Bailey added that when making a purchase, there is something to be said for taking a calculated risk.
“There's nothing that will be achieved without risk. I find so many people who will not take a risk for fear of being fired or whatever. It just goes back to 'No risk, no reward.'” In the end, he said, Isilon rewarded him with the system he needed.
Ron Miller is a freelance writer in Amherst, Mass. Contact him at [email protected].
Case File: Penn Virginia
Customer: Penn Virginia, an oil and gas exploration company in Radnor, Pa.
Business problem: Concern about the ability to replicate data and restore backups quickly in the event of a natural disaster
Technology partners: RCL Systems, in Stafford, Texas, as integrator; Isilon Systems, of Seattle
Recommended solution: Installed two Isilon 1920i three-node storage server clusters in the Houston and Kingsport, Tenn., offices
Return on investment: The company can handle any type of disaster and have its scientists up and running with minimal delay
Have a comment or suggestion?
Please e-mail Solutions Series Associate Editor David Weldon at [email protected].