A map of the Gulf of Mexico covers an entire wall of geophysicist John Potter's office at Amerada Hess Corp.'s Houston-based R&D lab. It is as complex as it is large, charting thousands of square miles of underwater terrain. Hand-drawn concentric circles and lines divide thousands of miles of ancient rock formations and subterranean cliffs available for offshore oil exploration into hundreds of square lease units. A much more detailed map, this one digital, is stored on Potter's desktop computer. Created from sound waves and complex mathematical algorithms, it measures the density and composition of bedrock located miles beneath the Gulf floor that dates back, in some cases, to the days when dinosaurs walked the earth.
But what's most striking about Potter's wall map and the computerized one that he and his colleagues in Hess' Geophysical Group use is not so much the age of the underwater landscape as Hess' detailed knowledge of it. Potter's maps include precise measurements of the thickness of a rock layer in one area versus another, as well as 3-D images viewable from all angles to look for clues about any undiscovered oil that might lie within.
Such maps would not have existed even five years ago: By necessity, oil exploration has long been a high-stakes guessing game. Supercomputers, a relatively recent phenomenon, have helped fine-tune the analysis, but at a hefty price, one that has limited their use by some companies and made sustained, ongoing number-crunching and digital depth analysis cost-prohibitive.
But times are changing. Thanks to the emergence of low-cost computing power in the form of Linux computing clusters, Hess and other oil companies can now run algorithms they never could have dreamed of running before. At far more affordable prices, Hess can now extract terabytes more information about what lies beneath the earth's surface than it could with a supercomputer. Indeed, Linux cluster technology has “dropped our cost of computing by an order of magnitude or two,” says Hess CIO Richard Ross. “One of our guys wrote a program that allows him to interactively work with multiple terabytes of data. All the books stored in the Library of Congress would equal 20 terabytes. Just think about working with that much information in real time.”
Better yet, Hess' Linux clusters have nearly doubled in power each year since 1998, enabling Hess engineers to process data more often and in a wider variety of formats. Ultimately, this gives Hess executives a continuously improving stream of information with which to make crucial decisions about which oil fields to lease, where to drill and how much money to bid for a particular field. The difference is like night and day: CIO Ross says today's seismic images put Hess' old maps to shame. “It's like the difference between looking at a low-resolution image, where you can barely make out two human figures, and a high-resolution image, where you realize there's a man and a woman holding flowers and candy,” he says. “Because we can now process more data going into our bids for oil leases, our risk of doing something stupid is lower.”
No Margin for Error
Doing something stupid in this business isn't just embarrassing—it's expensive. Each oil well around the globe costs tens of millions of dollars to lease. Producing fast, high-quality seismic images can significantly reduce Hess' risk of drilling a “dry hole”—which, in deep water especially, wastes between $12 million and $25 million, says Vic Forsyth, Hess' manager of geophysical and exploration systems. “We could not have been running the same level of technology on a supercomputer because of the cost,” he says. “Sure, we could have run the same code on supercomputers, but we couldn't afford to buy the machine. Linux clusters allow us to implement the science we need to reduce risk. Linux changes things.”
Hess engineers and geophysicists need all the help they can get. The $12 billion independent oil and gas company, small by industry standards, is being buffeted by dwindling, finite resources and ever-growing demand for oil—a devil's bargain facing all oil companies as untapped reserves become harder to find.
But Hess also faces some unique challenges: At $12.19 per barrel, Hess' average cost to find and develop oil wells over the past three years has been nearly twice the $7-per-barrel industry average. Indeed, says oil industry analyst Fadel Gheit, senior vice president of oil and gas research for Fahnestock & Co., Hess' F&D costs are now the highest among its 30 rivals in the industry, and have been out of whack for a while. “Hess lags for a reason,” says Gheit, “and the biggest problem Hess has is in exploration. The lifeblood of a typical oil company is its ability to replace reserves at reasonable cost, yet Hess has one of the highest reserve finding and development costs in the industry.”
Hess can't create oil, of course; it must discover it—or buy it from someone else. In the past, Hess placed most of its bets on drilling, but wasn't wildly successful at it, prompting executives to shift gears and instead unleash an aggressive acquisition strategy to compensate for what it hadn't been finding underground. In 1995, Hess had either already dug or was in the process of drilling 3,314 wells, compared with 997 today.
But even the new strategy has had its problems. In the third quarter of last year, for example, Hess surprised investors when it announced it would write off $256 million from a drilling deal that went bad. Hess had paid $750 million for what it hoped would amount to 360 Bcfe of reserves from Louisiana independent LLOG Exploration. But Hess later ratcheted down the expectation, to 95 Bcfe, acknowledging it had overestimated the size of the acquisition by 265 Bcfe. According to analyst Mark Flannery at Credit Suisse First Boston, the mistake by “accident-prone” dealmaker Hess “calls into question Hess' judgment in this acquisition.” CEO John Hess told investors, “The reserves watered out sooner than we expected.” As a result, he said, 2003 production would be 9 percent lower than projected.
Beyond having a middling track record for drilling and acquiring the right wells for the right price, Hess also finds itself locked into a number of unprofitable contracts struck years ago by executives who have since moved on, Forsyth says. “Right now, for example, we have a long-term rig commitment that we're paying for through the balance of this year,” he says. “We're not even using the rig because it was a commitment made three years ago by a vice president who's not even here anymore; the commitment is for hundreds of thousands of dollars a day.” The rig is now “off drilling somebody else's well, a company that's subleasing it from us,” Forsyth adds. “It doesn't take very many such decisions to way overwhelm anything I could ever save on a Linux cluster.” Says analyst Gheit: “Companies make mistakes, but Amerada has made more than its share.”
Investors have noticed. The company's stock fell from a high of $82.52 last July to a low of $41.14 a share in March. Hess stock closed at $50.26 on June 25. In the first three months of this year, Hess earned $176 million on revenues of $4.3 billion, compared with $141 million on $3 billion in the same quarter a year ago. But analyst Gheit says Hess, when it comes to cost-cutting, “needs to go from being a D student to being a B student to reassure investors.”
Economizing, therefore, was one big reason the company was drawn to Linux clusters in the first place. According to CIO Ross, substituting Linux clusters—now totaling 400 dual-processor PCs running on Red Hat Linux—for its old IBM SP2 supercomputer has saved the company an estimated $14 million in technology costs over the past four years, and that's not even counting the millions saved when the system steers Hess away from a dry well it might otherwise have drilled.
And while Linux clusters increase MIPS, they don't boost manpower. Shifting to Linux has allowed the Geophysical Group to use seven programmers instead of eight, plus one fewer support staffer, to handle five or six overlapping exploration projects at once; before, it could handle only one at a time. “We're a small competitor in a commodity market,” says Jeff Davis, technical lead, global IT infrastructure at Hess. “One of the main ways we differentiate ourselves is cost. If I can spend $100,000 [for a 32-node Linux cluster] instead of $1.5 million, then it's a no-brainer.”
Hess is not the only company trying to harness the power of Linux clusters to improve business performance. Clusters are proving well-suited to the high processing demands of other information-intensive tasks like movie studio animation, automobile crash simulations, aerospace design and weather forecasting, and companies from DaimlerChrysler Corp. to Pixar Animation Studios are using them to boost results. And interest is likely to keep rising. Market researcher Gartner predicts that within three years, 20 percent of new server shipments will be installed in clusters. Likewise, researchers at IDC project the high-performance cluster computing market alone will more than triple in sales to $1.6 billion in 2006, up from $494 million in 2001.
How does Hess put Linux to work? Forsyth and his crew generate data in much the same way that a bat comes up with a mental image of its cave: by bouncing echoes off surfaces. Hess contracts with third parties to collect this data, in boats crawling across miles of deep waters in the Gulf of Mexico and the North Sea, and off the coasts of West Africa and Southeast Asia. Then the Linux clusters, running complex algorithms developed by Hess IT and engineering experts, turn those sound waves into 3-D images called depth migrations. These “pretty pictures,” as Potter likes to call them, show underwater surfaces, such as salt domes, beneath which oil might lodge.
Potter explains that a single depth migration typically comprises 20 nine-square-mile “blocks” of land and substrata under water. To do one depth migration takes a Linux cluster of 32 nodes about three months from start to finish—i.e., from receiving the sound wave tapes to having an onscreen picture. Then, other experts within Hess use the pictures to figure out which blocks are most likely to contain oil and gas, and how easy it might be to extract, then estimate a logical bidding price for the rights to lease the land. (Oil companies must place bids with the appropriate governments to win the right to drill in offshore locations.) In the Gulf of Mexico, says Forsyth, “we might lease 15 to 20 percent of the blocks we image” and “find oil on roughly a third of the blocks we lease.” It costs anywhere from $200,000 to $15 million to lease a block—depending on its location and promise of oil—and the payoff might be zero, or it might be millions of barrels.
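The bidding arithmetic Forsyth describes can be sketched as a simple expected-value calculation. The lease costs and the one-in-three hit rate below come from the figures he cites; the payoff assigned to a successful block is a hypothetical placeholder, since the article notes only that a block's payoff might be zero or might be millions of barrels.

```python
# Illustrative expected-value sketch of the block-leasing math described
# above. Lease costs and the hit rate come from the article; the payoff
# for a successful block is a made-up placeholder.

def expected_value(lease_cost, p_oil, payoff_if_oil):
    """Expected net value of leasing one block."""
    return p_oil * payoff_if_oil - lease_cost

# Article figures: oil is found on roughly a third of leased blocks;
# a lease runs anywhere from $200,000 to $15 million.
p_oil = 1 / 3
cheap_block = expected_value(200_000, p_oil, payoff_if_oil=30_000_000)
prime_block = expected_value(15_000_000, p_oil, payoff_if_oil=30_000_000)

print(f"cheap block EV: ${cheap_block:,.0f}")
print(f"prime block EV: ${prime_block:,.0f}")
```

Under these assumed numbers, a cheap block is attractive while a prime-priced block can carry negative expected value—which is why the quality of the seismic image, and hence the estimate of p_oil, matters so much to the bid.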
Forsyth says various Gulf deposits have yielded up to 2 billion barrels of oil. Once Hess has leased a block in an area deemed to be promising, the company will drill an exploratory well, then expand both the drilling and the number of blocks it leases if oil is found.
The Hess Geophysical Group didn't start off bullish on Linux. In 1998, when a companywide push to cut costs first moved Forsyth's crew to consider Linux, Hess was one of the first to look seriously at the then-untried operating system, which was viewed at the time as more of a rogue movement than a viable alternative to Microsoft and other proprietary software system makers. According to Davis, there was a lot of initial unease over the idea of entrusting Hess' most precious cache of information to Linux. The IBM SP2 supercomputer that Hess had been leasing at the time, similar to the chess-playing Deep Blue, had a tried-and-true track record. “It was the best machine I ever worked with in terms of reliability and performance,” Davis recalls. “Linux is not at the same level. You have different expectations of a Mercedes and a Volkswagen.”
But that year, cost pressures took priority. Hess' bottom line took a hit as oil prices dropped to one of the lowest points of the 1990s. At $2 million per year on a three-year lease, the supercomputer was an expense the Houston lab decided it had to question—especially since Davis and crew would need a second supercomputer to meet the improved performance levels that Hess needed to move forward. It would be a tough call: Hess needed a system that would perform tasks in the same amount of time as the supercomputer—but for a lot less money. And Hess couldn't compromise on reliability; its system would be performing 3-D seismic depth imaging on areas covering hundreds of square miles, a task that would require intense levels of complex number-crunching.
Just then, Scott Morton, a former oil industry expert for computer maker Silicon Graphics Inc., joined the Hess lab in Houston as a senior professional geophysical specialist, and it was Morton who finally convinced the group there was an alternative. At SGI, which offered high-powered Unix workstations and supercomputers (think special effects for Jurassic Park), Morton had watched as SGI's customers began shifting from Unix to the newly emerging Linux. For example, Morton recalls, the national defense labs “had already proved that parallel processing could be done very well”—and for far less money—on clusters of PCs rather than a single massive supercomputer. He believed Hess' use of the supercomputer also could be transferred to parallel processing on PCs running Linux.
At most companies, such a radical change in hardware and software might take years to accomplish and require approval from a management committee. But Hess' Geophysical Group, a small, tightly knit coterie of eight engineers and IT specialists, was able to move swiftly and autonomously: Hess headquarters in New York traditionally had let its Houston R&D staff noodle as it pleased, as long as it met budget targets. “It was a local decision to experiment with Linux,” says Ross. “They let me know what they were doing, but they didn't ask for my approval.”
So Morton and some of Hess' IT experts, including Davis, began benchmarking Linux against the existing IBM SP2 supercomputer. There were, once more, worries. At first, “we were concerned about the lack of support on the hardware side,” Morton recalls. PC suppliers were accustomed to a Microsoft environment, not Linux. Linux, though, proved a worthy alternative to the SP2. “In some cases the benchmarking results were the same; in other cases, the cluster was slower or faster,” says Davis. On average, the Linux cluster and the supercomputer offered about the same processing speed. “That gave us a good idea of what we could do to replace the SP2,” Morton says. With about eight months left before Hess' lease of the SP2 was set to expire, the group decided it would gradually shift from the supercomputer to Linux clusters, buying one 32-node Linux cluster and then another as the new system proved itself.
Oil and Water
There were stumbling blocks. At Hess, the problem wasn't Linux itself, but whether some of the third-party applications based on proprietary software that it had been using could be made to work with the new open-source operating system. Forsyth, who has been with Hess for 18 years, says Hess' seismic processing software originally was written by a software vendor to run on VAX systems in the mid-1980s. But as Hess went through several hardware iterations—from VAX to an IBM mainframe, and then to a succession of Unix machines, including the IBM supercomputer—this became problematic, and so Hess began writing as much of its own seismic applications as possible to open standards like Unix.
Still, some off-the-shelf applications were sticking points. Early in Hess' switchover to Linux, for example, Hess had to nudge Scientific Computer Associates (SCA) to reprogram its parallelization software, Linda, for the new operating system. Linda enables all the machines in the grid to work together, but it didn't work very well on Linux at first. “We were debugging their [Linda] code as well as ours,” recalls Gary Donathan, geoscience systems consultant for Hess, who writes the algorithms used to assemble the depth migrations. Along with the debugging came hardware glitches with the new PCs in the Linux cluster. “We were finding hardware problems, like network cards that weren't reliable, and trying to check out Linda,” Donathan says. “Sometimes it would be tough to figure out if it was a hardware or Linda problem.” Morton remembers asking SCA for a Linux version of Linda. “They told me in two weeks they'd have a copy for me. It was literally fresh off the keyboard.”
Another challenge: Hess uses a suite of GeoQuest software from Schlumberger Information Solutions to help its engineers understand potential oil reservoirs and how they might be drilled. Schlumberger has been slow to bring out a test version of the GeoQuest software on Linux. Right now, it runs on Sun Microsystems' Solaris OS. The software includes too many different programs and is used by too many engineers for Hess to develop its own version, according to Forsyth. “We asked Schlumberger two or three years ago to port to Linux, and we're still waiting,” he says. Schlumberger spokeswoman Carolyn Turner says the company launched a beta version of GeoFrame, the primary component of the GeoQuest suite, at an oil and gas trade conference in May and plans to roll it out to clients this fall.
In the end, though, Hess' transition to Linux went smoothly. Eventually, Hess migrated to open-source parallelization software called MPI, another example of how open source allows more flexibility than proprietary software offered by vendors that have no particular incentive to offer their products on a free operating system. “We had run on enough different flavors of Unix that all our code was generic, so it wasn't hard to move to Linux,” Donathan says. He has no regrets about the shift. “For the cost savings alone, it was worth doing.” Besides, now he's got Linux running on his Pentium laptop so he can debug software at home. He couldn't do that with the supercomputer. “We're doing more with less,” he says.
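What parallelization layers like Linda and MPI give Hess is a scatter/gather pattern: independent blocks of seismic data are farmed out to cluster nodes and the finished images collected. The sketch below illustrates that pattern only; a thread pool stands in for the cluster's nodes to keep the example self-contained, and migrate_block is a made-up placeholder for the real depth-migration kernel, which in practice runs via MPI across separate machines.

```python
# Minimal scatter/gather sketch of the cluster workflow described above.
# A thread pool stands in for the 32-node cluster; migrate_block is a
# placeholder for the real seismic depth-migration computation.
from concurrent.futures import ThreadPoolExecutor

def migrate_block(block_id):
    """Placeholder for the depth-migration math run on one block."""
    return block_id, f"image-{block_id}"

blocks = range(20)  # the article's 20 blocks per depth migration
with ThreadPoolExecutor(max_workers=4) as pool:
    # Scatter the blocks to workers, then gather the finished images.
    results = dict(pool.map(migrate_block, blocks))

print(f"migrated {len(results)} blocks")
```

Because each block is independent, the work divides cleanly across commodity nodes—which is precisely why a cluster of cheap PCs could match the SP2 on this job.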
The Future of Linux
But while Linux has been successful at the server cluster level, don't expect to see it on many Hess desktops—at least not yet. Sure, Potter's desktop runs on Linux, so he can look at 3-D depth migrations. He's also got a Sun workstation to run GeoQuest and estimates that about 10 percent of what were once Solaris desktop machines run Linux. As more third-party applications become available on Linux, he says, that figure might grow to 30 percent in a couple of years.
CIO Ross expects the migration from Unix to Linux at Hess to continue over the next several years, but he, too, believes the vast majority of desktop machines will remain on Windows. The investment in Windows PCs is largely a sunk cost, Ross says, and users are accustomed to them. Linux will, however, make inroads at the server level, displacing NT and Unix.
Hess' approach dovetails with industry trends. A recent report from Forrester Research entitled “The Linux Tipping Point” finds that 72 percent of 50 large companies ($1 billion-plus) surveyed intend to use more Linux in 2004, and about a fourth of those are replacing Windows servers with Linux servers. Thirteen of the 50 respondents were using Linux on the desktop or on workstations. “Proprietary Unix is stone-cold dead,” says Ted Schadler, who wrote the report for Forrester. Linux is “good enough” for most jobs, he says, delivering the same workloads as, say, Solaris on Sun servers at a fraction of the cost.
Further, Red Hat Inc., the open-source operating system company, has introduced an enterprise version of its software that is more robust than the free, downloadable code and maintains compatibility as older versions are replaced, says Paul Cormier, executive vice president of engineering for Red Hat. “Previously, no thought was given to continuity,” he acknowledges, but as Linux use increases across hundreds or thousands of machines, upgrades will become difficult without such compatibility. “Linux is growing up,” Cormier says.
But will third-party support improve, and can Linux remain united instead of fracturing into various flavors like Unix did two decades ago? AT&T's Bell Laboratories, which developed Unix, liberally licensed it to other users. With no central body governing development, companies like Sun, IBM Corp., Hewlett-Packard Co. and others developed proprietary versions. “What is to prevent the same thing happening to Linux?” Schadler asks.
Back in New York, Hess CIO Ross shares those questions, and says he doesn't want to go down the same road as many Unix proponents, whose motto often seems to be Anything But Microsoft. “In many ways, the whole vendor side of Linux is promulgated upon dislike of Microsoft, which I find to be a flimsy reason,” Ross says. “Linux is attractive because it's free, but what's the real technical differentiator?” To be attractive for broader use, Linux “has got to come up with something better than that,” Ross adds. Indeed, for Hess engineering applications, the rationale for moving to Linux was not “let's get rid of Microsoft,” but “let's get rid of IBM and Sun,” Ross says. As proprietary systems, “they were much more expensive while Linux [as open source] gives us much more control over programming. That's the ultimate differentiator.”
To be sure, Hess' Houston lab isn't complaining. When it comes to gathering and analyzing oil exploration information, Linux is helping Hess create a whole new kind of gusher—one it hopes will be able to spew savings for many years to come.
Karen Southwick is a San Francisco-based freelance writer, Debra D'Agostino is a staff reporter for CIO Insight and Marcia Stepanek is the magazine's Executive Editor. Please send comments and questions on this story to [email protected].
“Supercomputers for the Masses?”
By John Taschek
eWEEK, June 9, 2003
“The Future of Linux and Open Source”
By George Weiss
Gartner Inc., 2001
“The Linux Tipping Point”
By Ted Schadler
Forrester Research Inc., March 2003