SAN FRANCISCO—True to form, Oracle Corp. Chairman and CEO Larry Ellison began his keynote at OracleWorld here on Tuesday by ridiculing Microsoft Corp.
Forty years ago, Ellison said, IBM Corp. invented the System/360 mainframe. Since then, the race has been on to build bigger and bigger servers. After 40 years, the joke goes, Microsoft Corp. has finally joined the race.
“I think what happened was Bill [Gates] sent a team of people out to IBM to figure out what was new,” Ellison said. “Somehow the Microsoft intelligence-gathering team held their Yahoo map upside down. Instead of making a left turn into IBM Research, they made a right turn into IBM's museum.”
The point was to contrast Microsoft's recently heralded 64-processor Windows mainframe benchmark with Oracle's own push, through grid computing, to move enterprises onto pools of two- or four-CPU servers running Linux on Intel processors, a commodity hardware setup that Ellison said represents a 30-to-1 price advantage over mainframes. Microsoft in the spring released benchmark results running SQL Server 2000 on a Hewlett-Packard Co. 64-processor Superdome. The Redmond, Wash., company also released the 64-bit version of SQL Server around the same time.
Getting bigger is no longer the right strategy, Ellison said, for a number of reasons: limited growth potential, expense and the problem of having a single point of failure. “There are problems with the 40-year-old architecture of buying larger and larger server computers,” he said. “With your one big server that runs your critical applications, if you need more capacity, there is none. This is the largest machine they build. There's no place to go with a single-machine architecture. Plus, applications are beginning to outgrow even the largest computers. …
“Next, they're very expensive,” Ellison said. “They cost millions and millions of dollars. And when the next one comes out, you have to throw the old one away and buy the next one. And maybe the worst of the Achilles' heels with single big servers is that they're a single point of failure. If it goes down, all applications go down, all the users go down, and everything stops. They can only be so reliable.”
The pricing for Oracle's trumpeted grid technology, meanwhile, will be released next week, Ellison said. And pricing was definitely on attendees' minds: The first question following the keynote turned Oracle's “buy a bunch of inexpensive things for better performance” pitch for commodity servers back on the company: With grid, can users buy many small, inexpensive Oracle licenses and pay less money overall?
The audience laughed, while Ellison took the opportunity once again to drive home the notion that grid computing will better utilize existing resources. If you can employ Oracle grid computing to use CPUs to run Oracle applications more efficiently, you don't need to buy more licenses, he said.
In a separate interview with eWEEK, Jacqueline Woods, Oracle vice president of global pricing and licensing strategy, said the Redwood Shores, Calif., company's pricing and licensing strategy isn't changing, even though grid will make it easy for companies to add servers and users to their existing pools of resources.
“Typically, when people have large server farms, you could have one or two boxes that are, say, 32-way, so you have 64 CPUs,” she said. “That's no different from 16 four-way nodes.
“We want to be clear that for named-user licensing, if you have 128 CPUs and 2,000 named users, even if you took those two machines and broke those into smaller nodes, … you have smaller, cheaper systems; there won't be any change to licensing fees. You haven't changed users or uses. The only way you'd pay more is if you were to increase CPUs or named users. Then you'd buy more licenses, same as today.”
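Woods' point comes down to simple arithmetic: license counts track total CPUs (or named users), not how those CPUs are partitioned into nodes. A minimal sketch of that invariance, with hypothetical numbers and function names rather than Oracle's actual pricing formula:

```python
# Hypothetical illustration of Woods' point: the licensable quantity
# is the total CPU count, regardless of how nodes are sized.

def total_cpus(nodes):
    """Total CPUs across a server pool; node sizes are irrelevant."""
    return sum(nodes)

big_iron = [64, 64]   # two 64-way machines: 128 CPUs
grid = [4] * 32       # thirty-two 4-way commodity boxes: 128 CPUs

assert total_cpus(big_iron) == total_cpus(grid) == 128
# Same CPU total, same named users -> same license count, so
# repartitioning into smaller, cheaper nodes changes no fees.
```

The same reasoning applies to named-user licensing: splitting 128 CPUs into smaller nodes leaves both licensing metrics untouched.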
During his keynote, Ellison went on to outline Oracle's grid architecture as having four components: a storage grid, whether network-attached storage, arrays of disk drives or SANs; a pool of database servers attached to that storage; a collection of application servers connected to the database tier; and a management layer, announced on Monday as Grid Control, overseeing the entire system.
From the point of view of a DBA, a computer user and the application program itself, that vast array of disk drives and servers will appear to be one big computer, Ellison said, and it will be managed as if it were one. The 10G grid architecture will run all existing Oracle applications, along with PeopleSoft Inc., SAP AG and Siebel Systems Inc. applications, without requiring changes to their existing application code, he said, and those applications will run faster, more reliably and less expensively on grid.
This vision of on-demand computing differs from that of vendors such as IBM, Ellison said; customers will purchase more $5,000 servers as needed to add to the computing power of the grid, instead of turning on and paying for CPUs in a machine from IBM.
Getting to this point has meant creating an illusion that all these machines are one machine, Ellison said. That illusion must be constructed for several constituencies, including for the IT workers in a data center who are suddenly running 100 or 200 two-CPU servers, as opposed to five large servers. It has to be as easy for those administrators to install, patch and upgrade software on hundreds of machines as on a handful—otherwise, labor costs would outweigh hardware savings.
That's where Grid Control comes in. Grid Control allows DBAs to patch one server and then to clone it across others in a pool. The software will also tell users which servers are configured differently. In addition, policies dictate known configuration standards, which can be enforced across an array.
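The drift-detection idea described here can be sketched generically. The sample data and function below are hypothetical, not Grid Control's actual interface: each server's configuration is compared against a policy reference, and any differing settings are flagged.

```python
# Hypothetical sketch of policy-based drift detection, in the spirit
# of what Grid Control is described as doing: flag servers whose
# configuration differs from a known-good reference.

reference = {"db_version": "10.1.0", "os_patch": "2003-09", "memory_gb": 8}

pool = {
    "node01": {"db_version": "10.1.0", "os_patch": "2003-09", "memory_gb": 8},
    "node02": {"db_version": "10.1.0", "os_patch": "2003-07", "memory_gb": 8},
}

def drift(ref, cfg):
    """Settings whose values differ from the reference configuration."""
    return {key for key in ref if cfg.get(key) != ref[key]}

out_of_policy = {
    name: drift(reference, cfg)
    for name, cfg in pool.items()
    if drift(reference, cfg)
}
# node02 is flagged for its stale os_patch level.
```

With hundreds of two-CPU servers in a pool, this kind of automated comparison is what keeps administration costs from swallowing the hardware savings Ellison cites.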
When pressed for a release date for Oracle Database 10G, Ellison said it would ship this year.