Oracle Executive Vice President John Fowler tells OpenWorld attendees that having in-house engineers working together on storage and databases has made a big difference in new-product performance.
SAN FRANCISCO--For the first time, Oracle OpenWorld is about much
more than faster databases, middleware, enterprise apps and Linux.
Those items alone certainly are worthy of a huge conference like
this one, staged again at cavernous Moscone Center. But now, with the
addition of Sun Microsystems in the Oracle fold, there's quite a bit
more to talk about--storage being one of the key new topics.
In the acquisition, Oracle inherited not only Sun's powerful home-grown
storage arrays but also those of its StorageTek division, which came
with a relatively bustling tape storage franchise.
Oracle Executive Vice President of Systems John Fowler, who led
development of much of that hardware during his years at Sun, told
OpenWorld attendees Sept. 20 during the morning keynote that being able
to have in-house engineers working together on storage and databases
has made a big difference in new-product performance.
"For OLTP [online transaction processing]--and you're heard a lot about Exadata
[Oracle's database storage server] in the past--we've taken and
engineered elements of database logic, in particular caching SQL
queries, and actually put these in the storage servers, coupled with a
large amount of flash and DRAM," Fowler said.
"These are not ordinary pieces of storage. Instead of the regular
interface--give me a block, send a block, gimme a block back--a lot
of information is actually cached on the storage server. Only the
ability to co-engineer the storage and the database made this possible."
The result is more than 1 million SQL I/O operations per second, Fowler said.
"There is simply no faster way to get OLTP transactions on the Oracle
database than to couple with storage that's actually designed to use
the database," Fowler said.
Moving more logic to the storage device
On the business intelligence/data warehousing side, Oracle has moved
scan logic to the storage server, because data warehousing is
fundamentally about throughput, Fowler said.
"Again, instead of moving all of the data for a logical semantic
compare to the database engine, we actually have moved logic into the
database server so that it actually happens down there," Fowler said.
"You can process enormous amounts of data using the parallel processing
of however many horizontally scaled storage servers you have. You then
can aggregate and send small amounts of requests back to the server."
Basically, the way Oracle is doing high-end business intelligence and
OLTP is that "we're moving logic by breaking down the barriers from the
database into the storage server, therefore creating a
higher-performance alternative while the upper layers of the stack
remain the same," Fowler said.
"In this environment, this gives you tremendous performance."
Regarding petabyte-scale storage, to which enterprises are heading
faster than many observers imagined, Fowler said that the IP Sun
brought to Oracle is now coming to the fore.
"With the ZFS [Zettabyte File System]--that integrates flash with
SATA and SAS disk drives--together with a high-performance
architecture that includes built-in deduplication and compression as
standard, plus truly unified connectivity to a range of
applications--plus Sun's No. 1 position in enterprise archive--we
now have the capability to address petabyte-scale storage in an
enormously cost-effective way," Fowler said.
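The built-in deduplication and compression Fowler mentions can be sketched at block level: blocks are addressed by a hash of their content, so an identical block is stored only once, and each stored block is compressed. This is a toy illustration of the idea, not ZFS's actual on-disk format.

```python
# Hypothetical sketch of block-level deduplication with compression,
# in the spirit of ZFS: blocks are keyed by a content hash, so
# identical blocks consume storage once. Not ZFS's real implementation.
import hashlib
import zlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # content hash -> compressed block

    def write(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blocks:
            # Compression applied as standard on first write.
            self.blocks[key] = zlib.compress(data)
        return key

    def read(self, key: str) -> bytes:
        return zlib.decompress(self.blocks[key])

store = DedupStore()
k1 = store.write(b"A" * 4096)
k2 = store.write(b"A" * 4096)   # duplicate block: no new storage used
k3 = store.write(b"B" * 4096)
print(k1 == k2, len(store.blocks))   # True 2
```

At petabyte scale, the cost argument follows directly: redundant blocks cost a hash lookup instead of disk capacity.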
Fowler said that Oracle this week will be announcing major upgrades in
disk sizes; a doubling of bandwidth in controllers; increased
capacities on the board; and a new "highly available
packaging architecture that will integrate with the rest of our systems
to take the disk side of the equation beyond the capability it's had in
the past to increase the amount of flash, memory, bandwidth and
technical capacity of our storage."