Oracle Now Caching More Data, BI in Storage Arrays

 
 
By Chris Preimesberger  |  Posted 2010-09-20

Oracle Executive Vice President John Fowler tells OpenWorld attendees that having in-house engineers working together on storage and databases has made a big difference in new-product performance.

SAN FRANCISCO--For the first time, Oracle OpenWorld is about much more than faster databases, middleware, enterprise apps, Linux and the America's Cup.

Those items alone certainly are worthy of a huge conference like this one, staged again at cavernous Moscone Center. But now, with the addition of Sun Microsystems to the Oracle fold, there's quite a bit more to talk about--storage being one of the key new topics.

In the acquisition, Oracle inherited not only Sun's powerful home-grown storage arrays but also those of its StorageTek division, which came with a relatively bustling tape storage franchise.

Oracle Executive Vice President of Systems John Fowler, who led development of much of that hardware during his years at Sun, told OpenWorld attendees Sept. 20 during the morning keynote that having in-house engineers working together on storage and databases has made a big difference in new-product performance.

"For OLTP [online transaction processing]--and you're heard a lot about Exadata [Oracle's database storage server] in the past--we've taken and engineered elements of  database logic, in particular caching SQL queries, and actually put these in the storage servers, coupled with a large amount of flash and DRAM," Fowler said.

"These are not ordinary pieces of storage. Instead of the regular interface--give me a block, send a block, gimme a block back--a lot of information is actually cached on the storage server. Only the ability to co-engineer the storage and the database made this possible."

The result is more than 1 million SQL I/O operations per second, Fowler said.

"There is simply no faster way to get OLTP transactions on the Oracle database than to couple with storage that's actually designed to use the database," Fowler said.

Moving more logic to the storage device

On the business intelligence/data warehousing side, Oracle has moved scan logic to the storage server, because data warehousing is fundamentally about throughput, Fowler said.

"Again, instead of moving all of the data for a logical semantic compare to the database engine, we actually have moved logic into the database server so that it actually happens down there," Fowler said.

"You can process enormous amounts of data using the parallel processing of however many horizontally scaled storage servers you have. You then can aggregate and send small amounts of requests back to the server."

Basically, the way Oracle is doing high-end business intelligence and OLTP is that "we're moving logic by breaking down the barriers from the database into the storage server, therefore creating a higher-performance alternative while the upper layers of the stack remain the same," Fowler said.

"In this environment, this gives you tremendous performance."

Regarding petabyte-scale storage, to which enterprises are heading faster than many observers imagined, Fowler said that the IP Sun brought to Oracle is now coming to the fore.

"With the ZFS [Zettabyte File System]--that integrates flash with SATA and SAS disk drives--together with a high-performance architecture that includes built-in deduplication and compression as standard, plus truly unified connectivity to a range of applications--and plus Sun's No. 1 position in enterprise archive--we now have the capability to address petabyte-scale storage in an enormously cost-effective way," Fowler said.

Fowler said Oracle this week will announce major upgrades in disk sizes, a doubling of controller bandwidth, increased on-board capacities, and a new "highly available packaging architecture that will integrate with the rest of our systems to take the disk side of the equation beyond the capability it's had in the past to increase the amount of flash, memory, bandwidth and technical capacity of our storage."

 
 
 
 