The database titans are at it again, duking it out this time in the realm of storage and backup.
For its part, IBM on April 6 told eWEEK in an interview that Viper, the upcoming update to its DB2 database server software, has been infused with technology that compresses data throughout the system, including on disk and in memory, yielding a 54 percent savings on storage costs with no performance hit.
On the same day, Microsoft's Paul Flessner, senior vice president of data and storage platforms, announced that SQL Server 2005 Service Pack 1, due at month's end, will ship with the final version of database mirroring. Database mirroring is a long-anticipated feature (many listed it as their top pick among the coming attractions in the next-generation SQL Server 2005) that didn't ship in a final, production-ready form when SQL Server 2005 first came out.
Database mirroring is similar to Oracle Data Guard's physical standby feature, but seemingly better at failback than Data Guard, according to independent analyst and eWEEK contributing columnist Charlie Garry. “It is an improvement over basic log shipping because it streams the active log changes,” Garry wrote in an October column. “This could support both high availability or disaster recovery scenarios.”
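Garry's distinction can be illustrated with a toy model. This is only a sketch, not SQL Server's actual mirroring protocol (which involves endpoints, safety modes and witness servers): classic log shipping restores whole log backups on a schedule, while mirroring streams each active log record to the standby as it is written, so the standby stays current to the last committed change.

```python
# Toy illustration of log streaming: the principal sends every committed
# log record to the mirror immediately, rather than shipping periodic
# log backups. All class and method names here are hypothetical.

class Mirror:
    """Standby server that replays log records as they arrive."""

    def __init__(self):
        self.replayed = []

    def receive(self, record):
        # Replayed immediately, so a failover loses at most the
        # in-flight record, not a whole backup interval of changes.
        self.replayed.append(record)


class Principal:
    """Primary server that streams its active log to the mirror."""

    def __init__(self, mirror):
        self.log = []
        self.mirror = mirror

    def commit(self, record):
        self.log.append(record)
        self.mirror.receive(record)  # streamed per record, not per backup


mirror = Mirror()
principal = Principal(mirror)
for txn in ["INSERT a", "UPDATE b", "DELETE c"]:
    principal.commit(txn)

print(mirror.replayed)  # the standby already holds all three changes
```

Under log shipping, by contrast, the mirror would see nothing until the next scheduled log-backup restore, which is why streaming suits both high-availability and disaster-recovery scenarios.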
In sketching the road map for Microsoft's long-term data storage vision, Flessner hyped database mirroring as part and parcel of Microsoft's attention to the cost of storage. In a SQL Server 2005 update letter sent out to promote his talk, he said storage costs continue to be on “an amazing trajectory.”
“One could reasonably expect that the cost of a terabyte will be reduced from about $1,000 today to $100 in 2007,” he wrote. “Historical trends imply that in 20 years or less, we will be able to store literally everything digitally, and the petabyte will be a standard measure of personal storage.”
Perhaps storage will be cheaper, but managing that data won't necessarily drop in cost. Database mirroring will serve as one way of making sure businesses maintain highly available access to that data.
“As we move from managing transactional data to pre-transactional, as with RFID and other sensors, we will face a 10-100 factor increase in data volume,” Flessner wrote. “As an industry we are facing an unprecedented explosion of data that we will need to manage, with the ability to organize, summarize and prioritize all of this information becoming a key priority for IT.”
Meanwhile, Viper, IBM's hybrid XML/relational version of DB2, has picked up a software-based disk and memory compression capability within the Autonomic Computing aspects of DB2, according to Bob Picciano, vice president of data servers.
The new compression technology will save clients some 54 percent of the storage infrastructure and costs that competing databases otherwise require, Picciano told eWEEK in an interview following IBM's announcement that it was releasing a free “test drive” version of Viper.
“It took you a terabyte to store information in another database—and it will take you half a terabyte to put into DB2,” Picciano said.
Viper incurs “little to no” transactional performance overhead because the new compression occurs in memory, within internal structures of DB2, he said. That differs from competing databases, which must go to disk to read information before putting it into memory. Keeping information in memory results in much faster access to data.
Thus, the new compression technology saves storage both on disk and in real memory, Picciano said, and it saves as much in storage costs as it does in storage space: 54 percent, with some clients reporting savings of up to 60 percent.
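The arithmetic behind the claim is simple. The helper below is a hypothetical illustration, not IBM tooling: at a 54 percent savings, data that needs a terabyte elsewhere needs roughly 0.46TB in DB2, which Picciano rounds to “half a terabyte.”

```python
# Hypothetical helper illustrating the storage savings IBM cites for
# Viper's compression; the function name and default rate are assumptions.

def compressed_size(uncompressed_tb, savings=0.54):
    """Return the storage needed after compression, in the same units."""
    return uncompressed_tb * (1 - savings)


# 1 TB of raw data shrinks to about 0.46 TB at the 54 percent figure,
# or 0.40 TB at the 60 percent savings some clients reported.
print(round(compressed_size(1.0), 2))
print(round(compressed_size(1.0, savings=0.60), 2))
```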
With the much-ballyhooed explosion of stored data, clients have been telling IBM that data storage and data management represent the largest chunk of their spending, Picciano said.
IBM last summer tested the new compression technology with a large Wall Street financial institution that Picciano declined to name. The institution was willing to accept 10 to 15 percent performance degradation, given management's eagerness to whittle down burdensome spending on storage. In fact, it found no degradation at all, Picciano said, since IBM managed to compress data throughout the system.
“That's exclusive—[no other database vendors have] that,” he said. “We expect it will increase people's consideration of DB2.”
IBM is yet again pitching the breakthrough XML-handling capabilities of the hybrid Viper, which can handle both pure XML data and traditional relational data in ways that have never before been seen, according to both IBM officials and early beta users.
“In the past, we really had to do a lot of work on our own” to enable storage of and access to XML data, said Lou Esposito, CIO of Stradasoft, which has a BPM (business process management) product called Strada into which the company is embedding Viper.
“Where before you could never search on all this XML data, now we'll have this stuff stored locally, right in the database,” said Esposito, in Staten Island, N.Y. “That's an unbelievable capability.”
Esposito hasn't yet had a chance to road-test the new compression technology.
Still, his appreciation of Viper's XML-handling capabilities is another example of how the database giants are playing to the exploding-data theme. Because Viper neither shreds XML into a relational structure nor resorts to storing it in large objects (CLOBs or BLOBs), it can handle XML without the overhead associated with those methods.
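What “shredding” means in practice can be sketched in a few lines. This is a hypothetical illustration with a made-up document, not DB2's or any vendor's actual mapping: the XML tree gets flattened into path/value rows before a purely relational engine can index or query it, which is the overhead a native XML store avoids.

```python
# Sketch of shredding an XML document into relational-style rows.
# The sample document and the (path, value) table layout are invented
# for illustration only.
import xml.etree.ElementTree as ET

SAMPLE = "<order id='7'><item sku='A1'>2</item><item sku='B2'>5</item></order>"


def shred(xml_text):
    """Flatten an XML tree into (path, value) rows, the way a purely
    relational store must before it can index or search the data."""
    rows = []

    def walk(node, path):
        p = f"{path}/{node.tag}"
        for name, value in node.attrib.items():
            rows.append((f"{p}/@{name}", value))
        if node.text and node.text.strip():
            rows.append((p, node.text.strip()))
        for child in node:
            walk(child, p)

    walk(ET.fromstring(xml_text), "")
    return rows


for row in shred(SAMPLE):
    print(row)
```

Every document insert pays this decomposition cost (and every retrieval pays to reassemble the tree), whereas a hybrid engine that stores and indexes the XML natively can search it directly, which is the capability Esposito describes.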