IBM Takes Virtualization to DB2 Viper

By Brian Fonseca  |  Posted 2005-11-20

In an effort to join the move toward virtualization, IBM is building a software tool to enable customers to change data sources or rules that govern data without affecting applications.

Due for release in the first quarter of next year, the product, currently known as Information Virtualization Server, will be part of IBM's Information Management portfolio and will plug into the company's upgraded DB2 platform, code-named Viper.

Last week, IBM unveiled the public beta of Viper, which is also scheduled for release early next year. The new database features native XML data management and relational data capabilities.

The forthcoming server creates a virtualization layer that can be accessed via SQL queries through an SOA (service-oriented architecture), a messaging infrastructure or other interface mechanisms, said Ambuj Goyal, general manager of IBM Information Management, in Armonk, N.Y.
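IBM has not published the server's interfaces, but the core idea, a single logical view standing between applications and physical data sources, can be sketched in miniature. The snippet below uses SQLite purely as a stand-in; the table names and data sources are invented for illustration:

```python
import sqlite3

# Two independent data sources, stood in for here by separately attached
# in-memory databases. In a real deployment these could be DB2 instances,
# flat files, or queues sitting behind the virtualization layer.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS source_a")
conn.execute("ATTACH DATABASE ':memory:' AS source_b")
conn.execute("CREATE TABLE source_a.orders (id INTEGER, region TEXT)")
conn.execute("CREATE TABLE source_b.orders (id INTEGER, region TEXT)")
conn.execute("INSERT INTO source_a.orders VALUES (1, 'EMEA')")
conn.execute("INSERT INTO source_b.orders VALUES (2, 'APAC')")

# The "virtualization layer": applications query one logical view and never
# learn which physical source a row came from, so a source can be swapped
# without touching application code. (A TEMP view is used because SQLite
# allows only temp objects to reference other attached databases.)
conn.execute("""
    CREATE TEMP VIEW all_orders AS
    SELECT id, region FROM source_a.orders
    UNION ALL
    SELECT id, region FROM source_b.orders
""")

rows = conn.execute("SELECT id, region FROM all_orders ORDER BY id").fetchall()
print(rows)  # [(1, 'EMEA'), (2, 'APAC')]
```

The application's SQL mentions only `all_orders`; repointing `source_a` at a different backend changes nothing on the application side, which is the property Goyal describes.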

Information Virtualization Server offers a consistent services interface across transformation, quality, data access and other information services to drastically simplify information integration, Goyal said.
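As an illustration of what a consistent services interface might mean in practice, the sketch below (the class and method names are invented, not IBM's API) gives transformation and quality services one shared calling convention so that stages compose into a pipeline:

```python
from abc import ABC, abstractmethod

class InformationService(ABC):
    """Uniform interface: every service consumes and returns record dicts."""
    @abstractmethod
    def process(self, records: list[dict]) -> list[dict]:
        ...

class QualityService(InformationService):
    """Drop records that lack a usable identifier."""
    def process(self, records):
        return [r for r in records if r.get("id") is not None]

class TransformService(InformationService):
    """Normalize region codes to upper case across sources."""
    def process(self, records):
        return [{**r, "region": r["region"].upper()} for r in records]

def pipeline(records, services):
    # Because every service shares one interface, stages chain freely.
    for service in services:
        records = service.process(records)
    return records

raw = [{"id": 1, "region": "emea"}, {"id": None, "region": "apac"}]
print(pipeline(raw, [QualityService(), TransformService()]))
# [{'id': 1, 'region': 'EMEA'}]
```

The simplification Goyal points to comes from the uniform `process` contract: callers never need per-service glue code.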


"[The new tool] can be embedded into any application. It allows the customer to access information from any application and customized application, so they can leverage Information Virtualization Server and not have to tie it to a database at the time you write the code," said Goyal.

"Data management is an extremely important thing—where is my information in the enterprise, and as it changes, do I know how and when? Managing the complexity of information is a key element of that," he said.

Michael Georgeff, principal of Precedence Research Institute and an IT professor at Monash University, in Clayton, Australia, has been using an early version of Information Virtualization Server technology to create a virtual repository linking multiple hospitals and medical research organizations.

Australian privacy laws dictate that patient data cannot leave a hospital. To conduct the required research, Georgeff and his team are using Information Virtualization Server to query data across institutions without disrupting the multiple silos of information.

"We used [the software] to create a virtual data store so we could see from a central place across four hospitals. Researchers could see information and queries, but the data was stored outside the original hospital," Georgeff said.

"It enabled us to share that information without violating privacy laws regarding information on a confidentiality basis, and it provided us with a very scalable solution so the next project stage we go into [connecting 10 more hospitals], we can incrementally add new hospitals just by replicating the same database, methodologies and components we already used," he said.

Georgeff used the tool to separate patients' identifying information from their clinical information and then provided each record with an encrypted unique subject identifier.
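The article does not describe the exact mechanism, but the separation can be sketched roughly as below, with a keyed hash (HMAC) standing in for whatever encryption scheme actually produced the subject identifier; the field names and key are hypothetical:

```python
import hashlib
import hmac

# Hypothetical linking key; in the scenario described it would stay
# inside the hospital, never traveling with the clinical data.
SECRET_KEY = b"hospital-local-secret"

def pseudonymize(record: dict) -> tuple[dict, dict]:
    """Split a patient record into identifying and clinical halves,
    linked only by a keyed-hash subject identifier."""
    subject_id = hmac.new(SECRET_KEY, record["patient_id"].encode(),
                          hashlib.sha256).hexdigest()
    identity = {"subject_id": subject_id,
                "name": record["name"],
                "patient_id": record["patient_id"]}
    clinical = {"subject_id": subject_id,
                "diagnosis": record["diagnosis"],
                "lab_result": record["lab_result"]}
    return identity, clinical

identity, clinical = pseudonymize({
    "patient_id": "MRN-0042", "name": "J. Citizen",
    "diagnosis": "T2DM", "lab_result": 7.1,
})
# The clinical half carries no name or medical record number, so it can be
# queried centrally; the identity half never leaves the hospital.
assert "name" not in clinical and "patient_id" not in clinical
```

Because the identifier is deterministic per patient, clinical records from repeat visits still link up centrally without any identifying field crossing the hospital boundary.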

Because there is no central data warehouse, a query performed by a user against the federated databases downloads the de-identified data onto a local machine.

The project's next phase will connect 10 more hospitals across Australia and eventually create a grid, built with components of Information Virtualization Server, that can share cancer information.

Georgeff said he and other researchers plan to expose IBM's Information Virtualization Server interface as a Web service to enable multiple applications to easily plug into the system.
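The wrapping step itself is easy to picture. The sketch below exposes a toy in-memory repository over HTTP using only Python's standard library; the endpoint and payload are invented for illustration and bear no relation to IBM's actual interface:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical stand-in for the virtual repository behind the service.
VIRTUAL_STORE = {"subjects": 4, "hospitals": ["A", "B", "C", "D"]}

class RepositoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any client that speaks HTTP/JSON can plug in, regardless of
        # what sits behind the virtualization layer.
        body = json.dumps(VIRTUAL_STORE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), RepositoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/subjects"
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)
print(result)  # {'subjects': 4, 'hospitals': ['A', 'B', 'C', 'D']}
server.shutdown()
```

In 2005 the production equivalent would more likely have been a SOAP/WSDL service, but the decoupling benefit Georgeff describes is the same: applications depend on the service contract, not on the repository.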

"Were literally making discoveries almost immediately because we suddenly have access to all this amount of data," he said.

"The metadata turns out to be real crucial for the business users to understand the data or find data theyre interested in. Its also important for developers to develop complex data transformations from one model to another."

Check out eWEEK.com for the latest database news, reviews and analysis.
