Servers and On Demand

By eWEEK  |  Posted 2006-02-17

You're announcing two new server products today. How do those fit into the Information On Demand initiative?

On the information integration side we have our [new WebSphere Information Server] capability, focused on all different types of data, whether it's a different form of numerical data, relational data, file data, classic information, or textual information: access to text, being able to read and manipulate text, to find data buried in text, and being able to deal with voice and video.

It's about dealing with a wide range of data types: being able to link all the different data types, to map them, to extract data from sources, to cleanse them, to scrub out duplicates and redundancies. That all comes together around WebSphere server technology, where we've combined core capabilities we've acquired and built over the last five years: Venetica, CrossAccess, Ascential, [etc.], acquiring the different elements necessary [to the Information On Demand initiative].
The second side of this has to do with manipulation and discovery and deeper reconciliation within context. It's search technology; that's what [WebSphere Content Discovery Server] is all about: being able to search various types of data. Then there are sophisticated analytics, such as the entity analytics capability that deals with human relationships, that understands your record, my record, how they might relate to other records, who's who and whether we are who we say we are. In this current world of identity theft, where human identity tends to move around, that becomes a critical issue for all sorts of applications.
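The "who's who" question at the heart of entity analytics comes down to deciding whether two identity records, arriving from different parties in different formats, describe the same person. As a minimal sketch only, with invented fields, weights and threshold (it is not drawn from IBM's entity analytics technology), the shape of that decision looks roughly like this in Python:

    # Hypothetical sketch of a record-matching decision: the fields, weights
    # and threshold below are invented for illustration and are not taken
    # from any IBM product.
    from difflib import SequenceMatcher

    def similarity(a, b):
        """Rough string similarity between 0.0 and 1.0."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def same_person(rec_a, rec_b, threshold=0.75):
        """Score two identity records on name similarity and date of birth."""
        name_score = similarity(rec_a["name"], rec_b["name"])
        dob_score = 1.0 if rec_a["dob"] == rec_b["dob"] else 0.0
        score = 0.6 * name_score + 0.4 * dob_score
        return score >= threshold, round(score, 2)

    # A citizen-submitted claim and a provider-submitted claim that may or
    # may not describe the same beneficiary.
    citizen_claim = {"name": "Jonathan Q. Doe", "dob": "1970-03-14"}
    provider_claim = {"name": "Jon Doe", "dob": "1970-03-14"}

    if __name__ == "__main__":
        match, score = same_person(citizen_claim, provider_claim)
        print(f"match={match} score={score}")

Run directly, the script prints the match decision and score for the two sample claim records; a real system would weigh many more attributes and cope with aliases, transposed fields and deliberate misdirection.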
You're talking up a lot of initiatives in a lot of industries, but when are we going to see packaged solutions come out of all this investment?

As we see the problem, this is less about core technology and more about applying it in a business context easily and quickly. The projects being discussed today I'd characterize as the beginning of projects. In some cases they're first-of-a-kind initiatives to understand what the problem is and how we apply technology to solve that problem. In financial services, we do sophisticated time-based analysis, complex analytics that deal with understanding derivatives. How do you create new financial instruments? How do you model markets? We're beginning to hone down to a set of preverified packages.

[For example], Verify NY: the idea was to understand the problem and what technologies you could apply. Could you deal with data, reconcile data, find patterns in data, make it useful to administrators, and deal with citizens submitting claims and providers submitting claims? That whole process of bringing it all together was done as a first-of-a-kind project effort. Now what we're doing is hardening this and making it repeatable. It's not about understanding the mathematics; it's about making it repeatable and making it available in the market.

Some see the payback of saying we'll invest with you, IBM, we'll open our doors [to work with IBM to define the problem]. But many businesses are looking for us to come forward with something much more prepackaged. In coming years it will be us coming up with technology packaged in something industry-specific, whether it's real-time crime fighting; financial data giving a common, multichannel view of the customer; or product data in manufacturing and distribution. How do I deal with suppliers? Do I have redundancy? Is the supply chain clogged with duplicate material? When you find lots of data in different formats, you find similar algorithms are needed for sorting the data, but the applied use has to deal with the individuals in that area ... in Medicaid, [for example,] you're dealing with government employees. They don't have to learn the algorithms themselves.

We're prepared today to offer [a solution] to the state of New York because its counties pretty much administer Medicaid in the same way. The next step is to try to understand how applicable that is to other states. We're trying to get a front-end environment so it can be used in various scenarios; it's about making the front end adaptive. This is a classic portal-type interaction environment: you can manipulate portlets, move things around, change the structure for look and feel. It's very adaptive, and we have to go through the process of improving that adaptability. In every area of entitlement programs, you find the same iterative problem taking place. Once you know the domain, the way data is supposed to look and how it's not supposed to look, you can adapt the capability to match the set of requirements. You'll see us do adaptation and packaging, and we'll dedicate people to specific solutions. That's what scale-out is all about.
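Several of the examples in this answer, reconciling citizen and provider claims or spotting duplicate material in a supply chain, reduce to the cleanse-and-de-duplicate step described earlier: normalize records pulled from differently formatted sources, then collapse near-identical entries. The following Python sketch is a hypothetical illustration of that step, with made-up field names and sample data; it is not code from WebSphere Information Server:

    # Hypothetical illustration of cleansing and de-duplicating records drawn
    # from two differently formatted sources. Field names, sample data and the
    # matching rule are invented; this is not WebSphere Information Server code.

    def normalize(record):
        """Map a raw record from any source into one common shape."""
        return {
            "name": " ".join(record.get("name", "").lower().split()),
            "city": record.get("city", record.get("town", "")).strip().lower(),
            "phone": "".join(ch for ch in record.get("phone", "") if ch.isdigit()),
        }

    def deduplicate(records):
        """Collapse records that agree on normalized name and phone number."""
        merged = {}
        for raw in records:
            rec = normalize(raw)
            key = (rec["name"], rec["phone"])
            # Keep the first occurrence; later matches are treated as redundant.
            merged.setdefault(key, rec)
        return list(merged.values())

    # Two "sources" that describe the same customer with different conventions.
    source_a = [{"name": "Jane  Smith", "city": "Albany", "phone": "518-555-0100"}]
    source_b = [{"name": "jane smith", "town": "ALBANY", "phone": "(518) 555 0100"}]

    if __name__ == "__main__":
        print(deduplicate(source_a + source_b))  # a single cleansed record

The sketch shows only the shape of the work: pull from heterogeneous sources, map into a common form, then scrub out redundancies. A product-scale implementation would apply far richer matching and profiling than this exact-key comparison.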