Techniques like those used in modern CPUs could change RDBMS design.
From our Department of Being Careful What You Wish For, we suggest that the last thing any software team really wants is broad public attention to its first "almost good enough" effort.
"When you release 1.0, you might want to actually keep it kind of quiet," admonishes Joel Spolsky of New York-based Fog Creek Software Inc. "If you market it and promote it too heavily, when people see what you've actually done, they will be underwhelmed," Spolsky continues in a well-crafted essay entitled "Good Software Takes Ten Years. Get Used To It."
You can tell when a development team respects the same realities that Spolsky describes, whether or not they've ever seen his comments: These are the companies that come to me with a release 2.x product that seems like an obvious winner, and I wonder why it's escaped my notice during its 1.x iterations. "We've been working with a small number of early adopters," they usually reply: "We think it's ready now for broader marketing." They're usually right.
One recent example is the ANTs Data Server from Burlingame, Calif.-based ANTs Software Inc. I'm still in the process of talking with customers that are using the company's relational database engine, for which the company claims "from eight to eighty-one times" the performance achievable with conventional database systems, but I wanted to give you an early heads-up on the thinking that I see in the product. It may inspire other teams to shatter their own legacy assumptions.
Now being broadly released as version 2.2, ANTs is not an in-memory database, although it does use memory up to available limits. It can also overflow to disk, so it's not limited by the size of main memory as are some other high-performance data management solutions.
But during my conversation with engineers at ANTs, I got the feeling that I was hearing the same kind of out-of-the-box thinking that transformed microprocessor design during the '90s. The parallel seems pretty close: I think of AMD's K6 and follow-on CPUs, for example, as mainstreaming the idea of throwing out traditional assumptions about sequential operations and efficient use of computational subsystems. Those and other modern chips use some of that cheap Moore's-Law silicon to do fantastically complicated analysis and reordering of low-level operations, so that none of the available computing power is ever idle.
In the same way, the ANTs technology looks for ways to turn large groups of nonconflicting updates into batches of concurrent operations, greatly reducing network and processing overhead.
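To make the batching idea concrete, here is a minimal sketch of the general technique in Python; it is my own illustration, not ANTs' actual algorithm, and the greedy row-level conflict test is an assumption chosen for simplicity. Updates that touch disjoint rows land in the same batch and could be applied concurrently; updates that would collide on a row are deferred to a later batch.

```python
def batch_nonconflicting(updates):
    """Greedily group updates into batches whose members touch
    disjoint rows, so each batch can be applied concurrently.

    Each update is a (row_key, new_value) pair; for this sketch,
    two updates conflict only when they write the same row.
    """
    batches = []  # each batch is a dict: row_key -> new_value
    for row_key, value in updates:
        # Place the update in the first batch with no
        # conflicting write to the same row.
        for batch in batches:
            if row_key not in batch:
                batch[row_key] = value
                break
        else:
            batches.append({row_key: value})
    return batches

updates = [("r1", 10), ("r2", 20), ("r1", 11), ("r3", 30), ("r2", 21)]
batches = batch_nonconflicting(updates)
# The writes to three distinct rows form one batch; the repeat
# writes to r1 and r2 fall into a second batch.
```

A production engine would detect conflicts at finer granularity and order batches to preserve transactional semantics, but the payoff is the same: one round of concurrent work replaces many serialized operations.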
The results that ANTs engineers claim, some will feel, sound too good to be true: People often react with suspicion to claims that something can be sped up by a factor of 10, let alone by several multiples of 10. I'm reminded of Neal Stephenson's fictional environmental activist in the novel "Zodiac," Sangamon Taylor, who would only say that a polluter was releasing "more than two times" the allowable concentrations of toxic chemicals, because people would treat him as a crackpot if he told the truth and said that the release was 10,000 times the allowable limit.
But that's why I'll be talking soon with users of ANTs technology. The applications for a database management solution that runs orders of magnitude faster than what we have now, as we seek to meet the demands of ever more people hitting databases ever more often, are worth the effort of investigation.
Tell me what you most want from the databases behind your Web applications at firstname.lastname@example.org