I have kids playing fall baseball and softball. I volunteered to keep score, mostly because I read Moneyball over the summer and became fascinated with all the statistics that baseball keeps. Wouldn't it be great if we had some sort of statistical history to tell us which piece of software is better than another?
Of course, even in baseball, people will argue over who was better or which statistics should be weighed more heavily than others. We in IT have market share, which is one way to keep score. It's the closest thing we get to wins and losses in the software business. But does it tell us which software was the best, or is it simply an indication of which team (vendor) was the luckiest or had momentum at the right time?
The closest thing we have to a playoff is, of course, the proof-of-concept. Some IT organizations try to do these on their own. Usually it's a paper comparison based on criteria laid out in an RFP.
Of course, the problem with most RFPs is that no one in the IT organization has the time to write one. In my experience, a vendor usually writes the bulk of the RFP. Now sometimes this happens subtly. A person, given the responsibility for creating the obligatory RFP, finds a document on a vendor's Web site. It probably has a title like "Relational Database Infrastructure: A Holistic Approach" or "A Buyer's Guide to Relational Databases."
Well, you can't expect the vendor to call it "How to Choose Our Database Without Appearing Biased." I have even been on the vendor side when the customer asked us to write the RFP for them.
Of course, the problem with the RFP process is that the vendor, not the IT organization, answers the questions. It's like letting a presidential candidate write the questions he will answer in a debate. I don't want to disparage the entire RFP process, but it is often a giant waste of everyone's time.
The benchmark is another way to pit one software option against another. Of course, the problem with an in-house benchmark is that it can be time-consuming and costly to run on a scale that comes close to approximating the actual environment the software will be asked to handle.
In some cases, there are standard industry benchmarks that users can look at. In the server and database world, we have the Transaction Processing Performance Council (TPC), in which a number of hardware and software vendors participate.
The TPC develops benchmarks that test the theoretical throughput of an entire system, from client devices to the back-end database and storage. Vendors spend large sums of money running these benchmarks, and the results are independently audited. For the most part, I believe benchmarks can be a useful data point for buyers, as long as they intend to buy the exact same configuration shown in the benchmark. Of course, that is highly unlikely.
Any benchmark, if it is well understood, can become subject to artificial performance optimizations. One of the great services that the TPC offers is the detailed disclosures. Here you get to see where the vendors spent their money. Sometimes it's on massive amounts of memory; other times it's on massive amounts of disk. You really need to dig underneath the covers to gain some true insight.
Vendors are allowed to use discounted pricing as long as it is within the range of discounts offered to customers for similar-size deals. Because of this, buyers can use the information contained in the TPC full disclosure or executive summary to get a sense of where contract price negotiations should begin.
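To make that concrete, here is a minimal sketch of the back-of-the-envelope arithmetic a buyer might do with the totals an executive summary reports. The numbers are invented for illustration and do not come from any actual disclosure.

```python
# Back-of-the-envelope math on figures of the kind a TPC executive summary
# discloses. All values here are hypothetical, invented for illustration.

list_price_total = 2_400_000.00       # sum of list prices for the benchmarked configuration (USD)
disclosed_price_total = 1_800_000.00  # total system price reported in the disclosure (USD)
throughput_tpm = 150_000              # reported throughput, e.g. transactions per minute

# Price/performance: dollars spent per unit of throughput delivered.
price_performance = disclosed_price_total / throughput_tpm

# Implied discount the vendor applied to get from list price to the disclosed price.
implied_discount = 1 - (disclosed_price_total / list_price_total)

print(f"Price/performance: ${price_performance:.2f} per transaction per minute")
print(f"Implied discount off list: {implied_discount:.0%}")
```

If the disclosure implies a discount of, say, 25 percent on a deal of comparable size, then a quote at full list price for a similar configuration is a reasonable place to start pushing back.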
Another data point, perhaps the most useful, is to get feedback from other companies about their experience with a vendor and/or a specific product. A great place to start might be to participate in regional user group meetings or attend a user group's annual conference. There you can network with other companies and get a clearer picture of what your organization might be in for, good or bad.
If you can't do that, the analyst community can be helpful. Analysts speak with many companies, so they get a sense of the prevailing opinion about a vendor or product.
Now that model is evolving. For example, new companies are taking the traditional industry analyst model and changing the dynamic. Evalubase Research is an example of a community approach to product reviews. They invite users to join and submit reviews of software products. Evalubase captures that information and graphs the results so members can see what the consensus opinion is on any product.
What I like about the model is that it is free to view the results as long as you submit an evaluation yourself. The community model that Evalubase uses has great potential to provide another interesting data point for your organization's buying process.
Another interesting example is the startup Diogenes Analytic Laboratories, which performs hands-on testing and publishes the results. Their model reminds me of Consumer Reports, but for backup and storage hardware and software. So unlike the other benchmark examples, this is a vendor-free zone. I, for one, am anxious for them to expand the scope of offerings they can benchmark.
So the good news is that we have more resources than ever from which to cull information about potential software purchases. Of course, all of that information has to be condensed and placed in the context of your organization's specific situation.
Ultimately only you know all of the relevant data points—technical, political, and economic—that will enable your organization to make the best choice. If you are a non-IT manager, just be aware that there often is no clear right answer, but there are enough resources at your disposal to at least challenge conventional thinking before a huge commitment is made.
Charles Garry is an independent industry analyst based in Simsbury, Conn. He is a former vice president with META Group's Technology Research Services. He can be reached at cegarry@yahoo.com.