As it turns out, there is ample evidence that these offshore coders are not more capable than U.S. coders, and it seems to me that this should affect the calculations behind many offshore outsourcing decisions.
I fanned the flames of misperception in my weekly e-mail newsletter column on April 19. (You can subscribe for free, by the way, at www.eweek.com/newsletter_manage.) That column discussed the report, released at the beginning of April, of a national task force on "Security across the software development life cycle."
In commenting on the challenge of keeping developers' training up-to-date, I said the following: "The report points out that for the last 30 years, India has put forth a concerted effort to provide high-quality university education in software design to their young people. As a result, India produces programmers that make fewer errors per line of software code than programmers trained in the United States."
Many letters in response were (mostly) polite variations on "Oh, yeah?"
One colleague of almost 20 years' standing called this "the least factual statement I've ever seen in one of your columns." He cited extensive experience in reviewing employment candidates and overseeing contract efforts involving software developers trained in India. In one failed project that had been staffed from that country, he found "uniformly atrocious" code with "almost a complete lack of error processing." He said he also found other cardinal errors, such as absence of referential integrity constraints. "There were pieces of code," he reported, "that were so bizarre that I can't even explain how anybody could have written them."
My friend was quick to add that some developers in India "are indeed among the best in the world," but he felt obligated to challenge any blanket endorsement of developer quality in India versus the United States.
That was not the only personal experience readers shared in contradicting the task force assertion. "The time required to make relatively simple code changes to existing software is far in excess of what would be considered normal," said another IT pro with connections to projects in India. "Additionally, when the fix is finally completed, there [is] a significant amount of errors that must be repaired in the fixed code."
Even if developers make fewer initial errors, poor response to changes in requirements means a longer time to deliver the code that's needed—perhaps longer than the customer can afford to wait. If so, measurements of the error rate in coding to obsolete specifications are, at best, of academic interest. And, as we've heard already, if only anecdotally, that quality difference is itself suspect.
What's needed to render a verdict on the task force claim is something more than anecdotes—and the evidence is right at hand.
When I Googled "software defect rate india programmers," the second-highest-ranked hit that came back was a report, published only last June, from researchers at the Massachusetts Institute of Technology, Harvard, the University of Pittsburgh and Hewlett-Packard.
Examining more than 100 projects coded by developers in India, Japan, the United States, and "Europe and other," the report tabulated an overall defect rate (errors found in the first year of use per thousand lines of source code) of 3 percent worldwide—that is, three defects per 100,000 lines. Regionally—may I have the envelope, please—India exceeded that average with 3.3 percent, compared with 3 percent for the United States. "Europe and other" did less well, at 5 percent, while Japan did impressively well, at only 0.5 percent.
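The unit conversion behind those figures is easy to trip over, so here is a minimal sketch of the arithmetic as the paragraph above states it: a rate quoted as a percentage of defects per thousand lines of code (KLOC) works out to the same number of defects per 100,000 lines. The function name and the region labels are illustrative only; the percentages are the ones reported in the study as originally published (see the correction note below).

```python
def defects_per_100k(percent_per_kloc: float) -> float:
    """Convert a defect rate quoted as a percentage of defects per
    thousand lines of code (KLOC) into defects per 100,000 lines."""
    per_kloc = percent_per_kloc / 100.0  # e.g. 3 percent -> 0.03 defects per KLOC
    return per_kloc * 100.0              # 100 KLOC in 100,000 lines

# Figures as reported in the initial version of the study:
rates = {"India": 3.3, "United States": 3.0, "Europe and other": 5.0, "Japan": 0.5}
for region, pct in rates.items():
    print(f"{region}: {defects_per_100k(pct):.1f} defects per 100,000 lines")
```

Note that the two steps cancel: X percent per KLOC is exactly X defects per 100 KLOC, which is why "3 percent" in the study reads as three defects per 100,000 lines.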
There's no space here to get into the analysis of differences in processes, in project types or in other noncultural factors that might also be affecting these results. But there's clearly room for doubt when anyone makes blanket statements about how they do it better offshore.
Technology Editor Peter Coffee can be reached at firstname.lastname@example.org.
Note: This column has been widely quoted, so I feel obligated to note here that the statistics I cite were affected by a data reduction error. That error has since been corrected by the research team at the MIT Center for eBusiness. Actual defect rates, it turns out, were generally higher than stated here, and the U.S. rate was notably higher than the rate for other regions, but there are reasons why that comparison may be misleading: I'll discuss these in my column of May 17.
I regret any confusion resulting from my reliance on the initial report, which was not labeled as a draft or working paper, but which was revised before publication in the journal IEEE Software for November/December 2003. The continued presence of the earlier version on the MIT Center's Web site, I'm told by one of the authors, was unintended.