Enterprise IT managers and CIOs, growing impatient with security vulnerabilities, are fighting back with language in contracts that holds software companies liable for breaches and attacks that exploit their products.
This trend illustrates a shift of responsibility for attacks and virus outbreaks from users and IT staffs to vendors, which many customers feel have been lax in security development.
For example, a Fortune 50 company recently wrote a clause into a contract with a major software company that holds the vendor responsible for any security breach connected to its software, according to sources familiar with the deal.
The language in the clause is strict, with references to many security problems—including viruses, worms, back doors and Trojan horses—but the vendor had little choice but to sign the contract, given the size and economic clout of the customer, sources said.
But while the vendor is held liable, monetary damages are not spelled out. Experts speculate the penalties would include costs related to cleanup, hardware and software replacement, and lost business.
Security insiders said the new language marks the beginning of a trend for enterprise buyers, one that benefits all users.
“That language is going to become more and more prevalent,” said Chris Darby, CEO of @Stake Inc., a security consultancy and research organization in Cambridge, Mass. Darby said such contracts will result in more rigorous development and better products.
Users say pressuring the vendors into more diligent development is a good start, though it likely won't cure all security problems.
“Contractual liability is a great motivator. I'm encouraged that liability for vulnerabilities is entering into contracts,” said Karl Keller, president of IS Power Inc., in Thousand Oaks, Calif. “Secure programming is a mind-set. It may start with a week or two of training, but it will require constant reinforcement. And managers must learn that programmers need to take the time to architect, design and test their code. You can be sure that when relatively simple buffer overflows are conquered, fewer but more sophisticated vulnerabilities will be found.”
In response to the push for liability, many software vendors, most notably Microsoft, have begun training their developers in so-called secure coding practices. Seeking to eradicate vulnerabilities such as buffer overruns, Microsoft has put each of its coders through an intensive training program as part of its Trustworthy Computing initiative.
The developers trained for a week on how to write secure code, then spent three weeks putting the lessons into practice. Much of the training focused on subjects such as threat analysis and modeling, and Microsoft officials said they have begun to see a difference in the code turned out by developers.
“Absolutely, [we've seen a difference]. It's what we expected to happen,” said Steve Lipner, director of security assurance at Microsoft, in Redmond, Wash. “The … result is that we've changed for good the way that code is written and tested.”
Although many security experts and industry insiders have questioned the extent of Microsoft's commitment to its Trustworthy Computing initiative, @Stake's Darby believes that the company is sincere in its efforts and will ultimately succeed in changing the way things are done. “I believe Microsoft means what they say, and I believe they're going to lead,” Darby said.
Some of Microsoft's larger customers have asked the company for details of the training so that they can apply it to their in-house developers.
Darby said his company is seeing a lot of demand for its secure coding training services, which he said is indicative of a shift in thinking in the technology industry.
“I think we're seeing an end to the silver bullet mentality of security,” Darby said. “People are taking a much more holistic view. The return is so much more compelling if you add security at the design phase.”
However, some in the security field said that while secure coding is a good idea, a weeklong training course won't solve all of a vendor's security problems.
“You can teach people to use secure function calls, but you won't stop them from making mistakes,” said Scott Blake, vice president of information security at BindView Corp., in Houston, and head of the company's Razor research team. “You can eliminate some of the low-hanging fruit like buffer overflows, but people will just make different mistakes. Maybe they'll take longer to find, but it doesn't make the problem go away.”
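Blake's point about secure function calls can be illustrated with a minimal C sketch. The function names and buffer sizes below are hypothetical examples, not drawn from any vendor's code; they simply show how replacing an unbounded copy with a bounded one removes a classic overflow while leaving other kinds of mistakes untouched.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe pattern: strcpy() copies until it hits a NUL byte, ignoring
   the size of the destination, so a long input overruns the buffer. */
void copy_unsafe(const char *input) {
    char buf[16];
    strcpy(buf, input);          /* potential buffer overflow */
    printf("%s\n", buf);
}

/* Safer pattern: snprintf() writes at most sizeof(buf) bytes and always
   NUL-terminates, eliminating this particular class of overflow. */
void copy_bounded(const char *input) {
    char buf[16];
    snprintf(buf, sizeof(buf), "%s", input);
    printf("%s\n", buf);
}

int main(void) {
    const char *long_input = "this string is far longer than sixteen bytes";
    /* copy_unsafe(long_input);     would corrupt the stack */
    copy_bounded(long_input);    /* prints a truncated, but safe, result */
    return 0;
}
```

As Blake notes, a developer trained to make the second choice can still get the truncation logic, the length calculation, or the surrounding error handling wrong; the bounded call only removes the low-hanging fruit.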
Related stories:
- Security Proposal Nearly Ready for Inspection
- Taking on IT Security
- Proposal Calls for Quick Response to Flaw Discoveries
- Commentary: Security: Time to Take Names, Lay Blame
- Security Quandary: Who's Liable?
- Software Liability Gaining Attention