In light of some recent vulnerability announcements, research has shown OpenID to be an untrustworthy system. This is disappointing; I really liked OpenID. But it also confirms some suspicions I had about it from early on.
OpenID is a protocol for authentication. One party, the OP (OpenID Provider), runs the authentication service; anyone can set one up, but some famous companies such as AOL and Yahoo have done it. Another party, the RP (Relying Party), a site that needs to authenticate people, such as a blog, essentially calls over to the OP to do the authentication. Wouldn't it be great if you could have one set of OpenID credentials instead of dozens of usernames and passwords? And the OpenID authentication could be strong, with multiple factors including biometrics. Anyway, that's the appeal.
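To make the flow concrete, here is a minimal sketch of the redirect an RP might construct to send a user off to an OP for authentication (an OpenID 2.0 checkid_setup request). It's written in Python; the endpoint and identifiers are made-up examples, and a real RP would discover the OP endpoint from the user's claimed identifier rather than hard-coding it.

from urllib.parse import urlencode

# Hypothetical values; a real RP discovers the OP endpoint from the
# user's claimed identifier (via Yadis/XRDS or an HTML link element).
OP_ENDPOINT = "https://op.example.com/server"
CLAIMED_ID = "https://alice.example.com/"
RETURN_TO = "https://blog.example.org/openid/return"

def build_checkid_setup_url():
    """Build the URL the RP redirects the user's browser to,
    asking the OP to authenticate them (OpenID 2.0 checkid_setup)."""
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": CLAIMED_ID,
        "openid.identity": CLAIMED_ID,
        "openid.return_to": RETURN_TO,
        "openid.realm": "https://blog.example.org/",
    }
    return OP_ENDPOINT + "?" + urlencode(params)

if __name__ == "__main__":
    print(build_checkid_setup_url())

The OP authenticates the user however it sees fit and redirects the browser back to the return_to URL with a signed assertion that the RP then verifies. Those two redirects are exactly the communications the attack described below targets.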
Unfortunately, Ben Laurie of Google's Applied Security team and Dr. Richard Clayton of the Computer Laboratory at Cambridge University have shown that the security of this arrangement breaks down in practice. The vulnerabilities at issue are the Debian predictable random number generator bug and Dan Kaminsky's DNS cache poisoning vulnerability. The communications in OpenID usually occur over HTTPS, and they have found that some OpenID providers use SSL certificates with weak keys (their list: openid.sun.com, www.xopenid.net and openid.net.nz; openid.sun.com has a cert generated by Debian? Ha ha!).
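If you run an RP, or are just curious, it's easy to pull an OP's certificate and look at the key yourself. Here is a rough Python sketch (using the standard ssl module plus the third-party cryptography package); the hostname is a placeholder, and the actual comparison against the published Debian weak-key blacklists is left out.

import socket
import ssl

# Third-party: pip install cryptography
from cryptography import x509
from cryptography.hazmat.primitives import hashes

def fetch_cert_info(host, port=443):
    """Fetch a server's leaf certificate and report its key size and
    fingerprint, as a starting point for auditing an OP's certificate."""
    ctx = ssl.create_default_context()
    # We only want to inspect the certificate, not trust the connection,
    # so skip chain and hostname verification here.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    key_bits = cert.public_key().key_size
    fingerprint = cert.fingerprint(hashes.SHA1()).hex()
    return key_bits, fingerprint

if __name__ == "__main__":
    # Hypothetical target; substitute the OP you want to audit.
    bits, fp = fetch_cert_info("openid.example.com")
    print(f"key size: {bits} bits, SHA-1 fingerprint: {fp}")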
The weak certificates at such OPs mean that it's easy to recover their private keys, and therefore easy to set up a fake OP that looks like the real thing. Combine this with the DNS cache poisoning attack and it becomes very plausible to mount an attack, at least a targeted one, to capture OpenID credentials. Laurie and Clayton show how communications between the user and the OP, as well as between the OP and the RP, can both be completely compromised, and that mitigation is surprisingly difficult. For the most part, it requires Web site administrators to follow good updating practices, which is like telling people to exercise and eat right: everyone knows they should, but a lot of people don't do it anyway.
In addition to the vulnerabilities themselves, another major issue exacerbating the problem is that many SSL client systems don't check CRLs (Certificate Revocation Lists). Thus, if an OP with a weak certificate revokes it and generates a new one, RPs and users won't know, because they never check with the issuing authority.
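Checking a CRL isn't hard in principle. Here is a minimal sketch of what a client could do, assuming Python with the third-party requests and cryptography packages; the URL and serial number are placeholders, since a real client reads the CRL location out of the certificate's CRL Distribution Points extension and would also verify the CRL's signature and freshness.

# Third-party: pip install cryptography requests
import requests
from cryptography import x509

def serial_is_revoked(crl_url, serial_number):
    """Download a CRL and report whether a certificate serial appears on it.
    A real client would also verify the CRL's signature and its validity dates."""
    der = requests.get(crl_url, timeout=10).content
    crl = x509.load_der_x509_crl(der)
    return crl.get_revoked_certificate_by_serial_number(serial_number) is not None

if __name__ == "__main__":
    # Placeholder URL and serial; a real client takes both from the
    # certificate it is trying to validate.
    print(serial_is_revoked("http://crl.example.com/ca.crl", 0x1234ABCD))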
Certificate revocation is a pet peeve of mine. It's one of those aspects of the PKI that's critical but not taken very seriously. Many browsers don't support the best standards for it, or don't even perform revocation checking by default. Both IE7 and Firefox 3 do a good job, though, and a much better one than their predecessors. If you're running Firefox 2 you effectively have no revocation checking, even though the browser appears to support it.
Another issue is that CRLs are a crummy way to do revocation checking. They consist of a large, relatively static list of serial numbers of revoked certificates. If every visit to an HTTPS site involves downloading and scanning that list, the browser and the site are going to seem slow. So browsers aggressively cache the results, slowing down the dissemination of changes to the lists. A newer standard for revocation checking, OCSP (Online Certificate Status Protocol), turns this into a request-response exchange about a single certificate, typically lowering communication costs and saving the client some parsing work. Both Internet Explorer 7 on Windows Vista and Firefox 3 support both OCSP and CRLs and default to using OCSP. Opera 8.5 and later supports only OCSP and uses it by default.
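To show how small that exchange is, here is a rough Python sketch of an OCSP query using the cryptography package's ocsp module along with requests. The responder URL is a placeholder (clients normally get it from the certificate's Authority Information Access extension), and cert and issuer are assumed to be certificate objects loaded elsewhere.

# Third-party: pip install cryptography requests
import requests
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

def check_ocsp(cert, issuer, responder_url):
    """Ask an OCSP responder about a single certificate instead of
    downloading the CA's whole revocation list. cert and issuer are
    x509.Certificate objects loaded elsewhere."""
    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
    request = builder.build()
    resp = requests.post(
        responder_url,  # placeholder; taken from the cert's AIA extension
        data=request.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
        timeout=10,
    )
    result = ocsp.load_der_ocsp_response(resp.content)
    # A real client also verifies the responder's signature before trusting this.
    return result.certificate_status  # GOOD, REVOKED or UNKNOWN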
Certificate revocation also plays a big part in code signing. Steve Jobs recently revealed that the iPhone checks back with Apple periodically to see if any apps on the phone have been deemed evil. The actual mechanism for this must be a CRL check (I checked recently and Apple’s certificate authority docs say nothing of OCSP).
The bottom line, as Laurie and Clayton say, is that revocation checking must become standard operating procedure before an OpenID system can be reliable.
In a long-ago column on OpenID, I speculated that RPs might not want to rely on just any OP. Perhaps reputation systems might develop, or accreditation systems might audit OPs to ensure they follow good practices. There are some business model problems here, but maybe you see where I'm going. Even if OpenID is salvageable, its vulnerability to these severe problems means that you can't trust just anyone anymore.
The authors have taken some heat from the community on this Full Disclosure thread for "singling out" OpenID when many other SSL-based services are probably just as vulnerable. They singled it out because they identified specific vulnerabilities, not just theories. Unfortunately, more such identifications are probably forthcoming.