Big Questions About OpenID After Recent Vulnerabilities

 
 
By Larry Seltzer  |  Posted 2008-08-13
A variety of security issues have conspired to make OpenID an unreliable authentication method. It will remain that way unless some things change soon.

In light of some recent vulnerability announcements, research has shown OpenID to be an untrustworthy system. This is disappointing; I really liked OpenID. But it also conforms to some suspicions I had about it from early on.

OpenID is a protocol for authentication. One party, the OP (OpenID Provider), which can be anyone but includes famous companies such as AOL and Yahoo, sets up a service to do authentication. Another party, the RP (Relying Party), a site that needs to authenticate people, such as a blog, essentially calls over to the OP to do the authentication. Wouldn't it be great if you could have one set of OpenID credentials instead of dozens of usernames and passwords? And the OpenID authentication could be strong, with multiple factors including biometrics. Anyway, that's the appeal.
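To make the moving parts concrete, here is a minimal sketch in Python of the redirect an RP constructs to send the user off to the OP. The openid.* parameter names come from the OpenID 2.0 spec; all of the URLs are made-up examples, not real endpoints:

```python
from urllib.parse import urlencode

def build_checkid_url(op_endpoint, claimed_id, return_to):
    # Parameter names follow the OpenID 2.0 spec; the URLs passed in
    # below are hypothetical, for illustration only.
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,
    }
    return op_endpoint + "?" + urlencode(params)

redirect = build_checkid_url(
    "https://op.example.com/auth",            # hypothetical OP endpoint
    "https://alice.example.org/",             # the user's claimed identifier
    "https://blog.example.net/finish-login",  # the RP's callback URL
)
print(redirect)
```

The key point for what follows: the user's browser is sent to whatever the RP believes the OP's endpoint to be, over HTTPS. Everything below is about ways that belief can be subverted.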

Unfortunately, Ben Laurie of Google's Applied Security team and Dr. Richard Clayton of the Computer Laboratory at Cambridge University have shown that the security of this arrangement breaks down in practice. The vulnerabilities at issue are the Debian predictable random number generator bug and Dan Kaminsky's DNS cache poisoning vulnerability. The communications in OpenID usually occur over HTTPS, and the researchers found that some OpenID providers use SSL certificates with weak keys (their list: openid.sun.com, www.xopenid.net and openid.net.nz. openid.sun.com has a cert generated by Debian? Ha ha!).

The weak certificates at such OPs mean that it's easy to generate their private keys and therefore easy to set up a fake OP that looks like the real thing. Combine this with the DNS cache poisoning attack and it becomes very plausible to mount an attack, at least a targeted one, to capture OpenID credentials. Laurie and Clayton show how communications between the user and the OP, as well as between the OP and the RP, are both completely compromised, and that mitigation is surprisingly difficult. For the most part, it requires Web site administrators to follow good updating practices, which is like telling people to exercise and eat right: Everyone knows they should, but a lot of people don't do it anyway.
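For a sense of scale, a quick back-of-the-envelope calculation shows why "easy to generate their private keys" is no exaggeration. The Debian bug (CVE-2008-0166) left the OpenSSL PRNG effectively seeded only by the process ID, so each architecture/key-size combination produces only a few tens of thousands of possible keys. The search rate below is an assumed figure for illustration, not a measurement:

```python
# The broken PRNG was effectively seeded only by the process ID, so for
# a given architecture and key size there are at most PID_MAX distinct
# keys, all of which can be generated ahead of time.
PID_MAX = 32768          # default Linux pid space at the time
KEYS_PER_SECOND = 100    # assumed rate for trying precomputed keys

worst_case_minutes = PID_MAX / KEYS_PER_SECOND / 60
print(f"{PID_MAX} candidate keys; "
      f"worst-case search ~{worst_case_minutes:.1f} minutes")
```

Compare that with the lifetime-of-the-universe search a properly generated key is supposed to require.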

In addition to the vulnerabilities themselves, another major issue exacerbating the problem is that many SSL client systems don't check CRLs (Certificate Revocation Lists). Thus, if an OP with a weak certificate revokes it and generates a new one, RPs and users won't know, because they never check with the issuing authority.
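Stripped of the PKI machinery, a CRL check is simple: once the client has fetched and parsed the issuer's list, it looks the certificate's serial number up in it. A toy sketch, with made-up serial numbers:

```python
# Hedged sketch of the lookup at the heart of a CRL check; the serial
# numbers here are invented. In reality the list is fetched from the
# issuer's CRL distribution point and its signature verified first.
revoked_serials = {0x1A2B, 0x3C4D, 0x5E6F}  # parsed from the issuer's CRL

def is_revoked(serial: int) -> bool:
    return serial in revoked_serials

# A client that skips this lookup keeps accepting a certificate the OP
# has already revoked and replaced.
print(is_revoked(0x3C4D))  # the revoked cert
print(is_revoked(0x7788))  # a cert the issuer has not revoked
```

The lookup itself is trivial; the problem, as described below, is everything around it: fetching the list, keeping it fresh and bothering to do the check at all.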

Certificate revocation is a pet peeve of mine. It's one of those aspects of PKI that's critical but not taken very seriously. Many browsers don't support the best standards for it, or don't even perform revocation checking by default. Both IE7 and Firefox 3 do a good job, though, and a much better one than their predecessors. If you're running Firefox 2 you effectively have no revocation checking, even though the browser appears to support it.

Another issue is that CRLs are a crummy way to do revocation checking. They consist of a large, relatively static list of serial numbers of revoked certificates. If every check of an HTTPS site involves a check of that list, then the browser and site are going to seem slow. So browsers aggressively cache the results, slowing down the dissemination of changes to the lists. A newer standard for revocation checking, OCSP (Online Certificate Status Protocol), turns this into a question-response process, typically lowering communication costs and saving the client some parsing work. Both Internet Explorer 7 on Windows Vista and Firefox 3 support both OCSP and CRL and default to using OCSP. Opera 8.5 and later support only OCSP and use it by default.
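The difference between the two models can be caricatured in a few lines of Python; every number here is invented for illustration:

```python
# CRL model: the client downloads the issuer's whole revocation list
# (then caches it) and scans it locally for each certificate.
crl = set(range(0, 200_000, 7))   # stand-in for a large revoked-serial list

def crl_check(serial):
    # cost already paid: fetching and parsing the entire list
    return serial in crl

# OCSP model: the client sends one question about one serial number and
# gets back one signed yes/no answer from the CA's responder.
def ocsp_check(serial, responder):
    return responder(serial)

fake_responder = lambda serial: serial % 7 == 0  # stand-in for the responder

print(crl_check(14), ocsp_check(14, fake_responder))
```

Same answer either way; the difference is that OCSP's per-query cost doesn't grow with the size of the revocation list, so there's far less temptation to cache aggressively and let stale answers linger.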

Certificate revocation also plays a big part in code signing. Steve Jobs recently revealed that the iPhone checks back with Apple periodically to see if any apps on the phone have been deemed evil. The actual mechanism for this must be a CRL check (I checked recently and Apple's certificate authority docs say nothing of OCSP).

The bottom line of this, as Laurie and Clayton say, is that revocation checking must be standard operating procedure before an OpenID system can be reliable.

In a long-ago column on OpenID, I speculated that RPs might not want to rely on just any OP. Perhaps reputation systems might develop, or accreditation systems might audit OPs to assure they follow good practices. There are some business-model problems here, but maybe you see where I'm going. Even if OpenID is salvageable, its vulnerability to these severe problems means that you can't trust just any OP anymore.

The authors took some heat from the community in a Full Disclosure thread for "singling out" OpenID, when many other SSL-based services are probably just as vulnerable. They singled it out because they identified specific vulnerabilities, not just theories. Unfortunately, more such identifications are probably forthcoming.

Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.

For insights on security coverage around the Web, take a look at eWEEK.com Security Center Editor Larry Seltzer's blog Cheap Hack.

Larry Seltzer has been writing software for, and English about, computers ever since (much to his own amazement) he graduated from the University of Pennsylvania in 1983.

He was one of the authors of NPL and NPL-R, fourth-generation languages for microcomputers by the now-defunct DeskTop Software Corporation. (Larry is sad to find absolutely no hits on any of these products on Google.) His work at DeskTop Software included programming the UCSD p-System, a virtual machine-based operating system with portable binaries that pre-dated Java by more than 10 years.

For several years, he wrote corporate software for Mathematica Policy Research (they're still in business!) and Chase Econometrics (not so lucky) before being forcibly thrown into the consulting market. He bummed around the Philadelphia consulting and contract-programming scenes for a year or two before taking a job at NSTL (National Software Testing Labs) developing product tests and managing contract testing for the computer industry, governments and publications.

In 1991 Larry moved to Massachusetts to become Technical Director of PC Week Labs (now eWeek Labs). He moved within Ziff Davis to New York in 1994 to run testing at Windows Sources. In 1995, he became Technical Director for Internet product testing at PC Magazine and stayed there till 1998.

Since then, he has been writing for numerous other publications, including Fortune Small Business, Windows 2000 Magazine (now Windows and .NET Magazine), ZDNet and Sam Whitmore's Media Survey.