Trust Is Back In Style

Opinion: Trust models and signed code are indispensable in real-world computing, but can they ever really be made usable and affordable enough?


I've had an interest in digital signatures for code signing for a while, and it seems to me there's been a lot of action in the field lately.

Code signatures don't get a lot of respect from people who consider them an inconvenience, whether for legitimate or illegitimate purposes. Opponents often disparage them as imperfect and therefore not worth bothering with. You can guess what I think of this argument.

And of course the trusted authority takes on some heavy responsibilities with the power to revoke a signature, putting a monkey wrench in a developer's business, as happened when VeriSign and Microsoft conspired to revoke Linchpin Labs' certificate. Microsoft has long been a proponent of code signing and has used it to some benefit, but Windows developers don't use it as often as they might, for a bunch of reasons.

Even so, code signing is a growing business, and that's because it's essential. It's true that it's imperfect, wildly so, but it does things that are necessary and that can't be done any other way. Microsoft is hardly alone in allowing certain privileges, such as kernel-level code status, only to signed code.



Apple took a similar approach when they added support for code signing in Leopard. Some signed applications are allowed to communicate automatically past the firewall, through the back door as it were, even when the firewall is configured to block all incoming connections. Unsigned applications get blocked.

Apple also granted a little deference to signed code in its recent update of QuickTime for Java to version 7.3, which removed support for QuickTime access for untrusted, i.e. unsigned, applets.

And in the world of smart wireless phones, code signing is the norm, and a far bigger mess than on PCs and Macs. Several parties are involved in putting apps on the phones, and often it's unclear who controls the trusted roots used by the phones to test applications. Consequently, it's hard for developers to know what signatures they should use for an application.

People I know who know their way around the business tell me that the Java Verified people have their act together better than most; if you pass a verification test with an authorized independent lab, your code is signed by GeoTrust and returned to you from the Java Verified office itself.

But even with Java, it doesn't always end there; sometimes a mobile operator will decide it wants only its own signatures and will strip out the Java Verified signature (easy to do, since it's in a separate file). Operators usually run the show with respect to code signatures on mobile phones. They want to control the entire experience, including what apps you run, and they do this with code signatures.
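To illustrate why stripping a signature is so easy: a JAR's signature lives in detached files under META-INF/ (a .SF signature file plus a .RSA or .DSA signature block), separate from the code they cover, so deleting those entries removes the signature without touching a single class file. A minimal sketch, assuming invented file names and entries for the demo:

```python
import zipfile

def strip_jar_signature(src_path, dst_path):
    """Copy a JAR, omitting the detached signature files under META-INF/.
    Classes and the manifest are left intact; only the .SF/.RSA/.DSA
    entries that carry the signature are dropped."""
    sig_suffixes = ('.SF', '.RSA', '.DSA')
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, 'w') as dst:
        for item in src.infolist():
            name = item.filename
            if name.startswith('META-INF/') and name.upper().endswith(sig_suffixes):
                continue  # skip the detached signature entries
            dst.writestr(item, src.read(name))

# Build a toy "signed" JAR to demonstrate on (contents are made up):
with zipfile.ZipFile('demo-signed.jar', 'w') as jar:
    jar.writestr('META-INF/MANIFEST.MF', 'Manifest-Version: 1.0\n')
    jar.writestr('META-INF/SIGNER.SF', '(signature file)')
    jar.writestr('META-INF/SIGNER.RSA', '(signature block)')
    jar.writestr('com/example/App.class', b'\xca\xfe\xba\xbe')

strip_jar_signature('demo-signed.jar', 'demo-stripped.jar')
with zipfile.ZipFile('demo-stripped.jar') as jar:
    print(sorted(jar.namelist()))  # manifest and class survive; .SF/.RSA gone
```

An operator doing this would then re-sign the result with its own certificate, so the app chains to the operator's trusted root instead of GeoTrust's.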

Looking at it from their point of view, they're using code signatures as more than an authentication mechanism to prove which entity signed the program. Because they generally control the trusted roots, mobile phone operators use code signing to control what code can run on the phones and to accredit that code, in effect to endorse it as safe. This is far beyond what Microsoft tries to do with drivers on 64-bit Vista.
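The policy described above can be modeled simply: the handset carries a small list of trusted root certificates, and an app runs (or gets extra privileges) only if its signing chain terminates at one of those roots. Here is a toy sketch of that check; the root and app names are invented, and real verification of course means cryptographic validation of each certificate in the chain, not string comparison:

```python
# Toy model: whoever controls the trusted-root list controls what runs.
TRUSTED_ROOTS = {'OperatorRootCA'}  # hypothetical operator-controlled list

def may_run(app):
    """Allow an app only if its certificate chain ends at a trusted root."""
    return bool(app['chain']) and app['chain'][-1] in TRUSTED_ROOTS

operator_app = {'name': 'OperatorMail', 'chain': ['DevCert', 'OperatorRootCA']}
third_party  = {'name': 'IndieGame',    'chain': ['DevCert', 'GeoTrustRoot']}
unsigned     = {'name': 'HomebrewApp',  'chain': []}

for app in (operator_app, third_party, unsigned):
    print(app['name'], may_run(app))
# Only the app chaining to the operator's root is allowed; the app signed
# under a different root, and the unsigned app, are both rejected.
```

The point of the sketch is that the developer's signature is only half the story; the gatekeeping power sits with whoever decides which roots go in `TRUSTED_ROOTS`.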

Is code safety on mobile phones a real issue? It's been a "next year" problem for many years, but not much of a real-world one. Signed code makes malware a very hard proposition on most platforms; attacks would probably have to come through exploited vulnerabilities instead.

Google didn't specifically mention a signing system in its announcement of Android, the open mobile platform, but if it doesn't include one, along with some serious thought about how trusted roots and permissions are managed, it's putting a big "ABUSE ME" sign on the platform.

The lack of a signing process is one of the big mistakes Apple made when it released the iPhone. Hackers have had no problem "jailbreaking" the system, after which conventional package managers can be used to install whatever you want to run. You can bet your kid's new iPod Touch that when Apple releases a real iPhone SDK in February, it will include a signing mechanism.

What else can they do? Even if the provider doesn't make all the decisions, users at least need to be able to decide what runs on their systems, and they can't know that with any confidence without knowing who wrote the program. That's why you'll see more code signatures in the future, even if they make life harder for everyone.

Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.


Check out eWEEK.com for the latest security news, reviews and analysis. And for insights on security coverage around the Web, take a look at Security Center Editor Larry Seltzer's blog, Cheap Hack.
