Computer security has two fundamental problems. The first is that computers don't know very much. The second is that people forget the first.
Vulnerabilities like the one reported this week, involving possible pathways for abuse in Microsoft's ASP.Net, illustrate the result of this combination of flaws: They remind us of what happens when ignorant machines are controlled by naïve code.
Computer security systems try to overcome Problem 1—general cyber-cluelessness—with a massive structure of pretense. When a system demands a password, or insists on a decryption key before disclosing data, it's like a raw recruit on guard duty: It has no idea what it's protecting, but it knows that it was told to say, “Halt! Who goes there?” It's easy to forget that the system knows nothing about the actual value of the information that it's guarding.
This might seem obvious, but entire classes of computer security problems arise from this weakness of the system's worldview. Unable to tell the difference between the valuable stuff and the garbage, the system can only protect locations of data—or the pathways that lead to those places. If an attacker can find a new way to describe a location or a path, such that the system fails to recognize the scheme, the entire security model may collapse.
People avoid many stupid security errors by using common sense that is difficult to duplicate in the world of the machine. A small child can tell the difference between a Rolex and a plastic toy wristwatch that doesn't even tell time. If you asked that child to hand you “that watch over there,” the child would have some sense of whether the requested object was actually valuable or not. In general, the data in our machines doesn't have that kind of intrinsic metadata to warn when it's being misused—nor can we even assign an intrinsic value to many data objects.
If you asked a somewhat older child to hand you “that box over there that says ‘Rolex,’” the difference between an empty box and one heavy enough to contain a gold watch would be immediately obvious—and might trigger investigation before that box was released. That kind of common-sense behavior is even more difficult to describe, let alone encode, than simple static metadata: It depends on rather sophisticated rules about the value of different combinations of data to different parties at different times.
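To make the idea of intrinsic metadata concrete, here is a minimal Java sketch—every name in it is invented for illustration—of a guard that can consult a sensitivity label carried by the data itself, rather than only the data's location. The article's point is precisely that most real systems lack anything like this:

```java
public class LabeledDemo {
    // Illustrative sensitivity levels; real classification schemes are richer.
    enum Sensitivity { PUBLIC, CONFIDENTIAL, SECRET }

    // A data object that carries its own label, unlike most data in practice.
    record Labeled(String payload, Sensitivity label) {}

    // The guard compares the data's own label against the requester's clearance,
    // instead of merely checking where the data lives.
    static boolean release(Labeled item, Sensitivity clearance) {
        return item.label().ordinal() <= clearance.ordinal();
    }

    public static void main(String[] args) {
        Labeled rolex = new Labeled("gold watch", Sensitivity.SECRET);
        Labeled emptyBox = new Labeled("empty box", Sensitivity.PUBLIC);
        System.out.println(release(rolex, Sensitivity.CONFIDENTIAL));    // false: too valuable
        System.out.println(release(emptyBox, Sensitivity.CONFIDENTIAL)); // true
    }
}
```

Even this toy version only captures static labels; the "heavy box" intuition—value inferred from combinations of data and context—remains far harder to encode.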
People also have a sense of what's abnormal, and therefore suspicious. If you blindfold a child and say, “Please take five steps forward, move your hand six inches to the right, grasp what's there, and bring it to me,” the child is going to peek and see what he or she is delivering. It's trivial, by comparison, to obscure the description of a pathway through a directory tree or a memory address space, in such a way that logic intended to limit data access is fooled—and lets someone in through a back door.
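The classic instance of this blindness is the path-traversal attack. The Java sketch below (directory names are hypothetical) shows a naive guard that checks only the surface form of a requested path, and a safer one that canonicalizes the path first—forcing the obscured description to resolve to the real location before the access decision is made:

```java
import java.io.File;
import java.io.IOException;

public class PathCheck {
    // Hypothetical protected directory, for illustration only.
    static final String ROOT = "/srv/app/public";

    // Naive check: inspects the raw string, so "../" sequences slip through.
    static boolean naiveAllow(String requested) {
        return (ROOT + "/" + requested).startsWith(ROOT);
    }

    // Safer check: resolve the path to its canonical form first,
    // so an obscured description reveals its true destination.
    static boolean canonicalAllow(String requested) throws IOException {
        String resolved = new File(ROOT, requested).getCanonicalPath();
        return resolved.startsWith(ROOT + File.separator);
    }

    public static void main(String[] args) throws IOException {
        String attack = "../../etc/passwd";
        System.out.println(naiveAllow(attack));     // true: the guard is fooled
        System.out.println(canonicalAllow(attack)); // false: the real target is outside ROOT
    }
}
```

The naive version is exactly the blindfolded child who never peeks: it judges the instructions, not the destination.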
Developers have limited options in trying to overcome these problems. Legacy databases make no provision for content-based security: They rely on developers to lock the proper doors. Legacy applications lack the facilities needed to block their own misuse as tools for unauthorized data access.
The granular security controls of Java or of Microsoft's .Net offer plausible hope that things will get better, but only when development teams start putting security as high on their lists of priorities as application performance.