We can probably agree that software components are more useful when they can discover each other's capabilities, and can negotiate with each other to do things that their authors didn't have to anticipate. Without this capability, it's hard to see how grid computing systems can become more than the sum of their parts, or how next-generation “software agents” can do anything that existing scripting technologies can't do today.
But when components or network-based applications disclose too much about themselves, they raise concerns among developers and IT administrators about protecting intellectual property and maintaining system security. Desirable innovations such as Java's bytecode modules or Microsoft's .Net assemblies bring benefits in building more resilient systems, but also require added attention to protecting code from almost trivial methods of exposure.
At some point, moreover, any piece of software must reveal itself to hardware if anything useful is to happen: At that point, a sufficiently determined and skilled investigator can learn a great deal about how that software works.
But sometimes what looks like an insoluble problem is really just the shadow that's cast by a faulty assumption. For example, we take it for granted that if someone finds a vulnerability in one instance of a widely used piece of software, the corresponding attack can be used against other instances: A buffer overflow, for example, can be assumed to overwrite the same set of binary instructions on a target machine that it successfully corrupted on a test system running the same code.
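To make that assumption concrete, here is a minimal C sketch of the kind of flaw that travels well across identical binaries; the function and its name are hypothetical, chosen only for illustration.

```c
/* Classic stack overflow: in every identical build of this binary, the
 * saved return address sits at the same offset past buf, so one crafted
 * input corrupts every deployed instance in exactly the same way.
 * (A toy; stack canaries and ASLR complicate, but don't eliminate,
 * attacks of this kind.) */
#include <string.h>

void parse_request(const char *input) {
    char buf[64];
    strcpy(buf, input); /* no bounds check: any input of 64 bytes or
                           more writes past the end of buf */
}
```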
What if any given instance of a program might be one of almost uncountable variations, all produced by mathematically equivalent transformations of an instruction stream, no two of them alike? For that matter, what if I regularly regenerated the binary executables on any given system, each time with a randomly chosen transform, so that an attacker's knowledge of flaws at even the level of machine code was useful only until the next time I refreshed it?
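As a rough illustration of what such a transform could look like (my own sketch, not Cloakware's method), a build script could pick one of several algebraically equivalent encodings of the same computation from a random seed, so that each regenerated binary differs at the machine-code level while behaving identically:

```c
/* Semantics-preserving diversification, in miniature: the VARIANT
 * macro stands in for a randomly chosen transform -- imagine a build
 * script compiling with -DVARIANT=$((RANDOM % 3)) before each
 * deployment. All three encodings compute the same sum. */
#include <stdio.h>

#ifndef VARIANT
#define VARIANT 0
#endif

unsigned add_u(unsigned a, unsigned b) {
#if VARIANT == 0
    return a + b;                   /* plain form */
#elif VARIANT == 1
    return (a ^ b) + 2u * (a & b);  /* carry-save identity */
#else
    return a - ~b - 1u;             /* two's-complement identity */
#endif
}

int main(void) {
    printf("%u\n", add_u(40, 2));   /* prints 42 under every variant */
    return 0;
}
```

An attacker who maps the instruction bytes of one variant learns little about the next one off the line.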
I spoke last week with Alec Main, CTO at Cloakware Corp., about his company's efforts (some of which use technology licensed from Intel) to apply these ideas to critical applications. The obvious question, or so it seemed to me, is whether rearrangement of code unavoidably reduces performance: He agreed that this is a concern, but said that the company's tools give developers control over the balance between speed and security.
“Our product allows you to vary the overheads,” Main explained. “If you're trying to protect server code, you might choose to have low code expansion of only a few percent: An application might be protected, in critical areas, by an expansion of two times. Compute-intensive loops, you might protect in a different way.”
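One way to picture an “expansion of two times” in a critical area is a duplicated-check transform: compute a sensitive test twice through different but equivalent code paths, so that patching out a single comparison no longer defeats it. This is my own toy, assuming that transform; it is not necessarily what Cloakware's tools actually emit.

```c
/* The hot path (the hashes themselves) stays cheap; only the critical
 * check pays roughly double, via two equivalent encodings of the same
 * hash. All names here are hypothetical. */
#include <stdlib.h>

static unsigned hash_a(const char *s) {           /* plain form */
    unsigned h = 5381;
    while (*s) h = h * 33u + (unsigned char)*s++;
    return h;
}

static unsigned hash_b(const char *s) {           /* equivalent form */
    unsigned h = 5381;
    while (*s) h = (h << 5) + h + (unsigned char)*s++;
    return h;
}

int check_license(const char *key, unsigned expected) {
    if (hash_a(key) != expected) abort();  /* critical area: doubled */
    if (hash_b(key) != expected) abort();  /* work, doubled patching */
    return 1;
}
```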
There's only so much that we can do with perimeter security, as we try to put more self-contained capability and more communication into an ever-growing range of devices. Solving our problems at their centers, rather than protecting their ever-longer edges, seems like a sound idea: Application-level security, using Cloakware's and other approaches, is a strategy worth considering.
Tell me what you find obscure about securing components and applications.