The Massachusetts Institute of Technology (MIT) has devised a new debugging system that automatically finds one of the most common classes of programming bugs more efficiently than existing techniques.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed DIODE, or the Directed Integer Overflow Detection system, which checks for integer overflows.
Integer overflows are one of the most common bugs in computer programs — not only causing programs to crash but also potentially offering points of attack for malicious hackers. Computer scientists have devised a battery of techniques to identify them, but all have drawbacks, MIT said.
This month, at the Association for Computing Machinery’s International Conference on Architectural Support for Programming Languages and Operating Systems, MIT researchers are presenting their new algorithm for identifying integer-overflow bugs. The researchers tested the algorithm on five common open-source programs in which previous analyses had found three bugs. The new algorithm found all three known bugs along with 11 new ones.
The variables used by computer programs come in a few standard types, such as floating-point numbers, which can contain decimals; characters, like the letters in a sentence; or integers, which are whole numbers. Every time the program creates a new variable, it assigns it a fixed amount of space in memory.
If a program tries to store too large a number at a memory address reserved for an integer, the computer will simply lop off the bits that don’t fit. “It’s like a car odometer,” said Stelios Sidiroglou-Douskos, a research scientist at CSAIL and first author on the MIT paper. “You go over a certain number of miles, you go back to zero.”
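In code, the odometer behavior can be sketched in a few lines of Python. The function name here is made up, and the masking stands in for the hardware truncation, since Python integers never overflow on their own:

    MASK_32 = 0xFFFFFFFF  # a 32-bit register keeps only the low 32 bits

    def add_u32(a, b):
        """Add two numbers the way a 32-bit machine would."""
        return (a + b) & MASK_32

    print(add_u32(4294967295, 1))   # 2**32 - 1 plus 1 "rolls over" to 0, like the odometer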
In itself, an integer overflow will not crash a program. In fact, many programmers use integer overflows to perform certain types of computations more efficiently. But if a program tries to do something with an integer that has overflowed, havoc can ensue, MIT said. For instance, suppose an integer represents the number of pixels in an image a program is processing, and the program uses that number to allocate memory for the image. If the count has overflowed, the allocated buffer will be smaller than the image actually requires by several orders of magnitude, and writing the image data into it will crash the program.
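The arithmetic behind that failure is easy to reproduce. The following sketch uses made-up image dimensions and again masks results to 32 bits to imitate what the hardware does:

    MASK_32 = 0xFFFFFFFF  # keep only the low 32 bits, as a 32-bit register would

    def alloc_size(width, height, bytes_per_pixel=4):
        """Compute the image buffer size the way 32-bit arithmetic would."""
        return (width * height * bytes_per_pixel) & MASK_32

    # A 65,536-by-65,536 image really needs 2**34 bytes, but the 32-bit size
    # wraps around to 0, so the program would allocate almost nothing and then
    # try to write roughly 17 GB of pixel data into it.
    print(alloc_size(65536, 65536))   # prints 0
    print(65536 * 65536 * 4)          # prints 17179869184, the size actually needed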
Any program can be represented as a flow chart or, more precisely, a graph, with boxes that represent operations connected by line segments that represent the flow of data between operations. Any given program input will trace a single route through the graph. Prior techniques for finding integer-overflow bugs would start at the top of the graph and work through it, operation by operation.
For even a moderately complex program, however, that graph is enormous: each conditional branch in the code can double the number of distinct routes through it, so exhaustive exploration of the entire thing would be prohibitively time-consuming. “What this means is that you can find a lot of errors in the early input-processing code,” said Martin Rinard, an MIT professor of computer science and engineering and a co-author on the new paper, in a statement. “But you haven’t gotten past that part of the code before the whole thing poops out. And then there are all these errors deep in the program, and how do you find them?”
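As a rough illustrative calculation (the branch counts below are arbitrary), the number of routes grows exponentially with the number of independent branches, which is why exhaustive exploration quickly becomes hopeless:

    def path_count(num_branches):
        """Each independent if/else roughly doubles the number of routes through the graph."""
        return 2 ** num_branches

    for n in (10, 30, 60):
        print(f"{n} branches -> about {path_count(n):,} paths")
    # 10 branches -> about 1,024 paths
    # 30 branches -> about 1,073,741,824 paths
    # 60 branches -> about 1,152,921,504,606,846,976 paths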
Rinard, Sidiroglou-Douskos, and several other members of Rinard’s group, including Eric Lahtinen and Paolo Piselli, and graduate students Fan Long, Deokhwan Kim, and Nathan Rittenhouse, take a different approach. The DIODE system begins by feeding the program a single sample input. As that input is processed and traces its path through the graph, the system records each of the operations performed on it by adding new terms to what’s known as a “symbolic expression.”
“These symbolic expressions are complicated like crazy,” Rinard said in a statement. “They’re bubbling up through the very lowest levels of the system into the program. This 32-bit integer has been built up of all these complicated bit-level operations that the lower-level parts of your system do to take this out of your input file and construct those integers for you. So if you look at them, they’re pages long.”
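To give a flavor of what such an expression looks like in miniature, here is a toy sketch using the off-the-shelf z3-solver Python package rather than DIODE’s own machinery; the byte names and the little-endian layout are illustrative assumptions:

    from z3 import BitVec, Concat, simplify

    # One symbolic variable per raw input byte, standing in for bytes read from the file.
    b0, b1, b2, b3 = (BitVec(f"byte_{i}", 8) for i in range(4))

    # The low-level parsing code assembles those bytes into a 32-bit integer
    # (little-endian here), so the "integer" is really an expression over the bytes.
    width = Concat(b3, b2, b1, b0)

    # Every later arithmetic step extends the expression instead of producing a number.
    alloc_size = width * 4
    print(simplify(alloc_size))   # an expression over byte_0..byte_3, not a concrete value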
When the program reaches a point at which an integer is involved in a potentially dangerous operation, such as a memory allocation, DIODE records the current state of the symbolic expression. The initial test input will not trigger an overflow, but DIODE can analyze the symbolic expression to calculate an input that will, MIT researchers said.
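In spirit, that analysis amounts to handing the recorded expression to a constraint solver and asking for an input that makes the dangerous value wrap around. The sketch below does this with the z3-solver Python package; the variable names, the four-bytes-per-pixel size computation, and the allocation scenario are assumptions for illustration, not DIODE’s actual internals:

    from z3 import BitVec, BitVecVal, Solver, UGT, ZeroExt, sat

    width  = BitVec("width", 32)    # symbolic 32-bit integers recovered from the input
    height = BitVec("height", 32)

    # The size handed to the allocator, computed in 32-bit arithmetic (so it can wrap).
    alloc_size = width * height * BitVecVal(4, 32)

    # Ask for an input whose true, 64-bit size exceeds what 32 bits can hold,
    # i.e. an input that makes alloc_size wrap around.
    true_size = ZeroExt(32, width) * ZeroExt(32, height) * BitVecVal(4, 64)
    s = Solver()
    s.add(UGT(true_size, BitVecVal(0xFFFFFFFF, 64)))

    if s.check() == sat:
        m = s.model()
        print("overflow-triggering input:", m[width], m[height])
        print("32-bit size at the allocation:", m.eval(alloc_size))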
The process is still not over, however. Well-written programs frequently include input checks specifically designed to prevent problems like integer overflows, and the new input, unlike the initial input, might fail those checks. So DIODE seeds the program with its new input, and if it fails such a check, it imposes a new constraint on the symbolic expression and computes a new overflow-triggering input. This process continues until the system either finds an input that can pass the checks but still trigger an overflow, or it concludes that triggering an overflow is impossible.
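That refinement loop can be sketched in a few more lines with the same z3-solver package; the specific bounds checks below (caps of 65,536 on each dimension) are hypothetical stand-ins for whatever validation the target program actually performs:

    from z3 import BitVec, BitVecVal, Solver, UGT, ULE, ZeroExt, sat

    width, height = BitVec("width", 32), BitVec("height", 32)
    true_size = ZeroExt(32, width) * ZeroExt(32, height) * BitVecVal(4, 64)

    s = Solver()
    s.add(UGT(true_size, BitVecVal(0xFFFFFFFF, 64)))   # goal: the 32-bit size wraps

    # Hypothetical input checks; DIODE would discover these one at a time as
    # candidate inputs fail them, and we mimic that by adding them one by one.
    checks = [ULE(width, 0x10000), ULE(height, 0x10000)]

    for check in checks:
        if s.check() != sat:
            break
        print("candidate before applying the next check:", s.model())
        s.add(check)            # fold the check into the constraints, then re-solve

    if s.check() == sat:
        print("overflow input that passes every check:", s.model())
    else:
        print("no input can both pass the checks and trigger the overflow")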
If DIODE does find a trigger value, it reports it, providing developers with a valuable debugging tool. Indeed, since DIODE does not require access to a program’s source code but works on its “binary” code, or the executable version of the program, a program’s users could run it and then send developers the trigger inputs as concrete evidence that they may have missed security vulnerabilities.