Secure by Design: Developing Apps Without Flaws Takes the Right Tools

By Darryl K. Taft  |  Posted 2013-08-30

Building secure enterprise applications starts at the design phase. But it has taken a long time to create tools that help ferret out code flaws and teach developers how to write better code.

Developers have long struggled with the security conundrum of how to quickly deliver apps that are as inherently secure as they are robust, reliable and efficient.

In today's fast-paced world of mobile, social, cloud and often complex enterprise applications, pressure is on developers to produce applications faster than ever. Yet, despite that pressure to deliver more apps faster, there is just as much pressure—brought on by those same mobile, social and cloud factors—to deliver applications that are more secure and reliable than ever before. What's a developer to do?

"Time-to-market pressure results in continually shrinking software delivery windows, while the business risks associated with software defects have never been greater," said Jennifer Johnson, chief marketing officer for Coverity, the maker of the Coverity Development Testing Platform, an integrated suite of software testing technologies for identifying and remediating quality and security issues during development. Coverity's platform automatically tests source code for software defects that could lead to product crashes, unexpected behavior, security breaches or even catastrophic failure.

According to IBM, application security vulnerabilities can be introduced at several phases of the development cycle: during requirements and design, when the process fails to account for security; during implementation, when flaws are introduced into the code inadvertently or deliberately; and during deployment, when a configuration setting does not match the product's security requirements in its computing environment, as when unencrypted communication is allowed over the Internet.
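
That deployment example is easy to picture in code. The sketch below is purely illustrative and not drawn from IBM's framework; the property name and class are invented. It simply refuses to start if a configured endpoint would allow unencrypted communication:

import java.net.URI;

/**
 * Illustrative sketch only: a deployment-time sanity check that fails fast
 * when a configured service endpoint allows unencrypted communication.
 * The property name "service.endpoint" is hypothetical.
 */
public class EndpointConfigCheck {

    public static void main(String[] args) {
        String endpoint = System.getProperty("service.endpoint",
                "http://internal.example.com/api"); // insecure default, for demonstration
        requireEncrypted(endpoint);
        System.out.println("Endpoint configuration OK: " + endpoint);
    }

    static void requireEncrypted(String endpoint) {
        URI uri = URI.create(endpoint);
        // Reject configurations that do not meet the requirement of
        // encrypted transport over the Internet.
        if (!"https".equalsIgnoreCase(uri.getScheme())) {
            throw new IllegalStateException(
                    "Insecure endpoint configured (expected https): " + endpoint);
        }
    }
}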

To limit such occurrences, IBM has instituted a structured development process for delivering secure applications called the Secure Engineering Framework, which recommends the use of automated security analysis tools and proven certified security components.

These tools include source code security analyzers, bytecode security analyzers, binary security analyzers, dynamic analysis tools and runtime analysis tools. Source code analyzers examine application source code to locate vulnerabilities and poor coding practices; they can also trace user input through the application (code flow analysis, or taint propagation) to uncover injection-based attacks. Bytecode analyzers check application bytecode (relevant only for certain languages) for the same classes of vulnerabilities. Binary analysis works much like source code analysis but examines the compiled application binary instead of the source.
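
As a concrete illustration, not taken from any particular vendor's tool, the sketch below shows the kind of tainted data flow a source code analyzer flags, alongside the parameterized fix it would typically recommend. The class, method and table names are invented:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Illustrative sketch: user-controlled input reaching a SQL statement
 * unchanged, which taint-propagation analysis is designed to detect.
 */
public class UserLookup {

    // Vulnerable: "name" is tainted user input concatenated into SQL,
    // so a value like "x' OR '1'='1" changes the query's logic.
    ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Safer: the parameterized query keeps the tainted value as data,
    // not as part of the query structure.
    ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}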

Dynamic analysis tools treat the application as a black box, without knowledge of its internal operation or source code. They automatically map the application, along with its entry points and exit points, and attempt to inject input that will either break the application or subvert its logic.
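
A greatly simplified sketch of that black-box approach might probe a single entry point with a handful of hostile payloads and watch for signs of failure. The endpoint URL here is hypothetical, and real dynamic analysis tools discover entry points automatically rather than taking them as given:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.List;

/**
 * Illustrative sketch of black-box dynamic testing: send unexpected input
 * to an entry point and flag responses that suggest the application broke.
 */
public class EntryPointProbe {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<String> payloads = List.of(
                "<script>alert(1)</script>",   // reflected script probe
                "' OR '1'='1",                 // SQL injection probe
                "A".repeat(100_000));          // oversized input probe

        for (String payload : payloads) {
            String url = "http://localhost:8080/search?q="
                    + URLEncoder.encode(payload, StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // A 5xx status or an echoed payload hints at a crash or injection point.
            if (response.statusCode() >= 500 || response.body().contains(payload)) {
                System.out.println("Possible issue with payload: " + payload);
            }
        }
    }
}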

Runtime analysis is not a specific security analysis technique or tool. It is the software development practice targeted at understanding software behavior during runtime—including system monitoring, memory profiling, performance profiling, thread debugging and code coverage analysis.
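
One small slice of that practice, system and memory monitoring, can be sketched with the JVM's standard management beans; full runtime analysis tools layer profiling, thread debugging and code coverage on top of data like this:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

/**
 * Illustrative sketch of runtime monitoring using the JVM's built-in
 * management beans to snapshot heap usage and thread counts.
 */
public class RuntimeSnapshot {

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB of %d MB max%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));
        System.out.printf("Live threads: %d (peak %d)%n",
                threads.getThreadCount(), threads.getPeakThreadCount());
    }
}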

All these types of tools play a part in what Caleb Barlow, director of mobile security at IBM, likes to call security by design.

"The whole idea is to recognize that if you think about security in the design of your applications in the very beginning, and if you use security tools during the build process and during the development process, at the end of the day you'll actually save money—in addition to having a better protected application," Barlow said.

"The reason is fixing a security vulnerability early in the development lifecycle is very inexpensive. Let's say you find a bug and maybe it takes a half hour of a U.S. developer's time. What's that cost you, maybe $50 or $100. But if you find a bug late in the development cycle, you then have to go figure out where it was in the code, then you have to remediate it, then you have to retest, you have to rebuild, you have to repackage and maybe do multiple testing scenarios.
