It is a fact of life that software faults, defects and other weaknesses affect the ability of software to function securely. These vulnerabilities can be exploited to violate software’s security properties and force the software into an insecure, exploitable state. Dealing with this is a particularly daunting challenge given the ubiquitous connectivity and explosive growth and complexity of software-based systems.
As software and security professionals, we will never get ahead of the game by addressing security solely as an operational issue. Attackers are creative, ingenious and increasingly motivated by financial gain. They have been learning how to exploit software for several decades; the same is not true for most software engineers, and we need to change this.
The objective of software security is to build better, defect-free software. Typically, software has many defects, and quite a few of these tend to be the source of security vulnerabilities that show up in our operational systems. Software developed with security in mind is better able to resist attack, and, in the face of a successful attack, to tolerate the attack and recover from it as quickly as possible. This is a good thing. (For more on this, listen to my May 27, 2008 CERT Podcast, "Building More Secure Software," at www.cert.org/podcast.)
Integrating Software Security Practices with Your Development Lifecycle
Project managers and software engineers should treat all software faults and weaknesses as potentially exploitable. Reducing exploitable weaknesses begins with specifying software security requirements, along with revisiting requirements that may have been overlooked. Software that implements security requirements (such as security constraints on process behaviors, careful handling of inputs, and resistance to and tolerance of intentional failures) is more likely to be engineered to remain dependable and secure in the face of an attack.
In addition, exercising misuse/abuse cases that anticipate abnormal and unexpected behavior can help you better understand how to create software that is secure and reliable, as the sketch below illustrates.
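One way to make an abuse case concrete is to encode it as an executable check that feeds a component deliberately abnormal input and confirms the input is rejected. The following is a minimal sketch of that idea; the parseQuantity routine and its 1-to-1,000 bound are hypothetical, invented purely for illustration:

    /**
     * A minimal sketch of an abuse case turned into an executable check.
     * The abuse case: "an attacker submits a huge, negative or malformed
     * order quantity to force an overflow or a negative charge."
     * parseQuantity and its bounds are hypothetical, for illustration only.
     */
    public final class AbuseCaseDemo {

        // Parses an order quantity, enforcing the documented bounds.
        static int parseQuantity(String raw) {
            int value = Integer.parseInt(raw.trim()); // rejects non-numeric abuse inputs
            if (value < 1 || value > 1000) {
                throw new IllegalArgumentException("quantity out of bounds: " + value);
            }
            return value;
        }

        public static void main(String[] args) {
            // Abnormal inputs drawn from the abuse case; each must be rejected.
            String[] abuseInputs = { "-5", "2147483648", "1e9", "" };
            for (String input : abuseInputs) {
                try {
                    parseQuantity(input);
                    System.out.println("FAIL: accepted abusive input: " + input);
                } catch (RuntimeException expected) {
                    System.out.println("ok: rejected \"" + input + "\"");
                }
            }
        }
    }

Exercising abnormal inputs like these during development, rather than waiting for an attacker to supply them in production, is precisely what misuse/abuse cases are meant to encourage.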
Developing software from the beginning with security in mind is more effective by orders of magnitude than trying to validate, through testing and verification, that the software is secure. For example, attempting to demonstrate through testing that an implemented system will never accept an unsafe input (that is, proving a negative) is impossible. However, you can prove, using approaches such as formal methods and function abstraction, that the software you are designing will never accept an unsafe input.
In addition, it is easier to design and implement the system so that input validation routines check every input the software receives against a set of predefined constraints. Demonstrating that the input validation function is consistently invoked, and performed correctly, every time input enters the system then becomes part of the system's functional testing.
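To make this concrete, here is a minimal sketch of such a centralized validation routine. It assumes a hypothetical application whose only free-form input is a username field; the constraint set (length bounds and a character allowlist) is purely illustrative:

    import java.util.regex.Pattern;

    // A single choke point for input validation, so functional testing can
    // confirm the check is invoked on every input that enters the system.
    // The username constraint below is a hypothetical example.
    public final class InputValidator {

        // Predefined constraint: 3 to 20 characters from a strict allowlist.
        private static final Pattern USERNAME = Pattern.compile("[A-Za-z0-9_]{3,20}");

        private InputValidator() {}

        // Returns the input unchanged if it satisfies the constraint;
        // otherwise rejects it before it reaches the rest of the system.
        public static String requireValidUsername(String input) {
            if (input == null || !USERNAME.matcher(input).matches()) {
                throw new IllegalArgumentException("input violates the username constraint");
            }
            return input;
        }

        public static void main(String[] args) {
            System.out.println(requireValidUsername("alice_01"));  // accepted
            requireValidUsername("alice'; DROP TABLE users; --");  // rejected: throws
        }
    }

Because every caller goes through a single routine like this, one functional test suite can exercise the constraint directly, rather than hunting for ad hoc checks scattered across the code.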
Using Analysis and Modeling to Strengthen Application Security
Analysis and modeling can help protect your software against the more subtle, complex attack patterns: those involving externally forced sequences of interactions among components, or among processes that were never intended to interact during normal software execution. Analysis and modeling can help you determine how to strengthen the security of the software's interfaces with external entities and increase its fault tolerance. Methods that support analysis and modeling during each lifecycle phase, such as attack patterns, misuse and abuse cases, and architectural risk analysis, are particularly helpful.
If your development organization’s time and resource constraints prevent secure development practices from being applied to the entire software system, you can use the results of a business-driven risk assessment to determine which software components should be given highest priority. A security-enhanced lifecycle process should (at least to some extent) compensate for security inadequacies in the software’s requirements by adding risk-driven practices and checks for the adequacy of those practices during all software lifecycle phases.
Security controls in the software’s lifecycle should not be limited to the requirements, design, code and test phases. It is important to continue performing code reviews, security tests, strict configuration control and quality assurance during deployment and operations to ensure that updates and patches do not add security weaknesses or malicious logic to production software.
Additional considerations for project managers include determining the effect of software security requirements on project scope, project plans, resource estimates, and product and process measures.
Three Main Goals for Using Secure Software Practices
Adopting a security-enhanced software development process, which includes secure development practices, will reduce the number of exploitable faults and weaknesses in your operational software. Correcting potential vulnerabilities as early as possible in the software development lifecycle, mainly through the adoption of security-enhanced processes and practices, is far more cost-effective than attempting to diagnose and correct them after the system goes into production. It just makes good sense. There are three main goals in using secure software practices:
1. Exploitable faults and other weaknesses are eliminated by well-intentioned engineers to the greatest extent possible.
2. The likelihood is greatly reduced (or eliminated) that malicious engineers can intentionally implant exploitable faults and weaknesses, malicious logic or backdoors into the software.
3. The software is attack-resistant, attack-tolerant and attack-resilient to the greatest extent possible and practical in support of fulfilling the organization’s mission.
Software security practice selection and tailoring are specific to each organization and each project, based on the objectives, constraints and criticality of the software under development.
In addition to her work in security governance, Ms. Allen is co-author of “Software Security Engineering: A Guide for Project Managers” (Addison-Wesley, May 2008). She is also the author of the “CERT Podcast Series: Security for Business Leaders” (2006-2008) and “The CERT Guide to System and Network Security Practices” (Addison-Wesley, June 2001).
Ms. Allen received a BS in computer science from the University of Michigan, an MS in electrical engineering from the University of Southern California (USC), and an executive business certificate from the University of California at Los Angeles (UCLA). Her professional affiliations include ACM and IEEE Computer Society.
She can be reached at jha@sei.cmu.edu.