SEATTLE—Running Docker containers securely as part of a DevOps pipeline is a multistep process that demands diligence. That’s the message from Cem Gurkok, lead information security engineer at Salesforce, in a session at the DockerCon 16 conference here.
While containers may represent a somewhat different paradigm for developers, security professionals tend to see more familiar territory.
“Container threats are not all that different from traditional environments,” Gurkok said.
That said, while the same types of threats that exist in traditional IT environments also put containers at risk, many tools and processes can help reduce or eliminate those risks. Gurkok noted that security professionals in IT like to talk about the “known knowns,” that is, the items already understood to be risks.
To that end, there are multiple techniques that can be used to secure containers against known risks. One of the most basic risks is sending data in the clear, without the benefit of encryption. Gurkok advocates the use of Transport Layer Security (TLS) throughout the container development pipeline.
“By using a secure channel, you’re making sure data in transport isn’t easily compromised by a classic man-in-the-middle attack,” he said.
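In practice, enforcing TLS can start with the Docker daemon itself, which can be configured to refuse any connection that isn’t mutually authenticated. A minimal sketch, with placeholder certificate paths and hostname:

    # Start the daemon so it accepts only connections from clients
    # presenting a certificate signed by the trusted CA.
    dockerd --tlsverify \
      --tlscacert=/etc/docker/ca.pem \
      --tlscert=/etc/docker/server-cert.pem \
      --tlskey=/etc/docker/server-key.pem \
      -H=0.0.0.0:2376

    # Connect from a client with its own certificate and key.
    docker --tlsverify \
      --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
      -H=tcp://docker-host.example.com:2376 version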
When it comes to application container images, Gurkok pointed to the Docker Trusted Registry (DTR) as a best practice. DTR enables an organization to use cryptographically signed and validated images. However, it’s important to enforce a separation between images signed for use in development and those signed for use in production, he said.
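Image signing of this kind is driven by Docker’s content trust feature, which can be toggled per shell session. An illustrative sketch, with hypothetical registry and image names:

    # Enable Docker Content Trust: pushes are signed, and pulls
    # fail unless a valid signature exists for the tag.
    export DOCKER_CONTENT_TRUST=1

    # Push a signed image (registry and image names are hypothetical).
    docker push registry.example.com/myorg/myapp:1.0

    # Pulling a tag with no valid signature will now be rejected.
    docker pull registry.example.com/myorg/untrusted:latest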
Docker runs on an operating system host, and that host itself must be secure. As an obvious first step, Gurkok strongly advocated that Docker users keep their host operating systems fully patched to protect against known vulnerabilities.
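What a fully patched host looks like varies by distribution; on a Debian-based host, for instance, applying updates and enabling automatic security patching might look like this:

    # Apply pending updates (Debian/Ubuntu example; other
    # distributions use different package managers).
    sudo apt-get update && sudo apt-get -y upgrade

    # Enable unattended installation of security updates.
    sudo apt-get install -y unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades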
There are also known best practices for installing and configuring Docker on a host. One of the best ways to verify that those practices are being followed is the Docker Bench tool, which provides a baseline security assessment of a Docker deployment. Docker first announced Docker Bench in May 2015, basing it on a 119-page report of best practices published by the Center for Internet Security.
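Docker Bench ships as a container image itself. The invocation below follows the project’s documented pattern, though the exact mounts and flags needed depend on the version in use:

    # Run Docker Bench for Security against the local host.
    # The mounts give the checks read access to Docker and host
    # configuration; see the project's README for your version.
    docker run -it --net host --pid host --cap-add audit_control \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /etc:/etc --label docker_bench_security \
      docker/docker-bench-security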
Once organizations are running containers in production, rogue applications or users still pose risks. Gurkok suggested employing robust monitoring and logging technologies. For monitoring, it’s important to capture traffic on both the physical and virtual networks, he said.
With all the logged data, another best practice is to use a Security Information and Event Management (SIEM) platform to keep track of the logs and analyze them if and when trouble is detected.
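One common way to feed container activity into a SIEM is to point Docker’s logging driver at a central syslog collector that the SIEM already ingests. A sketch, with a hypothetical collector address:

    # Route a container's stdout/stderr to a remote syslog
    # endpoint (the collector address here is hypothetical).
    docker run -d \
      --log-driver=syslog \
      --log-opt syslog-address=tcp://siem-collector.example.com:514 \
      nginx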
Going a step further, Gurkok said he makes use of memory forensics. Among the tools he uses is the open-source Volatility Framework, a collection of memory forensics utilities.
“Nothing can hide in memory,” Gurkok said. “It’s also faster than disk forensics, and you only have to focus on a smaller representation of what’s running and what’s doing things on a system.”
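Volatility runs against a previously captured memory image. A sketch of that workflow on a Linux Docker host, where the image path and kernel profile are placeholders that must match the host’s actual kernel build:

    # List processes, network connections and shell history from
    # a captured memory image (path and profile are placeholders).
    vol.py -f /cases/host-mem.lime --profile=LinuxUbuntu1404x64 linux_pslist
    vol.py -f /cases/host-mem.lime --profile=LinuxUbuntu1404x64 linux_netstat
    vol.py -f /cases/host-mem.lime --profile=LinuxUbuntu1404x64 linux_bash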
Key areas of container security, according to Gurkok, are platform security, content security, access controls and monitoring capabilities. More importantly, though, Gurkok said that while a lot of people have security plans on paper, the key is to actually execute on those plans and test them regularly.
Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.