1. Setting Up Best Practices for Container Security
2. For Starters, Here’s Why Containers Are So Welcomed
The attractions of Docker, Google’s Kubernetes, CoreOS and other container technologies are numerous: No heavy hypervisor. Exceptional portability. Resource isolation. An incredibly lightweight footprint. Open standards. A perfect fit for microservice architectures. Lots of tidy app packages all wrapped up and humming away on top of a single Linux instance. VMware is nervous about all this.
3. Image Security Issues May Not Apply to Every Developer
4. Be Aware of What Images You’re Using
All containers inherit from a parent image, typically a base OS and its dependencies (a shell, default users, libraries and any dependent packages). As the Docker security page plainly explains: “One primary risk with running Docker containers is that the default set of capabilities and mounts given to a container may provide incomplete isolation, either independently, or when used in combination with kernel vulnerabilities.” So it is up to you to set container capabilities and verify any images you use against your own security requirements, and that applies to every container. Containers can multiply quickly, and therein lies the central problem.
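Docker’s own tooling supports both of those steps. As a rough sketch (the image name here is a placeholder), you can drop the default capability set and add back only what the service needs, and turn on Docker Content Trust so the daemon pulls only signed images:

    # Pull only signed, verified images (Docker Content Trust)
    export DOCKER_CONTENT_TRUST=1
    # Drop every default capability, then grant back only what the app requires
    docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE my-registry/my-app:1.0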
5. Deploy Agents
Agents aid in setting security parameters for your containers because they automatically give you visibility into what comes along with the parent image. Since individual container security is the responsibility of the end user, you need a way to check an image’s dependencies yourself. Although images are constantly being scanned, shared and updated on Docker Hub, you cannot rely on listservs and issue reports to manage vulnerabilities. You should understand the underlying details of what you’re introducing into your shop, and then perform your own scans and verifications. Agents do this with little overhead at both the host server and Docker container level.
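If you don’t yet have an agent in place, Docker’s built-in commands give a manual (if coarser) view of what a parent image brings with it; the image name below is illustrative:

    # List the layers an image inherits, including those from its parent
    docker history my-registry/my-app:1.0
    # Dump full image metadata: environment, exposed ports, entrypoint, labels
    docker image inspect my-registry/my-app:1.0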
6. Consider the Way You Run
One of the best ways to stay safe is to run Docker containers in read-only mode so their filesystems can’t be modified at runtime. If you run in read-only mode, you don’t need an agent in every container, and you can reuse a verified parent image. If you do run in read/write mode, the best practice is to put an agent in every container. You should also set a rule against pulling images from public repos, and never run containers in privileged mode.
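A minimal sketch of the read-only approach, assuming the app only needs scratch space under /tmp:

    # Read-only root filesystem, with a writable tmpfs only where the app needs it
    docker run -d --read-only --tmpfs /tmp my-registry/my-app:1.0
    # Avoid this: --privileged hands the container near-total access to the host
    # docker run --privileged my-registry/my-app:1.0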
7. Manage Container Interaction With the Outside World
By default, a container can accept connections on its exposed ports through any host network interface, which is a large red flag from a security standpoint. A better approach is to expose only one specific interface externally and run services such as intrusion detection, intrusion prevention, firewall and load balancing on it to screen any incoming public traffic. Container ports also should be bound to a specific host interface on a trusted host port.
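For example, publishing a port without an address binds it to every host interface; specifying a trusted interface address (10.0.0.5 here is just an illustration) narrows the exposure:

    # Avoid: listens on 0.0.0.0, i.e. every host interface
    # docker run -d -p 8080:80 my-registry/my-app:1.0
    # Better: bind the published port to one trusted host interface
    docker run -d -p 10.0.0.5:8080:80 my-registry/my-app:1.0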
8. Strong Linux Administration Skills Required
Docker offers a number of security-enhancement capabilities, but they are not turned on by default. Thus, it is critical to have a Linux pro establish the basics of your Docker workflow and harden the Linux host to prevent misconfiguration. (The most common Docker mistakes stem from incorrect configuration.)
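As one illustration of that hardening work, an administrator might enable user-namespace remapping and disable inter-container communication in the daemon configuration; the values below are a sketch to adapt, not a drop-in policy (shown on a systemd host):

    # Sketch of a hardened /etc/docker/daemon.json
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "userns-remap": "default",
      "icc": false,
      "no-new-privileges": true,
      "live-restore": true
    }
    EOF
    sudo systemctl restart docker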
9. Consider Using CIS Guidelines
Overall, your best strategy for enterprise Docker use is to meld the Center for Internet Security (CIS) guidelines into your existing security policy. They will guide you in establishing a secure configuration posture for all Docker containers and help you create a safer playing field for your dev teams to have at it.
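Docker also publishes an open-source script, docker-bench-security, that automates many of the CIS Docker Benchmark checks; running it per the project’s README looks roughly like this:

    # Run automated CIS Docker Benchmark checks against the host and its containers
    git clone https://github.com/docker/docker-bench-security.git
    cd docker-bench-security
    sudo sh docker-bench-security.sh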
10. Get the CIS Guidebook on Containers
To establish best practices for your organization, the nonprofit CIS provides a detailed 119-page benchmark resource for safe and secure Docker configuration (PDF format), and there are a few specific areas of focus to keep in mind.