
How To Secure Containers Is Still Subject To Debate

In the world of virtualized application containers, security is top of mind. At both the DockerCon EU event last month in Barcelona, Spain, and the Tectonic Summit last week in New York City, the big news was all security-related. While there is no shortage of container security news, there is still some debate about how to properly secure containers.

Docker Inc., the lead commercial sponsor behind the open-source Docker project, announced multiple security efforts at DockerCon EU, including project Nautilus for Docker application image scanning. Not to be outdone, CoreOS, one of Docker Inc.’s primary rivals in the container market, announced Distributed Trusted Computing at its Tectonic Summit event.

In some respects, the technologies announced by CoreOS and Docker Inc. for container security are similar, though they take different approaches. While Docker Inc. announced project Nautilus for scanning application images, CoreOS has its Clair project that scans container images for known vulnerabilities.
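Both tools share the same basic idea: inventory what is inside an image and match it against a database of known vulnerabilities. The minimal Python sketch below illustrates that concept only; the package list and vulnerability feed are hypothetical, and neither Nautilus nor Clair works this simply.

```python
# Illustrative sketch of image vulnerability scanning (not the actual
# Nautilus or Clair code): compare the packages found in a container image
# against a known-vulnerability feed. The feed and package list here are
# hypothetical stand-ins for real CVE data sources.
KNOWN_VULNS = {
    ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
    ("bash", "4.3"): "CVE-2014-6271 (Shellshock)",
}

def scan_image(packages):
    """Return the known vulnerabilities matching an image's package set."""
    findings = []
    for name, version in packages:
        vuln = KNOWN_VULNS.get((name, version))
        if vuln:
            findings.append(f"{name} {version}: {vuln}")
    return findings

if __name__ == "__main__":
    image_packages = [("openssl", "1.0.1f"), ("curl", "7.45.0")]
    for finding in scan_image(image_packages):
        print("VULNERABLE:", finding)
```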

Chain of trust

When it comes to hardware-based security, CoreOS is using Trusted Computing concepts, including Trusted Platform Module (TPM) hardware, to create a chain of trust for container applications, so that the hardware they run on can be audited and verified. Docker is also using hardware for security, but in a different way. At DockerCon EU, the company gave away Yubico USB keys that can hold the private keys used to sign application images.
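The signing model itself is straightforward public-key cryptography: a publisher signs an image digest, and consumers verify it with the matching public key. Here is a minimal, self-contained Python sketch of that flow, using the cryptography library with an in-memory Ed25519 key as a stand-in for a hardware-held one; it is illustrative only, not the actual implementation Docker uses.

```python
# Illustrative sketch of image signing (not Docker's implementation):
# sign the digest of an image payload with a private key so consumers can
# verify provenance. In practice the private key could live on hardware
# such as a YubiKey; here it is generated in memory for the demo.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stand-in for a hardware-held key
public_key = private_key.public_key()

image_bytes = b"...contents of an image layer tarball..."  # hypothetical payload
digest = hashlib.sha256(image_bytes).digest()

signature = private_key.sign(digest)   # publisher signs the digest
public_key.verify(signature, digest)   # consumer verifies; raises if tampered
print("image digest verified")
```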

There is also some debate about how containers should be run on a system. I moderated a panel at the Tectonic Summit that included Matthew Garrett, principal security software engineer at CoreOS; Tim Hobbs, advisor for product management at CA Technologies; and Frank Macreery, co-founder and CTO of Aptible.

One of the key questions I asked the panel was whether it was a best practice to run a container inside a hypervisor. The consensus from the panel was that, today, using a hypervisor is a best practice for getting the strongest isolation and security control. It’s a model that CoreOS embraces as well with its Rocket (rkt) container engine, which integrates with Intel’s Clear Containers technology. Clear Containers runs containers inside lightweight virtual machines that have been purpose-built for the task.

Identity is also a hot topic in IT security. CoreOS has an open-source identity technology called Dex that can help organizations with user access control for container applications.
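Dex implements the OpenID Connect standard, in which an application verifies a signed JWT ID token issued after the user authenticates. The Python sketch below shows that verification step in miniature using the PyJWT library; it uses a shared HS256 secret so the example is self-contained (a real OpenID Connect deployment, Dex included, verifies RS256 signatures against the provider’s published keys), and the audience and secret values are hypothetical.

```python
# Illustrative OpenID Connect-style ID token check (not Dex's own code):
# the identity provider issues a signed JWT, and the application verifies
# signature, audience, and expiry before granting access.
import datetime
import jwt  # PyJWT

SECRET = "demo-signing-secret"    # stand-in for the provider's signing key
AUDIENCE = "container-dashboard"  # hypothetical client application ID

# The identity provider issues a token after the user authenticates.
id_token = jwt.encode(
    {
        "sub": "alice",
        "aud": AUDIENCE,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    },
    SECRET,
    algorithm="HS256",
)

# The container application verifies the token before granting access.
claims = jwt.decode(id_token, SECRET, algorithms=["HS256"], audience=AUDIENCE)
print("authenticated user:", claims["sub"])
```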

Meanwhile, CA has a long history of user identity technologies. On my panel, CA’s Hobbs emphasized that integrating with existing forms of user authentication and policy control is important for containers, as with all other forms of application deployment.

A key driver of security spending is regulatory compliance. At DockerCon EU, Udo Seidel, chief architect and digital evangelist at Amadeus, detailed how his organization has managed to use Docker containers to achieve Payment Card Industry Data Security Standard (PCI DSS) compliance.

On my own panel at the Tectonic Summit, Macreery explained how Aptible is able to use containers to meet its U.S. Health Insurance Portability and Accountability Act (HIPAA) requirements. For both PCI DSS and HIPAA, data privacy is paramount, which is something that the isolation properties of containers can help enable.

While interest in containers has grown significantly in the last two years, it’s important to remember that containers as a technology concept have been around for many years. In Linux, there is LXC (Linux Containers); in Solaris Unix, there are Zones; and in FreeBSD, there is the concept of Jails. During my panel, a member of the audience wanted to know what’s new with container security, given that containers as a technology construct are not new.

The answer I gave is the same one I gave a decade ago, when VMware’s momentum was growing and people reminded me that IBM had been doing virtualization for 50 years. The answer was that the applications are the difference, as is the increased production deployment at scale in distributed systems. Additionally, though the attack surface of containers and the applications that run in them is not new, those deploying containers may be new to security best practices the industry has already learned.

While the basic ideas behind securing containers are now in place, it’s likely that there are some ideas and concepts that have yet to emerge. What never ceases to amaze me is how emphasis and effort from the security research community exposes vulnerabilities in nearly all classes of software and infrastructure. No doubt, as container deployments grow, security researchers will turn their attention to the technology and new vulnerabilities will be discovered.

Originally published on eWeek.

Sean Michael Kerner

Sean Michael Kerner is a senior editor at eWeek and a contributor to TechWeek.
