DockerCon Thoughts – Secure, Sufficient Applications


I got to spend a couple of days this week at DockerCon and learn a bit more about software containers. I’d always assumed that containers were a slightly different form of virtualization, but thankfully I’ve learned my lesson there. What I did find out about containers gives me a bit of hope about the future of applications and security.

Minimum Viable App

One of the things that made me excited about Docker is that the process isolation idea behind building a container to do one thing has fascinating ramifications for application developers. In the past, we've spent our time building servers to do things. We build hardware, boot it with an operating system, and then we install the applications or the components thereof. When we started to virtualize hardware into VMs, the natural progression was to take the hardware resource and turn it into a VM. Thanks to tools that would migrate a physical resource to a virtual one in a single step, most of the first-generation VMs were just physical copies of servers, right down to phantom drivers in the Windows Device Manager.

As we started building infrastructure around the idea of virtualization, we stopped migrating physical boxes and started building completely virtual systems from the ground up. That meant using things like deployment templates, linked clones, and other constructs that couldn't be done in hardware alone. As time has rolled on, we've developed methods of quickly deploying virtual resources that we could never match with purely physical devices. We finally figured out how to use virtual platforms efficiently.

Containers are now at the crossroads we saw early on in virtualization. As explained by Mike Coleman (@MikeGColeman), many application developers are starting their container journey by taking an existing app and importing it directly into a container. It’s a bit more involved than the preferred method, but Mike mentioned that even running the entire resource pool in a container does have some advantages. I’m sure the Docker people see container adoption as the first step toward increased market share. Even if it’s a bit clumsy at the start.

The idea then moves toward the breakdown of containers into the necessary pieces, much as it did with virtual machines years ago. Instead of being forced to think about software as a monolithic construct that has to live on a minimum of one operating system, developers can break the system down into application pieces that can execute one program or thread at a time on a container. Applications can be built using the minimum amount of software constructs needed for an individual process. That means that those processes can be spread out and scaled up or down as needed to accomplish goals.

If your database query function is running as a containerized process instead of on a query platform in a VM, then scaling that query to thousands or tens of thousands of instances only requires spinning up new containers instead of new VMs. Likewise, scaling a web-based app to accommodate new users can be accomplished with an explosion of new containers to meet the need. And when the demand dies back down again, the containers can be destroyed and the resources returned to the available pool or turned off to save costs.
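As a rough sketch of what that scaling looks like in practice, here's how it might be done with Docker Compose. The service name "query" and the instance counts are made up for illustration; the `--scale` flag is the mechanism that spins containers up or tears them down to match demand.

```shell
# Hypothetical compose project with a "query" service that wraps a
# single database-query process. Burst up to 50 instances under load:
docker compose up -d --scale query=50

# Demand dies back down; shrink to 5 and let Docker destroy the rest,
# returning those resources to the pool:
docker compose up -d --scale query=5
```

The same idea applies to any stateless service in the app: the unit of scaling is a cheap container rather than a whole VM.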

Segment Isolation

The other exciting thing I saw with containers was the opportunity for security. The new buzzword of the day in security and networking is microsegmentation. VMware is selling it heavily with NSX. Cisco has countered with a similar function in ACI. At its heart, microsegmentation is simply ensuring that processes that shouldn't be talking to each other won't be talking to each other. This prevents exposure from, say, having your app's database server visible on the public Internet.

Microsegmentation is great in overlay and network virtualization systems where we have to take steps to prevent systems from talking to each other. That means putting policies and safeguards in place to prevent communications. It's a great way to work on existing networks where the default mode is to let everything on the same subnet talk to everything else. But what if the default were something different?

With containers, there is a sandbox environment for each container to talk to other containers in the same area. If you create a named container network and attach a container to it, that container gains a network interface on that particular named network. It won’t be able to talk to other containers on different networks without creating an explicit connection between the two networks. That means that the default mode of communications for the containers is restricted out of the box.
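Here's a small sketch of that default-deny behavior using the Docker CLI. The network and container names ("frontend", "backend", "web", "db") are invented for the example:

```shell
# Create two named container networks. Containers attached to
# different networks are isolated from each other by default.
docker network create frontend
docker network create backend

# A web container on "frontend" and a database on "backend"
# cannot reach one another out of the box.
docker run -d --name web --network frontend nginx
docker run -d --name db  --network backend  postgres

# Communication only happens when we explicitly grant "web" a
# second interface on the backend network:
docker network connect backend web
```

The important part is the last line: connectivity exists only because someone deliberately created it, which is the inverse of the flat-subnet default we're used to.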

Imagine how nice it will be to create a network that isn't insecure by default. Rather than having to disconnect all the things that shouldn't be speaking, you can spend your time building connections between the things that should be. That means a little mistake or forgotten connection will prevent communications instead of opening them up, which makes it much less likely that you're going to cause an incident.

There are still some issues with scaling the networking aspect of Docker right now. The key/value store doesn't provide a lot of visibility and definitely won't scale up to tens or hundreds of thousands of connections. My hope is that down the road Docker will implement a more visible solution that can perform drag-and-drop connectivity between containers and leave an audit trail, so networking pros can figure out who connected what and what those connections exposed. Requiring every connection between devices to be explicit also makes it much easier to prove intent or malice. But those features are likely to come down the road as Docker builds a bigger, better management platform.

Tom’s Take

I think Docker is doing things right. By making developers look at the core pieces they need to build apps and justify why things are being done the way they’ve always been done, containers are allowing for flexibility and new choices to be made. At the same time, those choices are inherently more secure because resources are only shared when necessary. It’s the natural outgrowth of sandboxing and Jails in the OS from so many years ago. Docker has a chance to make application developers better without making them carry the baggage of years of thinking along with them to a new technology.

