It’s Not The Size of Your Conference Community


Where do you get the most enjoyment from your conference attendance? Do you like going to sessions and learning about new things? Do you enjoy more of the social aspect of meeting friends and networking with your peers? Maybe it’s something else entirely?

It’s The Big Show

When you look at shows like Cisco Live, VMworld, or Interop ITX, there’s a lot going on. There are diverse education tracks attended by thousands of people. You could go to Interop and bounce from a big data session into a security session, followed by a cloud panel. You could attend Cisco Live and never talk about networking. You could go to VMworld and only talk about networking. There are lots of opportunities to talk about a variety of things.

But these conferences are huge. Cisco and VMware both take up the entire Mandalay Bay Convention Center in Las Vegas. When in San Francisco, both of these events dwarf the Moscone Center and have to spread out into the surrounding hotels. That means it’s easy to get lost or be overlooked. I’ve been to Cisco Live before and never bumped into people I know from my area who said they were going, even when we were at the same party. There are tens of thousands of people roaming the halls.

That means that these conferences only work well if you can carve out your own community. Cisco Live has certainly done that over the years. There’s a community of a few hundred folks who are active on social media and have really changed the direction of the way Cisco engages with the community. VMworld has its various user groups, as well as VMUnderground, constantly pushing the envelope and creating more organic community engagement.

You Think You Know Me

The flip side is the smaller boutique conferences that have sprung up in recent years. These take a single aspect of a technology and build around it. You get a very laser-focused event with a smaller subset of attendees based on similar interests. It’s a great way to instantly get massive community involvement around an idea. Maybe it’s Monitorama. Or perhaps it’s OSCon. Or even GopherCon. You can see how these smaller communities are united around a singular subject and have great buy-in.

However, the critical mass needed to make a boutique conference happen depends much more on each individual attendee. Cisco Live and VMworld are going to happen every year. There are no fewer than 10,000 – 15,000 people who would come to either no matter what. Even if 50% of last year’s attendees decided to stay home this year, the conference would happen.

On the flip side, if 50% of the DockerCon or OpenStack Summit attendees stayed home next year, you’d see mass panic in the community. People would start questioning why you’re putting on a show for 2,500 – 3,000 users. It’s one thing to do it when you’re small and just getting started. But putting on a show for those numbers now would force a hard conversation about whether and how the event should go on.

Cisco Live and VMworld are fun because of their communities. But boutique conferences exist because of their communities. It’s important to realize that drastic changes in a smaller conference community send huge ripples through the entire event. Two hundred Twitter users don’t have much impact on the message at Cisco Live. But two hundred angry users at DockerCon can make massive changes happen. The smaller the conference, the more each member of the community is amplified.


Tom’s Take

Anyone who knows me knows that I love the community. I love seeing them grow and change and develop their own voice. It’s why I work for Tech Field Day. It’s why I go to Cisco Live every year. It’s why I’m happy to speak at VMUnderground events. But I also realize how important the community can be to smaller events. And how quickly things can fall apart when the community is fractured or divided. It’s critical for boutique conferences to harness the power of their communities to get off the ground. But you also have to recognize how important they are to you in the long run. You need to cultivate them and keep them focused on making everything better for everyone.

DockerCon Thoughts – Secure, Sufficient Applications


I got to spend a couple of days this week at DockerCon and learn a bit more about software containers. I’d always assumed that containers were a slightly different form of virtualization, but thankfully I’ve learned my lesson there. What I did find out about containers gives me a bit of hope about the future of applications and security.

Minimum Viable App

One of the things that made me excited about Docker is that the process isolation idea behind building a container to do one thing has fascinating ramifications for application developers. In the past, we’ve spent our time building servers to do things. We build hardware, boot it with an operating system, and then we install the applications or the components thereof. When we started to virtualize hardware into VMs, the natural progression was to take the hardware resource and turn it into a VM. Thanks to tools that would migrate a physical resource to a virtual one in a single step, most of the first generation VMs were just physical copies of servers. Right down to phantom drivers in the Windows Device Manager.

As we started building infrastructure around the idea of virtualization, we stopped migrating physical boxes and started building completely virtual systems from the ground up. That meant using things like deployment templates, linked clones, and other constructs that couldn’t be done in hardware alone. As time has rolled on, we’ve developed methods of quickly deploying virtual resources in ways we could never manage on purely physical devices. We finally figured out how to use virtual platforms efficiently.

Containers are now at the crossroads we saw early on in virtualization. As explained by Mike Coleman (@MikeGColeman), many application developers are starting their container journey by taking an existing app and importing it directly into a container. It’s a bit more involved than the preferred method, but Mike mentioned that even running the entire resource pool in a container does have some advantages. I’m sure the Docker people see container adoption as the first step toward increased market share. Even if it’s a bit clumsy at the start.

The idea then moves toward breaking applications down into the necessary pieces, much as it did with virtual machines years ago. Instead of being forced to think about software as a monolithic construct that has to live on a minimum of one operating system, developers can break the system down into application pieces that can execute one program or thread at a time in a container. Applications can be built using the minimum amount of software constructs needed for an individual process. That means that those processes can be spread out and scaled up or down as needed to accomplish goals.

If your database query function is running as a containerized process instead of running on a query platform in a VM, then scaling that query to thousands or tens of thousands of instances only requires spinning up new containers instead of new VMs. Likewise, scaling a web-based app to accommodate new users can be accomplished with an explosion of new containers to meet the need. And when the demand dies back down again, the containers can be destroyed and resources returned to the available pool or turned off to save costs.
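To make that concrete, here’s a rough sketch of what that elastic scaling could look like with the Docker SDK for Python. The “query-worker” image and the label scheme are placeholders I made up for illustration, not anything Docker ships; the point is simply that adding or removing capacity is a matter of creating and destroying containers.

```python
# Sketch only: scaling a single-purpose query process up and down
# with the Docker SDK for Python. "query-worker" is a hypothetical image.
import docker

client = docker.from_env()

def scale_up(count):
    """Start `count` containers, each running one query process."""
    return [
        client.containers.run(
            "query-worker",              # hypothetical single-purpose image
            detach=True,
            labels={"role": "query"},    # tag them so we can find them later
        )
        for _ in range(count)
    ]

def scale_down():
    """Stop and remove every query container, returning resources to the pool."""
    for container in client.containers.list(filters={"label": "role=query"}):
        container.stop()
        container.remove()

workers = scale_up(50)   # demand spike: 50 lightweight containers, not 50 VMs
scale_down()             # demand fades: tear them all back down
```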

Segment Isolation

The other exciting thing I saw with containers was the opportunity for security. The new buzzword of the day in security and networking is microsegmentation. VMware is selling it heavily with NSX. Cisco has countered with a similar function in ACI. At the heart of things, microsegmentation is simply ensuring that processes that shouldn’t be talking to each other won’t be talking to each other. That prevents exposure from, say, having your application’s database server visible on the public Internet.

Microsegmentation is great in overlay and network virtualization systems where we have to take steps to prevent systems from talking to each other. That means policies and safeguards in place to prevent communications. It’s a great way to work on existing networks where the default mode is to let everything on the same subnet talk to everything else. But what if the default were something different?

With containers, each named network acts as a sandbox for the containers attached to it. If you create a named container network and attach a container to it, that container gains a network interface on that particular named network. It won’t be able to talk to containers on different networks unless you create an explicit connection between them. That means that the default mode of communication for containers is restricted out of the box.
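Here’s a small sketch of that default-deny behavior, again using the Docker SDK for Python. The network and container names are placeholders, but the pattern is the point: two named networks, two containers, and no path between them until you deliberately connect one container to the other network.

```python
# Sketch only: named container networks are isolated segments by default.
import docker

client = docker.from_env()

# Each named network is its own isolated segment.
frontend = client.networks.create("frontend-net", driver="bridge")
backend = client.networks.create("backend-net", driver="bridge")

web = client.containers.run("nginx", detach=True, name="web",
                            network="frontend-net")
db = client.containers.run("postgres", detach=True, name="db",
                           network="backend-net",
                           environment={"POSTGRES_PASSWORD": "example"})

# Right now "web" has no path to "db" at all -- they sit on different
# named networks, so the default is silence, not open communication.

# Only a deliberate, explicit action opens a path: attaching the web
# container to the backend network as well.
backend.connect(web)
```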

Imagine how nice it will be to create a network that isn’t insecure by default. Rather than having to disconnect all the things that shouldn’t be speaking, you can spend your time building connections between the things that should be. That means a little mistake or a forgotten connection will prevent communication instead of opening it up. And that means you’re much less likely to cause an incident.

There are still some issues with scaling the networking aspect of Docker right now. The key/value store doesn’t provide a lot of visibility and definitely won’t scale up to tens or hundreds of thousands of connections. My hope is that down the road Docker will implement a more visible solution that can perform drag-and-drop connectivity between containers and leave an audit trail, so networking pros can figure out who connected what and how that exposed everything. And when connections between devices have to be made explicitly, it’s much easier to prove intent or malice. But those features are likely to come down the road as Docker builds a bigger, better management platform.


Tom’s Take

I think Docker is doing things right. By making developers look at the core pieces they need to build apps and justify why things are being done the way they’ve always been done, containers are allowing for flexibility and new choices to be made. At the same time, those choices are inherently more secure because resources are only shared when necessary. It’s the natural outgrowth of sandboxing and Jails in the OS from so many years ago. Docker has a chance to make application developers better without making them carry the baggage of years of thinking along with them to a new technology.