DevOps and the Infrastructure Dumpster Fire


We had a rousing discussion about DevOps at Cloud Field Day this week. The delegates talked about how DevOps was totally a thing and the way to go. Being the infrastructure guy, I had to take umbrage at their conclusions and go on a bit of a crusade to defend infrastructure from the predations of developers.

Stable, Boy

DevOps folks want to talk about continuous integration and continuous deployment (CI/CD) all the time. They want the freedom to make changes as needed to increase bandwidth, provision ports, and rearrange things to fit development timelines and such. It’s great that they have their thoughts and feelings about how responsive the network should be to their whims, but the truth of infrastructure today is that it’s on the verge of collapse every day of the week.

Networking is often a “best effort” type of configuration. We monkey around with something until it works, then roll it into production and hope it holds. As we keep building patches on top of patches or implementing new features that require something to be disabled or bypassed, we create a house of cards that will collapse in the first stiff wind. It’s far too easy to cause a network to fall over because of a change in a routing table or a series of bad decisions that aren’t individually enough to cause chaos until they’re done together.

Jason Nash (@TheJasonNash) said that DevOps is great because it means communication. Developers are no longer throwing things over the wall for Operations to deal with. The problem is that the boulder they historically threw over the wall, the monolithic patch that caused downtime, has been replaced by a storm of arrows blotting out the sun. Each individual change isn’t enough to cause disaster, but three hundred of them together can cause massive issues.


Networks are rarely stable. Yes, routing tables are mostly stable so long as no one starts withdrawing routes. Layer 2 networks are stable only up to a certain size. The more complexity you pile on networks, the more fragile they become. The network really is only one fat-fingered VLAN definition or VTP server-mode foul-up away from coming down around our ears. That’s not a system that can support massive automation and orchestration. Why?

The Utility of Stupid Networking

The network is a source of pain not because of finicky hardware, but because of applications and their developers. When software is written, we have to make it work. If that means reconfiguring the network to suit the application, so be it. Networking pros have been dealing with crap like this for decades. Want proof?

  1. Applications can’t talk to multiple gateways at a time on layer 2 networks. So let’s create a protocol that makes two gateways operate as one, with a fake MAC address answering requests to ensure uptime. That’s how we got HSRP.
  2. Applications can’t survive having the IP address of the server changed. Instead of using so many other good ideas, we created vMotion to let us keep a server on the same layer 2 network while changing the MAC <-> IP binding. vMotion and the layer 2 DCI issues it has caused have kept networking in the dark for the last eight years.
  3. Applications shouldn’t need to be rewritten to work in the cloud. People want to port them as-is to save money. So cloud networking configurations are a nightmare, because we have to support protocols that shouldn’t even be used anymore for the sake of legacy application support.

This list could go on, but all these examples point to one truth: application developers have relied on the network to solve their problems for years. So the network is unstable because it’s being extended beyond its use case. Newer applications, like Netflix and Facebook, thrive in the cloud because they were written from the ground up to avoid layer 2 DCI and to operate at layer 2 only as much as absolutely necessary. They solve tricky problems like multi-host coordination and failover in the app instead of relying on protocols from the golden age of networking to fix them quietly behind the scenes.
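To make that concrete, here’s a minimal sketch in Python of what “failover in the app” looks like. The endpoint names are hypothetical and this isn’t lifted from anyone’s production code; it’s just the pattern: the client knows about more than one place to get service and moves on when one dies, so the network never has to fake a single gateway on its behalf.

```python
import socket

# Hypothetical endpoints for the same service in two regions.
ENDPOINTS = [("app-east.example.com", 443), ("app-west.example.com", 443)]

def connect_with_failover(endpoints, timeout=2.0):
    """Return a live connection to the first reachable endpoint.

    The application owns its own failover instead of asking the network
    to fake a single gateway (HSRP) or stretch layer 2 between sites.
    """
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err  # This endpoint is down; try the next one.
    raise ConnectionError(f"all endpoints failed: {last_error}")
```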

The network needs to evolve past being a science project for protocols that aim to fix stupid application programming decisions. Instead, the network needs to evolve with an eye toward stability and reduced functionality to get there. Take away the ability to even try those stupid tricks and what you’re left with is a utility that is a resource for your developers. They can use it for transport without worrying about it crashing every day from some bug in a protocol no one has used in five years but that was still enabled just in case someone accidentally turned on an old server.

Nowhere is this more apparent than in cloud networking stacks like AWS or Microsoft Azure. There, the networking is as simple as possible. The burden of advanced functionality for a given group of users isn’t pushed into a realm where network admins have to risk outages to fix a busted application. Instead, the app developers can use the networking resources only in a basic way, which encourages them to think about failover and resiliency differently. It’s a brave new world!


Tom’s Take

I’ll admit that DevOps has potential. It gets the teams talking and helps analyze build processes and create more agile applications. But in order for DevOps to work the way it should, it’s going to need a stable platform to launch from. That means networking has to get its act together and remove the unnecessary things that can cause bad interactions. This situation was caused in part by application developers taking the easy road and pushing their problems onto the networking team of wizards. When those wizards push back and offer reduced capabilities in exchange for more uptime and fewer issues, you should start to see app developers coming around to work with the infrastructure teams to get things done. And that is the best way to avoid an embarrassing situation that involves fire.

Cloud Apps And Pathways


Applications are king. Forget all the things you do to ensure proper routing in your data center. Forget the tweaks for OSPF sub-second failover or BGP optimal path selection. None of it matters to your users. If their login to Siebel or Salesforce or Netflix is slow today, you’ve failed. They are very vocal when it comes to telling you how much the network sucks today. How do we fix this?

Pathways Aren’t Perfect

The first problem is the cloud focus of applications. Once our packets leave our border routers, it’s a giant game of chance as to how things are going to work next. The routing protocol games that govern the Internet are tried and true and straight out of RFC 1771 (yes, RFC 4271 supersedes it). BGP is a great tool with general purpose abilities. It’s becoming the protocol of choice for web scale operators like LinkedIn and Facebook. But it’s problematic for Internet routing. It scales well but doesn’t have the ability to make rapid decisions.

The stability of BGP is also the reason why it doesn’t react well to changes. In the old days, links could go up and down quickly. BGP was designed to avoid issues with link flaps. But today’s links are less likely to flap and more likely to need traffic moved around because of congestion or other factors. The pace at which applications need traffic flows moved means that they tend to fight BGP instead of being relieved that it isn’t slinging their traffic across different links.
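That flap-avoidance machinery is easiest to see in route flap damping (RFC 2439), an add-on to BGP rather than part of the base protocol. Here’s a toy Python model of the idea; the penalty, thresholds, and half-life are illustrative numbers, not vendor defaults:

```python
import math

# Toy model of route flap damping (RFC 2439). Each flap adds a penalty,
# the penalty decays exponentially over time, and the route is ignored
# while its penalty stays high.
FLAP_PENALTY = 1000.0
SUPPRESS_LIMIT = 2000.0   # stop using the route above this
REUSE_LIMIT = 750.0       # start using it again below this
HALF_LIFE = 900.0         # seconds for the penalty to halve

class DampedRoute:
    def __init__(self):
        self.penalty = 0.0
        self.suppressed = False
        self.last_seen = 0.0

    def _decay(self, now):
        self.penalty *= math.pow(0.5, (now - self.last_seen) / HALF_LIFE)
        self.last_seen = now

    def flap(self, now):
        self._decay(now)
        self.penalty += FLAP_PENALTY
        if self.penalty >= SUPPRESS_LIMIT:
            self.suppressed = True

    def usable(self, now):
        self._decay(now)
        if self.suppressed and self.penalty < REUSE_LIMIT:
            self.suppressed = False
        return not self.suppressed

route = DampedRoute()
for t in (0, 60, 120):            # three flaps in two minutes
    route.flap(t)
print(route.usable(130))          # False: the route is suppressed
print(route.usable(130 + 3600))   # True: the penalty has decayed away
```

Three flaps in two minutes and the route is benched until the penalty decays, which takes the better part of an hour here. That’s wonderful for stability and terrible for an application that wanted its traffic moved thirty seconds ago.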

BGP can be a good source of path suggestions. That’s how Facebook uses it for global routing. But decisions need to be made on top of BGP much faster. That’s why cloud providers don’t rely on it beyond basic connectivity. Things like load balancers and other devices make up for this as best they can, but they are also points of failure in the network and have scalability limitations. So what can we do? How can we build something that can figure out how to make applications run better without the need to replace the entire routing infrastructure of the Internet?

GPS For Routing

One of the things that has some potential for fixing inefficiencies in BGP and other basic routing protocols was highlighted at Networking Field Day 12 during the presentation from Teridion. They have a method for creating more efficiency between endpoints thanks to their agents. Founder Elad Rave explains more here:

I like the idea of getting “traffic conditions” from endpoints to avoid congestion. For users of cloud applications, those conditions are largely unknown. Even multipath routing confuses tried-and-true troubleshooting tools like traceroute. What needs to happen is a way to collect the data on congestion and other inputs and make faster decisions that aren’t beholden to the underlying routing structure.
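As a rough sketch of the idea, and emphatically not a description of how Teridion actually does it, here’s what treating latency probes as “traffic conditions” might look like in Python. The relay URLs are hypothetical:

```python
import statistics
import time
import urllib.request

# Hypothetical relay endpoints that could carry traffic to the same app.
RELAYS = [
    "https://relay-us-east.example.net/health",
    "https://relay-eu-west.example.net/health",
]

def probe(url, samples=3):
    """Median round-trip time to a relay, or infinity if it's unreachable."""
    times = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=2).read()
        except OSError:
            return float("inf")   # congested or down: never pick it
        times.append(time.monotonic() - start)
    return statistics.median(times)

def best_relay(relays=RELAYS):
    # Run this on a short timer; conditions change far faster than BGP
    # is willing to react to them.
    return min(relays, key=probe)
```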

Overlay networking has tried to do this for a while now: build something that can take more than basic input and make decisions on that data. But overlays have issues with scaling, especially past the boundary of the enterprise network. Teridion has the potential to help influence routing decisions in networks outside your control. Sadly, even the fastest enterprise network in the world is only as fast as an overloaded link between two level 3 interconnects on the way to a cloud application.

Teridion has the right idea here. Alternate pathways need to be identified and utilized. But that data needs to be evaluated and updated regularly. Much like the issues with Waze dumping traffic into residential neighborhoods when major arteries get congested, traffic monitors could cause overloads on alternate links if shifts happen unexpectedly.

The other reason I like Teridion is that they are doing things without hardware boxes or the need to install software anywhere but the end host. Anyone working with cloud-based applications knows that the provider is very unlikely to provide anything outside of their standard offerings for you. And even if they do, there is going to be a huge price tag. More often than not, that feature request will in time become a selling point for a new service, one of marginal benefit until everyone starts using it. Then application performance goes down again. Since Teridion is optimizing communications between hosts, it’s a win for everyone.


Tom’s Take

I think Teridion is on to something here. Crowdsourcing is the best way to gather information about traffic. Giving packets a better pathway with shorter travel times means better application performance. Better performance means happier users. Happier users means more time spent solving other problems, ones whose symptoms aren’t “It’s slow” or “Your network sucks”. And that makes everyone happier. Even grumpy old network engineers.

Disclaimer

Teridion was a presenter during Networking Field Day 12 in San Francisco, CA. As a participant in Networking Field Day 12, my travel and lodging expenses were covered by Tech Field Day for the duration of the event. Teridion did not ask for, nor were they promised, any kind of consideration in the writing of this post. My conclusions here represent my thoughts and opinions about them and are mine and mine alone.


AI, Machine Learning, and The Hitchhiker’s Guide


I had a great conversation with Ed Horley (@EHorley) and Patrick Hubbard (@FerventGeek) last night around new technologies. We were waxing intellectual about all things related to advances in analytics and intelligence. There have been more than a few questions here at VMworld 2016 about the roles that machine learning and artificial intelligence will play in the future of IT. But during the conversation with Ed and Patrick, I finally hit on the perfect analogy for machine learning and artificial intelligence (AI). It’s pretty easy to follow along, so don’t panic.

The Answer

Machine learning is an amazing technology. It can extrapolate patterns in large data sets and provide insight from seemingly random things. It can also teach machines to think about problems and find solutions. Rather than go back to the tired Target big data example, I much prefer this example of a computer learning to play Super Mario World:

You can see how the algorithms learn to play the game and find newer, better paths through the level. One of the things that has always struck me about the computer’s decision making is how early it learned that spin jumps provide more benefit than regular jumps for a given input. You can see the point in the video when the system figures this out, after which every jump becomes a spin jump for maximum effect.
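The mechanism behind that discovery is simple enough to sketch. Here’s a toy epsilon-greedy learner in Python with made-up reward numbers; it knows nothing about Mario, but it reliably figures out that the higher-reward action is the one worth spamming:

```python
import random

# Made-up average rewards: the spin jump is worth slightly more.
TRUE_REWARD = {"jump": 1.0, "spin_jump": 1.3}

def pull(action):
    return TRUE_REWARD[action] + random.gauss(0, 0.5)  # noisy reward

estimates = {"jump": 0.0, "spin_jump": 0.0}
counts = {"jump": 0, "spin_jump": 0}

for step in range(5000):
    if random.random() < 0.1:                  # explore occasionally
        action = random.choice(list(estimates))
    else:                                      # otherwise exploit the best
        action = max(estimates, key=estimates.get)
    reward = pull(action)
    counts[action] += 1
    # Incremental mean: update the estimated value of this action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # almost always "spin_jump"
```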

Machine learning appears to be insightful and amazing. But the weakness of machine learning is exemplified by Deep Thought from The Hitchhiker’s Guide to the Galaxy. Deep Thought was created to find the answer to the ultimate question of life, the universe, and everything. It was programmed with an enormous dataset – literally every piece of knowledge in the known universe. After seven and a half million years, it finally produced The Answer (42, if you’re curious). Which leads to the plot of the book and other hijinks.

Machine learning is capable of great leaps of logic, but it operates on a fundamental truth: all inputs are a bounded data set. Whether you are performing a simple test on a small data set or combing all the information in the universe for answers, you are still operating on finite information. Machine learning can do things very fast and find pieces of data that are not immediately obvious. But it can’t operate outside the bounds of the data set without additional input. Even the largest analytics clusters won’t produce additional output without more data being ingested. Machine learning is capable of doing amazing things. But it won’t ever create new information outside of what it is operating on.
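You can demonstrate that bounded-ness in a dozen lines. This toy bigram text model, standing in for any learner, can recombine its training data in surprising ways, but it is structurally incapable of producing a word it has never seen:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Learn which word follows which: the entire 'universe' of the model."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10):
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break  # Nothing in the data set says what comes next.
        word = random.choice(model[word])
        output.append(word)
    return " ".join(output)

model = train_bigrams("the answer to life the universe and everything")
print(generate(model, "the"))  # Only ever recombines words it has seen.
```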

The Question

Artificial Intelligence (AI), on the other hand, is more like the question in The Hitchhiker’s Guide. Deep Thought admonishes the users of the system that rather than looking for the answer to Life, The Universe, and Everything, they should have been looking for The Question instead. That involves creating a completely different computer to find the Question that matches the Answer that Deep Thought has already provided.

AI can provide insight above and beyond a given input data set. It can provide context where none exists. It can make leaps of logic similar to those humans are capable of. AI doesn’t simply stop when it faces an incomplete data set. Even though we are seeing AI in its infancy today, the most advanced systems are capable of “filling in the blanks” to cover missing information. As the algorithms learn more and more about how to extrapolate, they’ll become better at making decisions from incomplete data.

The reason computers are so good at making quick decisions is that they don’t operate outside the bounds of the possible. If the entire universe for a decision is a data set, they won’t try to look beyond it. That ability to look beyond and try to create new data where none exists is the hallmark of intelligence. Using tools to create is a uniquely biological function. Computers can create subsets of data with tools, but they can’t do a complete transformation.

AI is pushing those boundaries. Given enough time and the proper input, AI can make leaps outside its bounds to come up with new ideas. Today it’s all about context. Tomorrow may find AI providing true creativity. AI will eventually pass a Turing test because it can look outside the script and provide the pseudorandom kind of conversation that people are capable of.


Tom’s Take

Computers are smart. They think faster than we do. They can do math better than we can. They can produce results in a fraction of the time at a scale that boggles the mind. But machine learning is still working from a known set. No matter how much work we pour into that aspect of things, we are still going to hit a limit of the ability of the system to break out of its bounds.

True AI will behave like we do. It will look for answers when there are none. It will imagine when confronted with the impossible. It will learn from mistakes and create solutions to them. That’s the power of intelligence versus learning. Even the most powerful computer in the universe couldn’t break out of its programming. It needed something more to question the answers it created. Just like us, it needed to sit back and let its mind wander.

Cisco vs. Arista: Shades of Gray


Yesterday was D-Day for Arista in their fight with Cisco over the SysDB patent. I’ve covered this a bit for Network Computing in the past, but I wanted to cover some new things here and put a bit more opinion into my thoughts.

Cisco Designates The Competition

As the great Stephen Foskett (@SFoskett) says, you always have to punch above your weight. When you are a large company, any attempt to pick on the “little guy” looks bad. When you’re at the top of the market, it’s even tougher. If you attempt to fight back against anyone, you’re going to legitimize them in the eyes of everyone else wanting to take a shot at you.

Cisco has effectively designated Arista as their number one competitor by way of this lawsuit. Arista represents a larger threat than HPE, Brocade, or Juniper. Yes, I agree that it is easy to argue that the infringement constituted a material problem to their business. But at the same time, Cisco very publicly just said that Arista is causing a problem for Cisco. Enough of a problem that Cisco is going to take them to court, not make Arista license the patent. That’s telling.

Also, Cisco’s route of going through the ITC looks curious. Why not try to get damages in court instead of having the ITC ban Arista from importing devices? I thought about this for a while and realized that even a pending court case wouldn’t work to Cisco’s advantage in the short term. Cisco doesn’t just want to prove that Arista is copying patents. They want to hurt Arista. That’s why they don’t want to file an injunction to get the switches banned; that could take years and involve lots of appeals. Instead, the ITC can simply say “You cannot bring these devices into the country”, which effectively bans them.

Cisco has gotten what it wants short term: Arista is going to have to make changes to SysDB to get around the patent. They are going to have to ramp up domestic production of devices to get around the import ban. Their train of development is disrupted. And Cisco’s general counsel gets to write lots of posts about how they won.

Yet, even if Arista did blatantly copy the SysDB stuff and run with it, now Cisco looks like the 800-pound gorilla stomping on the little guy through the courts. Not by making better products. Not by innovating and making something that eclipses the need for software like this. No, Cisco won by playing the playground game of “You stole my idea and I’m going to tell!!!”

Arista’s Three-Edged Sword

Arista isn’t exactly coming out of this on the moral high ground either. Arista has gotten a black eye from a lot of the quotes being presented as evidence in this case. Ken Duda said that Arista “slavishly copied” Cisco’s CLI. There have been other comments about “secret sauce” and the way that SysDB is used. A few have mentioned to me privately that the copying done by Arista was pretty blatant.

Understanding is a three-edged sword: Your side, their side, and the truth.

Arista’s side is that they didn’t copy anything important. Cisco’s side is that EOS has enough things that have been copied that it should be shut down and burned to the ground. The truth lies somewhere in the middle of it all.

Arista didn’t copy everything from IOS. They hired people who worked on IOS and likely saw things they’d like to implement. Those people took ideas and ran with them to come up with a better solution. Those ideas may or may not have come from things that were worked on at Cisco. But if you hire a bunch of employees from a competitor, how do you ensure that their ideas aren’t coming from something they were supposed to have “forgotten”?

Arista most likely did what any other company in that situation would do: they gambled. Maybe SysDB was more copied than created. But the gamble paid off so long as Arista made money without really becoming a blip on Cisco’s radar. Listen to this video, which starts at 4:40 and goes to about 6:40:

Doug Gourlay said something that has stuck with me for the last four years: “Everyone that ever set out to compete against Cisco and said, ‘We’re going to do it and be everything to everyone’ has failed. Utterly.”

Arista knew exactly which market they wanted to attack: 10Gig and 40Gig data center switches. They made the best switch they could with the best software they could and attacked that market with all the force they could muster. But the gamble would eventually have to either pay off or come due. Arista had to know at some point that a strategy shift would bring them into the crosshairs of Cisco. And Cisco doesn’t forgive if you take what’s theirs. Even if, and I’m quoting from both a Cisco 10-K from 1996 and a 2014 Annual Report:

[It is] not economically practical or even possible to determine in advance whether a product or any of its components infringes or will infringe on the patent rights of others.

So Arista built the best switch they could with the knowledge that some of their software may not have been 100% clean. Maybe they had plans to clean it up later. Or iterate it out of existence. Who knows? Now, Arista has to face up to that choice and make some new ones to keep selling their products. Whether or not they intended to fight the 800-pound gorilla of networking at the start, they certainly stumbled into a fight here.


Tom’s Take

I’m not a lawyer. I don’t even pretend to be one. I do know that the fate of a technology company now rests in the hands of non-technical people who are very good at wringing nuance out of words. Tech people would look at this and shake their heads. Did Arista copy something? Probably. Was it something Cisco wanted copied? Probably not. Should Cisco have unloaded the legal equivalent of a thermonuclear warhead on them? Doubtful.

Cisco is punishing Arista to ensure no one ever copies their ideas again. As I said before, the outcome of this case will doom the Command Line Interface. No one is going to want to tangle with Cisco again. Which also means that no one is going to want to develop things the Cisco way again. Which means Cisco is going to be less relevant in the minds of networking engineers as REST APIs and other programming architectures become more important than remembering to type conf t every time.

Arista will survive. They will make changes that mean their switches will live on for customers. Cisco will survive. They get to blare their trumpets and tell the whole world they vanquished an unworthy foe. But the battle isn’t over yet. And instead of it being fought over patents, it’s going to be fought as the industry moves away from CLI and toward a model that doesn’t favor those who don’t innovate.

Repeat After Me


Thanks to Tech Field Day, I fly a lot. As Southwest is my airline of choice and I have status, I tend to find myself sitting in the slightly more comfortable exit row seating. One of the things that any air passenger sitting in the exit row knows by heart is the exit row briefing. You must listen to the flight attendant brief you on the exit door operation and the evacuation plan. You are also required to answer with a verbal acknowledgment.

I know that verbal acknowledgment is a federal law. I’ve also seen some people blatantly disregard the need to verbally accept responsibility for their seating choice, leading to some hilarious stories. But it also made me think about why making people talk to you is the best way to make them understand what you’re saying.

Sotto Voce

Today’s society is full of distractions, from auto-play videos on Facebook to Pokemon Go parks at midnight, all designed to capture a human’s attention for a few fleeting seconds. Even playing a mini-trailer before a movie trailer is designed to capture someone’s attention for a moment. That’s fine in a world where distraction is assumed and people try to multitask several different streams of information at once.

People are competing for audible attention as well. Pleasant voices are everywhere trying to sell us things. High volume voices are trying to cut through the din to sell us even more things. People walk around with headphones in most of the day. Those that don’t pollute what’s left of the soundscape with cute videos that sound like they were recorded in an echo chamber.

This has also led to a larger amount of non-verbal behavior being misinterpreted. I’ve experienced this myself on many occasions. People distracted by a song on their phone or thinking about lyrics in their mind may nod or shake their head in rhythm. If you ask them a question just before the “good part” and they don’t hear you clearly, they may appear to agree or disagree even though they don’t know what they just heard.

Even worse is when you ask someone to do something for you and they agree, only to turn around and ask, “What was it you wanted again?” or “Sorry, I didn’t catch that.” It’s become acceptable in society to agree to things without understanding their meaning. This leads to breakdowns in communication and pieces of the job left undone, because you assumed the person who agreed understood what they were supposed to do.

Fortississimo!

I’ve found that the most effective way to get someone to understand what you’ve told them is to ask them to repeat it back in their own words. It may sound a bit silly to hear back what you just told them, but think about the steps they must go through:

  • They have to stop for a moment and think about what you said.
  • They then have to internalize the concepts so they understand them.
  • They then must repeat back to you those concepts in their own phrasing.

Those three steps mean that the ideas behind what you are stating or asking must be considered for a period of time. It means the ideas will register and be remembered because they were processed while being repeated back to the speaker.

Think about this in a troubleshooting example. A junior admin is supposed to go down the hall and tell you when a link light comes on for port 38. If the admin just nods and doesn’t pay attention, ask them to repeat those instructions back. The admin will need to remember that port 38 is the right port and that they need to wait until the link light is on before saying something. It’s only two pieces of information, but it does require thought and timing. By making the admin repeat the instructions, you make sure they have them down right.


Tom’s Take

Think about all the times recently when someone has repeated something back to you. A food order or an amount of money given to you to pay for something. Perhaps it was a long list of items to accomplish for an event or a task. Repetition is important to internalize things. It builds neural pathways that force the information into longer-term memory. That’s why a couple of seconds of repetition are time well invested.

Networking Needs Information, Not Data


Networking Field Day 12 starts today. There are a lot of great presenters lined up. As I talk to more and more networking companies, it’s becoming obvious that simply moving packets is not the way to go now. Instead, the real sizzle is in telling you all about those packets. Not packet inspection but analytics.

Tell Me More, Tell Me More

Ask any networking professional and they’ll tell you that the systems they manage have a wealth of information. SNMP can give you monitoring data for a set of points defined in database files. Other protocols like NetFlow or sFlow can give you more granular data about a particular group of packets or data flow in your network. Even more advanced projects like Intel’s Snap are building on the idea of using telemetry to collect disparate data sources and build collection methodologies to do something with them.
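The shape of that telemetry idea is easy to sketch even without a real SNMP or NetFlow collector behind it. This Python snippet, plain standard library and not Snap’s actual API, normalizes a few hypothetical sources into one tagged stream of samples, which is the step that has to happen before any analysis can:

```python
import time
from typing import Callable, Dict, List, NamedTuple

class Sample(NamedTuple):
    timestamp: float
    source: str    # e.g. "snmp", "netflow", "sflow"
    metric: str    # e.g. "ifInOctets.1"
    value: float

# A collector is any callable returning {metric_name: value}. Real ones
# would speak SNMP or parse NetFlow records; this one is a stand-in.
Collector = Callable[[], Dict[str, float]]

def poll(collectors: Dict[str, Collector]) -> List[Sample]:
    """One polling pass: normalize every source into tagged samples."""
    now = time.time()
    samples: List[Sample] = []
    for source, collect in collectors.items():
        for metric, value in collect().items():
            samples.append(Sample(now, source, metric, value))
    return samples

# Usage sketch with a fake source:
fake_snmp = lambda: {"ifInOctets.1": 123456.0, "ifOutOctets.1": 98765.0}
print(poll({"snmp": fake_snmp}))
```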

The concern that quickly becomes apparent is the overwhelming amount of data being received from all these sources. It reminds me a bit of this scene:

How can you drink from this firehose? Maybe you should instead be asking whether you should drink from it at all.

Order From Chaos

Data is useless. We need to perform analysis on it to get information. That’s where a new wave of companies is coming into the networking market. They are building on the frameworks and systems that are aggregating data and presenting it in a way that makes it useful information. Instead of random data points about NetFlow, these solutions tell you that you’ve got a huge problem with outbound traffic of a specific type that is sent at a specific time with a specific payload. The difference is that instead of sorting through data to make sense of it, you’ve got a tool delivering the analysis instead of the raw data.

Sometimes it’s as simple as color-coding lines of Wireshark captures. Resets are bad, so they show up red. Properly torn down connections are good, so they are green. You can instantly figure out how well things are going by looking for the colors. That’s analysis from raw data. The real trick in modern network monitoring is to find a way to analyze and provide context for massive amounts of data that may not have an immediate correlation.
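That kind of coloring is trivially cheap to reproduce, which is the point: the analysis is simple once someone decides what the data means. A hedged sketch in Python, with made-up record fields rather than a real capture format:

```python
# Tiny version of Wireshark-style coloring: turn raw flag data into an
# instant judgment. The packet fields here are illustrative only.
def color(packet):
    flags = packet.get("tcp_flags", set())
    if "RST" in flags:
        return "red"      # reset: something went wrong
    if {"FIN", "ACK"} <= flags:
        return "green"    # orderly teardown
    return "gray"         # nothing notable

packets = [
    {"src": "10.0.0.5", "tcp_flags": {"RST"}},
    {"src": "10.0.0.9", "tcp_flags": {"FIN", "ACK"}},
]
for p in packets:
    print(p["src"], color(p))
```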

Networking professionals are smart people. They can intuit a lot of potential issues from a given data set. They can make the logical leap to a specific issue given time. What reduces that ability is the sheer number of things that can go wrong with a particular system and the speed at which those problems must be fixed, especially at scale. A hiccup on one end of the network can be catastrophic on the other if allowed to persist.

Analytics can give us the context we need. It can provide confidence levels for common problems. It can verify that symptoms are indeed happening above a given baseline or threshold. It can help us narrow the symptoms and potential issues before we even look at the data. Analytics can exclude the impossible while highlighting the more probable causes and outcomes. Analytics can give us peace of mind.
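Even the baseline check fits in a few lines. Here’s a minimal sketch of a mean-plus-three-sigma threshold in Python; real analytics platforms use far more sophisticated models, but “flag what sits well above the baseline” is the core move:

```python
import statistics

def is_anomalous(history, current, sigmas=3.0):
    """Flag a reading more than `sigmas` standard deviations above the
    baseline built from recent history. The threshold is a policy choice."""
    baseline = statistics.mean(history)
    spread = statistics.stdev(history)
    return current > baseline + sigmas * spread

# Outbound traffic in Mbps, one reading per minute for the last while:
history = [40, 42, 39, 41, 43, 40, 38, 44, 41, 42]
print(is_anomalous(history, 95))  # True: worth a closer look
print(is_anomalous(history, 45))  # False: within the normal baseline
```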


Tom’s Take

Analytics isn’t doing our job for us. Instead, it’s giving us the ability to concentrate. Anyone who spends their time sifting through data trying to find patterns is losing the signal in the noise. Patterns are things that software can find easily. We need to leverage the work being put into network analytics systems to help us track down issues before they blow up into full problems. We need to apply the judgment that makes network professionals best suited for this work to the best information we can gather about a situation. Our concentration on what matters is where our job will be in five years. Let’s take the knowledge we have and apply it.

Ten Years of Cisco Live – Community Matters Most of All


Hey! I made the sign pic this year!

I’ve had a week to get over my Cisco Live hangover this year. I’ve been going to Cisco Live for ten years and been involved in the social community for five of them. And I couldn’t be prouder of what I’ve seen. As the picture above shows, the community is growing by leaps and bounds.

People Are What Matter


I was asked many, many times about Tom’s Corner. What was it? Why was it important? Did you really start it? The real answer is that I’m a bit curious. I want to meet people. I want to talk to them and learn their stories. I want to understand what drives people to learn about networking or wireless or fax machines. Talking to a person is one of the best parts of my job, whether it be my Bruce Wayne day job or my Batman night job.

Social media helps us all stay in touch when we aren’t face-to-face, but meeting people in real life is just as important. You learn who likes to hug. You find out who tells good stories. Little things matter, like finding out how tall someone is in real life. You don’t get that unless you find a way to meet them in person.


Hugging Denise Fishburne

Technology changes every day. We change from hardware to software and back again. Routers give way to switches. Fabrics rise. Analytics tell all. But all this technology still has people behind it. Those people make the difference. People learn and grow and change. They figure out how to make SDN work today after learning ISDN and Frame Relay yesterday. They have the power to expand beyond their station and be truly amazing.

Conferences Are Still King

Cisco Live is huge. Almost 30,000 attendees this year. The Mandalay Bay Convention Center was packed to the gills. The World of Solutions took up two entire halls this year. The number of folks coming to the event keeps going up every year. The networking world has turned this show into the biggest thing going on. Just like VMworld, it’s become synonymous with the industry.

People have a desire to learn. They want to know things. They want high quality introductions to content and deep dives into things they want to know inside and out. So long as those sessions are offered at conferences like Cisco Live and Interop people will continue to flock to them. For the shows that assemble content from the community this is an easy proposition. People are going to want to talk where others are willing to listen. For single sourced talks like Cisco Live, it’s very important to identify great speakers like Denise Fishburne (@DeniseFishburne) and Peter Jones (@PeterGJones) and find ways to get them involved. It’s also crucial to listen to feedback from attendees about what did work and what they want to see more of in the coming years.

Keeping The Community Growing


One thing that I’m most proud of is seeing the community grow and grow. I love seeing new faces come in and join the group. This year had people from many different social circles taking part in the Cisco Live community. Reddit’s /r/networking group was there. Kilted Monday happened. Engineering Deathmatches happened. Everywhere you looked, communities were doing great things.

As great as it was to see so many people coming together, it’s just as important to understand that we have to keep the momentum going. Networking doesn’t keep rolling along without new ideas and new people expressing them. Four years ago I could never have guessed the impact that Matt Oswalt (@Mierdin) and Jason Edelman (@JEdelman8) would have on the networking community. They didn’t start out on top of the world. They fought their way up with new ideas and perspectives. The community adopted what they had to say and ran with it.

We need to keep that going. Not just at Cisco Live either. We need to identify the people doing great things and shine a spotlight on them. Thankfully, my day job affords me an opportunity to do just that. But the whole community needs to be doing it as well. If you can find just one person to tell the world about, it’s a win for all of us. Convince a friend to write a blog post. Make a co-worker join Twitter. In the end, every new voice is a chance for us all to learn something.


Tom’s Take

As Denis Leary said in Demolition Man,

I’m no leader. I do what I have to do. Sometimes people come with me.

That’s what Cisco Live is to me. It’s not about a corner or a table or a suite at an event. It’s about people coming together to do things. People talking about work and having a good time. The last five years of Cisco Live have been some of the happiest of my life. More than any other event, I look forward to seeing the community and catching up with old friends. I am thankful to have a job that allows me to go to the event. I’m grateful for a community full of wonderful people that are some of the best and brightest at what they do. For me, Cisco Live is about each of you. The learning and access to Cisco is a huge benefit. But I would go for the people time and time and time again. Thanks for making the fifth year of this community something special to me.