Unknown's avatar

About networkingnerd

Tom Hollingsworth, CCIE #29213, is a former network engineer and current organizer for Tech Field Day. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

BGP: The Application Networking Dream

There was an interesting article last week from Fastly talking about using BGP to scale their network. This was but the latest in a long line of discussions around using BGP as a transport protocol between areas of the data center, even down to the Top-of-Rack (ToR) switch level. LinkedIn made a huge splash with it a few months ago with their Project Altair solution. Now it seems company after company is racing to implement BGP as the solution to their transport woes. And all because developers have finally pulled their heads out of the sand.

BGP Under Every Rock And Tree

BGP is a very scalable protocol. It’s used the world over to exchange routes and keep the Internet running smoothly. But it has other powers as well. It can be extended to operate in other ways beyond the original specification. Unlike rigid protocols like RIP or OSPF, BGP was designed in part to be extended and expanded as needs change. IS-IS is a very similar protocol in that respect. It can be upgraded and adjusted to work with both old and new systems at the same time. Both can be extended without the need to change protocol versions midstream or introduce segmented systems that would run like ships in the night.

This isn’t the first time that someone has talked about running BGP to the ToR switch either. Facebook mentioned it in this video almost three years ago. Back then they were solving some interesting issues in their own data center. Now, those changes from the hyperscale world are filtering into the real world. Networking teams are seeking to solve scaling issues without resorting to overlay networks or other types of workarounds. The desire to fix everything wrong with layer 2 has led to a revelation of sorts. The real reason why BGP is able to work so well as a replacement for layer 2 isn’t because we’ve solved some mystical networking conundrum. It’s because we finally figured out how to build applications that don’t break because of the network.

Apps As Far As The Eye Can See

The whole reason why layer 2 networks are the primary unit of data center measurement has absolutely nothing to do with VMware. VMware vMotion behaves the way that it does because legacy applications hate having their addresses changed during communications. Most networking professionals know that MAC addresses have a tenuous association to IP addresses, which is what allows the gratuitous ARP after a vMotion to work so well. But when you try to move an application across a layer 3 boundary, it never ends well.
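
To make that mechanism concrete, here is a minimal sketch of the gratuitous ARP trick using Scapy. The interface name, IP address, and MAC address are hypothetical placeholders, not anything captured from a real vMotion event.

```python
# A minimal sketch of the gratuitous ARP described above, using Scapy.
# The interface, IP, and MAC below are hypothetical placeholders.
from scapy.all import ARP, Ether, sendp

VM_IP = "10.1.1.50"                  # IP address that just moved with the VM
NEW_HOST_MAC = "00:50:56:aa:bb:cc"   # MAC that now answers for that IP

# A gratuitous ARP is an unsolicited "is-at" reply broadcast to the whole L2 segment
garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=NEW_HOST_MAC) / ARP(
    op=2,                            # 2 = ARP reply ("is-at")
    hwsrc=NEW_HOST_MAC,
    psrc=VM_IP,
    hwdst="ff:ff:ff:ff:ff:ff",
    pdst=VM_IP,                      # target IP == sender IP marks it as gratuitous
)

# Every switch and host on the segment updates its tables to point at the new port
sendp(garp, iface="eth0")
```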

When web scale companies started building their application stacks, they quickly realized that being pinned to a particular IP address was a recipe for disaster. Even typical DNS-based load balancing only seeks to distribute requests to a series of IP addresses behind some kind of application delivery controller. With legacy apps, you can’t load balance once a particular host has resolved a DNS name to an IP address. Once the gateway of the data center resolves that IP address to a MAC address, you’re pinned to that device until something upsets the balance.
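
A quick way to see the pinning problem is to watch what a legacy client actually does with DNS. The hostname below (www.example.com) simply stands in for a load-balanced application name; the point is that even when the name returns several addresses, a traditional application resolves once and sticks with its answer.

```python
# Illustration of the pinning problem: DNS may hand back several addresses,
# but a legacy client resolves once and keeps talking to one of them.
import socket

records = socket.getaddrinfo("www.example.com", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({r[4][0] for r in records})
print("Addresses behind the name:", addresses)

# A typical legacy app does the equivalent of this once at startup...
pinned = addresses[0]
print("...and then talks to", pinned, "until something upsets the balance")
```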

Web scale apps like those built by Netflix or Facebook don’t operate by these rules. They have been built to be resilient from inception. Web scale apps don’t wait for the Next Hop Resolution Protocol (NHRP) or kludgy load balancing mechanisms to fix their problems. They are built to do that themselves. When problems occur, the applications look around and find a way to reroute traffic. No crazy ARP tricks. No sly DNS. Just software taking care of itself.

The implications for network protocols are legion. If a web scale application can survive a layer 3 communications issue then we are no longer required to keep the entire data center as a layer 2 construct. If things like anycast can be used to pin geolocations closer to content that means we don’t need to worry about large failover domains. Just like Ivan Pepelnjak (@IOSHints) says in this post, you can build layer 3 failure domains that just work better.

BGP can work as your ToR strategy for route learning and path selection because you aren’t limited to forcing applications to communicate at layer 2. And other protocols that were created to fix limitations in layer 2, like TRILL or VXLAN, become an afterthought. Now, applications can talk to each other and fail back and forth as they need to without the need to worry about layer 2 doing anything other than what it was designed to do: link endpoints to devices designed to get traffic off the local network and into the wider world.
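
For a sense of what "BGP to the ToR" can look like in practice, here is a hedged sketch that renders an FRR-style leaf configuration: each leaf gets its own ASN, peers eBGP with the spines, and advertises only its rack prefix. The ASNs, addresses, and template are illustrative assumptions, not anyone’s production design.

```python
# A sketch of generating an FRR-style leaf BGP config for a ToR switch.
# ASNs, peer addresses, and the rack prefix are hypothetical.
LEAF_ASN = 65101
SPINE_PEERS = {"10.0.0.1": 65001, "10.0.0.3": 65002}  # spine IP -> spine ASN
RACK_PREFIX = "172.16.101.0/24"

lines = [f"router bgp {LEAF_ASN}"]
for ip, asn in SPINE_PEERS.items():
    lines.append(f" neighbor {ip} remote-as {asn}")
lines += [
    " address-family ipv4 unicast",
    f"  network {RACK_PREFIX}",     # advertise only this rack's prefix upstream
    " exit-address-family",
]

print("\n".join(lines))
```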


Tom’s Take

One of the things that SDN has promised us is a better way to network. I believe that the promise of making things better and easier is a noble goal. But the part that has bothered me since the beginning was that we’re still trying to solve everyone’s problems with the network. We don’t rearrange the power grid every time someone builds a better electrical device. We don’t replumb the house every time we install a new sink. We find a way to make the new thing work with our old system.

That’s why the promise of using BGP as a ToR protocol is so exciting. It has very little to do with networking as we know it. Instead of trying to work miracles in the underlay, we build the best network we know how to build. And we let the developers and programmers do the rest.

The Death of TRILL

Networking has come a long way in the last few years. We’ve realized that hardware and ASICs aren’t the constant that we could rely on to make decisions in the next three to five years. We’ve thrown in with software and the quick development cycles that allow us to iterate and roll out new features weekly or even daily. But the hardware versus software battle has played out a little differently than we all expected. And the primary casualty of that battle was TRILL.

Symbiotic Relationship

Transparent Interconnection of Lots of Links (TRILL) was proposed as a solution to the complexity of spanning tree. Radia Perlman realized that her bridging loop solution wouldn’t scale in modern networks. So she worked with the IETF to solve the problem with TRILL. We also received Shortest Path Bridging (SPB) from the IEEE along the way as an alternative solution to the layer 2 issues with spanning tree. The motive was sound, but the industry has rejected the premise entirely.

Large layer 2 networks have all kinds of issues. ARP traffic, broadcast amplification, and numerous other issues plague layer 2 when it tries to scale to multiple hundreds or a few thousand nodes. The general rule of thumb is that layer 2 broadcast networks should never get larger than 250-500 nodes lest problems start occurring. And in theory that works rather well. But in practice we have issues at the software level.

Applications are inherently complicated. Software written in the pre-Netflix era of public cloud adoption doesn’t like it when the underlay changes. So things like IP addresses and ARP entries were assumed to be static. If those data points change you have chaos in the software. That’s why we have vMotion.

At the core, vMotion is a way for software to mitigate hardware instability. As I outlined previously, we’ve been fixing hardware with software for a while now. vMotion could ensure that applications behaved properly when they needed to be moved to a different server or even a different data center. But they also required the network to be flat to overcome limitations in things like ARP or IP. And so we went on a merry journey of making data centers as flat as possible.

The problem came when we realized that data centers could only be so flat before they collapsed in on themselves. ARP and spanning tree limited the amount of traffic in layer 2 and those limits were impossible to overcome. Loops had to be prevented, yet the simplest solution, blocking links, disabled bandwidth needed to make things run smoothly. That caused the IEEE and IETF to come up with layer 2 solutions that used IS-IS, a CLNS-based link-state protocol, to solve loops. And it was a great idea in theory.

The Joining

In reality, hardware can’t be spun that fast. TRILL was used as a reference platform for proprietary protocols like FabricPath and VCS. All the important things were there but they were locked into hardware that couldn’t be easily integrated into other solutions. We found ourselves solving problem after problem in hardware.

Users became fed up. They started exploring other options. They finally decided that hardware wasn’t the answer. And so they looked to software. And that’s where we started seeing the emergence of overlay networking. Protocols like VXLAN and NV-GRE emerged to tunnel layer 2 packets over layer 3 networks. As Ivan Pepelnjak is fond of saying, layer 3 transport solves all of the issues with scaling. And even the most unruly application behaves when it thinks everything is running on layer 2.

Protocols like VXLAN solved an immediate need. They removed limitations in hardware. Tunnels and fabrics used novel software approaches to solve insurmountable hardware problems. An elegant solution for a thorny problem. Now, instead of waiting for a new hardware spin to fix scaling issues, customers could deploy solutions to fix the issues inherent in hardware on their own schedule.
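
As a rough illustration of the overlay idea, the sketch below stitches a layer 2 segment across a routed network with a VXLAN tunnel using standard iproute2 commands. The interface names, VNI, and VTEP addresses are hypothetical, and this assumes a Linux host (run as root) on each end of the tunnel.

```python
# A minimal sketch of the overlay idea: carry a layer 2 segment across a
# layer 3 network with VXLAN, using standard iproute2 commands.
# Interface names, the VNI, and the VTEP addresses are hypothetical.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# Create VXLAN VNI 100, sourced from this host's routed interface and
# pointed at the remote VTEP on the other side of the layer 3 fabric.
run("ip link add vxlan100 type vxlan id 100 local 192.0.2.1 remote 192.0.2.2 dstport 4789 dev eth0")
run("ip addr add 10.100.0.1/24 dev vxlan100")   # the "flat" layer 2 segment the app sees
run("ip link set vxlan100 up")
```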

This is the moment where software defined networking (SDN) took hold of the market. Not when words like automation and orchestration started being thrown about. No, SDN became a real thing when it enabled customers to solve problems without buying more physical devices.


Tom’s Take

Looking back, we realize now that building large layer 2 networks wasn’t the best idea. We know that layer 3 scales much better, and given the number of providers and end users running BGP to top-of-rack (ToR) switches, the industry seems to have reached the same conclusion. It took us too long to figure out that the best solution to a problem sometimes takes a bit of thought to implement.

Virtualization is always going to be limited by the infrastructure it’s running on. Applications are only as smart as the programmer. But we’ve reached the point where developers aren’t counting on having access to layer 2 protocols that solve stupid decision making. Instead, we have to understand that the most resilient way to fix problems is in the software. Whether that’s VXLAN, NV-GRE, or a real dev team not relying on the network to solve bad design decisions.

Is Interop Dead?

I’m at Interop this week talking all things networking with a great group of people. There are quite a few members of the community here presenting, listening and discussing. There’s a great exchange of ideas flowing back and forth. Yet one thing I keep hearing in quiet corners of the room is a hushed discussion of the continued viability of Interop as a conference. Is it time to write the Interop obituary?

Only Mostly Dead

Some of the arguments are as old as tech itself. People claim that getting vendors to interoperate today is an afterthought thanks to protocols like OSPF. All of the important bits in a network are standardized now. Use of APIs and other open technologies are driving vendors to play nice with each other. The need to show up in a faraway place and do the work has long passed.

There’s also the discussion around the bigger conferences out in the world. Vendor conferences like Cisco Live and VMworld draw tens of thousands. New product announcements are dropping left and right during these events. People also want to fracture into tool-specific events like OpenStack Summit or DockerCon. Or the various analyst events or company days that happen every month. Why have a conference like Interop when others do it bigger?

Lastly, there’s the argument that the idea of an expo floor is long gone. Why should we get a booth on the show floor to show off our solution? Why not just approach people directly? Why spend money to be there like everyone else? Companies that stand out get noticed. Companies that leverage social media and SEO get the business. Not companies that buy space on a floor somewhere.

Exaggerated Rumors

Let’s jump on these one at a time.

First, claiming that all vendors play nice today thanks to standard protocols is misguided at best and downright silly at worst. Just because OSPF is standard doesn’t mean that people aren’t going to bend it to their own needs. Remember totally stubby areas? Those were very Cisco-specific for a long time. And throwing APIs into the discussion muddies the waters even further. Just because you have an API attached to your software doesn’t mean you can interoperate. How well is your API documented? Do I need to buy SDK access to get into it? Is my code portable? Do you do something silly like not supporting Python but loving things like Java and Perl?

Interoperability is a problem that hasn’t been solved completely. Even if everything works on paper and in PowerPoint slides, there’s still an acid test when you plug it all in for real and make it work. That moment of relief when two different vendors’ devices come up with the same protocols and everything talks is a magical time. You can’t replicate that in documentation. There’s still a huge need to have companies show up and physically prove that things work in a neutral setting versus a heavily slanted bakeoff lab somewhere.

Secondly, let’s look at those other conferences. Sure, Cisco Live has 20,000 people, all of whom want to talk about Cisco. There isn’t a whole lot of room for Juniper, Brocade, or HPE to be there. Differing opinions aren’t welcome. At best they are discussed in hushed tones in the corners of the room (sound familiar?). Mindshare is important, but so are honest discussions. As for smaller conferences focused on specific tools like OpenStack Summit, you quickly see their limits when you start talking to attendees about pieces of the stack the solution doesn’t address. There’s importance in being able to talk about all the parts without being overly myopic on one part.

The other piece of the other conferences refers back to a conversation that happened on Twitter last month about community content in large vendor conferences. There was talk from a number of people about how vendor conferences freeze out community content because people pay big bucks to come hear a Technical Marketing Engineer read slides about configuring a feature. It’s a far cry from the deep discussion and analysis that you get in other places. How do you work around bugs in code? What happens when a feature is missing? Communities solve problems. Big conferences do a bad job of getting community involvement outside of expo floor crawls and keynotes. If they didn’t, we wouldn’t need the amazing work of vBrownBag and Security BSides. Community matters to people.

Thirdly, the expo floor discussion. I’ll admit that I find expo floors to be personally challenging, but to eschew them in favor of a completely social strategy or relying on SEO to pop up on people’s radar is a recipe for disaster. Being able to physically talk to a person is still a valuable part of the process. How do you feel about their approach? Are they happy with the technology? Does it work in the physical demo? Do you get the sense that they are pushing too hard or trying to close you on a sale without giving you what you need? That’s an experience you don’t get via email or DM on Twitter. Whether or not my personal feelings matter, the truth is that the expo does still matter.


Tom’s Take

I’m biased. I’ve loved Interop for a number of years even before I got involved in the programming of it. The idea that people show up to prove definitively that their stuff works with all the other stuff is a gut check like no other. In the last couple of years the planners of the conference have worked hard to make sure that the programming has reflected the kinds of things that people need to be aware of on the horizon and what they need to be learning. Less focus on vendor sales pitches and more on independent content. Lean and mean and ready to fight the rest of the conference world to prove that content is still king.

As long as I’m involved, I can promise that any rumors of Interop’s impending death will stay just that – talk and rumor. There’s still a lot of life left in this grand event to make it matter to the people it should matter to.

Linux and the Quest for Underlays

I’m at the OpenStack Summit this week and there’s a lot of talk around about building stacks and offering everything needed to get your organization ready for a shift toward service provider models and such. It’s a far cry from the battles over software networking and hardware dominance that I’m so used to seeing in my space. But one thing came to mind that made me think a little harder about architecture and how foundations are important.

Brick By Brick

The foundation for the modern cloud doesn’t live in fancy orchestration software or data modeling. It’s not because a retailer built a self-service system or a search engine giant decided to build a cloud lab. The real reason we have a growing market for cloud providers today is because of Linux. Linux is the underpinning of so much technology today that it’s become nothing short of ubiquitous. Servers are built on it. Mobile operating systems use it. But no one knows that’s what they are using. It’s all just something running under the surface to enable applications to be processed on top.

Linux is the vodka of operating systems. It can run in a stripped down manner on a variety of systems and leave very little trace behind. BSD is similar in this regard but doesn’t have the driver support from manufacturers or the ability to be stripped down to the core kernel with only a few modifications. Linux gives vendors and operators the flexibility to create a software environment that boots and gets basic hardware working. The rest is up to the creativity of the people writing the applications on top.

Linux is the perfect underlay. It’s a foundation that is built upon without getting in the way of things running above it. It gives you predictable performance and a familiar environment. That’s one of the reasons why Cumulus Networks and Dell have embraced Linux as a way to create switch operating systems that get out of the way of packet processing and let you build on top of them as your needs grow and change.

Break The Walls Down

The key to building a good environment is a solid underlay, whether it be in systems or in networking. With reliable transport and operations taken care of, amazing things can be built. But that doesn’t mean that you need to build a silo around your particular area of organization.

The shift to clouds and stacks and “new” forms of IT management isn’t going to happen if someone has built up a massive blockade. They will work when you build a system that has common parts and themes and allows tools to work easily on multiple parts of the infrastructure.

That’s what’s made Linux such a lightning rod. If your monitoring tools can monitor servers, SANs, and switches with little to no modification you can concentrate your time on building on those pieces instead of writing and rewriting software to get you back to where you started in the first place. That’s how systems can be extensible and handle changes quickly and efficiently. That’s how you build a platform for other things.
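
As a small illustration of that point, a sketch like the one below can poll servers and Linux-based switches with exactly the same code. It assumes SSH key access and uses hypothetical hostnames.

```python
# A minimal sketch: when servers and switches both run Linux, one tool can
# watch them all the same way. Hostnames are hypothetical, and SSH key
# access to each device is assumed.
import subprocess

DEVICES = ["web01", "san01", "leaf01", "spine01"]  # servers and Linux-based switches alike

def load_average(host):
    """Read /proc/loadavg the same way regardless of what the box actually is."""
    result = subprocess.run(
        ["ssh", host, "cat", "/proc/loadavg"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.split()[0]

for device in DEVICES:
    print(device, "1-minute load:", load_average(device))
```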


Tom’s Take

I like building Lego sets. But I really like building them with the old fashioned basic bricks. Not the fancy new ones from licensed sets. Because the old bricks were only limited by your creativity. You could move them around and put them anywhere because they were all the same. You could build amazing things with the right basic pieces.

Clouds and stacks aren’t all that dissimilar. We need to focus on building underlays of networking and compute systems with the same kinds of basic blocks if we ever hope to have something that we can build upon for the future. You may not be able to influence the design of systems at the most basic level when it comes to vendors and suppliers, but you can vote with your dollars to back the solutions that give you the flexibility to get your job done. I can promise you that when the revenue from proprietary, non-open underlay technologies goes down the suppliers will start asking you the questions you need to answer for them.

Automating Change With Help From Fibonacci

A few recent conversations that I’ve seen and had with professionals about automation have been very enlightening. It all started with a post on StackExchange about an unsuspecting user that tried to automate a cleanup process with Ansible and accidentally erased the entire server farm at a service provider. The post was later determined to be a viral marketing hoax but was quite believable to the community because of the power of automation to make bad ideas spread very quickly.

Better The Devil You Know

Everyone in networking has been in a place where they’ve typed in something they shouldn’t have. Whether you removed the management network you were using to access the switch or created an access list that denied the wrong packets and locked you out of something. Or perhaps you typed an errant debug command that forced you to drive an hour to reboot a switch that was no longer responding. All of these things seem to happen to people as part of the learning process.

But how many times have we typed something in to create a change and found that it broke more than we expected? Like changing a native VLAN on a trunk and bringing down a link we didn’t intend to affect? These unforeseen accidents are the kinds of problems that can easily be magnified by scripts or automation.

I wrote a post about people blaming tools for SolarWinds a couple of months ago. In it, there was a story about a person that uploaded the wrong switch firmware to a server and used an automated tool to kick off an upgrade of his entire network. Only after the first switch failed to return to normal did he realize that he had downloaded an incorrect firmware to the server. And the command he used to kick off the upgrade was not the safe version of the command that checks for compatibility. Instead, it was the quick version of the command that copied the firmware directly into flash and rebooted the switch without confirmation.

While people are quick to blame tools for making mistakes race through the network quickly, it should also be recognized that those issues would have been mistakes no matter what. Just because a system is capable of being automated doesn’t mean that your commands are exempt from being checked and rechecked. Too often a typo or an added word somewhere in the mix causes unintended chaos because we didn’t take the time to make sure there were no problems ahead of time.

Fibonacci Style

I’ve always tried to do testing and regression in a controlled manner. For some places that have simulators and test networks to try out changes this method might still work. But for those of us that tend to fly by the seat of our pants on production devices, it’s best to artificially limit the damage before it becomes too great. Note that this method works with automation systems too provided you are controlling the logic behind it.

When you go out to test a network-wide change or perform an upgrade, pick one device as your guinea pig. It shouldn’t be something pushing massive production traffic like a core switch. Something isolated in the corner of the network usually works just fine. Test your change there outside production hours and with a fully documented backout plan. Once you’ve implemented it, don’t touch anything else for at least a day. This gives the routing tables time to settle down completely and all of the aging timers a chance to expire and tables to recompile. Only then can you be sure that you’re not dealing with cached information.

Now that you’ve done it once, you’re ready to make it live to 10,000 devices, right? Absolutely not. Now that you’ve proven that the change doesn’t cause your system to implode and take the network with it, you pick another single device and do it again. This time, you either pick a neighbor device to the first one or something on the other side of the network. The other side of the network ensures that changes don’t ripple across between devices over the 24-hour watch period. On the other hand, if the change involves direct connectivity changes between two devices you should test them to be sure that the links stay up. Much easier to recover one failed device than 40 or 400.

Once you follow the same procedure with the second upgrade and get clean results, it’s time to move to doing two devices at once. If you have a fancy automation system like Ansible or Puppet this is where you will be determining that the system is capable of handling two devices at once. If you’re still using scripts, this is where you make sure you’re pasting the right info into each window. Some networks don’t like two devices changing information at the same time. Your routing table shouldn’t be so unstable that a change like this would cause problems but you never know. You will know when you’re done with this.

Now that you’ve proven that you can make changes without cratering a switch, a link, or the entire network all at once, you can continue. Move on to three devices, then five, then eight. You’ll notice that these rollout plans are following the Fibonacci Sequence. This is no accident. Just like the appearance of these numbers in nature and math, having a Fibonacci rollout plan helps you evaluate changes and rollback problems before they grow out of hand. Just because you have the power to change the entire network at once doesn’t mean that you should.
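
If you want to hand this rollout plan to an automation system, the batching logic is only a few lines. The sketch below is a minimal example of the idea; the device names and the deploy and verify steps are placeholders for whatever Ansible playbook or script you actually run.

```python
# A sketch of the Fibonacci-style rollout described above: batch sizes grow
# as 1, 1, 2, 3, 5, 8... so an automation run never explodes its blast radius
# in a single step. The inventory and deploy()/verify() calls are placeholders.
def fibonacci_batches(devices):
    """Yield successive batches of devices sized 1, 1, 2, 3, 5, 8, ..."""
    a, b = 1, 1
    i = 0
    while i < len(devices):
        yield devices[i:i + a]
        i += a
        a, b = b, a + b

devices = [f"switch{n:02d}" for n in range(1, 21)]  # hypothetical inventory

for batch in fibonacci_batches(devices):
    print("Changing:", ", ".join(batch))
    # deploy(batch)           # hand the batch to Ansible, Puppet, or a script
    # verify_and_wait(batch)  # soak period before growing the next batch
```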


Tom’s Take

Automation isn’t the bad guy here. We are. We are fallible creatures that make mistakes. Before, those mistakes were limited to how fast we could log into a switch. Today, computers allow us to make those mistakes much faster on a larger scale. The long-term solution to the problem is to test every change ahead of time completely and hope that your test catches all the problems. If it doesn’t, then you had better hope your backout plan works. But by introducing a rolling system similar to the Fibonacci sequence above, I think you’ll find that mistakes will be caught sooner and problems will be rectified before they are amplified. And if nothing else, you’ll have lots of time to explain to your junior admin all about the wonders of Fibonacci in nature.

Adapting Applications And Avoiding Acrobatic Adjustments

A couple of months ago, I was on a panel at TechUnplugged where we talked about scaling systems to large sizes. Here’s a link to the video of that panel:

One of the things that we discussed in that panel was applications. Toward the end of the discussion we got into a bit of a back-and-forth about applications and the systems they run on. I feel like it’s time to develop those ideas a bit more.

The Achilles’ Heel

My comments about legacy applications are pointed. If a company is spending thousands of dollars and multiple hours of the engineering team’s time to reconfigure the network or the storage systems to support an old application, my response is simple: go out of business.

It does sound a bit flippant to think that a company making a profit should just close the shutters and walk away. But that’s just the problem that we’re facing in the market today. We’ve spent an inordinate amount of time creating bespoke, custom networks and systems to support applications that were written years, or even decades, ago in alien environments.

We do it every day without thinking. We have to install this specific Java version on the server to get the payroll application running. Don’t install the new security updates on this server because it breaks the HR database. We have to have the door security server running in the same VLAN as the camera system or they won’t communicate. The list seems to be endless.

The problem is not that we’re jumping through hoops to fix all of these things. The problem is really that we shouldn’t have to do it. Look at all of the work that’s being done in agile development today. Companies are creating programs that do things that we thought were impossible just a few years ago. Features are being added rapidly without compromising stability. Applications can be deployed and run in the cloud or on-site. It’s a brave new world if you’re willing to throw out the basic assumption that applications are immutable.

The Arrow of Apollo

Everyone has made concessions to an application at some point in their career. We’ve done very dumb things to make software work. We do it so often that it becomes second nature. Entire product features have been developed because applications are too stupid to work in certain scenarios.

Imagine if you will what happens when you’re told that the new HR database doesn’t like it when the database is slow to respond. You likely start thinking of all the ways that you can make the network respond faster or reroute traffic to prevent this scenario from happening. You’re willing to actively destroy and reconfigure network architecture because one software program doesn’t like latency.

How far are you willing to take it? In some cases the answers are pretty scary. Cisco has a line of industrial switches with integrated NAT because some systems don’t know how to handle not being hard-coded with IP addresses. I’ve disabled roaming between access points in a wireless network because the medical records system couldn’t handle roaming to a new AP as it made the database connection close instantly. I’ve even had to disable QoS in a network because the extra latency it was causing in the CRM application was making the sales people cranky.

All of these solutions to problems assumed that the application that was having the problem couldn’t be fixed. What would happen if we turned that idea back around and fought the fight differently? What if we started the discussion of application problems by working through application programming issues instead? What if the fault of the system wasn’t at the network level or the compute level but instead was at the software level because the development team did a bad job of error handling?

With all the talk of making resources more like the cloud, where the underlay systems exist in a void apart from the software running on top of them, we do have to draw the line at bad application programming. Netflix can’t assume that Amazon will create a special channel link to make application recovery faster. They have to write their software in such a way that it can survive problems in the underlay and avoid them.

Application developers need to stop thinking that the underlying systems will always work perfectly and fix any problems that might crop up. Instead, those applications that are being written need to account for issues and solve their own problems. The Netflix Chaos Monkey ensures that application folks don’t get complacent and code their programs to work when other things are absent. If Netflix stopped working for anything less than AWS going down they wouldn’t have the subscriber base they enjoy today. Netflix makes money because they don’t assume that the networks and systems will always be there. They make money because they know how to react when those systems aren’t there.
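
As a small example of that philosophy, here is a hedged sketch of application-level resilience: retry with exponential backoff and fail over to another endpoint instead of expecting the network to never hiccup. The endpoints, timeouts, and retry counts are hypothetical.

```python
# A sketch of the "solve your own problems" idea: the application retries with
# backoff and rotates to another endpoint rather than demanding a perfect
# network. Endpoints and fetch details are hypothetical.
import random
import time
import urllib.request

ENDPOINTS = ["https://api-east.example.com/data", "https://api-west.example.com/data"]

def fetch_with_failover(retries=4, base_delay=0.5):
    last_error = None
    for attempt in range(retries):
        url = ENDPOINTS[attempt % len(ENDPOINTS)]   # rotate endpoints on each attempt
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.read()
        except OSError as err:                      # network trouble is expected, not fatal
            last_error = err
            # exponential backoff with a little jitter before the next attempt
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
    raise RuntimeError(f"all endpoints failed: {last_error}")
```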


Tom’s Take

The new era of networking and mobile computing has embraced software. Containers have shown us how we can create instances that can be destroyed and recreated as needed. Applications on mobile devices work like they are supposed to because we can’t get into those subsystems to make workarounds for things. It’s time for other application developers to realize that too. We need devs that are willing to write logic into their programs to take control when things go wrong. Everything that we can do to limit custom network configurations for applications takes us one step closer to the utopia we’ve always wanted – software that can run anywhere and scale as much as needed. Because resilient software is better than trying to fix the problems in hardware any day.

Intel and the Network Arms Race

Networking is undergoing a huge transformation. Software is surely a huge driver for enabling technology to grow by leaps and bounds and increase functionality. But the hardware underneath is growing just as much. We don’t seem to notice as much because the port speeds we deal with on a regular basis haven’t gotten much faster than the specs we read about years ago. But the chips behind the ports are where the real action is right now.

Fueling The Engines Of Forwarding

Intel has jumped into networking with both feet and is looking to land on someone. Their work on the Data Plane Development Kit (DPDK) is helping developers write code that is highly portable across CPU architectures. We used to deal with specific microprocessors in unique configurations. A good example is Dynamips.

Most everyone is familiar with this program or the projects it spawned, Dynagen and GNS3. Dynamips worked at first because it emulated the MIPS processor found in Cisco 7200 routers. It just happened that the software used the same code for those routers all the way up to the first releases of the 15.x train. Dynamips allowed for the emulation of Cisco router software, but it was very, very slow. It could barely process packets at all. And most of the advanced switching features didn’t work at all thanks to ASICs.

Running networking code on generic x86 processors doesn’t provide the kinds of performance that you need in a network switching millions of packets per second. That’s why DPDK is helping developers accelerate their network packet forwarding to approach the levels of custom ASICs. This means that a company could write software for a switch using Intel CPUs as the base of the system and expect to get good performance out of it.

Not only can you write code that’s almost as good as the custom stuff network vendors are creating, but you can also have a relative assurance that the code will be portable. Look at the pfSense project. It can run on some very basic hardware. But the same code can also run on a Xeon if you happen to have one of those lying around. That performance boost means a lot more packet switching and processing. No modifications to the code needed. That’s a powerful way to make sure that your operating system doesn’t need radical modifications to work across a variety of platforms, from SMB and ROBO all the way to an enterprise core device.

Fighting The Good Fight

The other reason behind Intel’s drive to get DPDK to everyone is to fight off the advances of Broadcom. It used to be that the term merchant silicon meant using off-the-shelf parts instead of rolling your own chips. Now, it means “anything made by Broadcom that we bought instead of making”. Look at your favorite switching vendor and the odds are better than average that the chipset inside their most popular switches is a Broadcom Trident, Trident 2, or even a Tomahawk. Yes, even the Cisco Nexus 9000 runs on Broadcom.

Broadcom is working their way to the position of arms dealer to the networking world. It soon won’t matter what switch wins because they will all be the same. That’s part of the reason for the major differentiation in software recently. If you have the same engine powering all the switches, your performance is limited by that engine. You also have to find a way to make yourself stand out when everything on the market has the exact same packet forwarding specs.

Intel knows how powerful it is to become the arms dealer in a market. They own the desktop, laptop, and server market space. Their only real competition is AMD, and one could be forgiven for arguing that the only reason AMD hasn’t gone under yet is through a combination of video card sales and Intel making sure they won’t get in trouble for having a monopoly. But Intel also knows what it feels like to miss the boat on a chip transition. Intel missed the mobile device market, which is now ruled by ARM and custom SoC manufacturing. Intel needs to pull off a win in the networking space with DPDK to ensure that the switches running in the data center tomorrow are powered by x86, not Broadcom.


Tom’s Take

Intel’s on the right track to make some gains in networking. Their new Xeon chips with lots and lots of cores can do parallel processing of workloads. Their contributions to CoreOS will help accelerate the adoption of containers, which are becoming a standard part of development. But the real value for Intel is helping developers create portable networking code that can be deployed on a variety of devices. That enables all kinds of new things to come, from system scaling to cloud deployment and beyond.

Video And The Death Of Dialog

I was reading a trivia article the other day about the excellent movie Sex, Lies, & Videotape when a comment by the director, Steven Soderbergh, caught my eye. The quote, from this article, talks about how we use video as a way to distance ourselves from events. Soderbergh used it as a metaphor in a movie made in 1989. In today’s society, I think video is having this kind of impact on our careers and our discourse in a much bigger way.

Writing It Down In Pictures

People have become huge consumers of video. YouTube gets massive amounts of traffic. Devices have video recording capabilities built in. It’s not uncommon to see a GoPro camera attached to anything and everything and see people posting videos online of things that happen.

My son is a huge fan of watching videos of other people playing video games. He’ll watch hours of video of someone playing a game and narrating the experience. When I tell him that he’s capable of playing the game himself he just tells me, “It’s not as fun that way, Dad.” I, too, have noticed that a lot of things that would normally have been written down are narrated as videos today.

A great example of this is the Stuck in Traffic video blog series from J Wolfgang Goerlich (@JWGoerlich). These videos are great examples of things that would have been blog posts just a few years ago but have become videos that were narrated and posted to a channel for people to consume. This is also the way that podcasts have risen to dominate the attention of people looking to consume information. But video requires a bit more attention as compared to audio-only discussions.

One of the big issues I see with videos is that they are not living, breathing documents. They exist as they are created with no way to modify the content short of destroying it and recreating it. If I write a blog post and make a factual error, it’s very easy to fix that issue. I can write a note about how I made a mistake and someone pointed it out. Or, in some cases, I can write a whole new post about the error and how I figured out what was wrong.

But video is different. I find all too often that people make factual errors in videos by either misstating something or being plain mistaken. Usually these errors aren’t corrected in the process because the subject is unaware of things. But instead of being able to correct it with a follow up or a postscript, most video producers are forced to cover the incorrect comment with a large annotation in the video window pointing out that they were wrong and force the viewer to pay attention to the comment and not the spoken word or written word in the original video.

The lack of ability to correct problems and create living documents is a huge one for video creators. Errors can’t be easily fixed. On platforms like YouTube you can’t even upload a new video in place of the old one without destroying all of your views and comments. It makes mea culpas a huge pain. There’s no process to fix things unless you catch them before posting.

On Broadcast With No Mic

The other thing that bothers me the most about video is actually very similar to the reason why I hate keynotes. The whole process of broadcasting a message without soliciting feedback is irritating at best. With a blog post, you can have comments and discussions and even more posts about subjects that go on to create commentary. Videos are static. You can’t start watching one and then come back to it later like you can with a blog post. Of course videos can be paused, but the vast majority of video creators do their best to minimize the amount of dead air in a video.

It’s also very difficult to create discussion with videos. Instead of being able to address points one at a time and create dialog there, you are forced to address or refute points in a series with no stopping. That makes it easy to overlook things without realizing that you missed a great idea or you could have summed things up with a very easy point somewhere else.

What’s missing is the ability to let conversations develop. If you think of Slack and email as the ultimate form of conversation, video is the ultimate form of one-sided discussion. Television, movies, and other video sources are designed to deliver content with no regard for feedback going the other direction. That is the key that is missing to make video be something beyond a simple broadcast medium.


Tom’s Take

Video is a tool designed to get your views across with minimal input from viewers. It took me a while to realize this until I heard my son “closing” a video with the standard “like, share, and subscribe” type of sign off. There wasn’t any mention of leaving comments or creating a video reply. It was really at that point that I realized that video blogs and channels are the pinnacle of insulating us from the audience. All a creator needs to do is post videos and turn off comments, and you can almost guarantee that they can continue creating messages that people will hear but never be able to respond to.

Wireless As We Know It Is Dead

Congratulations! We have managed to slay the beast that is wireless. We’ve driven a stake through its heart and prevented it from destroying civilization. We’ve taken a nascent technology with potential and turned it into the same faceless corporate technology as the Ethernet that it replaced. Alarmist? Not hardly. Let’s take a look at how 802.11 managed to come to an inglorious end.

Maturing Or Growing Up

Wireless used to be the wild frontier of networking. Sure, those access points bridged to the traditional network and produced packets and frames like all the other equipment. But wireless was unregulated. It didn’t conform to the plans of the networking team. People could go buy a wireless access point and put it under their desk to make that shiny new laptop with 802.11b work without needing to be plugged in.

Wireless used to be about getting connectivity. It used to be about squirreling away secret gear in the hopes of getting a leg up on the poor schmuck in the next cube that had to stay chained to his six feet of network connectivity under the desk. That was before the professionals came in. They changed wireless. They put a suit on it.

Now, wireless isn’t about making my life easier. It’s about advancing the business. It’s about planning and preparation and enabling applications. It’s about buying lots of impressively-specced access points every three years to light up new wings of the building. It’s about surveying for coverage and resource management to make sure the signal is strong everywhere. Everyone has to play nice and understand the rules.

Wireless professionals are the worst of the lot. They used to deal in black magic and secret knowledge that made them the most valuable people on the planet. They alone knew the secrets of how spectrum worked or what co-channel interference was. That was before the dark times. Before people wanted to learn more about it. Now, we can teach people these concepts. How to use tools to fix problems. Why things must be laid out in certain ways to maximize usefulness. We’ve made everyone special.

Now, the business doesn’t want wizards with strange work habits and even stranger results. They want the same predictable group that they’ve gotten for the last decade with the network team. They want people to blame when their application is slow. They want the infrastructure to work full time in every little corner of the building. And when it doesn’t, they want to know whose head must roll for this affront!

The Establishment

Another thing that destroyed wireless was everyone’s attempt to make it mainstream. Gartner’s Wired and Wireless reports didn’t help. Neither did the push to create tools that make it easy to diagnose issues with a minimum of effort. Now, companies think that wireless is something that just happens. Something that doesn’t take planning to execute. Now, wireless professionals are fired or marginalized because it shouldn’t take that much money to configure something so simple, right?

Why do wireless people need professional development? The networking team gets by with reading those old dusty books. How much can wireless really change year to year? It just gets faster and more expensive. Why should you have to learn how to put up those little access points all over again?

Now that wireless is a part of the infrastructure like switches and routers, it’s time to be forgotten. Now the business needs to focus on other technology that’s likely to be implemented incorrectly that doesn’t support the mission of the business. You know, the kinds of things that we read about in industry trade magazines that they use in sports stadiums or hospitals that sound really awesome and can’t be all that expensive, right?


Tom’s Take

We killed wireless because we used it to do the job it was designed to do. We made it boring and useful and pervasive. As soon as a technology achieves that level of use it naturally becomes something unimportant. Which you will be quick to argue about until you realize that you’re probably reading this from a smartphone that is so commonplace you forget you’re using it.

Now we talk about the apps and technology we’re building on top of wireless. Mobility, location, and other things that are more appealing to people shelling out money to buy things. Buyers don’t want boring. They want expensive gadgets they can point to and loudly proclaim that they spent a lot for this bauble.

Wireless is a victim of its own success. We fought to make it a part of the mainstream and now that it is no one cares about it any more. Now that we take it for granted we must accept that it’s not a “thing” any more. It just is.

Thoughts On Encryption

The debate on encryption has heated up significantly in the last couple of months. Most of the recent discussion has revolved around a particular device in a specific case but encryption is older than that. Modern encryption systems represent the culmination of centuries of development of making sure things aren’t seen.

Encryption As A Weapon

Did you know that twenty years ago the U.S. Government classified encryption as a munition? Data encryption was classified as a military asset and placed on the U.S. Munitions List as an auxiliary asset. The control of encryption as a military asset meant that exporting strong encryption to foreign countries was against the law. For a number of years the only thing that could be exported without fear of legal impact was regular old Data Encryption Standard (DES) methods. Even 3DES, which is theoretically much stronger but practically not much better than its older counterpart, was restricted for export to foreign countries.

While the rules around encryption export have been relaxed since the early 2000s, there are still some restrictions in place. Those rules are for countries that are on U.S. Government watch lists for terror states or governments deemed “rogue” states. This is why you must attest to not living in or doing business with one of those named countries when you download any software that contains encryption protocols. The problem today is that almost every piece of software includes some form of encryption thanks to ubiquitous functions like OpenSSL.

Even the father of Pretty Good Privacy (PGP) was forced to circumvent U.S. law to get PGP into the hands of the world. Phil Zimmermann took the novel approach of printing the entirety of PGP’s source code in book form and selling it far and wide. Since books are protected speech, no one could stop them from being sold. The only barrier to creating PGP from the text was how fast one could type. Today, PGP is one of the most popular forms of encrypting written communications, such as emails.

Encryption As A Target

Today’s issues with encryption are rooted in the idea that it shouldn’t be available to people that would use it for “bad” reasons. However, instead of being able to draw a line around a place on a map and say “The people inside this line can’t get access to strong encryption”, we now live in a society where strong encryption is available on virtually any device to protect the growing amount of data we store there. Twenty years ago no one would have guessed that we could use a watch to pay for coffee, board an airplane, or communicate with loved ones.

All of that capability comes with a high information cost. Our devices need to know more and more about us in order to seem so smart. The amount of data contained in innocuous things causes no end of trouble should that data become public. Take the amount of data contained on the average boarding pass. That information is enough to know more about you than is publicly available in most places. All from one little piece of paper.

Keeping that information hidden from prying eyes is the mission of encryption. The spotlight right now is on the government and their predilection for looking at communications. Even the NSA once stated that strong encryption abroad would weaken the ability of their own technology to crack signals intelligence (SIGINT) communications. Instead, the NSA embarked on a policy of sniffing the data before it was ever encrypted by installing backdoors in ISPs and other areas to grab the data in flight. Add in the recent vulnerabilities found in the key exchange process and you can see why robust encryption is critical to protecting data.

Weakening encryption to enable it to be easily overcome by brute force is asking for a huge Pandora’s box to be opened. Perhaps in the early nineties it was unthinkable for someone to be able to command enough compute resources to overcome large number theory. Today it’s not unheard of to have control over resources vast enough to reverse engineer simple problems in a matter of hours or days instead of weeks or years. Every time a new vulnerability comes out that uses vast computing power to break theory it weakens us all.


Tom’s Take

Encryption isn’t about one device. It’s not about one person. It’s not even about a group of people that live in a place with lines drawn on a map that believe a certain way. It’s about the need for people to protect their information from exposure and harm. It’s about the need to ensure that that information can’t be stolen or modified or rerouted. It’s not about setting precedents or looking good for people in the press.

Encryption comes down to one simple question: If you dropped your phone on the floor of DefCon or BlackHat, would you feel comfortable knowing that it would take longer to break into than the average person would care to try versus picking on an easier target or a more reliable method? If the answer to that question is “yes”, then perhaps you know which side of the encryption debate you’re on without even asking the question.