IP Class is Now in Session

You may have seen something making the rounds on Twitter this week about a couple of proposed drafts designed to alleviate the problems with IPv4 exhaustion by repurposing some old IP spaces that aren’t available for use right now. Specifically:

Ultimately, this is probably going to fail for a variety of reasons and looks like it's more of a suggestion than anything else, but I wanted to take a moment to talk about why this isn't an effective way of fixing address issues.

Error Bearers

The first reason that the Schoen drafts are going to fail is that most of the operating systems in the world won't allow you to use reserved spaces for a system address. Because we knew years ago that certain spaces were marked as non-usable, that logic was built into the system to disallow the use of those spaces. And even if the system isn't configured to disallow that space, there's no guarantee the traffic is going to be transmitted.

Let's take 127/8 as a good example. Was it a smart idea to mark 16 million addresses as loopback host-only space? Nope. But that ship has sailed and we aren't going to be able to easily fix it. Too many systems will see any address with 127 in the first octet and assume it's a loopback address, in much the same way people have been known to assume the entire 192/8 address space is RFC 1918 reserved instead of just 192.168.0.0/16. Logic rules and people making decisions aren't going to trust any space being used in that manner. Even if you did something creative, like using NAT and only using the space internally, you're not going to be able to patch every version of every operating system in your organization.
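You don't even have to dig into kernel code to see this logic baked in. Python's standard ipaddress module hard-codes the reserved ranges, and it's a fair stand-in for the same checks living in every OS network stack; a quick sketch:

```python
import ipaddress

# The "unusable" ranges are baked into address-handling logic everywhere,
# including Python's standard library.
for addr in ["127.53.0.1", "240.1.2.3", "192.168.1.10"]:
    ip = ipaddress.ip_address(addr)
    print(f"{addr:>14} loopback={ip.is_loopback} reserved={ip.is_reserved}")
```

Repurposing 127/8 or 240/4 means chasing down every one of those checks, in every stack, everywhere.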

We modify rules all the time and then have to spend years updating those modifications. Take area codes in North America for example. The old rules used to say that an area code had to have a zero or a one for the middle digit – ([2-9][0-1][2-9]), to use the Cisco UCM parlance. If your middle digit was something other than a zero or a one it wasn't a valid NANP area code. As we began to expand the phone system in 1995 we changed those rules, and now we have area codes with all manner of middle digits.

What about prefixes? Those follow rules too. NANP prefixes must not start with a zero or a one – (area code) [2-9]XX-XXXX is the way they are coded. Prefixes that start with a zero or a one are invalid and can't be used. If we suddenly decided that we needed to open up the numbers in existing area codes and include prefixes that start with those forbidden digits, we would need to reset all the dialing rules in systems all over the country. I know that I specifically programmed my CUCM servers to send an immediate error if you dialed a prefix with a zero or a one. And all of them would have to be manually reconfigured for such a change.
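To see how a rule change like that ripples through validation logic, here's a toy validator built from the patterns above. The numbers are made up, and real dial plans live in CUCM route patterns rather than Python, but the point stands: every system still holding the old pattern breaks the day the rule changes.

```python
import re

# Old NANP rules: area code [2-9][0-1][2-9], prefix [2-9]XX
OLD_RULES = re.compile(r"^[2-9][01][2-9]-[2-9]\d{2}-\d{4}$")
# Post-1995 rules: the middle digit of the area code can be anything
NEW_RULES = re.compile(r"^[2-9]\d[2-9]-[2-9]\d{2}-\d{4}$")

for number in ["405-235-1000", "469-555-0100"]:   # hypothetical numbers
    print(number, "old:", bool(OLD_RULES.match(number)),
          "new:", bool(NEW_RULES.match(number)))
# 469 has a middle digit of 6, so any system still enforcing the
# old pattern rejects a perfectly valid number.
```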

In much the same way, the address spaces that are reserved today as invalid would need to be patched out of systems from home computers to phones to networking equipment. And even if you think you've caught them all, you're going to miss one and wonder why it isn't working. Worse yet, it might even fail silently: you may be able to transmit data to 95% of the systems out there, but some intermediate system may discard your packets as invalid and never tell you what happened. You'll spend hours or days chasing a problem you may not even be able to fix.

Avoiding the Solutions

The easiest way to look at these proposals is by understanding that people are really, really, really in love with IPv4. Even though the effort required to implement these reserved spaces would be better spent on IPv6 adoption, we still get these things being submitted. There is a solution, but people don't want to use it. The modern Internet relies so much on the cloud that it would be simple to enable IPv6 in your provider space and use your engineering talent to help drive better adoption of that instead. We're already seeing this in places where address space has been depleted for a while now.

It may feel easier to spend more effort to revitalize the IPv4 space we all know and love. It may even feel triumphant when we’re able to reclaim address space that was wasted and use it for something productive instead of just teaching that you can’t configure devices with those spaces. And millions of devices will have IP address space to use, or more accurately there will be millions of addresses available to sell to people that will waste them anyway. Then what?

The short term gain from opening up IPv4 space at the expense of not developing IPv6 adoption is a fallacy that will end in pain. We can keep putting policy duct tape on the IPv4 exhaustion problem, but we are eventually going to hit a wall we can't overcome. The math doesn't work when your address space is only 32 bits in total. That's why IPv6 expanded the address to 128 bits.
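The math is worth spelling out. Even reclaiming every address in 127/8 and 240/4 buys a rounding error compared to demand, while IPv6 changes the exponent entirely; a quick back-of-the-envelope sketch:

```python
ipv4_total = 2**32            # ~4.3 billion addresses
reclaim_127 = 2**24           # all of 127/8: ~16.8 million
reclaim_240 = 2**28           # all of 240/4: ~268 million

print(f"127/8 adds {reclaim_127/ipv4_total:.2%} to the IPv4 pool")   # 0.39%
print(f"240/4 adds {reclaim_240/ipv4_total:.2%} to the IPv4 pool")   # 6.25%
print(f"IPv6 has {2**128 // ipv4_total:,} times the total IPv4 space")
```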

Sure, there have been mistakes in the way that IPv6 address space has been allocated and provisioned. Those mistakes will eventually need to be corrected, and other configuration work will need to be done in order to efficiently utilize the space. Again, the effort should be spent fixing problems with a future-proof solution instead of trying our hardest to keep the lights on for a few more years with an old system that's falling apart.


Tom’s Take

The race to find every last possible way to utilize the IPv4 space is exactly what I expected when we’re in the death throes of using it instead of IPv6. The easy solutions are done. The market and hunger for IPv4 space is only getting stronger. Instead of weaning the consumers off their existing setups and moving them to something future proof we’re feeding their needs for short term gains. If the purpose of this whole exercise was to get more address space to be rationed out for key systems to keep them online longer I might begrudgingly accept it. However, knowing that it would likely be opened up and fed to providers to be auctioned off in blocks to be ultimately wasted means all the extra effort is for no gain. These IETF drafts have a lot of issues and we’re better off letting them expire in May 2022. Because if we take up this cause and try to make them a reality we’re going to have to relearn a lot of lessons of the past we’ve forgotten.

Meraki Is Almost An Enterprise Solution

You may remember three or so years ago when I famously declared that Meraki is not a good solution for enterprises. I know the folks at Meraki certainly haven't. The profile of the hardware and services has slowly been rising inside of Cisco. More than just wireless with the requisite networking components, Meraki has now embraced security, SD-WAN, and even security cameras. They've moved into a lot of areas that customers have been asking about while still trying to maintain the simplicity that Meraki is known for.

Having just finished up a Meraki presentation during Tech Field Day Extra at Cisco Live Europe, I thought it would be a good time to take a look at the progress that Meraki has been making toward embracing their enterprise customer base. I’m not entirely convinced that they’ve made it yet, but the progress is starting to look good.

Playing for Scale

The first area where Meraki is starting to really make strides is in the scalability department. This video from Tech Field Day Extra is all about new security features in the platform, specifically with firewalls. Take a quick look:

Toward the end of the video is one of the big things I got excited about. Meraki has introduced rule groups into their firewall platform. Sounds like a strange thing to get excited about, right? Kind of funny how the little things end up mattering in the long run. The real reason I’m getting excited about it has nothing to do with the firewall or even the interface. It has everything to do with being scalable.

One of the reasons why I've always seen Meraki as a solution more appropriate for small businesses is the lack of ability to scale. Meraki's roots as a platform built for small deployments mean that the interface has always been focused on making it easy to configure. You may remember from my last post that I wasn't a fan of the way everything felt like it was driven through deployment wizards. Hand-holding me through my first firewall deployment is awesome. Doing it for my 65th deployment is annoying. In enterprise solutions I can easily script or configure this via the command line to avoid the interface. But Meraki makes me use the UI to get it done.

Enterprises don't run on wizards. They don't work with assistance turned on. Enterprises need scalability. They need to be able to configure things to run across dozens or hundreds of devices quickly and predictably. Sure, it may only take four minutes to configure something via the firewall interface. Now multiply that by 400 devices. Just that one little setting is going to take over 26 hours to configure. And that's assuming you don't need to take time for a break or to sleep. When you're working at the magnitude of an enterprise, those little shortcuts matter.

You might be saying right now, "But what about policies and groups for devices?" You would be right that groups can definitely speed up the process. But how many groups do you think the average enterprise would have for devices? I doubt all routers or switches or firewalls would conveniently fit into a single group. Or even ten groups. And there's always the possibility that a policy change among those groups gets implemented correctly nine times out of ten. The tenth time it hits an error that could still affect hundreds of devices. You see how this could get out of hand.

That's why I'm excited about the little things like firewall rule groups. It means that Meraki is starting to see that these things need to be done programmatically. Building a series of policies in software makes it easy to deploy over and over again through scripting or enhanced device updating. Policies are good for rules. They're not so good for devices. So the progress means that Meraki needs to keep building toward letting us script these deployments and updates across the entire organization.
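Here's a rough sketch of what that scripted future looks like against the Meraki Dashboard API. Treat the endpoint path, rule payload, network IDs, and key as illustrative assumptions rather than a tested integration; the point is that one rule definition fans out to every network in a loop instead of 400 trips through a wizard.

```python
import requests

API_KEY = "your-dashboard-api-key"   # placeholder credential
BASE = "https://api.meraki.com/api/v1"
HEADERS = {"X-Cisco-Meraki-API-Key": API_KEY, "Content-Type": "application/json"}

# Define the rule group once, in software...
RULES = {"rules": [{
    "comment": "Block guest VLAN from corp",
    "policy": "deny", "protocol": "any",
    "srcCidr": "10.99.0.0/16", "destCidr": "10.0.0.0/8",
    "srcPort": "Any", "destPort": "Any", "syslogEnabled": False,
}]}

# ...then push it to every network instead of clicking through each one.
for net_id in ["N_1111", "N_2222"]:   # hypothetical network IDs
    resp = requests.put(
        f"{BASE}/networks/{net_id}/appliance/firewall/l3FirewallRules",
        headers=HEADERS, json=RULES)
    resp.raise_for_status()
```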

Hextuple Option

The other thing that’s neatly buried at the end of the video is courtesy of a question from my friend Jody Lemoine (@GhostInTheNet). He points out that there are IPv6 addresses on the dashboard. The Meraki presenters confirm that they are testing IPv6 support natively and not just in bridged mode. Depending on when you read this post in the future, it may even be out already. You know that I’m an IPv6 fan and I’ve been tough on Meraki in the past about their support for it. So I’m happy to see that it’s in the works.

But more importantly I’m pleased that Meraki has jumped into a complex technical solution with both feet. Enterprises don’t need a basic set of services. They don’t want you to just turn on the twenty most common settings. Enterprises need odd things sometimes. They need longer VPN lifetimes or weird routing LSA support. Sometimes they need to do the really odd things because their thousand-odd devices really have to have this feature turned on to make it work.

Now, I’ve rightfully decried the idea that you should just do whatever your customers want, but the truth is that doing something silly for one customer isn’t the same as doing it for a dozen or more that are asking for a feature. Meraki has always felt shy to me about the way they implement features in their software. It’s almost the opposite of Cisco, in a way. Cisco is happy to include corner-case options on software releases on a whim to satisfy million-dollar customers. Meraki, on the other hand, has seemed to wait until well past critical mass to turn something on. It almost feels like you have to break down their door to get something advanced enabled.

To me, IPv6 is the watershed. It’s something that the general public doesn’t think they need or doesn’t know they really should have. Cisco has had IPv6 support in IOS for years. Meraki has been dragging along until they feel the need to implement it. But implementing it in 2020 makes me feel they will finally start implementing features in a way that makes sense for users. Hopefully that also means they’ll be more responsive to their Make A Wish feature requests and start indexing how many customers really want a certain feature or certain option enabled.

Napoleon Complex

The last thing that I’ll say about the transformation of Meraki is about their drive to embrace complexity. I know that Russ White and I don’t always see eye-to-eye about complexity. But I feel that hiding it is ultimately detrimental to IT staff members. Sure, you don’t want the CEO or the janitor in the wireless system deploying new SSIDs on a daily basis or disabling low data rates on APs. But you also need to have access to those features when the time comes. That was one of my big takeaways in my previous Meraki post.

I know that Meraki prides themselves on having a clean, easy-to-use interface. I know that it's going to take a while before Meraki starts exposing their interface to power users. But it also took Microsoft a long time to let people start doing massive modifications via PowerShell, and Apple a while to let users go wild under the hood. These platforms finally opened a little and let people do some creative things. Sure, Apple iOS is still about as locked down as Meraki is, but every WWDC brings some new features that can be tinkered with here and there. I'm not expecting a fully complexity-embracing model in the next couple of years from Meraki, but I feel that the right people internally are starting to understand that growth comes in the form of enterprise customers. Enterprises don't shy away from complexity. They don't want it hidden. They want to see it and understand it and plan for it. And, ultimately, embrace it.


Tom’s Take

I will freely admit that I'm hard on the Meraki team. I do it because I see potential. I remember seeing them for the first time all the way back at Wireless Field Day 2 in their cramped San Francisco townhome office. In the years since the Cisco acquisition they've grown immensely in talent and technology. The road to becoming something more than you started out doing isn't easy. And sometimes you need someone willing to stop you now and then and tell you which direction makes more sense. I don't believe for one moment that my armchair quarterbacking has really had a significant impact on the drive that Meraki has to get into larger enterprises. But I hope that my perspective has shown them how the practitioners of the world think and how they're slowly transforming to meet those needs and goals. Hopefully in the next couple of years I can write the final blog post in this trilogy when Meraki embraces the enterprise completely.

Why Do You Need NAT66?

It’s hard to believe that it’s been eight years since I wrote my most controversial post ever. I get all kinds of comments on my NAT66 post even to this day. I’ve been told I’m a moron, an elitist, and someone that doesn’t understand how the Internet works. I’ve also had some good comments that highlight a specific need for tools like NAT66. I wanted to catch up with everything and ask a very important question.

WHY?

Every Tool Has A Purpose

APNIC had a great post about NAT66 back in 2018. You should totally read it. I consider it a fair review of the questions surrounding NAT's use in 2020. Again, NAT has a purpose, and when used properly and sparingly for that purpose it works well. In the case of the article, Marco Cilloni (@MCilloni) lays out the need for NAT66 to get IPv6 working at his house due to ISP insanity and the latency overhead of using tunnels with Hurricane Electric. In this specific case, NAT66 was a good tool for him to translate his /128 address into something usable in his network.
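For reference, the flavor of NAT66 that tends to make sense in these corner cases is stateless prefix translation (NPTv6, RFC 6296): rewrite the prefix bits, leave the host bits alone. Here's a toy sketch of that rewrite using made-up prefixes; real NPTv6 also keeps transport checksums neutral, which this version skips.

```python
import ipaddress

def npt66_rewrite(addr: str, inside: str, outside: str) -> ipaddress.IPv6Address:
    """Swap the routing prefix of addr from inside to outside, keeping host bits."""
    inside_net = ipaddress.IPv6Network(inside)
    outside_net = ipaddress.IPv6Network(outside)
    host_bits = int(ipaddress.IPv6Address(addr)) & int(inside_net.hostmask)
    return ipaddress.IPv6Address(int(outside_net.network_address) | host_bits)

# ULA space inside, a hypothetical delegated prefix outside
print(npt66_rewrite("fd00:aaaa:0:10::5", "fd00:aaaa::/48", "2001:db8:1234::/48"))
# -> 2001:db8:1234:10::5
```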

If you’re brave, you should delve into the comments. A couple of my favorite passages:

People from your side completely fail to understand that while NAT was not designed for security, it did bring security, in particular for home users.

Either the IPv6 community sobers up and starts actively supporting NAT or you can kiss the IPv6 protocol goodbye. I’ve put many many hours into IPv6 integration and I’m starting to realize it’s a doomed protocol and should be scraped.

It’s obvious to me a decade later that there are two camps of people that discuss NAT66: Those that are using it for a specific purpose. And those that think it has to be enabled because they use it with IPv4 networks. An example of the former:

Pieter knew what he needed to do to make it work and he did it. It wasn't just something he configured on his home router to make the Internet work, and he knew it wasn't the optimal solution. But when you can't change the ISP router to accept RAs, you need a workaround. There are a ton of stories I get from people just like Pieter that involve a workaround or a specific limitation like provider-independent address space not being available. These are the kinds of situations that tools like NAT were designed to solve.

X, Why, Z

Let’s get back to my earlier question: WHY?

For those that believe that NAT66 needs to be used everywhere, why? Is it because you're used to using RFC 1918 address space to conserve addresses? Maybe you don't like the idea of your MAC address "leaking" onto the Internet? How about providing enhanced security, as the commenters above mentioned? There's even a comment on my original post from late last year about using NAT to force redirects for DNS entries to keep Google from overriding DNS on his Android phone. Why?

For me, this always comes back to the same answer I give again and again. NAT, used for a purpose, isn’t inherently evil or wrong. It’s when people start using it for things it wasn’t designed for and breaking every other thing that it becomes a crutch and a problem. And those problematic solutions always cause issues somewhere down the line.

NAT44 broke FTP. It broke end-to-end communications. It created the need for big, hungry middleboxes to track the state of connections. It conflated addressing and firewall functions. Anyone that screams at me and tells me that NAT provides security by obscuring addresses usually has an edge firewall doing NAT for them. In a true one-to-one NAT configuration, accessing the public or global IP address of the host in question does nothing to halt the traffic from being passed through. People who talk to me about address obfuscation with NAT44 or NAT66 usually mean PAT, not NAT. They want one address masquerading as all the addresses in their organization to "hide" themselves from the Internet.

Why does NAT need to solve those problems? I get the complexity of using provider-independent (PI) space internally and the need to configure BGP to avoid asymmetric routing. Especially if your upstream provider isn't paying attention to the communities or attributes you're using to avoid creating a transit network or to weight traffic to prefer one link over the other. NAT may be a good short-term solution for you in these cases. But do you really want to run NAT66 for the next decade because of a policy issue with your ISP? That, to me, is the ultimate in passive-aggressive configuration. Why jump through hoops instead of hammering out a real solution?

I may sound like a 5-year-old, but "WHY" is really the most important question you can ask. Why do you need NAT66? Why do you even need IPv6? Is it a requirement for a contract? Maybe you have to enable it to allow your application to be uploaded to the walled garden store for your mobile provider. Maybe you just want to play around with it and get a Hurricane Electric Sage t-shirt. But if you can't answer "WHY" then all the other things you want aren't going to make sense.

I don’t run my HE.net tunnel at home any longer. I didn’t have an advantage in running IPv6 and I had more than a few headaches that had to be addressed. There will come a day when I want to do more with IPv6, but that’s going to require more bandwidth than I have right now. I still listen to IPv6 podcasts all the time, like the excellent IPv6 Buzz featuring my friend Ed Horley. Even the experts are bullish about the adoption of IPv6 but not ignorant of the challenges. And these guys run a business doing IPv6.

For those of you that are already limbering up your fingers to leave me a comment, stop and ask yourself “WHY” first. Why do you need NAT66? Is it because you have a specific problem you can’t solve any other way? Or is it because you need NAT66 to be there just like ISDN dialer maps and reserved VLANs on switches? To me, even past my days in the trenches as an engineer, the days of needing NAT everywhere are long gone. The IPv4 Internet relies on NAT. We are hobbled by that fact. VPNs need NAT traversal. Game consoles and VoIP devices need to be NAT-aware, which increases complexity.

The IPv6 Internet doesn’t need to be like that. Let’s teach the “right” way to do things. You don’t need NAT66 for privacy. RFC 4941 exists for that. You don’t need to think NAT66 provides security. That’s what perimeter devices are for. Anything more complicated than those two “simple” cases is going to be an exercise in frustration. You don’t need to bellow from the rooftops that NAT is a necessary and mandatory piece of all Internet traffic. Instead, come back to “WHY”. Why do two devices need a middle box to translate for them and hold state information? Why can’t my ISP provide me the address space I want or the connectivity options that make this work easily? The more “WHY” questions you ask, the more the answers will come. If you just want to fold your arms together and claim that NAT is needed because “This is the way,” you may find yourself alone on the Island of NAT sooner than you think.
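If the worry is a MAC-derived address "leaking," RFC 4941 temporary addresses already handle it: the host periodically generates a randomized interface identifier instead of embedding the MAC. Here's a toy sketch of the idea; the real RFC derives the identifier from hashed history values rather than a bare random draw, and the prefix below is a documentation example.

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Routing prefix + random 64-bit interface ID, in the spirit of RFC 4941."""
    net = ipaddress.IPv6Network(prefix)
    iid = secrets.randbits(64)   # simplified; RFC 4941 specifies the derivation
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(temporary_address("2001:db8:1:2::/64"))   # new identifier every call
```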


Tom’s Take

My identity as the “I Hate NAT” guy is pretty solid at this point in my life. It’s too late to change now. Sure, I don’t hate NAT completely. It’s like a vulture to me. It serves a specific purpose but having it everywhere is almost always problematic. By now, the only way I can work against the needless expansion of NAT is to ask hard questions. Ironically enough, the hard questions aren’t multi-part essays that require an open-book test to resolve. Often, the hardest questions can just be a single word that forces you to question what you need and why you need it.

Will Cisco Shine On?

Digital Lights

Cisco announced their new Digital Ceiling initiative today at Cisco Live Berlin. Here’s the marketing part:

And here’s the breakdown of protocols and stuff:

Funny enough, here’s a presentation from just three weeks ago at Networking Field Day 11 on a very similar subject:

Cisco is moving into the Internet of Things (IoT) big time. They have at least learned that the consumer side of IoT isn't a fun space to play in. With the growth of cloud connectivity and other things on that side of the market, Cisco knows that is an uphill battle not worth fighting. Seems they've learned from Linksys and Flip Video. Instead, they are targeting the industrial side of the house. That means trying to break into some networks that are very well put together today, even if they aren't exactly Internet-enabled.

Digital Ceiling isn't just about the PoE lighting that was announced today. It's a framework that allows all kinds of other dumb devices to be configured and attached to networks that have intelligence built in. The Constrained Application Protocol (CoAP) is designed in such a way as to provide data about a great number of devices, not just lights. Yet lights are the launch "thing" for this line. And it could be lights out for Cisco.
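For a feel of what CoAP looks like in practice, here's a minimal client sketch using the third-party aiocoap Python library. The device address and resource path are made-up stand-ins for whatever a Digital Ceiling endpoint would actually expose.

```python
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor():
    # CoAP is REST-like but built for constrained devices over UDP.
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://198.51.100.20/sensors/occupancy")
    response = await ctx.request(request).response
    print(f"Result: {response.code}\n{response.payload.decode()}")

asyncio.run(read_sensor())
```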

A Light In The Dark

Cisco wants in on the possibility that PoE lighting will be a huge market. No other networking vendor that I know of is moving into the market. Only the occasional building automation company has the manufacturing chops to try and pull off an entire connected infrastructure for lighting. But lighting isn't something to take lightly (pun intended).

There's a lot that goes into proper lighting planning. Locations of fixtures and power levels for devices aren't accidents. They require a lot of planning and preparation. That planning and prep means there are teams of architects and others with formulas and specialized knowledge about where fixtures go. Those people don't work on the networking team. Any changes to the lighting plan are going to require input from these folks to make sure the illumination patterns don't change. It's not exactly like changing a lightbulb.

The other thing that is going to cause problems is the electricians' union. These folks are trained and certified to install anything that has power running to it. They aren't just going to step aside and let untrained networking people start pulling down light fixtures and putting up something new. Finding out that there are new 60-watt LED lights in a building that they didn't put up is going to cause concern and require lots of investigation to find out whether it's even legal in certain areas for non-union, non-certified employees to install things that are only handled by electricians today.

The next item of concern is the fact that you now have two parallel networks running in the building. Because everyone that I’ve talked to about PoE Lighting and Digital Ceiling has had the same response: Not On My Network. The switching infrastructure may be the same, but the location of the closets is different. The requirements of the switches are different. And the air gap between the networks is crucial to avoid any attackers compromising your lighting infrastructure and using it as an on-ramp into causing problems for your production data network.

The last issue in my mind is the least technically challenging, but the most concerning from the standpoint of longevity of the product line – Where’s the value in PoE lighting? Every piece of collateral I’ve seen and every person I’ve heard talk about it comes back to the same points. According to the experts, it’s effectively the same cost to install intelligent PoE lighting as it is to stick with traditional offerings. But that “effective” word makes me think of things like Tesla’s “Effective Lease Payment”.

By saying "effective", what Cisco is telling you is that the up-front cost of a Digital Ceiling deployment is likely to be expensive. That large initial number comes down through things like electricity cost savings and increased efficiencies, or any one of a number of clever things that we tell each other to pretend that it doesn't cost lots of money to buy new things. It's important to note that you should evaluate the cost of a Digital Ceiling deployment completely on its own before you start factoring in any kind of cost savings that come months or years from now.
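A quick sketch of why that matters, with entirely made-up numbers: if the "effective" savings are what close the gap, the break-even point can sit further out than most refresh cycles.

```python
# Hypothetical figures: judge the up-front number on its own first.
upfront_premium = 250_000.00   # assumed extra cost of PoE lighting vs traditional
annual_savings = 18_000.00     # assumed yearly electricity/efficiency savings

print(f"Break-even after {upfront_premium / annual_savings:.1f} years")  # ~13.9
```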


Tom’s Take

I’m not sure where IoT is going. There’s a lot of learning that needs to happen before I feel totally comfortable talking about the pros and cons of having billions of devices connected and talking to each other. But in this time of baby steps toward solutions, I can honestly say that I’m not entirely sold on Digital Ceiling. It’s clever. It’s catchy. But it ultimately feels like Cisco is headed down a path that will lead to ruin. If they can get CoAP working on many other devices and start building frameworks and security around all these devices then there is a chance that they can create a lasting line of products that will help them capitalize on the potential of IoT. What worries me is that this foray into a new realm will be fraught with bad decisions and compromises and eventually we’ll fondly remember Digital Ceiling as yet another Cisco product that had potential and not much else.

It’s Time For IPv6, Isn’t It?


I made a joke tweet the other day:

It did get lots of great interaction, but I feel like part of the joke was lost. Every one of the things on that list has been X in “This is the Year of X” for the last couple of years. Which is sad because I would really like IPv6 to be a big part of the year.

Ars Technica came out with a very good IPv6-focused article on January 3rd talking about the rise in adoption to 10% and how there is starting to be a shift in the way that people think about IPv6.

Old and Busted

One of the takeaways from the article that I found most interesting was a quote from Brian Carpenter of the University of Auckland about address structure. Most of the time when people complain about IPv6, they say that it's stupid that IPv6 isn't backwards compatible with IPv4. Carpenter has a slightly different take on it:

The fact that people don’t understand: the design flaw is in IPv4, which isn’t forwards compatible. IPv4 makes no allowance for anything that isn’t a 32 bit address. That means that, whatever the design of IPng, an IPv4-only node cannot communicate directly with an IPng-only node.

That’s a very interesting take on the problem that hadn’t occurred to me before. We’ve spent a lot of time trying to make IPv6 work with IPv4 in a way that doesn’t destroy things when the problem has nothing to do with IPv6 at all!

The real issue is that our aging IPv4 protocol just can’t be fixed to work with anything that isn’t IPv4. When you frame the argument in those terms you can start to realize why IPv4’s time is coming to an end. I’ve been told by people that moving to 128-bit addressing is overkill and that we need to just put another octet on the end of IPv4 and make them compatible so we can use things as they are for a bit longer.

Folks, the 5th octet plan would work exactly like IPv6 as far as IPv4 is concerned. The issue boils down to this: IPv4 is hard-coded to reject any address that isn't exactly 32 bits in length. It doesn't matter if your proposal is 33 bits or 256 bits, the result is the same: IPv4 won't be able to talk to it directly. The only way to make IPv4 talk to any other protocol version would be to extend it. And the massive amount of effort that it would take to do that is why we have things like dual stack and translation gateways for IPv6. Every plan to make IPv4 work a little longer ends the same way: scrap it for something new.
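You can see the hard-coding right in the packet format. The source and destination addresses in an IPv4 header are fixed 32-bit fields at fixed offsets, so there's physically nowhere for a 33rd bit to go; a minimal parsing sketch with documentation addresses:

```python
import socket
import struct

def parse_ipv4_header(raw: bytes):
    """Pull version and addresses from a minimal 20-byte IPv4 header."""
    version = raw[0] >> 4
    # Source and destination are exactly 4 bytes each, at offsets 12 and 16.
    src, dst = struct.unpack_from("!4s4s", raw, 12)
    return version, socket.inet_ntoa(src), socket.inet_ntoa(dst)

header = bytes.fromhex("4500005400010000400100000000000000000000")
header = header[:12] + socket.inet_aton("192.0.2.1") + socket.inet_aton("198.51.100.7")
print(parse_ipv4_header(header))   # (4, '192.0.2.1', '198.51.100.7')
```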

New Hotness

Fresh from our take on how IPv4 is a busted protocol for the purposes of future-proofing, let's take a look at what's driving IPv6 right now. I got an email from my friend Eric Hileman, who runs a rather large network, asking me when he should consider his plans to transition to IPv6. My response was "wait for mobile users to force you there".

Mobility is going to be the driving force behind IPv6 adoption. Don’t believe me? Grab the closest computing device to your body right now. I’d bet more than half of you reached for a phone or a tablet if you didn’t already have a smartwatch on your wrist. We are a society that is embracing mobile devices at an increasingly rapid rate.

Mobility is the new consumer compute. That means connectivity. Connectivity everywhere. My children don’t like not being able to consume media away from wireless access points. They would love to have a cellular device to allow them access to TV shows, movies, or even games. That generation is going to grow up to be the primary tech consumer in the next ten years.

In those intervening years, our tech infrastructure is going to balloon like never before. Smart devices will be everywhere. We are going to have terabytes of data from multiple sources flooding collectors to produce analysis that must be digested by some form of intelligence, whether organic or artificial. How do you think all that data is going to be transmitted? On a forty-year-old protocol with no concept of the future?

IPv6 has to become the network protocol to support future infrastructure. Mobility is going to drive adoption, but the tools and software we build around mobility are going to push new infrastructure adoption as well. IPv6 is going to be a huge part of that. Devices that don't support IPv6 are going to be just like the IPv4 protocol they do support – forever stuck in the past with no concept of how the world is different now.


Tom’s Take

It's no secret I'm an IPv6 champion. Even my distaste for NAT has more to do with its misuse with regard to IPv6 than any dislike for it as a protocol. IPv6 is something that should have been recognized ten years ago as the future of network addressing. When you look at how fast other things around us transform, like mobile computing or music and video consumption, you can see that technology doesn't wait for stalwarts to figure things out. If you don't want to be stuck using IPv4 alongside your VCR, it's time to start planning how you're going to adopt IPv6.


SDN and the Trough Of Understanding

gartner_net_hype_2015

An article published this week referenced a recent Hype Cycle diagram (pictured above) from the oracle of IT – Gartner. While the lede talked a lot about the apparent “death” of Fibre Channel over Ethernet (FCoE), there was also a lot of time devoted to discussing SDN’s arrival at the Trough of Disillusionment. Quoting directly from the oracle:

Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.

As SDN approaches this dip in the Hype Cycle it would seem that the steam is finally being let out of the Software Defined Bubble. The Register article mentions how people are going to leave SDN by the wayside and jump on the next hype-filled networking idea, likely SD-WAN given the amount of discussion it has been getting recently. Do you know what this means for SDN? Nothing but good things.

Software Defined Hammers

Engineers have a chronic case of Software Defined Overload. SD-anything ranks right up there with Fat Free and New And Improved as the Most Overused Marketing Terms. Every solution release in the last two years has been software defined somehow. Why? Because that’s what marketing people think engineers want. Put Software Defined in the product and people will buy it hand over fist. Guess what Little Tommy Callahan has to say about that?

There isn’t any disillusionment in this little bump in the road. Quite the contrary. This is where the rubber meets the road, so to speak. This is where all the pretenders to the SDN crown find out that their solutions aren’t suited for mass production. Or that their much-vaunted hammer doesn’t have any nails to drive. Or that their hammer can’t drive a customer’s screws or rivets. And those pretenders will move on to the next hype bubble, leaving the real work to companies that have working solutions and real products that customers want.

This is no different than every other “hammer and nail” problem from the past few decades of networking. Whether it be ATM, MPLS, or any one of a dozen “game changing” technologies, the reality is that each of these solutions went from being the answer to every problem to being a specific solution for specific problems. Hopefully we’ve gotten SDN to this point before someone develops the software defined equivalent of LANE.

The Software Defined Road Ahead

Where does SD-technology go from here? Well, without marketing whipping everyone into a Software Defined Frenzy, the future is whatever developers want to make of it. Developers that come up with solutions. Developers that integrate SDN ideas into products and quietly sell them for specific needs. People that play the long game rather than hope that they can take over the world in a day.

Look at IPv6. It solves so many problems we have with today’s Internet. Not just IP exhaustion issues either. It solves issues with security, availability, and reachability. Yet we are just now starting to deploy it widely thanks to the panic of the IPocalypse. IPv6 did get a fair amount of hype twenty years ago when it was unveiled as the solution to every IP problem. After years of mediocrity and being derided as unnecessary, IPv6 is poised to finally assume its role.

SDN isn't going to take nearly as long as IPv6 to come into play. What is going to happen is a transition away from Software Defined as the selling point. Even today we're starting to see companies move away from SD labeling and instead use more specific terms that help customers understand what's important about the solution and how it will help them. That's what's needed to clarify the confusion and reduce the fatigue.


The IPv6 Revolution Will Not Be Broadcast

IPv6Revolution

There are days when IPv6 proponents have to feel like Chicken Little. Ever since the final allocation of the last /8s to the RIRs over four years ago, we’ve been saying that the switch to IPv6 needs to happen soon before we run out of IPv4 addresses to allocate to end users.

As of yesterday, ARIN (@TeamARIN) has 0.07 /8s left to allocate to end users. What does that mean? Realistically, according to this ARIN page, it means there are 3 /21s left in the pool. There are around 450 /24s. The availability of those addresses is even in doubt, as there are quite a few requests in the pipeline. I'm sure ARIN is now more worried that they have received a request they can't fulfill that's already in their queue.

The sky has indeed fallen for IPv4 addresses. I’m not going to sit here and wax alarmist. My stance on IPv6 and the need to transition is well known. What I find very interesting is that the transition is not only well underway, but it may have found the driver needed to see it through to the end.

Mobility For The Masses

I've said before that the driver for IPv6 adoption is going to be an IPv6-only service that forces providers to adopt the standard because of customer feedback. Greed is one of the two most powerful motivators. Fear is the other. And the fear of having millions of mobile devices roaming around with no address support is an equally unwanted scenario.

Mobile providers are starting to move to IPv6-only deployments for mobile devices. T-Mobile does it. So does Verizon. If a provider doesn’t already offer IPv6 connectivity for mobile devices, you can be assured it’s on their roadmap for adoption soon. The message is clear: IPv6 is important in the fastest growing segment of device adoption.

Making mobile devices the sword for IPv6 adoption is very smart. When we talk about the barriers to entry for IPv6 in the enterprise we always talk about outdated clients. There are a ton of devices that can’t or won’t run IPv6 because of an improperly built networking stack or software that was written before the dawn of DOS. Accounting for those systems, which are usually in critical production roles, often takes more time than the rest of the deployment.

Mobile devices are different. The culture around mobility has created a device refresh cycle that is measured in months, not years. Users crave the ability to upgrade to the latest device as soon as it is available for sale. Where mobile service providers used to make users wait 24 months for a device refresh, we now see them offering 12 month refreshes for a significantly increased device cost. Those plans are booming by all indications. Users want the latest and greatest devices.

With the desire of users to upgrade every year, the age of the device is no longer a barrier to IPv6 adoption. Since the average age of devices in the wild is almost certain to be less than three years, providers can be sure that the capability is there to support IPv6. That makes it much easier to enable support for it across the entire install base of handsets.

The IPv6 Trojan Horse

Now that providers have a wide range of IPv6-enabled devices on their networks, the next phase of IPv6 adoption can sneak into existence. We have a lot of IPv6-capable devices in the world, but very little IPv6 driven content. Aside from some websites being reachable over IPv6 we don’t really have any services that depend on IPv6.

Thanks to mobile, we have a huge install base of devices that we now know are IPv6 capable. Since the software for these devices is largely determined by the user base through third-party app development, this is the vector for widespread adoption of IPv6. Rather than trumpeting the numbers, mobile providers and developers can quietly enable IPv6 without anyone even realizing it.

Most app resources must live in the cloud by design. Lots of them live in places like AWS. Service providers enable translation gateways at their edge to translate IPv6 requests into IPv4 requests. What would happen if the providers started offering native IPv6 connectivity to AWS? How would app developers react if there was a faster, native connectivity option to their resources? Given the huge focus on speed for mobile applications, do you think they would continue using a method that forces them through slow translation devices? Or would they jump at the chance to speed up their apps?
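Those edge gateways are typically doing NAT64: the provider synthesizes an IPv6 address for an IPv4-only destination by embedding the IPv4 address inside the well-known 64:ff9b::/96 prefix from RFC 6052. A quick sketch of the embedding, with a documentation address standing in for a real destination:

```python
import ipaddress

WKP = ipaddress.IPv6Network("64:ff9b::/96")   # RFC 6052 well-known NAT64 prefix

def synthesize(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the NAT64 well-known prefix."""
    return ipaddress.IPv6Address(
        int(WKP.network_address) | int(ipaddress.IPv4Address(v4)))

print(synthesize("198.51.100.7"))   # 64:ff9b::c633:6407
```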

And that’s the trojan horse. The app itself spurs adoption of IPv6 without the user even knowing what’s happened. When’s the last time you needed to know your IP on a mobile device? Odds are very good it would take you a while to even find out where that information is stored. The app-driven focus of mobile devices has eliminated the need for visibility for things like IP addresses. As long as the app connects, who cares what addressing scheme it’s using? That makes shifting the underlying infrastructure from IPv4 to IPv6 fairly inconsequential.


Tom’s Take

IPv6 adoption is going to happen. We’ve reached the critical tipping point where the increased cost of acquiring IPv4 resources will outweigh the cost of creating IPv6 connectivity. Thanks to the focus on mobile technologies and third-party applications, the IPv6 revolution will happen quietly at night when IPv6 connectivity to cloud resources becomes a footnote in some minor point update release notes.

Once IPv6 connectivity is enabled and preferred in mobile applications, the adoption numbers will go up enough that CEOs focused on Gartner numbers and keeping up with the Joneses will finally get off their collective laurels and start pushing enterprise adoption. Only then will the analyst firms start broadcasting the revolution.

Could IPv6 Drown My Wireless Network?

IPv6WiFi

By now, the transition to adopt IPv6 networks is in full swing. Registries are running out of prefixes and new users overseas are getting v6-only allocations for new circuits. Mobile providers are going v6-only and transition mechanisms are in place to ease the migration. You can hear about some of these topics in this recent roundtable recorded at Interop last week:

One of the conversations that I had with Ed Horley (@EHorley) during Interop opened my eyes to another problem that we will soon be facing with IPv6 and legacy technology. Only this time, it's not because of a numbering scheme. It's because of old hardware.

Rate Limited

Technology always marches on. Things that seemed magical to us just five years ago are now antiquated and slow. That’s the problem with the original 802.11 specification. It supported wireless data rates at a paltry 1 Mbps and 2 Mbps. When 802.11b was released, it raised the rates to 5.5 Mbps and 11 Mbps. Those faster data rates, combined with a larger coverage area, helped 802.11b become commercially successful.

Now, we have 802.11n with data rates in the hundreds of Mbps. We also have 802.11ac right around the corner with rates approaching 1 Gbps. It's a very fast wireless world. But thanks to the need to be backwards compatible with existing technology, even those fast new 802.11n access points still support the old 1 and 2 Mbps data rates of 802.11. This is great if you happen to have a wireless device from the turn of the millennium. It's not so great if you are a wireless engineer supporting such an installation.

Wireless LAN professionals have been talking for the past couple of years about how important it is to disable the 1, 2, and 5.5 Mbps data rates in your wireless networks. Modern equipment will only utilize those data rates when far away from the access point and modern design methodology ensures you won’t be far from an access point. Removing support for those devices forces the hardware to connect at a higher data rate and preserve the overall air quality. Even one 802.11b device connecting to your wireless network can cause the whole network to be dragged down to slow data rates. How important is it to disable these settings? Meraki’s dashboard allows you to do it with one click:

MerakiDataRates

Flood Detected

How does this all apply to IPv6? Well, it turns out that multicast has an interesting behavior on wireless networks. It seeks out the lowest data rate to send traffic. This ensures that all receivers get the packet. I asked Matthew Gast (@MatthewSGast) of Aerohive about this recently. He said that it's up to the controller manufacturer to decide how multicast is handled. When I gave him an inquisitive look, he admitted that many vendors leave it at the lowest common denominator, which is usually the 1 Mbps or 2 Mbps data rate.

This isn't generally a problem. IPv4 multicast tends to be sporadic and short-lived at best. Most controllers have mechanisms in place for dealing with this, either by converting those multicasts to unicasts or by turning off multicast completely. A bit of extra traffic on the low data rates isn't noticeable.

IPv6 has a much higher usage of multicast, however. Router Advertisements (RAs) and Multicast Listener Discovery (MLD) are critical to the operation of IPv6. So critical, in fact, that turning off Global Multicast on a Cisco wireless controller doesn't stop RAs and MLD from happening. You must have multicast running for IPv6.

What happens when all that multicast traffic from IPv6 hits a controller with the lower data rates enabled? Gridlock. Without vendor intervention the MLD and RA packets will hop down to the lowest data rate and start flooding the network. Listeners will respond at the same low data rate and drag the network down to an almost-unusable speed. You can't turn off the multicast to fix it either.
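Some back-of-the-envelope airtime math shows why the basic rate matters so much. This simplified sketch ignores preambles, MAC headers, ACKs, and contention, but the orders of magnitude are the point:

```python
# Payload airtime for a 1500-byte frame at various data rates
# (simplified: ignores preambles, headers, ACKs, and contention).
frame_bits = 1500 * 8

for rate_mbps in (1, 11, 54, 300):
    airtime_us = frame_bits / rate_mbps   # bits / Mbps = microseconds
    print(f"{rate_mbps:>4} Mbps: {airtime_us:8.0f} us of airtime per frame")
# A frame at 1 Mbps occupies the air 300x longer than the same frame at
# 300 Mbps, so multicast RAs and MLD reports at the basic rate starve everyone.
```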

The solution is to prevent this all in the first place. You need to turn off the 802.11b low data rates on your controller. 1 Mbps, 2 Mbps, and 5.5 Mbps should all be disabled, both as a way to prevent older, slower clients from connecting to your wireless network and to keep newer clients running IPv6 from swamping it with multicast traffic.

There may still be some older clients out there that absolutely require 802.11b data rates, like medical equipment, but the best way to deal with these problematic devices is isolation. These devices likely won’t be running IPv6 any time in the future. Isolating them onto a separate SSID running the 802.11b data rates is the best way to ensure they don’t impact your other traffic. Make sure you read up on how to safely disable data rates and do it during a testing window to ensure you don’t break everything in the world. But you’ll find your network much more healthy when you do.


Tom’s Take

Legacy technology support is critical for continued operation. We can't just drop something because we don't want to deal with it any more. Anyone who has ever called a technical support line feels that pain. However, when the new technology can't feasibly support working with the older tech, it's time to pull the plug. Whether it be 802.11b data rates or something software related, like dropping PowerPC app support in OS X, we have to keep marching forward to make new devices run at peak performance.

IPv6 has already exposed limitations of older technologies like DHCP and NAT. Wireless thankfully has a much easier way to support transitions. If you’re still running 802.11b data rates, turn them off. You’ll find your IPv6 transition will be much less painful if you do. And you can spend more time working with tech and less time trying to tread water.


Expiring The Internet


An article came out this week that really made me sigh. The title was "Six Aging Protocols That Could Cripple The Internet". I dove right in, expecting to see how things like Finger were old and needed to be disabled and removed. Imagine my surprise when I saw things like BGP4 and SMTP on the list. I really tried not to smack my own forehead as I flipped through the slideshow of how the foundation of the Internet is old and at risk of meltdown.

If It Ain’t Broke

Engineers love the old adage "If it ain't broke, don't fix it!". We spend our careers planning and implementing. We also spend a lot of time not touching things afterwards in order to prevent them from collapsing in a big heap. Once something is put in place, it tends to stay that way until something necessitates a change.

BGP is a perfect example.  The basics of BGP remain largely the same from when it was first implemented years ago.  BGP4 has been in use since 1994 even though RFC 4271 didn’t officially formalize it until 2006.  It remains a critical part of how the Internet operates.  According to the article, BGP is fundamentally flawed because it’s insecure and trust based.  BGP hijacking has been occurring with more frequency, even as resources to combat it are being hotly debated.  Is BGP to blame for the issue?  Or is it something more deeply rooted?

Don’t Fix It

The issues with BGP and other protocols mentioned in the article, including IPv6, aren’t due to the way the protocol was constructed.  It is due in large part to the humans that implement those protocols.  BGP is still in use in the current insecure form because it works.  And because no one has proposed a simple replacement that accomplishes the goal of fixing all the problems.

Look at IPv6. It solves the address exhaustion issue. It solves hierarchical addressing issues. It restores end-to-end connectivity on the Internet. And yet adoption numbers still languish in the single digits. Why? Is it because IPv6 isn't technically superior? Or because people don't want to spend the time to implement it? It's expensive. It's difficult to learn. Reconfiguring infrastructures to support new protocols takes time and effort. Time and effort that are better spent on answering user problems or taking on additional tasks as directed by management that doesn't care about BGP insecurity until the Internet goes down.

It Hurts When I Do This

Instead of complaining about how protocols are insecure, the solution to the problem should be twofold. First, we need to start building security into protocols and expiring their older, insecure versions. POODLE exploited SSLv3, an older version that served as a fallback to TLS. While some old browsers still used SSLv3, the simple, easy solution was to disable SSL and force people to upgrade to TLS-capable clients. In much the same way, protocols like NTP and BGP can be modified to use more security. Instead of merely suggesting that people use those versions, architects and engineers need to implement them and discourage use of the old insecure protocols by disabling them. It's not going to be easy at first. But as the movement gains momentum, the solution will work.
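Pinning a floor under protocol versions is usually a one-line change. As a minimal sketch in Python's standard ssl module, a client can refuse to negotiate anything older than TLS 1.2:

```python
import socket
import ssl

# Refuse legacy protocol versions outright instead of falling back to them.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # SSLv3 and TLS 1.0/1.1 never negotiate

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())   # e.g. 'TLSv1.3'
```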

The next step in the process is to build easy-to-configure replacements.  Bolting security onto a protocol after the fact does stop the bleeding.  But to fix the underlying symptoms, the security needs to be baked into the protocol from the beginning.  But doing this with an entirely new protocol that has no backwards compatibility will be the death of that new protocol.  Just look at how horrible the transition to IPv6 has been.  Lack of an easy transition coupled with no monetary incentive and lack of an imminent problem caused the migration to drag out until the eleventh hour.  And even then there is significant pushback against an issue that can no longer be ignored.

Building the next generation of secure Internet protocols is going to take time and transparent effort.  People need to see what’s going into something to understand why it’s important.  The next generation of engineers needs to understand why things are being built the way they are.  We’re lucky in that many of the people responsible for building the modern Internet are still around.  When asked about limitations in protocols the answer remains remarkably the same – “We never thought it would be around this long.”

The longevity of quick fixes seems to be the real issue. When the next generation of Internet protocols is built there needs to be a built-in expiration date. A point of no return beyond which the protocol will cease to function. And there should be no method for extending the shelf life of a protocol to forestall its demise. In order to ensure that security can't be compromised we have to resign ourselves to the fact that old things need to be put out to pasture. And the best way to ensure that new things are put in place to supplant them is to make sure the old things go away on time.


Tom’s Take

The Internet isn’t fundamentally broken.  It’s a collection of things that work well in their roles that maybe have been continued a little longer than necessary.  The probability of an exploit being created for something rises with every passing day it is still in use.  We can solve the issues of the current Internet with some security engineering.  But to make sure the problem never comes back again, we have to make a hard choice to expire protocols on a regular basis.  It will mean work.  It will create strife.  And in the end we’ll all be better for it.

SLAAC May Save Your Life

Flatline

A chance dinner conversation at Wireless Field Day 7 with George Stefanick (@WirelesssGuru) and Stewart Goumans (@WirelessStew) made me think about the implications of IPv6 in healthcare.  IPv6 adoption hasn’t been very widespread, thanks in part to the large number of embedded devices that have basic connectivity.  Basic in this case means “connected with an IPv4 address”.  But that address can lead to some complications if you aren’t careful.

In a hospital environment, the units that handle medicine dosing are connected to the network.  This allows the staff to program them to properly dispense medications to patients.  Given an IP address in a room, staff can ensure that a patient is getting just the right amount of painkillers and not an overdose.  Ensuring a device gets the same IP each time is critical to making this process work.  According to George, he has recommended that the staff stop using DHCP to automatically assign addresses and instead move to static IP configuration to ensure there isn’t a situation where a patient inadvertently receives a fatal megadose of medication, such as when an adult med unit is accidentally used in a pediatric application.

This static policy does lead to network complications.  Units removed from their proper location are rendered unusable because of the wrong IP.  Worse yet, since those units don’t check in with the central system any more, they could conceivably be incorrectly configured.  At best this will generate a support call to the IT staff.  At worst…well, think lawsuit.  Not to mention what happens if there is a major change to gateway information.  That would necessitate massive manual reconfiguration and downtime until those units can be fixed.

Cut Me Some SLAAC

This is where IPv6 comes into play, especially with Stateless Address Autoconfiguration (SLAAC). By using an automatically configured address structure that never changes, this equipment will never go offline. It will always be checked in on the network. There will be little chance of the unit dispensing the wrong amount of medication. The medical unit will have its history available via the same IPv6 address.

There are challenges to be sure.  IPv6 support isn’t cheap or easy.  In the medical industry, innovation happens at a snail’s pace.  These devices are just now starting to have mobile connectivity for wireless use.  Asking the manufacturers to add IPv6 into their networking stacks is going to take years of development at best.

Having the equipment attached all the time also brings up issues with moving the unit to the wrong area and potentially creating a fatal situation. Thankfully, router advertisements can help there. If the RA for a given subnet locks the unit into a given prefix, controls can be enacted on the central system to ensure that devices in that prefix range will never be allowed to dispense medication above or below a certain amount. While this is more of a configuration on the medical unit side, IPv6 provides the predictability needed to ensure those devices can be found and cataloged. Since a SLAAC-addressed device using EUI-64 always derives the same interface identifier, you never have to guess which device got a specific address. You will always know from the last 64 bits which device you are speaking to, no matter the prefix.
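The EUI-64 derivation is simple enough to sketch: the interface identifier is the MAC address with fffe stuffed in the middle and the universal/local bit flipped, so the same unit produces the same last 64 bits under any prefix. The MAC and prefixes below are made-up examples.

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the stable SLAAC address for a MAC under a given /64 prefix."""
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02                       # flip the universal/local bit
    iid = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(
        int(net.network_address) | int.from_bytes(iid, "big"))

mac = "00:1b:63:84:45:e6"                   # hypothetical med unit MAC
for prefix in ("2001:db8:aaa:1::/64", "2001:db8:bbb:2::/64"):
    print(prefix, "->", eui64_address(prefix, mac))
# Same last 64 bits under either prefix, so the unit is always identifiable.
```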

Tom’s Take

Healthcare is a very static industry when it comes to innovation. Medical companies are trying to keep pace with technology advances while at the same time ensuring that devices are safe and do not threaten the patients they are supposed to protect. IPv6 can give us an extra measure of safety by ensuring devices receive the same address every time. IPv6 also gives the consistency needed to compile proper reporting about the operation of a device, and even the capability of finding that device when it is moved to an improper location. Thanks to SLAAC and IPv6, one day these networking technologies might just save your life.