Will Cisco Shine On?

Digital Lights

Cisco announced their new Digital Ceiling initiative today at Cisco Live Berlin. Here’s the marketing part:

And here’s the breakdown of protocols and stuff:

Funny enough, here’s a presentation from just three weeks ago at Networking Field Day 11 on a very similar subject:

Cisco is moving into Internet of Things (IoT) big time. They have at least learned that the consumer side of IoT isn’t a fun space to play in. With the growth of cloud connectivity and other things on that side of the market, Cisco knows that is an uphill battle not worth fighting. Seems they’ve learned from Linksys and Flip Video. Instead, they are tracking the industrial side of the house. That means trying to break into some networks that are very well put together today, even if they aren’t exactly Internet-enabled.

Digital Ceiling isn’t just about the PoE lighting that was announced today. It’s a framework that allows all kinds of other dumb devices to be configured and attached to networks that have intelligence built in. The Constrained Application Protocol (CoAP) is designed to provide data about a great number of devices, not just lights. Yet lights are the launch “thing” for this line. And it could be lights out for Cisco.
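
For a sense of how lightweight CoAP is meant to be, here’s a minimal sketch that hand-packs a bare CoAP GET (per RFC 7252) and sends it over UDP port 5683 using nothing but the Python standard library. The hostname and resource path are hypothetical placeholders, and a real sensor or fixture deployment would use a proper CoAP library instead of raw bytes.

```python
import socket
import struct

def coap_get(host, path, message_id=0x1234):
    """Send a bare-bones CoAP GET (RFC 7252) over UDP port 5683 and return the reply."""
    # Header byte 0: Version=1, Type=0 (Confirmable), Token Length=0 -> 0x40
    # Header byte 1: Code 0.01 (GET) -> 0x01, followed by a 16-bit Message ID
    header = struct.pack("!BBH", 0x40, 0x01, message_id)

    # One Uri-Path option (option number 11), delta-encoded from zero.
    # This simple form only works for path segments shorter than 13 bytes.
    segment = path.encode()
    option = bytes([(11 << 4) | len(segment)]) + segment

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(header + option, (host, 5683))
    try:
        reply, _ = sock.recvfrom(1024)
        return reply
    except socket.timeout:
        return None

# Hypothetical endpoint: a PoE light fixture exposing a "status" resource.
print(coap_get("light-fixture.example.com", "status"))
```

The whole request is a handful of bytes, which is the entire point for constrained devices like fixtures and sensors.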

A Light In The Dark

Cisco wants in on the possibility that PoE lighting will be a huge market. No other networking vendor that I know of is moving into the market, and no building automation company has the manufacturing chops to try and pull off an entire connected infrastructure for lighting. But lighting isn’t something to take lightly (pun intended).

There’s a lot that goes into proper lighting planning. Locations of fixtures and power levels for devices aren’t accidents; they require a lot of planning and preparation. That planning and prep means there are teams of architects and other specialists with formulas and hard-won knowledge about where fixtures should go. Those people don’t work on the networking team. Any changes to the lighting plan are going to require input from these folks to make sure the illumination patterns don’t change. It’s not exactly like changing a lightbulb.

The other thing that is going to cause problems is the electricians’ union. These guys are trained and certified to put in anything that has power running to it. They aren’t just going to step aside and let untrained networking people start pulling down light fixtures and putting up something new. Finding out that there are new 60-watt LED lights in a building that they didn’t install is going to cause concern, and in some areas it will take real investigation to determine whether it’s even legal for non-union, non-certified employees to install things that only electricians handle today.

The next item of concern is the fact that you now have two parallel networks running in the building. Because everyone that I’ve talked to about PoE Lighting and Digital Ceiling has had the same response: Not On My Network. The switching infrastructure may be the same, but the location of the closets is different. The requirements of the switches are different. And the air gap between the networks is crucial to avoid any attackers compromising your lighting infrastructure and using it as an on-ramp into causing problems for your production data network.

The last issue in my mind is the least technically challenging, but the most concerning from the standpoint of longevity of the product line – Where’s the value in PoE lighting? Every piece of collateral I’ve seen and every person I’ve heard talk about it comes back to the same points. According to the experts, it’s effectively the same cost to install intelligent PoE lighting as it is to stick with traditional offerings. But that “effective” word makes me think of things like Tesla’s “Effective Lease Payment”.

By saying “effective”, what Cisco is telling you is that the up-front cost of a Digital Ceiling deployment is likely to be expensive. That large initial number comes down thanks to things like electricity cost savings, increased efficiencies, or any one of a number of other clever offsets we tell each other about to pretend that it doesn’t cost lots of money to buy new things. It’s important to evaluate the cost of a Digital Ceiling deployment completely on its own before you start factoring in cost savings that won’t show up for months or years.


Tom’s Take

I’m not sure where IoT is going. There’s a lot of learning that needs to happen before I feel totally comfortable talking about the pros and cons of having billions of devices connected and talking to each other. But in this time of baby steps toward solutions, I can honestly say that I’m not entirely sold on Digital Ceiling. It’s clever. It’s catchy. But it feels like it could lead Cisco down a path to ruin. If they can get CoAP working on many other devices and start building frameworks and security around all those devices, then there is a chance they can create a lasting line of products that will help them capitalize on the potential of IoT. What worries me is that this foray into a new realm will be fraught with bad decisions and compromises, and eventually we’ll fondly remember Digital Ceiling as yet another Cisco product that had potential and not much else.

It’s Time For IPv6, Isn’t It?


I made a joke tweet the other day:

It did get lots of great interaction, but I feel like part of the joke was lost. Every one of the things on that list has been X in “This is the Year of X” for the last couple of years. Which is sad because I would really like IPv6 to be a big part of the year.

Ars Technica came out with a very good IPv6-focused article on January 3rd talking about the rise in adoption to 10% and how there is starting to be a shift in the way that people think about IPv6.

Old and Busted

One of the takeaways from the article that I found most interesting was a quote from Brian Carpenter of the University of Auckland about address structure. Most of the time when people complain about IPv6, they say that it’s stupid that IPv6 isn’t backwards compatible with IPv4. Carpenter has a slightly different take on it:

The fact that people don’t understand: the design flaw is in IPv4, which isn’t forwards compatible. IPv4 makes no allowance for anything that isn’t a 32 bit address. That means that, whatever the design of IPng, an IPv4-only node cannot communicate directly with an IPng-only node.

That’s a very interesting take on the problem that hadn’t occurred to me before. We’ve spent a lot of time trying to make IPv6 work with IPv4 in a way that doesn’t destroy things when the problem has nothing to do with IPv6 at all!

The real issue is that our aging IPv4 protocol just can’t be fixed to work with anything that isn’t IPv4. When you frame the argument in those terms you can start to realize why IPv4’s time is coming to an end. I’ve been told by people that moving to 128-bit addressing is overkill and that we need to just put another octet on the end of IPv4 and make them compatible so we can use things as they are for a bit longer.

Folks, the 5th octet plan would work exactly like IPv6 as far as IPv4 is concerned. The issue boils down to this: IPv4 is hard-coded to reject any address that isn’t exactly 32 bits in length. It doesn’t matter if your proposal is 33 bits or 256 bits, the result is the same: IPv4 won’t be able to directly talk to it. The only way to make IPv4 talk to any other protocol version would be to extend it. And the massive amount of effort that it would take to do that is why we have things like dual stack and translation gateways for IPv6. Every plan to make IPv4 work a little longer ends in the same way: scrap it for something new.
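
A quick illustration of the point, using nothing but Python’s standard ipaddress module: the IPv4 parser has nowhere to put a fifth octet, while a longer address is simply a different protocol with its own parser.

```python
import ipaddress

# IPv4 is defined as exactly 32 bits; a "fifth octet" simply isn't an IPv4 address.
try:
    ipaddress.IPv4Address("192.0.2.1.7")
except ipaddress.AddressValueError as err:
    print("Rejected by the IPv4 parser:", err)

# Anything wider has to be a different protocol. Here 192.0.2.1 (0xC0000201)
# is tucked into the low 32 bits of a 128-bit IPv6 address.
addr = ipaddress.IPv6Address("2001:db8::c000:201")
print("Parsed as IPv6:", addr, f"({addr.max_prefixlen} bits)")
```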

New Hotness

Fresh from our take on how IPv4 is a busted protocol for the purposes of future proofing, let’s take a look at what’s driving IPv6 right now. I got an email from my friend Eric Hileman, who runs a rather large network, asking me when he should consider his plans to transition to IPv6. My response was “wait for mobile users to force you there”.

Mobility is going to be the driving force behind IPv6 adoption. Don’t believe me? Grab the closest computing device to your body right now. I’d bet more than half of you reached for a phone or a tablet if you didn’t already have a smartwatch on your wrist. We are a society that is embracing mobile devices at an increasingly rapid rate.

Mobility is the new consumer compute. That means connectivity. Connectivity everywhere. My children don’t like not being able to consume media away from wireless access points. They would love to have a cellular device to allow them access to TV shows, movies, or even games. That generation is going to grow up to be the primary tech consumer in the next ten years.

In those intervening years, our tech infrastructure is going to balloon like never before. Smart devices will be everywhere. We are going to have terabytes of data from multiple sources flooding collectors to produce analysis that must be digested by some form of intelligence, whether organic or artificial. How do you think all that data is going to be transmitted? On a forty-year-old protocol with no concept of the future?

IPv6 has to become the network protocol to support future infrastructure. Mobility is going to drive adoption, but the tools and software we build around mobility are going to push new infrastructure adoption as well. IPv6 is going to be a huge part of that. Devices that don’t support IPv6 are going to be just like the IPv4 protocol they do support – forever stuck in the past with no concept of how the world is different now.


Tom’s Take

It’s no secret I’m an IPv6 champion. Even my distaste for NAT has more to do with its misuse with regard to IPv6 than any dislike for it as a protocol. IPv6 is something that should have been recognized ten years ago as the future of network addressing. When you look at how fast other things around us transform, like mobile computing or music and video consumption, you can see that technology doesn’t wait for stalwarts to figure things out. If you don’t want to be stuck using IPv4 alongside your VCR, it’s time to start planning how you’re going to move to IPv6.


SDN and the Trough Of Understanding

[Image: Gartner Networking Hype Cycle, 2015]

An article published this week referenced a recent Hype Cycle diagram (pictured above) from the oracle of IT – Gartner. While the lede talked a lot about the apparent “death” of Fibre Channel over Ethernet (FCoE), there was also a lot of time devoted to discussing SDN’s arrival at the Trough of Disillusionment. Quoting directly from the oracle:

Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.

As SDN approaches this dip in the Hype Cycle it would seem that the steam is finally being let out of the Software Defined Bubble. The Register article mentions how people are going to leave SDN by the wayside and jump on the next hype-filled networking idea, likely SD-WAN given the amount of discussion it has been getting recently. Do you know what this means for SDN? Nothing but good things.

Software Defined Hammers

Engineers have a chronic case of Software Defined Overload. SD-anything ranks right up there with Fat Free and New And Improved as the Most Overused Marketing Terms. Every solution released in the last two years has been software defined somehow. Why? Because that’s what marketing people think engineers want. Put Software Defined in the product name and people will buy it hand over fist. Guess what Little Tommy Callahan has to say about that?

There isn’t any disillusionment in this little bump in the road. Quite the contrary. This is where the rubber meets the road, so to speak. This is where all the pretenders to the SDN crown find out that their solutions aren’t suited for mass production. Or that their much-vaunted hammer doesn’t have any nails to drive. Or that their hammer can’t drive a customer’s screws or rivets. And those pretenders will move on to the next hype bubble, leaving the real work to companies that have working solutions and real products that customers want.

This is no different than every other “hammer and nail” problem from the past few decades of networking. Whether it be ATM, MPLS, or any one of a dozen “game changing” technologies, the reality is that each of these solutions went from being the answer to every problem to being a specific solution for specific problems. Hopefully we’ve gotten SDN to this point before someone develops the software defined equivalent of LANE.

The Software Defined Road Ahead

Where does SD-technology go from here? Well, without marketing whipping everyone into a Software Defined Frenzy, the future is whatever developers want to make of it. Developers that come up with solutions. Developers that integrate SDN ideas into products and quietly sell them for specific needs. People that play the long game rather than hope that they can take over the world in a day.

Look at IPv6. It solves so many problems we have with today’s Internet. Not just IP exhaustion issues either. It solves issues with security, availability, and reachability. Yet we are just now starting to deploy it widely thanks to the panic of the IPocalypse. IPv6 did get a fair amount of hype twenty years ago when it was unveiled as the solution to every IP problem. After years of mediocrity and being derided as unnecessary, IPv6 is poised to finally assume its role.

SDN isn’t going to take nearly as long as IPv6 to come into play. What is going to happen is a transition away from Software Defined as the selling point. Even today we’re starting to see companies move away from SD labeling and instead use more specific terms that help customers understand what’s important about the solution and how it will help them. That’s what is needed to clarify the confusion and reduce fatigue.


The IPv6 Revolution Will Not Be Broadcast


There are days when IPv6 proponents have to feel like Chicken Little. Ever since the final allocation of the last /8s to the RIRs over four years ago, we’ve been saying that the switch to IPv6 needs to happen soon before we run out of IPv4 addresses to allocate to end users.

As of yesterday, ARIN (@TeamARIN) has 0.07 /8s left to allocate to end users. What does that mean? Realistically, according to this ARIN page, that means there are 3 /21s left in the pool and around 450 /24s. Even the availability of those addresses is in doubt, as there are quite a few requests in the pipeline. I’m sure ARIN is worried that there is already a request sitting in their queue that they won’t be able to fulfill.

The sky has indeed fallen for IPv4 addresses. I’m not going to sit here and wax alarmist. My stance on IPv6 and the need to transition is well known. What I find very interesting is that the transition is not only well underway, but it may have found the driver needed to see it through to the end.

Mobility For The Masses

I’ve said before that the driver for IPv6 adoption is going to be an IPv6-only service that forces providers to adopt the standard because of customer feedback. Greed is one of the two most powerful motivators; fear is the other. And the fear of having millions of mobile devices roaming around with no address support is a scenario no provider wants.

Mobile providers are starting to move to IPv6-only deployments for mobile devices. T-Mobile does it. So does Verizon. If a provider doesn’t already offer IPv6 connectivity for mobile devices, you can be assured it’s on their roadmap for adoption soon. The message is clear: IPv6 is important in the fastest growing segment of device adoption.
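
Those IPv6-only handsets still have to reach the IPv4 Internet, which is where DNS64 and NAT64 translation come in. Here’s a rough sketch (plain Python, standard library only) of the RFC 7050 heuristic a host can use to tell whether it’s sitting behind DNS64: the name ipv4only.arpa has no real AAAA record, so any IPv6 answer for it must have been synthesized by the resolver.

```python
import socket

def detect_nat64():
    """RFC 7050 trick: ipv4only.arpa has no genuine AAAA record, so any IPv6
    answer must have been synthesized by a DNS64 resolver on an IPv6-only network."""
    try:
        results = socket.getaddrinfo("ipv4only.arpa", None, family=socket.AF_INET6)
    except socket.gaierror:
        return None   # no DNS64 in the path -> likely dual-stack or IPv4-only
    return sorted({sockaddr[0] for _fam, _type, _proto, _cname, sockaddr in results})

synthesized = detect_nat64()
print("NAT64/DNS64 detected:", synthesized if synthesized else "no")
```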

Making mobile devices the sword for IPv6 adoption is very smart. When we talk about the barriers to entry for IPv6 in the enterprise we always talk about outdated clients. There are a ton of devices that can’t or won’t run IPv6 because of an improperly built networking stack or software that was written before the dawn of DOS. Accounting for those systems, which are usually in critical production roles, often takes more time than the rest of the deployment.

Mobile devices are different. The culture around mobility has created a device refresh cycle that is measured in months, not years. Users crave the ability to upgrade to the latest device as soon as it is available for sale. Where mobile service providers used to make users wait 24 months for a device refresh, we now see them offering 12 month refreshes for a significantly increased device cost. Those plans are booming by all indications. Users want the latest and greatest devices.

With users wanting to upgrade every year, the age of the device is no longer a barrier to IPv6 adoption. Since the average age of devices in the wild is almost certainly less than three years, providers can be sure the capability to support IPv6 is there. That makes it much easier to enable support for it across the entire install base of handsets.

The IPv6 Trojan Horse

Now that providers have a wide range of IPv6-enabled devices on their networks, the next phase of IPv6 adoption can sneak into existence. We have a lot of IPv6-capable devices in the world, but very little IPv6 driven content. Aside from some websites being reachable over IPv6 we don’t really have any services that depend on IPv6.

Thanks to mobile, we have a huge install base of devices that we now know are IPv6 capable. Since the software for these devices is largely determined by the user base through third-party app development, this is the vector for widespread adoption of IPv6. Rather than trumpeting the numbers, mobile providers and developers can quietly enable IPv6 without anyone even realizing it.

Most app resources must live in the cloud by design. Lots of them live in places like AWS. Service providers enable translation gateways at their edge to translate IPv6 requests into IPv4 requests. What would happen if the providers started offering native IPv6 connectivity to AWS? How would app developers react if there were a faster, native connectivity option to their resources? Given the huge focus on speed for mobile applications, do you think they would continue using a method that forces them to go through slow translation devices? Or would they jump at the chance to speed up their apps?
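
As a rough sketch of what “going native” looks like from the app side, here are a few lines of plain Python using the standard socket module. A dual-stacked service publishes both AAAA and A records, and most resolvers will hand back the IPv6 addresses first; an IPv6-only handset behind NAT64/DNS64 would instead see a synthesized IPv6 address pointing at the translator. The hostname is just an example of a dual-stacked site.

```python
import socket

def resolve(host, port=443):
    """Return (address family, address) pairs in the order the OS prefers them."""
    results = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return [(socket.AddressFamily(family).name, sockaddr[0])
            for family, _type, _proto, _canonname, sockaddr in results]

# On a dual-stack network the AF_INET6 entries typically come back first,
# so the app connects natively and never touches a translation gateway.
for family, address in resolve("www.google.com"):
    print(family, address)
```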

And that’s the trojan horse. The app itself spurs adoption of IPv6 without the user even knowing what’s happened. When’s the last time you needed to know your IP on a mobile device? Odds are very good it would take you a while to even find out where that information is stored. The app-driven focus of mobile devices has eliminated the need for visibility for things like IP addresses. As long as the app connects, who cares what addressing scheme it’s using? That makes shifting the underlying infrastructure from IPv4 to IPv6 fairly inconsequential.


Tom’s Take

IPv6 adoption is going to happen. We’ve reached the critical tipping point where the increased cost of acquiring IPv4 resources will outweigh the cost of creating IPv6 connectivity. Thanks to the focus on mobile technologies and third-party applications, the IPv6 revolution will happen quietly at night when IPv6 connectivity to cloud resources becomes a footnote in some minor point update release notes.

Once IPv6 connectivity is enabled and preferred in mobile applications, the adoption numbers will go up enough that CEOs focused on Gartner numbers and keeping up with the Joneses will finally stop resting on their collective laurels and start pushing enterprise adoption. Only then will the analyst firms start broadcasting the revolution.

Could IPv6 Drown My Wireless Network?


By now, the transition to IPv6 is in full swing. Registries are running out of prefixes and new users overseas are getting v6-only allocations for new circuits. Mobile providers are going v6-only and transition mechanisms are in place to ease the migration. You can hear about some of these topics in this recent roundtable recorded at Interop last week:

One of the conversations that I had with Ed Horley (@EHorley) during Interop opened my eyes to another problem that we will soon be facing with IPv6 and legacy technology. Only this time, it’s not because of a numbering scheme. It’s because of old hardware.

Rate Limited

Technology always marches on. Things that seemed magical to us just five years ago are now antiquated and slow. That’s the problem with the original 802.11 specification. It supported wireless data rates at a paltry 1 Mbps and 2 Mbps. When 802.11b was released, it raised the rates to 5.5 Mbps and 11 Mbps. Those faster data rates, combined with a larger coverage area, helped 802.11b become commercially successful.

Now, we have 802.11n with data rates in the hundreds of Mbps. We also have 802.11ac right around the corner with rates approaching 1 Gbps. It’s a very fast wireless world. But thanks to the need to be backwards compatible with existing technology, even those fast new 802.11n access points still support the old 1 & 2 Mbps data rates of 802.11. This is great if you happen to have a wireless device from the turn of the millennium. It’s not so great if you are a wireless engineer supporting such an installation.

Wireless LAN professionals have been talking for the past couple of years about how important it is to disable the 1, 2, and 5.5 Mbps data rates in your wireless networks. Modern equipment will only utilize those data rates when far away from the access point, and modern design methodology ensures you won’t be far from an access point. Removing support for those rates forces hardware to connect at a higher data rate and preserves overall airtime quality. Even one 802.11b device connecting to your wireless network can drag the whole network down to slow data rates. How important is it to disable these settings? Meraki’s dashboard allows you to do it with one click:

[Image: Meraki dashboard minimum bitrate setting]
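
If you would rather script it across many networks, the same knob is exposed through the Meraki Dashboard API as the per-SSID minimum bitrate. The sketch below is an assumption-laden example, not gospel: the API key, network ID, and SSID number are placeholders, and you should verify the exact endpoint and accepted minBitrate values against the current Meraki API documentation before relying on it.

```python
import requests

API_KEY = "REPLACE_WITH_YOUR_KEY"   # placeholder
NETWORK_ID = "N_1234567890"         # placeholder
SSID_NUMBER = 0                     # placeholder

# Raising the minimum bitrate to 12 Mbps means the 802.11b rates
# (1 / 2 / 5.5 / 11 Mbps) are never offered on this SSID.
response = requests.put(
    f"https://api.meraki.com/api/v1/networks/{NETWORK_ID}/wireless/ssids/{SSID_NUMBER}",
    headers={"X-Cisco-Meraki-API-Key": API_KEY, "Content-Type": "application/json"},
    json={"minBitrate": 12},
    timeout=10,
)
response.raise_for_status()
print("Minimum bitrate now:", response.json().get("minBitrate"))
```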

Flood Detected

How does this all apply to IPv6? Well, it turns out that multicast has an interesting behavior on wireless networks. It seeks out the lowest data rate to send traffic. This ensures that all receivers get the packet. I asked Matthew Gast (@MatthewSGast) of Aerohive about this recently. He said that it’s up to the controller manufacturer to decide how multicast is handled. When I gave him an inquisitive look, he admitted that many vendors leave it up to the lowest common denominator, which is usually the 1 Mbps or 2 Mbps data rate.

This isn’t generally a problem. IPv4 multicast tends to be sporadic and short-lived at best. Most controllers have mechanisms in place for dealing with this, either by converting those multicasts to unicasts or by turning off multicast completely. A bit of extra traffic on the low data rates isn’t noticeable.

IPv6 has a much higher usage of multicast, however. Router Advertisements (RAs) and Multicast Listener Discovery (MLD) are critical to the operation of IPv6. So critical, in fact, that turning off Global Multicast on a Cisco wireless controller doesn’t stop RAs and MLD from happening. You must have multicast running for IPv6.
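
Every one of those RA and MLD messages is sent to a well-known IPv6 group address, and on the air that becomes a true group-addressed frame rather than a unicast to any one client. A short sketch of the standard mapping from RFC 2464 (plain Python, standard library only) shows why there’s no single fast client for the AP to negotiate a rate with: the destination MAC is just 33:33 plus the last 32 bits of the group address.

```python
import ipaddress

def ipv6_multicast_mac(group):
    """Map an IPv6 multicast group to its Ethernet/Wi-Fi destination MAC (RFC 2464)."""
    addr = ipaddress.IPv6Address(group)
    if not addr.is_multicast:
        raise ValueError(f"{group} is not a multicast group")
    low32 = addr.packed[-4:]   # the last 32 bits of the group address
    return "33:33:" + ":".join(f"{octet:02x}" for octet in low32)

print(ipv6_multicast_mac("ff02::1"))    # all-nodes: where Router Advertisements land
print(ipv6_multicast_mac("ff02::16"))   # where MLDv2 listener reports are sent
```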

What happens when all that multicast traffic from IPv6 hits a controller with the lower data rates enabled? Gridlock. Without vendor intervention the MLD and RA packets will hop down to the lowest data rate and start flooding the network. Listeners will respond on the same low data rate and drag the network down to an almost-unusable speed. You can’t turn off the multicast to fix it either.

The solution is to prevent this all in the first place. You need to turn off the 802.11b low data rates on your controller. 1 Mbps, 2 Mbps, and 5.5 Mbps should all be disabled, both as a way to prevent older, slower clients from connecting to your wireless network and to keep newer clients running IPv6 from swamping it with multicast traffic.

There may still be some older clients out there that absolutely require 802.11b data rates, like medical equipment, but the best way to deal with these problematic devices is isolation. These devices likely won’t be running IPv6 any time in the future. Isolating them onto a separate SSID running the 802.11b data rates is the best way to ensure they don’t impact your other traffic. Make sure you read up on how to safely disable data rates and do it during a testing window to ensure you don’t break everything in the world. But you’ll find your network much more healthy when you do.


Tom’s Take

Legacy technology support is critical for continued operation. We can’t just drop something because we don’t want to deal with it any more. Anyone who has ever called a technical support line feels that pain. However, when the new technology can’t feasibly support working with older tech, it’s time to pull the plug. Whether it be 802.11b data rates or something software related, like dropping PowerPC app support in OS X, we have to keep marching forward to make new devices run at peak performance.

IPv6 has already exposed limitations of older technologies like DHCP and NAT. Wireless thankfully has a much easier way to support transitions. If you’re still running 802.11b data rates, turn them off. You’ll find your IPv6 transition will be much less painful if you do. And you can spend more time working with tech and less time trying to tread water.

 

Expiring The Internet

An article came out this week that really made me sigh.  The title was “Six Aging Protocols That Could Cripple The Internet”.  I dove right in, expecting to see how things like Finger were old and needed to be disabled and removed.  Imagine my surprise when I saw things like BGP4 and SMTP on the list.  I really tried not to smack my own forehead as I flipped through the slideshow of how the foundation of the Internet is old and is at risk of meltdown.

If It Ain’t Broke

Engineers love the old adage “If it ain’t broke, don’t fix it!”.  We spend our careers planning and implementing.  We also spend a lot of time not touching things afterwards in order to prevent it from collapsing in a big heap.  Once something is put in place, it tends to stay that way until something necessitates a change.

BGP is a perfect example.  The basics of BGP remain largely the same from when it was first implemented years ago.  BGP4 has been in use since 1994 even though RFC 4271 didn’t officially formalize it until 2006.  It remains a critical part of how the Internet operates.  According to the article, BGP is fundamentally flawed because it’s insecure and trust based.  BGP hijacking has been occurring with more frequency, even as resources to combat it are being hotly debated.  Is BGP to blame for the issue?  Or is it something more deeply rooted?
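
For anyone who hasn’t watched a hijack play out, here’s a toy illustration in plain Python with made-up prefixes and AS numbers: routers simply prefer the most specific matching route, so an attacker who originates a /24 carved out of someone else’s /22 attracts that traffic without violating any rule in the protocol.

```python
import ipaddress

# A legitimate /22 and a hijacker's more-specific /24 carved out of it (made-up example).
routes = {
    ipaddress.ip_network("203.0.112.0/22"): "AS64500 (legitimate origin)",
    ipaddress.ip_network("203.0.113.0/24"): "AS64511 (hijacker's more-specific)",
}

def best_route(destination):
    """Longest-prefix match: the most specific covering route always wins."""
    dest = ipaddress.ip_address(destination)
    covering = [net for net in routes if dest in net]
    return routes[max(covering, key=lambda net: net.prefixlen)]

print(best_route("203.0.113.10"))   # inside the /24 -> traffic flows to the hijacker
print(best_route("203.0.115.10"))   # only the /22 covers it -> legitimate origin
```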

Don’t Fix It

The issues with BGP and other protocols mentioned in the article, including IPv6, aren’t due to the way the protocol was constructed.  It is due in large part to the humans that implement those protocols.  BGP is still in use in the current insecure form because it works.  And because no one has proposed a simple replacement that accomplishes the goal of fixing all the problems.

Look at IPv6.  It solves the address exhaustion issue.  It solves hierarchical addressing issues.  It restores end-to-end connectivity on the Internet.  And yet adoption numbers still languish in the single digit percentage.  Why?  Is it because IPv6 isn’t technically superior? Or because people don’t want to spend the time to implement it?  It’s expensive.  It’s difficult to learn.  Reconfiguring infrastructures to support new protocols takes time and effort.  Things that are better spent on answering user problems or taking on additional tasks as directed by management that doesn’t care about BGP insecurity until the Internet goes down.

It Hurts When I Do This

Instead of complaining about how protocols are insecure, the solution to the problem should be twofold: First, we need to start building security into protocols and expiring their older, insecure versions.  POODLE exploited SSLv3, an older version that served as a fallback to TLS.  While some old browsers still used SSLv3, the simple solution was to disable SSLv3 and force people to upgrade to TLS-capable clients.  In much the same way, protocols like NTP and BGP can be modified to use more security.  Instead of merely suggesting that people use those newer versions, architects and engineers need to implement them and discourage use of the old insecure versions by disabling them.  It’s not going to be easy at first.  But as the movement gains momentum, the solution will work.
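
The SSLv3/POODLE case shows how cheap “expire the old version” can be once the newer protocol is already deployed: in most modern stacks it is a one-line floor on the protocol version.  A minimal sketch with Python’s ssl module (current interpreters already refuse SSLv3, so this simply makes the policy explicit and also shuts out TLS 1.0/1.1):

```python
import socket
import ssl

# Build a client context that refuses anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # SSLv3 and TLS 1.0/1.1 are never offered

with socket.create_connection(("www.example.com", 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        print("Negotiated:", tls_sock.version())
```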

The next step in the process is to build easy-to-configure replacements.  Bolting security onto a protocol after the fact does stop the bleeding.  But to fix the underlying symptoms, the security needs to be baked into the protocol from the beginning.  But doing this with an entirely new protocol that has no backwards compatibility will be the death of that new protocol.  Just look at how horrible the transition to IPv6 has been.  Lack of an easy transition coupled with no monetary incentive and lack of an imminent problem caused the migration to drag out until the eleventh hour.  And even then there is significant pushback against an issue that can no longer be ignored.

Building the next generation of secure Internet protocols is going to take time and transparent effort.  People need to see what’s going into something to understand why it’s important.  The next generation of engineers needs to understand why things are being built the way they are.  We’re lucky in that many of the people responsible for building the modern Internet are still around.  When asked about limitations in protocols the answer remains remarkably the same – “We never thought it would be around this long.”

The longevity of quick fixes seems to be the real issue.  When the next generation of Internet protocols is built there needs to be a built-in expiration date.  A point-of-no-return beyond which the protocol will cease to function.  And there should be no method for extending the shelf life of a protocol to forestall its demise.  In order to ensure that security can’t be compromised we have to resign ourselves to the fact that old things need to be put out to pasture.  And the best way to ensure that new things are put in place to supplant them is to make sure the old things go away on time.


Tom’s Take

The Internet isn’t fundamentally broken.  It’s a collection of things that work well in their roles that maybe have been continued a little longer than necessary.  The probability of an exploit being created for something rises with every passing day it is still in use.  We can solve the issues of the current Internet with some security engineering.  But to make sure the problem never comes back again, we have to make a hard choice to expire protocols on a regular basis.  It will mean work.  It will create strife.  And in the end we’ll all be better for it.

SLAAC May Save Your Life


A chance dinner conversation at Wireless Field Day 7 with George Stefanick (@WirelesssGuru) and Stewart Goumans (@WirelessStew) made me think about the implications of IPv6 in healthcare.  IPv6 adoption hasn’t been very widespread, thanks in part to the large number of embedded devices that have basic connectivity.  Basic in this case means “connected with an IPv4 address”.  But that address can lead to some complications if you aren’t careful.

In a hospital environment, the units that handle medicine dosing are connected to the network.  This allows the staff to program them to properly dispense medications to patients.  Given an IP address in a room, staff can ensure that a patient is getting just the right amount of painkillers and not an overdose.  Ensuring a device gets the same IP each time is critical to making this process work.  According to George, he has recommended that the staff stop using DHCP to automatically assign addresses and instead move to static IP configuration to ensure there isn’t a situation where a patient inadvertently receives a fatal megadose of medication, such as when an adult med unit is accidentally used in a pediatric application.

This static policy does lead to network complications.  Units removed from their proper location are rendered unusable because of the wrong IP.  Worse yet, since those units don’t check in with the central system any more, they could conceivably be incorrectly configured.  At best this will generate a support call to the IT staff.  At worst…well, think lawsuit.  Not to mention what happens if there is a major change to gateway information.  That would necessitate massive manual reconfiguration and downtime until those units can be fixed.

Cut Me Some SLAAC

This is where IPv6 comes into play, especially with Stateless Address Autoconfiguration (SLAAC).  By using an automatically configured address structure that never changes, this equipment will never drop off the network because of an address change.  It will always be checked in on the network.  There will be little chance of the unit dispensing the wrong amount of medication.  The medical unit will have its history available via the same IPv6 address.

There are challenges to be sure.  IPv6 support isn’t cheap or easy.  In the medical industry, innovation happens at a snail’s pace.  These devices are just now starting to have mobile connectivity for wireless use.  Asking the manufacturers to add IPv6 into their networking stacks is going to take years of development at best.

Having the equipment attached all the time also brings up issues with moving the unit to the wrong area and potentially creating a fatal situation.  Thankfully, the router advertisements can help there.  If the RA for a given subnet locks the unit into a given prefix, controls can be enacted on the central system to ensure that devices in that prefix range will never be allowed to dispense medication above or below a certain amount.  While this is more of a configuration on the medical unit side, IPv6 provides the predictability needed to ensure those devices can be found and cataloged.  Since a SLAAC-addressed device using EUI-64 always derives the same interface identifier, you never have to guess which device got a specific address.  You will always know from the last 64 bits which device you are speaking to, no matter the prefix.
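
Here’s a short sketch of the EUI-64 construction that paragraph leans on (the standard RFC 4291 method, plain Python, with a made-up MAC address and made-up hospital prefixes): flip the universal/local bit in the first octet of the MAC, wedge ff:fe into the middle, and append the result to whatever /64 the local router advertises.  Move the device to another wing and the prefix changes, but the last 64 bits never do.

```python
import ipaddress

def eui64_address(prefix, mac):
    """Build the SLAAC/EUI-64 IPv6 address for a MAC inside the given /64 (RFC 4291)."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02                                          # flip the universal/local bit
    interface_id = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])   # insert ff:fe
    network = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(network.network_address.packed[:8] + interface_id)

mac = "00:17:88:ab:cd:ef"                                      # hypothetical medical unit
print(eui64_address("2001:db8:10:1::/64", mac))                # pediatric wing prefix
print(eui64_address("2001:db8:20:1::/64", mac))                # adult wing: same last 64 bits
```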

Tom’s Take

Healthcare is a very slow-moving industry when it comes to innovation.  Medical companies are trying to keep pace with technology advances while at the same time ensuring that devices are safe and do not threaten the patients they are supposed to protect.  IPv6 can give us an extra measure of safety by ensuring devices receive the same address every time.  IPv6 also gives the consistency needed to compile proper reporting about the operation of a device, and even the capability of finding that device when it is moved to an improper location.  Thanks to SLAAC and IPv6, one day these networking technologies might just save your life.