About networkingnerd

Tom Hollingsworth, CCIE #29213, is a network engineer who works with Cisco, HP, Microsoft, VMware, and various other technologies. Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

Moscone Madness


The Moscone Center in San Francisco is a popular place for technical events.  Apple’s Worldwide Developers Conference (WWDC) is an annual user of the space.  Cisco Live and VMworld also come back every few years to keep the location lively.  This year, both conferences utilized Moscone to showcase tech advances and foster community discussion.  Having attended both this year in San Francisco, I think I can finally state the following with certainty.


It’s time for tech conferences to stop using the Moscone Center.


Let’s face it.  If your conference has more than 10,000 attendees, you have outgrown Moscone.  WWDC works in Moscone because they cap the number of attendees at 5,000.  VMworld 2014 has 22,000 attendees.  Cisco Live 2014 had well over 20,000 as well.  Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.

The main keynote hall in Moscone North is too small to hold the large number of audience members.  In an age where every keynote address is streamed live, that shouldn’t be a problem.  Except that people still want to be involved and close to the event.  At both Cisco Live and VMworld, the keynote room filled up quickly and staff were directing the overflow to community spaces that were already packed too full.  Being stuffed into a crowded room with no seating or table space is frustrating.  But those are just the challenges of Moscone.  There are others as well.

I Left My Wallet In San Francisco

San Francisco isn’t cheap.  It is one of the most expensive places in the country to live.  By holding your conference in downtown San Francisco, you are forcing your 20,000+ attendees into a crowded metropolitan area with expensive hotels.  Every time I looked up a hotel room in the vicinity of VMworld or Cisco Live, I was unable to find anything for less than $300 per night.  Contrast that with Interop or Cisco Live in Las Vegas, where sub-$100 rooms are available and $200 per night gets you into the hotel attached to the conference center.

Las Vegas is built for conferences.  It has adequate inexpensive hotel options.  It is designed to handle a large number of travelers arriving at once.  While spread out geographically, it is easy to navigate.  In fact, except for the lack of Uber, Las Vegas is easier to get around in than San Francisco.  I never have a problem finding a restaurant in Vegas to take a large party.  Bringing a group of 5 or 6 to a restaurant in San Francisco all but guarantees you won’t find a seat for hours.

The only real reason I can see for holding conferences at Moscone, aside from historical value, is the ease of getting materials and people into San Francisco.  Cisco and VMware both are in Silicon Valley.  Driving up to San Francisco is much easier than shipping the conference equipment to Las Vegas or Orlando.  But ease-of-transport does not make it easy on your attendees.  Add in the fact that the lower cost of setup is not reflected in additional services or reduced hotel rates and you can imagine that attendees have no real incentive to come to Moscone.


Tom’s Take

The Moscone Center is like the Cotton Bowl in Dallas.  While both have a history of producing wonderful events, both have passed their prime.  They are ill-suited for modern events.  They are cramped and crowded.  They are in unfavorable areas.  For these reasons, it is quickly becoming more difficult to hold events at either one.  But unlike the Cotton Bowl, which has almost 100 years of history, Moscone offers no real reason to stay.  Apple will always be there.  Every new iPhone, Mac, and iPad will be launched there.  But those 5,000 attendees are comfortable in one section of Moscone.  Subjecting your VMworld and Cisco Live users to these kinds of conditions is unacceptable.

It’s time for Cisco, VMware, and other large organizations to move away from Moscone.  It’s time to recognize that Moscone is not big enough for an event that tries to stuff in every user it can.  Instead, conferences should be located where it makes sense.  Las Vegas, San Diego, and Orlando are conference towns.  Let’s use them as they were meant to be used.  Let’s stop the madness of trying to shoehorn 20,000 important attendees into the sardine can of the Moscone Center.

Do We Need To Redefine Open?


There’s a new term floating around that seems to be confusing people left and right.  It’s being used to describe a methodology as well as being thrown around in marketing.  People are using it without really knowing what it means.  And this isn’t the first time that’s happened.  Let’s look at the word “open” and why it has become so confusing.

Talking Beer

For those at home who are familiar with Linux, “open” isn’t the first term to suffer this kind of confusion.  “Free” is another word that has been used in the past with a multitude of loaded meanings.  The original idea around “free” in relation to the Open Source movement is that the software is freely available.  There are no restrictions on use and the source is always available.  The source code for the Linux kernel can be searched and viewed at any time.

Free describes the fact that the Linux kernel is available for no cost.  That’s great for people that want to try it out.  It’s not so great for companies that want to try and build a business around it, yet Red Hat has managed to do just that.  How can they sell something that doesn’t cost anything?  It’s because they keep the notion of free sharing of code alive while charging people for support and special packages that interface with popular non-free software.

The dichotomy between unencumbered software and no-cost software is so confusing that the movement created a phrase to describe it:

Free as in freedom, not free as in beer.

When you talk about freedom, you are unrestricted.  You can use the software as the basis for anything.  You can rewrite it to your heart’s content.  That’s your right for free software. When you talk about free beer, you set the expectation that whatever you create will be available at no charge.  Many popular Linux distributions are available at no cost.  That’s like getting beer for nothing.

Open, But Not Open Open

The word “open” is starting to take on aspects of the “free” argument.  Originally, the meaning of open came from the Open Source community.  Open Source means that you can see everything about the project.  You can modify anything.  You can submit code and improve something.  Look at the OpenDaylight project as an example.  You can sign up, download the source for a given module, and start creating working code.  That’s what Brent Salisbury (@NetworkStatic) and Matt Oswalt (@Mierdin) are doing to great effect.  They are creating the network of the future and allowing the community to do the same.

But “open” is being redefined by vendors.  Open for some means “you can work with our software via an API, but you can’t see how everything works”.  This is much like the binary-only NVIDIA driver.  The proprietary code is pre-compiled and available to download for free, but you can’t modify the source at all.  While it works with open source software, it’s not open.

A conversation I had during Wireless Field Day 7 drove home the idea of this new “open” in relation to software defined networking.  Vendors tout open systems to their customers. They standardize on northbound interfaces that talk to orchestration platforms and have API support for other systems to call them.  But the southbound interface is proprietary.  That means that only their controller can talk to the network hardware attached to it.  Many of these systems have “open” in the name somewhere, as if to project the idea that they work with any component makeup.

This new “open” definition of having proprietary components with an API interface feels very disingenuous.  It also makes for some very awkward conversations:

$VendorA: Our system is open!

ME: Since this is an open system, I can connect my $VendorB switch and get full functionality from your controller, right?

$VendorA: What exactly do you mean by “full”?


Tom’s Take

Using “open” to market these systems is wrong.  Telling customers that you are “open” because your other equipment can program things through a narrow API is wrong.  But we don’t have a word to describe this new idea of “open”.  It’s not exactly closed.  Perhaps we can call it something else.  Maybe “ajar”.  That might make the marketing people a bit upset.  “Try our new AjarNetworking controller.  As open as we wanted to make it without closing everything.”

“Open” will probably be dominated by marketing in the next couple of years.  Vendors will try to tell you how well they interoperate with everyone.  And I will always remember how open protocols like SIP are and how everyone uses that openness against them.  If we can’t keep the definition of “open” clean, we need to find a new term.

Maybe MU-MIMO Matters


As 802.11ac becomes more widely deployed, I find myself looking to the next wave and the promise it brings.  802.11ac Wave 1 for me really isn’t that groundbreaking.  It’s an incremental improvement on 802.11n.  Wave 1 really only serves to wake up the manufacturers to the fact that 5 GHz radios are needed on devices now.  The really interesting stuff comes in Wave 2.  Wider channels, more spatial streams, and a host of other improvements are on the way.  But the most important one for me is MU-MIMO.

Me Mi Mo Mum

Multi-user Multiple-Input Multiple-Output (MU-MIMO) is a huge upgrade over the MIMO specification in 802.11n.  MIMO allows an access point to multiplex the signals from multiple antennas into one data stream.  It accomplishes this via Spatial Division Multiplexing (SDM).  This means that more antennas on an access point are a very good thing.  It increases the throughput above and beyond what could be accomplished with just a single antenna.  But it does have a drawback.

Single-user MIMO can only talk to one client at a time.  All the work necessary to multiplex those data streams requires the full attention of a single access point for the period of time that the client is transmitting.  That means that crowded wireless networks can see reduced throughput because of shorter transmit windows.  And what wireless network isn’t crowded today?

MU-MIMO solves this problem by utilizing additional antenna capacity to transmit data streams to multiple clients at once.  If the AP has spare antennas, it can send another data stream to a different client at the same time.  This means an effective increase in throughput for those devices even though the signal strength isn’t as high (thanks to FCC power limits).  Here’s a great video from Wireless Field Day 7 that explains the whole process:

What I found most interesting in this video is two-fold.  First, MU-MIMO is of great benefit to client devices that don’t have the maximum number of spatial streams.  Laptops are going to have three stream and four stream cards, so their MU-MIMO benefit is minimal.  However, the majority of devices on the market using wireless are mobile.  Tablets and phones don’t have multiple spatial streams, usually just one (or in some cases two).  They do this to improve battery life.  MU-MIMO will help them out considerably.

The second takeaway is that devices without a high number of receive chains will make the AP do more work.  That’s because the AP has to process the transmit streams and prevent the extra streams from being transmitted toward the wrong client.  That’s going to consume processing power.  That means you’ll need an AP with a lot of processing power.  Or a control system that can crunch those numbers for you.

When you consider that a large number of APs in a given system are sitting idle for a portion of the time, it would be nice to be able to use that spare capacity for MU-MIMO processing.  In addition, having those extra antennas available to help with MU-MIMO sounding would be nice too.  There’s already been some work done on the research side of things.  Maybe we’ll soon see the ability to take the idle processing power of a wireless network and use it to boost client throughput as needed.



Tom’s Take

Wireless never ceases to amaze me.  When I started writing this article, I thought I knew how MU-MIMO worked.  Thankfully, Cisco set me straight at Wireless Field Day 7.  MU-MIMO is going to help clients that can’t run high-performance networking cards.  The kinds of clients that are being sold as fast as possible today.  That means that the wireless system is already being developed to support a new kind of wireless device.

A device that doesn’t have access to limitless power from a wall socket or a battery that lasts forever.  There’s been talk of tablets with increased spatial streams for a while, but the cruel mistress of battery life will always win in the end.  That’s why MU-MIMO matters the most. Because if the wireless device can’t get more powerful, maybe it’s time for the wireless network to do the heavy lifting.

IPv6 and the VCR

IPv6 isn’t a fad.  It’s not a passing trend that will be gone tomorrow.  When Vint Cerf is on a nationally televised non-technical program talking about IPv6, that’s about as real as it’s going to get.  Add in the final depletion of IPv4 address space from the RIRs and you will see that IPv6 is a necessity.  Yet there are still people in tech that deny the increasing need for IPv6 awareness.  They’re the same people that say it’s not ready or that it costs too much.  It reminds me of a different argument.

IPvcr4

My house is full of technology.  Especially when it comes to movie watching.  I have DVRs for watching television, a Roku for other services, and apps on my tablet so the kids can watch media on demand.  I have a DVD player in almost every room of the house.  I also have a VCR.  It serves one purpose – to watch two movies that are only available on a video tape.  Those two movies are my wedding and the birth of my oldest son.

At first, the VCR stayed connected to our television all the time.  We had some movies on VHS that we didn’t have on DVD or as digital copies.  As time wore on, those VHS movies were replaced by digital versions.  Soon, the VCR only served to enable viewing of the aforementioned personal media.  We couldn’t get that on a DVD from just anywhere.  But the VCR stayed connected for those occasions when the movies needed to be watched.  Eventually it got unplugged, and soon it was too much of a hassle to reconnect the VCR, even for these family films.  Eventually, we figured out how to hook up the VCR one last time and record the content into a digital format that can be played from non-analog sources.

How does this compare to IPv6?  Most people assume that the transition to IPv6 from IPv4 will be sudden and swift.  They will wake up one morning and find that all their servers and desktops are running global IPv6 addresses and IPv4 will be a distant memory.  In fact, nothing could be further from the truth.  IPv4 and IPv6 can coexist on the same system.  IPv6 can be implemented alongside IPv4 without disrupting connectivity.  As in the above example, you can watch DVDs and VHS tapes on the same TV without disruption.
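
To make that coexistence concrete, here’s a minimal dual-stack sketch in Cisco IOS terms.  The addresses come from the documentation ranges and the exact syntax varies by platform and release, so treat it as illustrative rather than a recipe:

ipv6 unicast-routing
!
interface GigabitEthernet0/0
 ip address 192.0.2.1 255.255.255.0
 ipv6 address 2001:DB8::1/64

The IPv4 address keeps working exactly as it did before.  The IPv6 address simply rides alongside it on the same interface.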

As IPv4 address availability is restricted, many engineers will find themselves scrambling to replace existing systems and deploy new ones without access to IPv4 addresses.  That’s when the real cutover begins.  As these new systems are brought online, IPv6 will be the only address space available.  These systems will be connected with IPv6 first and provisions will be made for them to connect to other systems via IPv4.  Eventually, IPv4 will only exist for legacy systems that can’t be upgraded or migrated.  Just like the VCR above, it will only be needed for a handful of operations.

We never have to reach a point where IPv4 will be completely eliminated.  IPv4-only hosts will still be able to connect to one another so long as the global IPv4 routing table is available.  It may be reduced in size as IPv6 gains greater adoption, but it will never truly go away.  Instead, it will be like IBM’s SNA protocol.  Relevant to a few isolated hosts at best.  The world will move on and IPv6 will be the first choice for connectivity.


Tom’s Take

I must admit that this idea was fostered from a conversation with Ed Horley (@EHorley).  The evangelism that he’s doing with both the CAv6TF and the RMv6TF is unparalleled.  They are doing their best to get the word out about IPv6 adoption.  I think it’s important for people in tech to know that IPv6 isn’t displacing IPv4.  It’s extending network functionality. It’s granting a new lease on life for systems desperately in need of address space.  And it allows IPv4-only systems to survive a little while longer.  You don’t have to watch the same old VHS tapes every day.  But you don’t have to leave the IPvcr4 hooked up all the time either.

The Pain of Licensing

Frequent readers of my blog and Twitter stream may have noticed that I have a special loathing in my heart for licensing.  I’ve been subjected to some of the craziest runarounds because of licensing departments.  I’ve had to yell over the phone to get something taken care of.  I’ve had to produce paperwork so old it was yellowed at the edges.  Why does this have to be so hard?

Licensing is a feature tracking mechanism.  Manufacturers want to know what features you are using.  It comes back to tracking research and development.  A lot of time and effort goes into making the parts and pieces of a product.  Many different departments put work into something before it goes out the door.  Vendors need a way to track how popular a given feature might be to customers.  This allows them to know where to allocate budgets for the development of said features.

Some things are considered essential.  These core pieces are usually allocated to a team that gets the right funding no matter what.  Or the features are so mature that there really isn’t much that can be done to drive additional revenue from them.  When’s the last time someone made a more streamlined version of OSPF?  But there are pieces that can be attached to OSPF that carry more weight.

Rights and Privileges

Here’s an example from Cisco.  In IOS 15, OSPF is considered a part of the core IOS functionality.  You get it no matter what on a router.  You have to pay an extra license on a switch, but that’s not part of this argument.  OSPF is a mature protocol, even in version 3, which enables IPv6 support.  If you have OSPF for IPv4, you have it for IPv6 as well.  One of the best practices for securing OSPF against intrusion is to authenticate your area 0 links.  This is something that should be considered core functionality.  And with IPv4, it is.  The MD5 authentication mechanism is built into the core OS.  But with IPv6, the IPSec functionality needed to authenticate the links has to be purchased as a separate license upgrade.  That’s because IPSec is part of the security license bundle.
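
To see the difference side by side, here’s a rough IOS sketch.  The keys, SPI value, and interface names are made up, and the exact syntax and which feature set unlocks it vary by platform and release, so take it as an illustration rather than gospel.  The OSPFv2 commands ship in the base image:

interface GigabitEthernet0/0
 ip ospf authentication message-digest
 ip ospf message-digest-key 1 md5 S3cur3Key

The OSPFv3 equivalent leans on IPSec, which lives in the security feature set:

interface GigabitEthernet0/0
 ipv6 ospf 1 area 0
 ipv6 ospf authentication ipsec spi 500 md5 1234567890ABCDEF1234567890ABCDEF

Same best practice, two very different licensing conversations.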

Why the runaround for what is considered a best practice, core function?  It’s because IPv6 uses a different mechanism.  One that has more reach than simple MD5 authentication.  In order to capture the revenue from the work that the IPSec security team is putting in, Cisco won’t just give away that functionality.  Instead, it needs to be tracked by a license.  The R&D work from that team needs to be recovered somehow.  And so you pay extra for something Cisco says you should be doing anyway.  That’s the licensing that upsets me so.

License Unit Report

How do we fix it?  The money problem is always going to be there.  Vendors have to find a way to recapture revenue for R&D while at the same time not making customers pay for things they don’t need, like advanced security or application licenses.  That’s the necessary evil of having affordable software.  But there is a fix for the feature tracking part.

We have the analytics capability with modern software to send anonymized usage statistics to manufacturers and vendors about what feature sets are being used.  Companies can track how popular IPSec is versus MD5 or other such feature comparisons.  The software doesn’t have to say who you are, just what you are using.  That would allow budgets to be allocated exactly where they should be, instead of guessing based on who bought the whole advanced communications license just for Quality of Service (QoS) reporting.



Tom’s Take

Licensing is like NAT.  It’s a necessary evil of the world we live in.  People won’t pay for functionality they don’t use.  At the same time, they won’t use functions they have to pay extra for if they think those functions should have been included.  It’s a circular problem that has no clear answer.  And that’s the necessary evil of it all.

But just because it’s necessary doesn’t mean we can’t make it less evil.  We can split the reporting pieces out thanks to modern technology.  We can make sure the costs to develop these features get driven down in the future because there are accurate statistics about usage.  Every little bit helps make licensing less of a hassle than it currently is.  It may not go away totally, but it can be marginalized to the point where it isn’t painful.

I Can’t Drive 25G


The race to make things just a little bit faster in the networking world has heated up in recent weeks thanks to the formation of the 25Gig Ethernet Consortium.  Arista Networks, along with Mellanox, Google, Microsoft, and Broadcom, has decided that 40Gig Ethernet is too expensive for most data center applications.  Instead, they’re offering up an alternative in the 25Gig range.

This podcast with Greg Ferro (@EtherealMind) and Andrew Conry-Murray (@Interop_Andrew) does a great job of breaking down the technical details on the reasoning behind 25Gig Ethernet.  In short, the current 10Gig connection is made of four multiplexed 2.5Gig connections.  To get to 25Gig, all you need to do is overclock those connections a little.  That’s not unprecedented, as 40Gig Ethernet accomplishes this by overclocking them to 10Gig, albeit with different optics.  Aside from a technical merit badge, you have to ask yourself “Why?”

High Hopes

As always, money is the factor here.  The 25Gig Consortium is betting that you don’t like paying a lot of money for your 40Gig optics.  They want to offer an alternative that is faster than 10Gig but cheaper than the next standard step up.  By giving you a cheaper option for things like uplinks, you have money left over to spend on other things.  Probably on more switches, but that’s beside the point right now.

The other thing to keep in mind, as mentioned on the Coffee Break podcast, is that the cable runs for these 25Gig connectors will likely be much shorter.  Short term, that won’t mean much.  There aren’t as many long-haul connections inside of a data center as one might think.  A short hop to the top-of-rack (ToR) switch, then another hop to the end-of-row (EoR) or core switch.  That’s really about it.  One of the arguments against 40/100Gig is that it was designed for carriers for long-haul purposes.  25Gig can give you about 60% of the speed of that link at a much lower cost.  You aren’t paying for functionality you likely won’t use.

Heavy Metal

Is this a good move?  That depends.  There aren’t any 25Gig cards for servers right now, so the obvious use for these connectors will be uplinks.  Uplinks that can only be used by switches that share 25Gig (and later 50Gig) connections.  As of today, that means you’re using Arista, Dell, or Brocade.  And that’s when the optics and switches actually start shipping.  I assume that existing switching lines will be able to be retrofitted via firmware upgrades to support the links, but that’s anyone’s guess right now.

If Mellanox and Broadcom do eventually start shipping cards to upgrade existing server hardware to 25Gig, then you’ll have to ask yourself if you want to pursue the upgrade costs to drive that little extra bit of speed out of the servers.  Are you pushing the 10Gig links in your servers today?  Are they the limiting factor in your data center?  And will upgrading your servers to support 2.5 times the bandwidth per network connection help alleviate your bottlenecks?  Or will the bottlenecks just move to the uplinks on the switches?  It’s a quandary that you have to investigate.  And that takes time and effort.



Tom’s Take

The very first thing I ever tweeted (4 years ago):

We’ve come a long way from ratified standards to deployment of 40Gig and 100Gig.  Uplinks in crowded data centers are going to 40Gig.  I’ve seen a 100Gig optic in the wild running a research network.  It’s interesting to see that there is now a push to get to a marginally faster connection method with 25Gig.  It reminds me of all the competing 100Mbit standards back in the day.  Every standard was close but not quite the same.  I feel that 25Gig will get some adoption in the market.  So now we’ll have to choose from 10Gig, 40Gig, or something in between to connect servers and uplinks.  It will either get sent to the standards body for ratification or die on the vine with no adoption at all.  Time will tell.


Security Dessert Models


I had the good fortune last week to read a great post from Maish Saidel-Keesing (@MaishSK) that discussed security models in relation to candy.  It reminded me that I’ve been wanting to discuss security models in relation to desserts.  And since Maish got me hungry for a Snickers bar, I decided to lay out my ideas.

When we look at traditional security models of the past, everything looks similar to creme brûlée.  The perimeter is very crunchy, but it protects a soft interior.  This is the predominant model of the world where the “bad guys” all live outside of your network.  It works when you know where your threats are located.  This model is still in use today where companies explicitly trust their user base.

The creme brûlée model doesn’t work when you have large numbers of guest users or BYOD-enabled users.  If one of them brings in something that escapes into the network, there’s nothing to stop it from wreaking havoc everywhere.  In the past, this has caused massive virus outbreaks and penetrations from things like malicious USB sticks in the parking lot being activated on “trusted” computers internally.

A Slice Of Pie

A more modern security model looks more like an apple pie.  Rather than trusting everything inside the network, the smart security team will realize that users are as much of a threat as the “bad guys” outside.  The crunchy crust on top will also be extended around the whole soft area inside.  Users that connect tablets, phones, and personal systems will face a very aggressive security posture designed to prevent access to anything that could cause problems in the network (and data center).  This model is great when you know that the user base is not to be trusted.  I wrote about it over a year ago on the Aruba Airheads community site.

The apple pie model does have some drawbacks.  While it’s a good idea to isolate your users outside the “crust”, you still have nothing protecting your internal systems if a rogue device or “trusted” user manages to get inside the perimeter.  The pie model will protect you from careless intrusions but not from determined attackers.  To fix that problem, you’re going to have to protect things inside the network with a crunchy shell as well.

Melts In Your Firewall, Not In Your Hand

Maish was right on when he talked about M&Ms being a good metaphor for security.  They also do a great job of visualizing the untrusted user “pie” model.  But the ultimate security model will end up looking more like an M&M cookie.  It will have a crunchy edge all around.  It will be “soft” in the middle.  And it will also have little crunchy edges around the important chocolate parts of your network (or data center).  This is how you protect the really important things like customer data.  You make sure that even getting past the perimeter won’t grant access.  This is the heart of “defense in depth”.

The M&M cookie model isn’t easy by any means.  It requires you to identify assets that need to be protected.  You have to build the protection in at the beginning.  No ACLs that permit unrestricted access.  The communications between untrusted devices and trusted systems need to be kept to the bare minimum necessary.  Too many M&Ms in a cookie makes for a bad dessert.  So too must you make sure to identify the critical systems that need to be protected and group them together to minimize configuration effort and attack surface.
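
As a small illustration of that “bare minimum” idea, here’s a hypothetical IOS extended ACL.  The subnets, server address, and port are all made up for the example: the user VLAN gets to reach a protected database server on exactly one port, and everything else aimed at that server gets dropped and logged.

ip access-list extended PROTECT-DB
 permit tcp 10.10.10.0 0.0.0.255 host 10.20.20.5 eq 1433
 deny ip any host 10.20.20.5 log
 permit ip any any
!
interface Vlan10
 ip access-group PROTECT-DB in

That explicit deny is the little crunchy edge around the chocolate.  The rest of the cookie can stay soft without exposing the important bits.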



Tom’s Take

Security is a world of protecting the important things while making sure they can be used by people.  If you err on the side of too much caution, you have a useless system.  If you are too permissive, you have a security risk.  Balance is the key.  Just like the recipe for cookies, pie, or even creme brûlée, the proportion of ingredients must be just right to make a tasty dessert.  In security, you have to have the same mix of permissions and protections.  Otherwise, the whole thing falls apart like a deflated soufflé.