Rome Wasn’t Software Defined In A Day

Everywhere you turn, people are talking about software defined networking.  The influence can be felt in every facet of the industry.  Major players are trying to come to grips with the shift in power.  Small vendors are ramping up around ideas and looking to the future.  Professionals are simultaneously excited for change and fearful of upsetting the status quo.  But will all of these things happen overnight?

Not Built In A Day, But Laying Bricks Every Hour

The truth of SDN is that it’s going to take some time for all the pieces to fall into place.  Take a look at the recent Apple Pay launch.  Within a week of launch, it rose to become a significant part of the mobile payment industry, even if the installed base of users is exclusive to iPhone 6 and 6 Plus owners.  But did this revolution happen in the span of a couple of days?

Apple Pay works because Apple spent months, if not years, designing the best way to provide transactions from a phone.  It leverages TouchID for security, a concept introduced last year.  It uses Near Field Communication (NFC) readers, which have been in place for a couple of years.  I even talked about NFC three years ago.  That means the technology to support Apple Pay has been in place for a while.

That kind of support structure is needed to make SDN work the way we want it to.  There’s no magic wand that will convert your infrastructure to SDN overnight.  There is no SDNecronomicon to reference for solving scaling issues or interoperability concerns.  What’s required is the hard work of taking the ideas and processes around SDN and implementing them today.

SDN feels like a radical shift to traditional networking because it’s a foreign concept.  If you had told first-generation iPhone users that their device would become an application computer capable of paying for purchases wirelessly, they would have laughed at you and told you it was a fantasy.  That sufficiently advanced technology was beyond their understanding at the time.

SDN is no different.  The steps being taken today to solve traditional networking problems will feel antiquated in four to five years.  But that foundation must be laid in order to make SDN work in the future.  SDN won’t transform the industry overnight, but we have to keep making advances and pushing forward to make the important gains no matter how small they are.

Not Built In A Day, But It Burned In One

The fear of SDN leads to the dark side of standards adoption.  Arguments. In-fighting. Posturing. Interests making decisions not because they are right for customers but because they protect market share.  If SDN fails in the long term, it will be because of these dark elements and not a technological constraint.

Nothing is immune to politics.  Linux has been more or less standardized for years.  Yet tech advances are still hotly debated.  Go mention systemd to your local Linux hacker and prepare for the onslaught of discussion.  Linux has had much less pressure from these kinds of discussions by virtue of the core kernel being very stable and maintained by a small team.  SDN is very different.

The competing ideas around SDN drive innovation, but also threaten it.  The industry will eventually standardize on OpenDaylight as its controller, much like the server industry standardized on Linux for appliances.  But will that same consensus lead to stagnation?  Will innovation simply happen as vendors attempt to modify ODL just enough to make their offering look superior?  Before you say that’s impossible, go find a reference TRILL implementation.

SDN will succeed because the momentum behind it won’t allow it to fail.  But much like Rome, we need to build SDN with the proper architecture.  Simply laying bricks haphazardly won’t fix our problems.  If the infrastructure is bad enough, we may even need our own Nero to “fix” things again.  Momentum without direction is a useless force.  We need to ensure that SDN is headed in the right direction where it benefits customers and users first.  Profit margins are secondary to that.


Tom’s Take

An idea can transform an industry.  A simple thought about making things better can drag the community out of stagnation and into a Renaissance.  That we are witness to an industry shift is undeniable at this point, especially given that so many things are becoming “software defined”.  However, we must face the truth that this little hobby project won’t produce results overnight.  Hard work and planning will win the day.  Rome went from being a village among hills to the largest power in the Western world.  But that didn’t happen overnight.  The long game of SDN needs to be played one move at a time.  And the building of the SDN empire will take more than a single day.

The Atomic Weight of Policy

The OpenDaylight project put out a new element this week with their Helium release.  The second release is usually the most important, as it shows that you have a real project on your hands and not just a bunch of people coding in the back room to no avail.  Not that something like that was going to happen to ODL.  The group of people involved in the project have the force of will to change the networking world.

Helium is already having an effect on the market.  Brocade announced their Vyatta Controller last week, which is based on Helium code.  Here’s a handy video as well.  The other thing that Helium has brought forth is the ongoing debate about network policy.  And I think that little gem is going to have more weight in the long run than anything else.

The Best Policy

Helium contains group-based policies for making groups of network objects talk to each other.  It’s a crucial step in bringing ODL from an engineering hobby project to a full-fledged product that can be installed by someone who isn’t a code wizard.  That’s because most of the rest of the world, including IT people, doesn’t speak to devices in specific terms.  People have an idea of what needs to be done and they rely on the devices to implement that idea.

Think about a firewall.  When is the last time you wrote a firewall rule by hand? Unless you are a CLI masochist, you use the GUI to craft a policy that says something like “prevent DNS queries from any device that isn’t a DNS server (referenced by this DNS Server object list)”.  We create those kinds of policies because we can’t account for every new server appearing on the network that wants to make a DNS query.  We block the ones that don’t need to be doing it, and we modify the DNS Server object list to add new servers when needed.
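To make that concrete, here is a minimal sketch of the object-based idea in Python.  The group name, addresses, and packet fields are hypothetical placeholders; real firewalls implement this in their own policy engines, but the logic is the same: the rule references a named group, and only the group membership changes over time.

```python
# Hypothetical object-based policy sketch -- not a real firewall API.
# The rule references a named object list instead of individual hosts.

DNS_SERVERS = {"10.0.1.53", "10.0.2.53"}  # the "DNS Server" object list

def evaluate(packet: dict) -> str:
    """Prevent DNS queries from any device that isn't a DNS server."""
    if packet["dst_port"] == 53 and packet["src_ip"] not in DNS_SERVERS:
        return "DROP"
    return "PERMIT"

# A new DNS server needs no new rule -- just update the object list.
DNS_SERVERS.add("10.0.3.53")

print(evaluate({"src_ip": "10.0.9.12", "dst_port": 53}))  # DROP
print(evaluate({"src_ip": "10.0.3.53", "dst_port": 53}))  # PERMIT
```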

Yet, in networking we’re still playing with access lists and VLAN databases.  We must manually propagate this information throughout the network when updates occur, because no one trusts VTP to add and prune VLANs automatically.  The tools we’ve been using to do this work are archaic and failure-prone at best.

Staying Neutral

The inclusion of policy components of Helium will go a long way to paving the road for more of the same in future releases.  Of course, there is already talk about Cisco’s OpFlex and how it didn’t make the cut for Helium despite being one of the most dense pieces of code proposed for ODL.  It’s good that Cisco and ODL both realized that putting out code that wasn’t ready was only going to hurt in the long run.  When you’ve had seven or eight major releases, you can lay an egg with a poorly implemented feature and it won’t be the end of the world.  But if you put out a stinker in the second release, you may not make it to the third.

But this isn’t about OpFlex.  Or Congress. Or any other policy language that might be proposed for ODL in the future.  It’s about making sure that ODL is a policy-driven controller infrastructure.  Markus Nispel of Extreme talked about SDN at Wireless Field Day 7.  In that presentation, he said that he thinks the industry will standardize on ODL as the northbound interface in the network.  For those not familiar, the northbound interface is the one that is user-facing.  We interact with the northbound controller interface while the southbound controller interface programs the devices.

ODL is making sure the southbound interface is OpenFlow.  What they need to do now is ensure the northbound interface can speak policy to the users configuring it.  We’ve all heard the rhetoric of “network engineers need to learn coding” or “non-programmers will be out of a job soon”.  But the harsher reality is that while network programmers are going to be very busy people on the backend, the day-to-day operations of the network will be handled by different teams.  Those teams don’t speak IOS, Junos, or OpenFlow.  They think in policy-based thoughts.

Ops teams don’t want to know how something is going to work when implemented.  They don’t want to spend hours troubleshooting why a VLAN didn’t populate only to find they typoed the number.  They want to plug information into a policy and let the controller do the rest.  That’s what Helium has started and what ODL represents: an interface into the network for mortal Ops teams.  A way to make that work for everyone, whether it be an OpFlex interface into Cisco APIC programming ACI or a Congress interface to an NSX layer.  If you are the standard controller, you need to talk to everyone no matter their language.

Tom’s Take

ODL is going to be the controller in SDN.  There is too much behind it to let it fail.  Vendors are going to adopt it as their base standard for SDN.  They may add bells and whistles, but it will still be ODL underneath.  That means ODL needs to set the standard for network interaction.  And that means policy.  Network engineers complain in one breath that networking is fragile and hard to do, and in the very next say they don’t want the CLI to go away.  What they need to say is that policy gives everyone the flexibility to create robust, fault-tolerant configurations with a minimum of effort while still allowing for other interface options, like CLI and API.  If you are the standard, you can’t play favorites.  Policy is going to be the future of networking interfaces.  If you don’t believe that, you’ll find yourself quickly crushed under the weight of reality.

SDN Use Case: Content Filtering

K-12 schools face unique challenges with their IT infrastructure.  Their user base needs access to a large amount of information while at the same time facing restrictions.  While that may sound like any corporate network policy, the restrictions in the education environment are legal in nature.  Schools must find new ways to provide the assurance of restricting content without destroying their network in the process.  Which led me to ask: can SDN help?

Online Protection

The government E-Rate program gives schools money each year under Priority 1 funding for Internet access.  Indeed, the whole point of the E-Rate program is to get schools connected to the Internet.  But we all know the Internet comes with a bevy of distractions. Many of those distractions are graphic in nature and must be eliminated in a school.  Because it’s the law.

The Children’s Internet Protection Act (CIPA) mandates that schools and libraries receiving E-Rate funding for high speed broadband Internet connections must filter those connections to remove questionable content.  Otherwise they risk losing funding for all E-Rate services.  That makes content filters very popular devices in schools, even if they aren’t funded by E-Rate (which they aren’t).

Content filters also cause network design issues.  In the old days, we had to put the content filter servers on a hub along with the outbound Internet router to ensure they could see all the traffic and block the bad bits.  That became increasingly difficult as network switch speeds increased.  Forcing hundreds of megabits through a 10Mbit hub was counterproductive.  Moving to switchport mirroring did alleviate the speed issues, but still caused network design problems.  Now, content filters can run on firewalls and bastion host devices or are enabled via proxy settings in the cloud.  But we all know that running too many services on a firewall causes performance issues.  Or leads to buying a larger firewall than needed.

Another issue that has crept up as of late is the use of Virtual Private Networking (VPN) as a way to defeat the content filter.  Setting up an SSL VPN to an outside, non-filtered device is pretty easy for a knowledgeable person.  And if that fails, there are plenty of services out there dedicated to defeating content filtering.  While the aim of these services is noble, such as bypassing the Great Firewall of China or the mandated Internet filtering in the UK, they can also be used to bypass the CIPA-mandated filtering in schools.  It’s a high-tech game of cat-and-mouse: block access to one VPN and three more pop up to replace it.

Software Defined Protection

So how can SDN help?  Service chaining allows traffic to be directed to a given device or virtual appliance before being passed on through the network.  This great presentation from Networking Field Day 7 presenter Tail-f Networks shows how service chaining can force traffic through security devices like IDS/IPS and through content filters as well.  There is no need to add hubs or mirrored switch ports in your network.  There is also no need to configure traffic to transit the same outbound router or firewall, thereby creating a single point of failure.  Thanks to the magic of SDN, the packets go to the filter automatically.  That’s because they don’t really have a choice.
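As an illustration, a service chain definition might look something like the sketch below.  The controller URL, endpoint, and field names are hypothetical (this is not the actual Tail-f API); the point is that steering is declared once, centrally, instead of being wired into hubs or mirror ports.

```python
# A hedged sketch of declaring a service chain to an SDN controller.
# Endpoint and schema are hypothetical, shown only for shape.
import json
import urllib.request

chain = {
    "name": "k12-web-filtering",
    "match": {"src_subnet": "10.20.0.0/16", "dst_ports": [80, 443]},
    # Matched traffic transits each function, in order, before egress.
    "functions": ["ids-cluster-1", "content-filter-pool"],
}

req = urllib.request.Request(
    "https://controller.example.net/api/service-chains",  # hypothetical
    data=json.dumps(chain).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to push the chain
```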

It also works well for providers wanting to offer filtering as a service to schools.  This allows a provider to configure the edge network to force traffic to a large central content filter cluster and ensure delivery.  It also allows the service provider network to operate without impact to non-filtered customers.  That’s very useful even in ISPs dedicated to educational institutions, as the filter provisions for K-12 schools don’t apply to higher education facilities like colleges and universities.  Service chaining would allow the college to stay free and clear while the high schools are cleansed of inappropriate content.

The VPN issue is a thorny one for sure.  How do you classify traffic that is trying to hide from you?  Even services like Netflix are having trouble blocking VPN usage, and they stand to lose millions if they can’t.  How can SDN help in this situation?  We could build policies to drop traffic headed for known VPN endpoints.  That should take care of the services that make it easy to configure and serve as a proxy point.  But what about those tech-savvy kids that set up SSL VPNs back home?

Luckily, SDN can help there as well.  Many unified threat management appliances offer the ability to intercept SSL conversations.  This is an outgrowth of sites like Facebook defaulting to SSL to increase security.  SSL intercept essentially acts as a man-in-the-middle attack.  The firewall decrypts the SSL conversation, scans the packets, and re-encrypts the conversation using a different certificate.  When the packets come back in, the process is reversed.  This SSL intercept capability would allow those SSL VPN packets to be dropped when detected.  The SDN component ensures that HTTPS traffic is always redirected to a device that can do SSL intercept, rather than taking a path through the network that might lead to a different exit point.
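A sketch of what those two controls could look like as controller policies, assuming a hypothetical policy API (the addresses and device name are placeholders):

```python
# Hedged sketch: two policies from the text, expressed for a
# hypothetical controller. Addresses and names are illustrative.

KNOWN_VPN_ENDPOINTS = ["203.0.113.10", "198.51.100.77"]

# 1. Drop traffic headed for known VPN proxy services.
policies = [
    {"match": {"dst_ip": ip}, "action": "drop"}
    for ip in KNOWN_VPN_ENDPOINTS
]

# 2. Steer all HTTPS through the SSL-intercept firewall so it can
#    never take a path that leads to a different exit point.
policies.append({
    "match": {"dst_port": 443},
    "action": "redirect",
    "next_hop": "ssl-intercept-fw",
})

for policy in policies:
    print(policy)
```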

Tom’s Take

Content filtering isn’t fun.  I’ve always said that I don’t envy the jobs of people who have to wade through the unsavory parts of the Internet to categorize bits as appropriate or not.  It’s also a pain for network engineers who need to keep redesigning the network and introducing points of failure to meet federal guidelines for decency.  SDN holds the promise of making that easier.  In the above Tail-f example, the slide deck shows a UI that allows simple blocking of common protocols like Skype.  This could be extended to schools, where student computers and wireless networks are identified and bad programs are disallowed while web traffic is pushed to a filter and scrubbed before heading out to the Wild Wild Web.  SDN can’t solve every problem we might have, but if it can make the mundane and time-consuming problems easier, it might just give people the breathing room they need to work on the bigger issues.

Why is Lync The Killer SDN Application?

The key to showing the promise of SDN is to find a real-world application to showcase capabilities.  I recently wrote about using SDN to slice education networks.  But this is just one idea.  When it comes to real promise, you have to shelve the abstract approach and trot out a name.  People have to know that SDN will help them fix something on their network or optimize a troublesome program.  And it appears that application is Microsoft Lync.

Missing Lync

Microsoft Lync (née Microsoft Office Communicator) is a software application designed to facilitate communications.  It includes voice calling capability, instant messaging, and collaboration tools.  The voice part is particularly appealing to small businesses.  With a Microsoft Office 365 for Business subscription, you gain access to Lync.  That means introducing a voice soft client to your users.  And if it’s available, people are going to use it.

As a former voice engineer, I can tell you that soft clients are a bit of a pain to configure.  They have their own way of doing things.  Especially when Quality of Service (QoS) is involved.  In the past, tagging soft client voice packets with Cisco Jabber required setting cluster-wide parameters for all clients.  It was all-or-nothing.  There were also plans to use things like Cisco MediaNet to tag Jabber packets, but this appears to be an old method.  It was much easier to use physical phones and set their QoS value and leave the soft phones relegated to curiosities.

Lync doesn’t use a physical phone.  It’s all software based.  And as usage has grown, the need to categorize all that traffic for optimal network transmission has become important.  But configuring QoS for Lync is problematic at best.  Microsoft guidelines say to configure the Lync servers with QoS policies.  Some enterprising users have found ways to configure clients with Group Policy settings based on port numbers.  But it’s all still messy.

A Lync To The Future

That’s where SDN comes into play.  Dynamic QoS policies can be pushed into switches on the fly to recognize Lync traffic coming from hosts and adjust the network to suit high traffic volumes.  Video calls can be separated from audio calls and given different handling based on a variety of dynamically detected settings.  We can even guarantee end-to-end QoS and see that guarantee through the visibility that protocols like OpenFlow enable in a software defined network.
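In controller terms, the dynamic piece might look like the sketch below.  The controller object, detection hook, and media port range are all hypothetical (Lync’s media ports are configurable); the point is that the mark is applied per call, while the call exists, rather than configured statically everywhere.

```python
# A minimal sketch, assuming a hypothetical controller client. DSCP 46
# (Expedited Forwarding) is the conventional audio marking; the media
# port range here is only an example.

def push_lync_qos(controller, call: dict) -> None:
    """Install a flow marking a detected Lync audio stream as EF."""
    controller.install_flow({
        "match": {
            "src_ip": call["client_ip"],
            "ip_proto": "udp",
            "dst_port_range": (50000, 50059),
        },
        "actions": [
            {"set_field": {"ip_dscp": 46}},  # mark for priority queuing
            {"output": "NORMAL"},            # then forward as usual
        ],
        "priority": 1000,
        "idle_timeout": 60,  # flow ages out after the call ends
    })
```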

SDN QoS is critical to the performance of soft clients.  Separating the user traffic from the critical communication traffic requires higher-order thinking, not group policy hacking.  Ensuring delivery end-to-end is only possible with SDN because of overall visibility.  Cisco has tried that with MediaNet and Cisco Prime, but it’s totally opt-in.  If there’s an inline device that Prime doesn’t know about, it will be a black hole.  SDN gives visibility into the entire network.

The Weakest Lync

That’s not to say that Lync doesn’t have its issues.  Cisco Jabber was an application built by a company with a strong networking background.  It reports information to the underlying infrastructure that allows QoS policies to work correctly.  The QoS marking method isn’t perfect, but at least it’s available.

Lync packets don’t respect the network.  Lync always assumes there will be adequate bandwidth.  Why else would it not allow for QoS tagging?  It’s also apparent when you realize that some vendors are marking Lync packets with non-standard CoS/DSCP values, and Lync will happily take priority over everything else on the network.  Why doesn’t Lync listen to the traffic conditions around it?  Why does it exist in a vacuum?

Lync is an application written by application people.  It’s agnostic of networks.  It doesn’t know if it’s running on a high-speed LAN or across a slow WAN connection.  It can be ignorant of the network because that part just gets figured out.  It’s a classic example of a top-down program.  That’s why SDN holds such promise for Lync.  Because the app itself is unaware of the networks, SDN allows it to keep chugging along in bliss while the controllers and forwarding tables do all the heavy lifting.  And that’s why the tie between Lync and SDN is so strong.  Because SDN makes Lync work better without the need to actually do anything about Lync, or your server infrastructure in general.


Tom’s Take

Lync is the poster child for bad applications that can be fixed with SDN.  And when I say poster child, I mean it.  Extreme Networks, Aruba Networks, and Meru are all talking about using SDN in concert with Lync.  Some are using OpenFlow, others are using proprietary methods.  The end result is making a smarter network to handle an application living in a silo.  Cisco Jabber is easy to program for QoS because it was made by networking folks.  Lync is a pain because it lives in the same world as Office and SQL Server.  It’s only when networks become contentious that we have to find novel ways of solving problems.  Lync is the use case for SDN for small and medium enterprises focused primarily on wireless connectivity.  Because making Lync behave in that environment is indistinguishable from magic, at least without SDN.


If you want to see some interesting conversations about Lync and SDN, especially with OpenFlow, tune into SDN Connect Live on September 18th.  Meru Networks and Tech Field Day will have a roundtable discussion about Lync, featuring Lync experts and SDN practitioners.

An Educational SDN Use Case

During the VMUnderground Networking Panel, we had a great discussion about software defined networking (SDN) among other topics. It seems that SDN is a big unknown for many out there. One of the reasons for this is the lack of specific applications of the technology. OSPF and SQL are things that solve problems. Can the same be said of SDN? One specific question regarded how to use SDN in small-to-medium enterprise shops, and I fired off an answer from my own experience.

Since then, I’ve had a few people using my example with regards to a great use case for SDN. I decided that I needed to develop it a bit more now that I’ve had time to think about it.

Schools are a great example of the kinds of “do more with less” organizations that are becoming more common. They have enterprise-class networks and needs and live off budgets that wouldn’t buy janitorial supplies. In fact, if it weren’t for E-Rate, most schools would have technology from the Stone Age. But all this new tech doesn’t help if you can’t find a way for it to be used to the fullest for the purposes of educating students.

In my example, I talked about the shift from written standardized testing to online assessments. Oklahoma and Indiana are leading the way in getting rid of Scantrons and #2 pencils in favor of keyboards and monitors. The process works well for the most part with proper planning. My old job saw lots of scrambling to prep laptops, tablets, and lab machines for the rigors of running the test. But no amount of pre-config could prepare for the day when it was time to go live. On those days, the network was squarely in the sights of the administration.

I’ve seen emails go around banning non-testing students from the computers. I’ve seen hard-coded DNS entries on testing machines while the rest of the school had DNS taken offline to keep them from surfing the web. Redundant circuits. QoS policies that would make voice engineers cry. All in the hope of keeping the online test bandwidth free to get things completed in the testing window. All the while, I was thinking to myself, “There has got to be an easier way to do this…”

Redefining with Software

Enter SDN. The original use case for SDN at Stanford was network slicing. The Next-Gen Network Team wanted to use the spare capacity of the network for testing without crashing the whole system. Being able to reconfigure the network on the fly is a huge leap forward. Pushing policy into devices without CLI cuts down on the resume-generating events (RGE) in production equipment. So how can we apply these network slicing principles to my example?

On the day of the test, have the configuration system push down a new policy that gives the testing machines a guaranteed amount of bandwidth. This reservation will ensure each machine is able to get what it needs without being starved out. With SDN, we can set this policy on a per-IP basis to ensure it is enforced. This slice will exist separate from the production network to ensure that no one starting a huge FTP transfer or video upload will disrupt testing. By leaving the remaining bandwidth intact for the rest of the school’s production network administrators can ensure that the rest of the student body isn’t impacted during testing. With the move toward flipped classrooms and online curriculum augmentation, having available bandwidth is crucial.
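As a sketch, the testing-day policy could be expressed once and pushed everywhere. The API shape, names, and numbers below are all hypothetical placeholders:

```python
# Hypothetical slice definition for the online testing window.
# Field names and values are illustrative, not a real controller schema.

TESTING_MACHINES = [f"10.30.4.{host}" for host in range(10, 40)]

slice_policy = {
    "name": "state-testing-window",
    "hosts": TESTING_MACHINES,
    "guaranteed_kbps_per_host": 2000,  # each station gets 2 Mbps
    "isolate_from": "production",      # an FTP push elsewhere can't starve it
    "schedule": {"start": "2015-04-15T08:00", "end": "2015-04-15T15:00"},
}

# One push programs every switch in the path; when the window closes,
# the controller removes the slice -- no switch-by-switch cleanup.
```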

Could this be done via non-SDN means? Sure. Granted, you’ll have to plan the QoS policy ahead of time and find a way to classify your end-user workstations properly. You’ll also have to tune things to make sure no one is dominating the test machine pool. And you have to get it right on every switch you program. And remove it when you’re done. Unless you missed a student or a window, in which case you’ll need to reprogram everything again. SDN certainly makes this process much easier and less disruptive.


Tom’s Take

SDN isn’t a product. You can’t order part number SDN-001 and get a box labeled SDN. Instead, it’s a process. You apply SDN to an existing environment and extend its capabilities through new processes. Those processes need use cases. Use cases drive business cases. Business cases provide buy-in from the stakeholders. That’s why discussing cases like the one above is so important. When you can find a use for SDN, you can get people to accept it. And that’s half the battle.

SLAAC May Save Your Life

A chance dinner conversation at Wireless Field Day 7 with George Stefanick (@WirelesssGuru) and Stewart Goumans (@WirelessStew) made me think about the implications of IPv6 in healthcare.  IPv6 adoption hasn’t been very widespread, thanks in part to the large number of embedded devices that have basic connectivity.  Basic in this case means “connected with an IPv4 address”.  But that address can lead to some complications if you aren’t careful.

In a hospital environment, the units that handle medicine dosing are connected to the network.  This allows the staff to program them to properly dispense medications to patients.  Given an IP address in a room, staff can ensure that a patient is getting just the right amount of painkillers and not an overdose.  Ensuring a device gets the same IP each time is critical to making this process work.  According to George, he has recommended that the staff stop using DHCP to automatically assign addresses and instead move to static IP configuration to ensure there isn’t a situation where a patient inadvertently receives a fatal megadose of medication, such as when an adult med unit is accidentally used in a pediatric application.

This static policy does lead to network complications.  Units removed from their proper location are rendered unusable because of the wrong IP.  Worse yet, since those units don’t check in with the central system any more, they could conceivably be incorrectly configured.  At best this will generate a support call to the IT staff.  At worst…well, think lawsuit.  Not to mention what happens if there is a major change to gateway information.  That would necessitate massive manual reconfiguration and downtime until those units can be fixed.

Cut Me Some SLAAC

This is where IPv6 comes into play, especially with Stateless Address Auto Configuration (SLAAC).  By using an automatically configured address structure that never changes, this equipment will never go offline.  It will always be checked in on the network.  There will be little chance of the unit dispensing the wrong amount of medication.  The medical unit will have history available via the same IPv6 address.

There are challenges to be sure.  IPv6 support isn’t cheap or easy.  In the medical industry, innovation happens at a snail’s pace.  These devices are just now starting to have mobile connectivity for wireless use.  Asking the manufacturers to add IPv6 into their networking stacks is going to take years of development at best.

Having the equipment attached all the time also brings up issues with moving the unit to the wrong area and potentially creating a fatal situation.  Thankfully, router advertisements can help there.  If the RA for a given subnet locks the unit into a given prefix, controls can be enacted on the central system to ensure that devices in that prefix range will never be allowed to dispense medication above or below a certain amount.  While this is more of a configuration on the medical unit side, IPv6 provides the predictability needed to ensure those devices can be found and cataloged.  Since a SLAAC-addressed device using EUI-64 always derives the same interface identifier from its MAC address, you never have to guess which device got a specific address.  You will always know from the last 64 bits which device you are speaking to, no matter the prefix.
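The EUI-64 derivation is simple enough to show directly; this is standard RFC 4291 behavior, not a vendor API.  Flip the universal/local bit in the MAC’s first octet, insert ff:fe in the middle, and append the result to the advertised /64 prefix (the prefix and MAC below are example values):

```python
# EUI-64 interface ID derivation, per RFC 4291: split the MAC, insert
# ff:fe, and flip the universal/local bit. Given the same MAC, the host
# portion of the SLAAC address is identical under any advertised prefix.
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Build the SLAAC/EUI-64 address for a MAC under a /64 prefix."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the U/L bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return net[iid]

# The same med unit keeps the same last 64 bits wherever it roams:
print(slaac_address("2001:db8:1:1::/64", "00:1b:63:84:45:e6"))
# -> 2001:db8:1:1:21b:63ff:fe84:45e6
```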

Tom’s Take

Healthcare is a very static industry when it comes to innovation.  Medical companies are trying to keep pace with technology advances while at the same time ensuring that devices are safe and do not threaten the patients they are supposed to protect.  IPv6 can give us an extra measure of safety by ensuring devices receive the same address every time.  IPv6 also gives the consistency needed to compile proper reporting about the operation of a device and even the capability of finding that device when it is moved to an improper location.  Thanks to SLAAC and IPv6, one day these networking technologies might just save your life.

Moscone Madness

The Moscone Center in San Francisco is a popular place for technical events.  Apple’s World Wide Developer Conference (WWDC) is an annual user of the space.  Cisco Live and VMworld also come back every few years to keep the location lively.  This year, both conferences utilized Moscone to showcase tech advances and foster community discussion.  Having attended both this year in San Francisco, I think I can finally state the following with certainty.


It’s time for tech conferences to stop using the Moscone Center.


Let’s face it.  If your conference has more than 10,000 attendees, you have outgrown Moscone.  WWDC works in Moscone because they cap the number of attendees at 5,000.  VMworld 2014 has 22,000 attendees.  Cisco Live 2014 had well over 20,000 as well.  Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.

The main keynote hall in Moscone North is too small to hold the large number of audience members.  In an age where every keynote address is streamed live, that shouldn’t be a problem.  Except that people still want to be involved and close to the event.  At both Cisco Live and VMworld, the keynote room filled up quickly and staff were directing the overflow to community spaces that were already packed too full.  Being stuffed into a crowded room with no seating or table space is frustrating.  But those are just the challenges of Moscone.  There are others as well.

I Left My Wallet In San Francisco

San Francisco isn’t cheap.  It is one of the most expensive places in the country to live.  By holding your conference in downtown San Francisco, you are forcing your 20,000+ attendees into a crowded metropolitan area with expensive hotels.  Every time I looked up a hotel room in the vicinity of VMworld or Cisco Live, I was unable to find anything for less than $300 per night.  Contrast that with Interop or Cisco Live in Las Vegas, where sub-$100 rooms are available and $200 per night gets you into the conference hotel itself.

Las Vegas is built for conferences.  It has adequate inexpensive hotel options.  It is designed to handle a large number of travelers arriving at once.  While spread out geographically, it is easy to navigate.  In fact, except for the lack of Uber, Las Vegas is easier to get around than San Francisco.  I never have a problem finding a restaurant in Vegas to take a large party.  Bringing a group of 5 or 6 to a restaurant in San Francisco all but guarantees you won’t find a seat for hours.

The only real reason I can see for holding conferences at Moscone, aside from historical value, is the ease of getting materials and people into San Francisco.  Cisco and VMware both are in Silicon Valley.  Driving up to San Francisco is much easier than shipping the conference equipment to Las Vegas or Orlando.  But ease-of-transport does not make it easy on your attendees.  Add in the fact that the lower cost of setup is not reflected in additional services or reduced hotel rates and you can imagine that attendees have no real incentive to come to Moscone.


Tom’s Take

The Moscone Center is like the Cotton Bowl in Dallas.  While both have a history of producing wonderful events, both have passed their prime.  They are ill-suited for modern events.  They are cramped and crowded.  They are in unfavorable areas.  It is quickly becoming more difficult to hold events there for these reasons.  But unlike the Cotton Bowl, which has almost 100 years of history, Moscone offers no real reason to stay.  Apple will always be there.  Every new iPhone, Mac, and iPad will be launched there.  But those 5,000 attendees are comfortable in one section of Moscone.  Subjecting your VMworld and Cisco Live users to these kinds of conditions is unacceptable.

It’s time for Cisco, VMware, and other large organizations to move away from Moscone.  It’s time to recognize that Moscone is not big enough for an event that tries to stuff in every user it can.  Instead, conferences should be located where it makes sense.  Las Vegas, San Diego, and Orlando are conference towns.  Let’s use them as they were meant to be used.  Let’s stop the madness of trying to shoehorn 20,000 important attendees into the sardine can of the Moscone Center.