Positioning Policy Properly

Who owns the network policy for your organization? How about the security policy? Identity policy? These sound like easy questions, don’t they? The first two are pretty standard. The last generally comes down to one or two different teams depending upon how much Active Directory you have deployed. But have you ever really thought about why?

During Future:NET this week, those poll questions were asked of an audience of advanced networking community members. The answers pretty much fell in line with what I was expecting to see. But then I started to wonder about the reasons behind those decisions. And I realized that in a world full of cloud and DevOps/SecOps/OpsOps people, we need to get away from hardware teams owning policy and instead have policy owned by a dedicated team.

Specters of the Past

Where does the networking policy live? Most people will jump right in with a list of networking gear. Port profiles live on switches. Routing tables live on routers. Networking policy is executed in hardware. Even if the policy is programmed somewhere else.

What about security policy? Firewalls are probably the first thing that comes to mind. More advanced organizations have a ton of software that scans for security issues. Those policy decisions are dictated by teams that understand the way their tools work. You don’t want someone that doesn’t know how traffic flows through a firewall trying to manage that device, right?

Let’s consider the identity question. For many years the identity policy has been owned by the Active Directory (AD) admins, because identity was always closely tied to the server directory system. Novell (now NetIQ) eDirectory and Microsoft AD were the kings of the hill when it came to identity. Today’s world has so much distributed identity that it’s been handed to the security teams to help manage. AD doesn’t control the VPN concentrator or the cloud-enabled email services all the time. There are identity products specifically designed to aggregate all this information and manage it.

But let’s take a step back and ask that important question: why? Why must a hardware team own the policy? Why must the implementors of policy be the owners? The answer is generally that they are the best arbiters of how to implement those policies. The network teams know how to translate applications into ports. Security teams know how to create firewall rules to implement connection needs. But are they really the best people to do this?

Look at modern policy tools designed to “simplify” networking. I’ll use Cisco ACI as an example, but VMware NSX certainly qualifies as well. At a very high level, these tools take into account the needs of applications to create connectivity between software and hardware. You create a policy that allows a database to talk to a front-end server, for example. That policy knows what connections need to happen to get through the firewall and also how to handle redundancy to other members of the cluster. The policy is then implemented automatically in the network by ACI or NSX and magically no one needs to touch anything. The hardware just works because policy automatically does the heavy lifting.
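To make that idea concrete, here is a minimal sketch of application-centric policy under some stated assumptions: the Tier and Policy classes and the rule-rendering function are made up for illustration and are not ACI’s or NSX’s actual API. The point is simply that the operator declares intent (“web may talk to db on one port”) and the device-level rules are generated from it.

```python
# Hypothetical sketch of application-centric policy, in the spirit of tools
# like ACI or NSX. Class names and rendering logic are illustrative only.
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    name: str            # a logical application tier, e.g. "web" or "db"

@dataclass(frozen=True)
class Policy:
    consumer: Tier       # the tier that initiates the connection
    provider: Tier       # the tier that serves it
    port: int            # TCP port the provider listens on

def render_firewall_rules(policy: Policy) -> list[str]:
    """Translate one intent-level policy into device-level rules.

    A real controller would also program switches, load balancers, and
    redundant cluster members from this same declaration."""
    return [
        f"permit tcp from tier:{policy.consumer.name} "
        f"to tier:{policy.provider.name} port {policy.port}",
        f"permit tcp from tier:{policy.provider.name} "
        f"to tier:{policy.consumer.name} established",
    ]

if __name__ == "__main__":
    web_to_db = Policy(consumer=Tier("web"), provider=Tier("db"), port=5432)
    for rule in render_firewall_rules(web_to_db):
        print(rule)
```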

So let’s step back for a moment and discuss this. Why does the networking team need to operate ACI or NSX? Sure, it’s because those devices still touch hardware at some point, like switches or routers. But we’ve abstracted away the need for anyone to actually connect to a single box or a series of boxes and type in lines of configuration that implement the policy. Why does it need to be owned by that team? You might say something about troubleshooting. That’s a common argument: whoever needs to fix it when it breaks is who needs to be the gatekeeper implementing it. But why? Is a network engineer really going to SSH into every switch and correct a bad application tag? Or is that same engineer just going to log into a web console, fix the tag once, and propagate that info across the domain?

Ownership of policy isn’t about troubleshooting. It’s about territory. The tug-of-war over which team a device belongs to when it needs to be configured is all about collecting and consolidating power in an organization. If I’m the gatekeeper of implementing workloads then you need to pay tribute to me in order to make things happen.

If you don’t believe that, ask yourself this: if there were a Routing team and a Switching team in an organization, who would own the routed SVI on a layer 3 switch? The switching team has rights because it’s on their box. The routing team should own it because it’s a layer 3 construct. Both are going to claim it. And both are going to fight over it. And those are teams that do essentially the same job. When you start pulling in the security team or the storage team or the virtualization team, you can see how this spirals out of control.

Vision of the Future

Let’s change the argument. Instead of assigning policy to the proper hardware team, let’s create a team of people focused on policy. Let’s make sure we have proper representation from every hardware stack: Networking, Security, Storage, and Virtualization. Everyone brings their expertise to the team for the purpose of making policy interactions better.

Now, when someone needs to roll out a new application, the policy team owns that decision tree. The Policy Team can have a meeting about which hardware is affected. Maybe we need to touch the firewall, two routers, a switch, and perhaps a SAN somewhere along the way. The team can coordinate the policy changes and propose an implementation plan. If there is a construct like ACI or NSX to automate that deployment then that’s the end of it. The policy is implemented and everything is good. Perhaps some older hardware exists that needs manual configuration of the policy. The Policy Team then contacts the hardware owner to implement the policy needs on those devices. But the Policy Team still owns that policy decision. The hardware team is just working to fulfill a request.

Extend the metaphor past hardware now. Who owns the AWS network when your workloads move to the cloud? Is it still the networking team? They’re the best team to own the network, right? Except there are no switches or routers. They’re all software as far as the instance is concerned. Does that mean your cloud team is now your networking team as well? Moving to the cloud muddies the waters immensely.

Let’s step back into the discussion about the Policy Team. Because they own the policy decisions, they also own that policy when it changes hardware or location. If those workloads for email or a productivity suite move from on-prem to the cloud, then the policy team moves right along with them. Maybe they add a public cloud person to the team to help them interface with AWS, but they still own everything. That way, there is no argument about who owns what.

The other beautiful thing about this Policy Team concept is that it also allows the rest of the hardware to behave as a utility in your environment. Because the teams that operate networking or security or storage are just fulfilling requests from the policy team, they don’t need to worry about anything other than making their hardware work. They don’t need to get bogged down in policy arguments and territorial disputes. They work on their stuff and everyone stays happy!


Tom’s Take

I know it’s a bit of a stretch to think about pulling all of the policy decisions out of the hardware teams and into a separate team. But as we start automating and streamlining the processes we use to implement application policy, the need for it to be owned by a particular hardware team is hard to justify. Cutting down on cross-faction warfare over who gets to be the one to manage the new application policy means enhanced productivity and reduced tension in the workplace. And that can only lead to happy users in the long run. And that’s a policy worth implementing.

VMware and VeloCloud: A Hedge Against Hyperconvergence?

VMware announced on Thursday that they are buying VeloCloud. This was a big move in the market that immediately set off a huge discussion about the implications. I had originally thought AT&T would buy VeloCloud based on their past relationship, but the acquisition of Vyatta from Brocade over the summer should have been a hint that wasn’t going to happen. Instead, VMware swooped in and picked up the company for an undisclosed amount.

The conversations have been going wild so far. Everyone wants to know how this is going to affect the relationship with Cisco, especially given that Cisco put money into VeloCloud in both 2016 and 2017. Given the acquisition of Viptela by Cisco earlier this year, it’s easy to see that these two companies might find themselves competing for market share in the SD-WAN space. However, I think that this is actually a different play from VMware. One that’s striking back at hyperconverged vendors.

Adding The Value

If you look at the marketing coming out of hyperconvergence vendors right now, you’ll see there’s a lot of discussion around platform. Fast storage, small footprints, and the ability to deploy anywhere. Hyperconverged solutions are also starting to focus on the hot new trends in compute, like containers. Along the way this means that traditional workloads that run on VMware ESX hypervisors aren’t getting the spotlight they once did.

In fact, the leading hyperconvergence vendor, Nutanix, has been aggressively selling their own hypervisor, Acropolis, as a competitor to VMware. They tout new features and easy configuration as the major reasons to use Acropolis over ESX. The push by Nutanix is to get their customers off of ESX and on to Acropolis to get a share of the budget that those companies are currently paying to VMware.

For VMware, it’s a tough sell to keep their customers on ESX. There’s a very big ecosystem of software out there that runs on ESX, but if you can replicate a large portion of it natively, like Acropolis and other hypervisors do, there’s not much of a reason to stick with ESX. And if the VMware solution is more expensive over time, you will find yourself choosing the cheaper alternative when the negotiations come up for renewal.

For VMware NSX, it’s an even harder road. Most of the organizations that I’ve seen deploying hyperconverged solutions are not huge enterprises with massive centralized data centers. Instead, they are the kind of small-to-medium businesses that need some functions but are very budget conscious. They’re also very geographically diverse, with smaller branch offices taking the place of a few massive headquarters locations. While NSX has some advantages for these companies, it’s not the best fit for them. NSX works optimally in a data center with high-speed links and a well-built underlay network.

vWAN with VeloCloud

So how is VeloCloud going to play into this? VeloCloud already has a lot of advantages that made them a great complement to VMware’s model. They have built-in multi-tenancy. Their service delivery is virtualized. They were already looking to move toward service providers as their primary market, both network service providers and managed service providers. It sounds like their interests were already aligning quite well with VMware’s.

The key advantage for VMware with VeloCloud is how it will allow NSX to extend into the branch. Remember how I said that NSX loves an environment with a stable underlay? That’s what VeloCloud can deliver: a stable, encrypted VPN underlay. An underlay that can be managed from one central location or, in the future, perhaps even from a vCenter plugin. That gives VeloCloud a huge advantage in building the underlay that provides connectivity between branches.

Now, with an underlay built out, NSX can be pushed down into the branch. Branches can now use all the great features of NSX like analytics, some of which will be bolstered by VeloCloud, as well as microsegmentation and other features heretofore unseen in the branch. The capabilities of the large headquarters data center are now available in a smaller package for remote branches. That’s a huge advantage for organizations that need those features in places that don’t have data centers.

And the pitch against using other hypervisors with your hyperconverged solution? NSX works best with ESX. Now you can argue that there is real value in keeping ESX in your remote branches, not just costs or features that you may one day hope to use if your WAN connection gets upgraded to ludicrous speed. Instead, VeloCloud can be deployed between your HQ or main office and your remote sites to bring those NSX functions down into your environment over a secure tunnel.

While this does compete a bit with Cisco from a delivery standpoint, it still isn’t a complete overlap. In this scenario, VeloCloud is a service delivery platform for NSX and not a piece of hardware at the edge. Absent VeloCloud, this kind of setup could still be replicated with a Cisco Viptela box running the underlay and NSX riding on top in the overlay. But I think the market that VMware is going after is going to be building this from the ground up with VMware solutions from the start.


Tom’s Take

Not every issue is “Us vs. Them”. I get that VMware and Cisco seem to be spending more time moving closer together on the networking side of things. SD-WAN is a technology that was inevitably going to bring Cisco into conflict with someone. The third generation of SD-WAN vendors is really companies that didn’t have a proper offering buying up all the first-generation startups. Viptela and VeloCloud are now off the market, and they’ll soon be integral parts of their respective parents’ strategies going forward. Whether VeloCloud is focused on enabling cloud connectivity for VMware or retaking the branch from the hyperconverged vendors is going to play out in the next few months. But instead of focusing on conflict with anyone else, VeloCloud should be judged by the value it brings to VMware in the near term.

Nutanix and Plexxi – An Affinity to Converge

Nutanix has been lighting the hyperconverged world on fire as of late. Strong sales led to a big IPO for their stock. They are in a lot of conversations about using their solution in place of large traditional virtualization offerings that include things like blade servers or big boxes. And even coming off the recent Nutanix .NEXT conference there were some big announcements in the networking arena to help them complete their total solution. However, I think Nutanix is missing a big opportunity that’s right in front of them.

I think it’s time for Nutanix to buy Plexxi.

Software Says

If you look at the Nutanix announcements around networking from .NEXT, they look very familiar to anyone in the server space. The highlights include service chaining, microsegmentation, and monitoring all accessible through an API. If this sounds an awful lot like VMware NSX, Cisco ACI, or any one of a number of new networking companies then you are in the right mode of thinking as far as Nutanix is concerned.

SDN in the server space is all about overlay networking. Segmentation of flows and service chaining are the reason why security is so hard to do in the networking space today. Trying to get traffic to behave in a certain way drives networking professionals nuts. Monitoring all of that to ensure that you’re actually doing what you say you’re doing just adds complexity. And the API is the way to do all of that without having to walk down to the data center to console into a switch and learn a new non-Linux CLI command set.
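As a rough illustration of what “do it through the API” means in practice, the sketch below posts a single microsegmentation rule to a controller over REST. The endpoint URL, token, and payload fields are assumptions made up for this example; they are not Nutanix’s (or any vendor’s) actual API.

```python
# Hypothetical sketch of driving microsegmentation through a REST API instead
# of a switch console. The endpoint, token, and payload are placeholders only.
import json
import urllib.request

CONTROLLER = "https://controller.example.com/api/v1"   # placeholder URL
TOKEN = "REPLACE_ME"                                    # placeholder credential

def create_segment_rule(src_group: str, dst_group: str, port: int) -> dict:
    """Ask the controller to allow only src_group -> dst_group on one port."""
    payload = json.dumps({
        "source_group": src_group,
        "destination_group": dst_group,
        "protocol": "tcp",
        "port": port,
        "action": "allow",
    }).encode()
    req = urllib.request.Request(
        f"{CONTROLLER}/segmentation/rules",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: only the app tier may reach the database tier on port 3306.
# create_segment_rule("app-tier", "db-tier", 3306)
```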

SDN vendors like VMware and Cisco have naturally jumped on these complaints and difficulties in the networking world, and both have offered solutions for them with their products. For Nutanix to have bundled solutions like this into their networking offering is no accident. They are looking to battle VMware head-to-head and need to offer the kind of feature parity that it’s going to take to make medium-to-large shops shift their focus away from the VMware ecosystem and take a long look at what Nutanix is offering.

In a way, Nutanix and VMware are starting to reinforce the idea that the network isn’t a magical realm of protocols and tricks that make applications work. Instead, it’s a simple transport layer between locations. For instance, Amazon doesn’t rely on the magic of the interstate system to get your packages from the distribution center to your home. Instead, the interstate system is just a transport layer for their shipping overlays – UPS, FedEx, and so on. The overlay is where the real magic is happening.

Nutanix doesn’t care what your network looks like. They can do almost everything on top of it with their overlay protocols. That would seem to suggest that the focus going forward should be to marginalize or outright ignore the lower layers of the network in favor of something that Nutanix has visibility into and can offer control and monitoring of. That’s where the Plexxi play comes into focus.

Affinity for Awesome

Plexxi has long been a company in search of a way to sell what they do best. When I first saw them years ago, they were touting their Affinities idea as a way to build fast pathways between endpoints to provide better performance for applications that naturally talked to each other. This was a great idea back then. But it quickly got overshadowed by the other SDN solutions out there. It even caused Plexxi to go down a slightly different path for a while, looking at other options to compete in a market where they didn’t really have a perfect-fit product.

But the Affinities idea is perfect for hyperconverged solutions. Companies like Nutanix are marketing their solutions as the way to create application-focused compute nodes on-site without the need to mess with the cloud. It’s a scalable solution that will eventually lead to having multiple nodes in the future as your needs expand. Hyperconverged was designed to be consumable per compute unit as opposed to massively scaling out in leaps and bounds.

Plexxi Affinities is just the tip of the iceberg. Plexxi’s networking connectivity also gives Nutanix the ability to build out a high-speed interconnect network with one advantage – noninterference. I’m speaking about what happens when a customer needs to add more networking ports to support this architecture. They need to make a call to their Networking Vendor of Choice. In the case of Cisco, HPE, or others, that call will often involve a conversation about what they’re doing with the new network followed by a sales pitch for their hyperconverged solution or a partner solution that benefits both companies. Nutanix has a reputation for being the disruptor in traditional IT. The more they can keep their traditional competitors out of the conversation, the more likely they are to keep the business into the future.


Tom’s Take

Plexxi is very much a company with an interesting solution in need of a friend. They aren’t big enough to really partner with hyperconverged solutions, and most of the hyperconverged market at this point is either cozy with someone else or not looking to make big purchases. Nutanix has the rebel mentality. They move fast and strike quickly to get their deals done. They don’t take prisoners. They look to make a splash and get people talking. The best way to keep that up is to bundle a real non-software networking component alongside a solution that will make the application owners happy and keep the conversation focused on a single source. That’s how Cisco did it back in the day and how VMware has climbed to the top of the virtualization market.

If Nutanix were to spend some of that nice IPO money on a Plexxi Christmas present, I think 2017 would be the year that Nutanix stops being discussed in hushed whispers and becomes a real force to be reckoned with up and down the stack.

Will Dell Networking Wither Away?

The behemoth merger of Dell and EMC is nearing conclusion. The first week of August is the target date for the final wrap-up of all the financial and legal parts of the acquisition. After that is done, the long task of analyzing product lines and finding a way to reduce complexity and product sprawl begins. We’ve already seen the spin-out of Quest and SonicWall into a separate entity to raise cash for the final stretch of the acquisition. No doubt other storage and compute products are going to face a go/no-go decision in the future. But one product line which is in real danger of disappearing is networking.

Whither Whitebox?

The first indicator of the problems with Dell and networking comes from whitebox switching. Dell released OS 10 earlier this year as a way to capitalize on the growing market of free operating systems running on commodity hardware. Right now, OS 10 can run on Dell equipment. In the future, they are hoping to spread it out to whitebox devices. That means that soon you could see Dell-branded OSes running on switches purchased from non-Dell sources and booted with ONIE.

Once OS 10 pushes forward, what does that mean for Dell’s hardware business? Dell would naturally want to keep selling devices to customers. Whitebox switches would undercut their ability to offer cheap ports to customers in data center deployments. Rather than give up that opportunity, Dell is positioning themselves to run some form of Dell software on top of that hardware for management purposes, which has always been a strong point for Dell. Losing the hardware means little to Dell if they have to lose profit margin to keep it there in the first place.

The second indicator of networking issues comes from comments from Michael Dell at EMCworld this year. Check out this short video featuring him with outgoing EMC CEO Joe Tucci:

Some of the telling comments in here involve Michael Dell’s praise for the NSX business model and how it is being adopted by a large number of other vendors in the industry. Also telling is their reaffirmation that Cisco is an important partnership in VCE and won’t be going away any time soon. While these two things don’t seem to be related on the surface, they both point to a truth Dell is trying hard to accept.

In the future, with overlay network virtualization models gaining traction in the data center, the underlying hardware will matter little. In almost every case, the hardware choice will come down to one of two options:

  1. Which switch is the cheapest?
  2. Which switch is on the Approved List?

That’s it. That’s the whole decision tree. No one will care what sticker is on the box. They will only care that it didn’t cost a fortune and that they won’t get fired for buying it. That’s bad for companies that aren’t making white boxes and aren’t named Cisco. Other network vendors are going to try and add value in some way, but the overlay sitting on top of those bells and whistles will make it next to impossible to differentiate in anything but software. Whether that’s superior management capabilities, an open plug-in model, or some other thing we haven’t thought of will make no difference in the end. Software will still be king and the hardware will be an inexpensive pawn or a costly piece that has been pre-approved.

Whither Wireless?

The other big inflection point that makes me worry about the Dell networking story is the lack of movement in the wireless space. Dell has historically been a company to partner first and acquire second. But with HPE’s acquisition of Aruba Networks last year, the dominoes in the wireless space are still waiting to fall. Brocade raced out to buy Ruckus. Meru offered itself on a platter to anyone that would buy them. Now Aerohive stands as the last independent wireless vendor without a dance partner. Yes, they’ve announced that they are partnering with Dell, but have you been to the Dell Wireless Networking page? Can you guess what the Dell W-series is? Here’s a hint: it rhymes with “Peruba”.

Every time Dell leads with a W-series deployment, they are effectively paying their biggest competitor. They are opening the door to allowing HPE/Aruba to come in and start talking not only about wireless but about servers, storage, and other networking as well. Dell would do well at this point to start deemphasizing the W-series and start highlighting the “new generation” of Aerohive APs and how they are going to be the focus moving forward.

The real solution would be for Dell to buy a wireless company and take all the wireless expertise they are selling in-house. That would show they are serious about both the campus network of the future and the data center network needed to support their other server and storage infrastructure. Sadly, with Michael Dell’s company still leveraged from the privatization just two years ago and mounting debt from this mega merger, Dell is looking to make cash with spin-offs instead of spending it on yet another company to ingest and subsume. Which means a real non-partner wireless solution is still many years away.


Tom’s Take

Dell’s networking strategy is in maintenance mode. Make switches to support faster speeds for now, probably with Tomahawk support soon, and hope that this whole networking thing goes software sooner rather than later. Otherwise, the need to shore up campus wireless, along with the coming decision about whether to put full support behind NSX and its partnerships, is going to be a bitter pill to swallow. Perhaps Dell Networking will exist as an option for companies wanting a 100% Dell solution? Or maybe they are waiting for a new offering from Dell/EMC in the data center to drive profits to research and development to keep pace with Cisco and Arista? One can only hope that their networking flower doesn’t wither on the vine.

Disruption in the New World of Networking

This is one of the most exciting times to be working in networking. New technologies and fresh takes on existing problems are keeping everyone on their toes when it comes to learning new protocols and integration systems. VMworld 2013 served both as an announcement of VMware’s formal entry into the larger networking world and as a notice to existing network vendors. What follows is my take on some of these announcements. I’m sure that some aren’t going to like what I say. I’m even more sure a few will debate my points vehemently. All I ask is that you consider my position as we go forward.

Captain Over, Captain Under

VMware, through their Nicira acquisition and development, is now *the* vendor to go to when you want to build an overlay network. Their technology augments existing deployments to provide software features such as load balancing and policy deployment. In order to do this and ensure that these features are utilized, VMware uses VxLAN tunnels between the devices. VMware calls these constructs “virtual wires”. I’m going to call them vWires, since they’ll likely be called that soon anyway. vWires are deployed between hosts to provide a pathway for communications. Think of it like a GRE tunnel or a VPN tunnel between the hosts. This means the traffic rides on the existing physical network but that network has no real visibility into the payload of the transit packets.
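For anyone who hasn’t looked inside one of these tunnels, the sketch below shows roughly why the underlay is blind to the payload: the original Ethernet frame gets an 8-byte VXLAN header prepended and then rides inside an ordinary UDP packet (destination port 4789 per RFC 7348), so the physical network only ever inspects the outer headers. The frame bytes used here are fabricated for illustration.

```python
# Minimal sketch of VXLAN encapsulation: the inner L2 frame becomes opaque
# UDP payload as far as the underlay is concerned. Values are illustrative.
import struct

VXLAN_PORT = 4789  # standard VXLAN UDP destination port

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348 layout) to an inner L2 frame.

    Header: flags byte with the "I" bit set, 24 reserved bits, a 24-bit VNI,
    and 8 reserved bits. The result is carried as the payload of an outer
    UDP/IP packet between the hosts.
    """
    flags = 0x08                                     # "I" flag: VNI is valid
    header = struct.pack("!B3xI", flags, vni << 8)   # VNI in the top 24 bits
    return header + inner_frame

if __name__ == "__main__":
    # A fake Ethernet frame: dst MAC, src MAC, EtherType 0x0800, payload.
    fake_frame = b"\x02\x00\x00\x00\x00\x01" * 2 + b"\x08\x00" + b"payload"
    packet = vxlan_encapsulate(fake_frame, vni=5001)
    print(len(packet), "bytes of opaque UDP payload, as the underlay sees it")
```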

Nicira’s brainchild, NSX, has the ability to function as a layer 2 switch and a layer 3 router as well as a load balancer and a firewall. VMware is integrating many existing technologies with NSX to provide consistency when provisioning and deploying a new software-based network. For those devices that can’t be virtualized, VMware is working with HP, Brocade, and Arista to provide NSX agents that can decapsulate the traffic and send it to a physical endpoint that can’t participate in NSX (yet). As of the launch during the keynote, most major networking vendors are participating with NSX. There’s one major exception, but I’ll get to that in a minute.

NSX is a good product. VMware wouldn’t have released it otherwise. It is the vSwitch we’ve needed for a very long time. It also extends the ability of the virtualization/server admin to provision resources quickly. That’s where I’m having my issue with the messaging around NSX. During the second day keynote, the CTOs on stage said that the biggest impediment to application deployment is waiting on the network to be configured. Note that this is my paraphrasing of what I took their intent to be. In order to work around the lag in network provisioning, VMware has decided to build a VxLAN/GRE/STT tunnel between the endpoints and eliminate the network admin as a source of delay. NSX turns your network into a fabric for the endpoints connected to it.

Under the Bridge

I also have some issues with NSX and the way it’s supposed to work on existing networks. Network engineers have spent countless hours optimizing paths and reducing delay and jitter to provide applications and servers with the best possible network. Now, none of that matters. vAdmins just have to click a couple of times and build their vWire to the other server, and all that work on the network is for naught. The underlay network exists to provide VxLAN transport. NSX assumes that everything beneath it is running optimally. No loops, no blocked links. NSX doesn’t even participate in spanning tree. Why should it? After all, that vWire ensures that all the traffic ends up in the right location, right? People would never bridge the network cards on a host server. Like building a VPN server, for instance. All of the things that network admins and engineers think about with regard to keeping the network from blowing up due to excess traffic are handwaved away in the presentations I’ve seen.

The reference architecture for NSX looks pretty. Prettier than any real network I’ve ever seen. I’m afraid that suboptimal networks are going to impact application and server performance now more than ever. And instead of the network using mechanisms like QoS to battle issues, those packets are now invisible bulk traffic. When network folks have no visibility into the content of the network, they can’t help when performance suffers. Who do you think is going to get blamed when that goes on? Right now, it’s the network’s fault when things don’t run right. Do you think that moving the onus for server network provisioning to NSX and vCenter is going to absolve the network people when things go south? Or are the underlay engineers going to take the brunt of the yelling because they are the only ones that still understand the black magic outside the GUI drag-and-drop that creates vWires?

NSX is for service enablement. It allows people to build network components without knowing the CLI. It also means that network admins are going to have to work twice as hard to build resilient networks that work at high speed. I’m hoping that means that TRILL-based fabrics are going to take off. Why use spanning tree now? Your application and service network sure isn’t. No sense adding any more bells and whistles to your switches. It’s better to just tie them into spine-and-leaf Clos fabrics and be done with it. It now becomes much more important to concentrate on the user experience. Or maybe the wireless network. As long as at least one link exists between your ESX box and the edge switch, let the new software networking guys worry about it.

The Recumbent Incumbent?

Cisco is the only major networking manufacturer not publicly on board with NSX right now. Their CTO, Padmasree Warrior, has released a response to NSX that talks about lock-in and vertical integration. Still others have released responses to that response. There’s a lot of talk right now about the war brewing between Cisco and VMware and what that means for VCE. One thing is for sure – the landscape has changed. I’m not sure how this is going to fall out on both sides. Cisco isn’t likely to stop selling switches any time soon. NSX still works just fine with Cisco as an underlay. VCE is still going to make a whole bunch of money selling vBlocks in the next few months. Where this becomes a friction point is in the future.

Cisco has been building APIs into their software for the last year. They want to be able to use those APIs to directly program the network through software like the forthcoming OpenDaylight controller. Will they allow NSX to program them as well? I’m sure they would – if VMware wrote those instructions into NSX. Will VMware demand that Cisco use the NSX-approved APIs and agents to expose network functionality to their software network? They could. Will Cisco scrap onePK to implement NSX? I doubt that very much. We’re left with a standoff. Cisco wants VMware to use their tools to program Cisco networks. VMware wants Cisco to use the same tools as everyone else and make the network a commodity compared to the way it is now.

Let’s think about that last part for a moment. Aside from some speed differences, networks are largely going to look identical to NSX. It won’t care if you’re running HP, Brocade, or Cisco. Transport is transport. Someone down the road may build some proprietary features into their hardware to make NSX run better, but that day is far off. What if a manufacturer builds a switch that is twice as fast as the nearest competition? Three times? Ten times? At what point does the underlay become so important that the overlay starts preferring it exclusively?


Tom’s Take

I said a lot during the Tuesday keynote at VMworld. Some of it was rather snarky. I asked about full BGP tables and vMotioning the machines onto the new NSX network. I asked because I tend to obsess over details. Forgotten details have broken more of my networks than grand design disasters. We tend to fuss over the big things. We make more out of someone that can drive a golf ball hundreds of yards than we do of the one that can consistently sink a ten-foot putt. I know that a lot of folks were pre-briefed on NSX. I wasn’t, so I’m playing catch-up right now. I need to see it work in production to understand what value it brings to me. One thing is for sure – VMware needs to change the messaging around NSX to be less antagonistic toward network folks. Bring us into your solution. Let us use our years of experience to help rather than making us seem like pariahs responsible for all your application woes. Let us help you help everyone.